I’ve been really trying to push for more usage of dev containers at my org. I deal with so much hassle helping people install dependencies and troubleshoot bizarre environment issues, and then doing it all over again every time there’s turnover or someone gets a new laptop. We’re an Ops team though, so it’s a real struggle to add the complexity of running and troubleshooting containers on top of dev concepts that are mostly new to us anyway.
So far I’ve helped my team of 5 get on them, and some other teams are starting as well. We’ve got developers running Windows, Linux, and macOS on their work machines (for now), and the only container-specific issue we ever encounter is port conflicts, which are well documented and easy to resolve with environment variables.
The only real caveat right now is that we have a bunch of microservices, so their supporting services (redis, mariadb, etc.) end up running multiple times, and there is some performance loss from that. But they’re all designed to be independent, only talking to each other via their APIs, so the approach works.
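To make that concrete, here’s roughly what one of those per-service compose files looks like; the service names, image tags, and variable names are placeholders rather than our exact setup:

```yaml
services:
  api:
    build: .
    ports:
      # Host port is overridable so two stacks don't collide;
      # falls back to 8080 if API_PORT isn't set in .env.
      - "${API_PORT:-8080}:8080"

  redis:
    image: redis:7
    ports:
      - "${REDIS_PORT:-6379}:6379"

  mariadb:
    image: mariadb:10.11
    environment:
      MARIADB_ROOT_PASSWORD: devpassword   # local dev only
    ports:
      - "${MARIADB_PORT:-3306}:3306"
```

Each microservice carries its own copy of something like this, which is where the duplicate redis/mariadb instances come from, but it also means any one stack can be brought up in isolation with a single docker compose up.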
…what do you mean by using dev containers? Are your people doing development on their host machine?
Mostly infrastructure as code, with folks installing software natively on their Windows hosts (Terraform, Ansible, PowerShell modules, plus some npm stuff too). I’m trying to get people used to running a container instead of installing things on their host so I don’t have to chase people down when they run commands using the wrong version or something.
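The direction I’m nudging people toward is something like this: a compose service that wraps the CLI, so everyone runs the same pinned version (the image tag here is just illustrative; you’d pin whatever production actually uses):

```yaml
services:
  terraform:
    # Official HashiCorp image; pin the tag to match production.
    image: hashicorp/terraform:1.7
    working_dir: /workspace
    volumes:
      - .:/workspace          # mount the repo into the container
    entrypoint: ["terraform"]
```

Then instead of a native install, people run docker compose run --rm terraform plan, and nobody is accidentally on last year’s Terraform.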
Agreed there – it’s good for onboarding devs and ensuring a consistent build environment.
Once an app is ‘stable’ within a docker env, great – but running it outside of a container will inevitably reveal lots of subtle issues that might be worth fixing (assumptions become evident when one’s app encounters a different toolchain version, stdlib, or other libraries/APIs…). In this age of rapid development and deployment, perhaps most shops don’t care about that since containers enable one to ignore such things for a long time, if not forever…
But like I said, I know my viewpoint is a losing battle. I just wish containers weren’t used so much as a shortcut to deployment; good documentation of dependencies, configuration, and testing in varied environments would be my preference.
And yes, I run a bare-metal ‘pet’ server so I deal with configuration that might otherwise be glossed over by containerized apps. Guess I’m just crazy but I like dealing with app config at one layer (host OS) rather than spread around within multiple containers.
The container should always be updated to match production. In a non-container environment every developer has to do this independently, but with containers it only has to be done once; the developers then pull the update, which comes down like a git-style diff (only the changed layers are downloaded).
Best practice is to have the people who update the production servers be responsible for updating the containers, assuming they aren’t deploying the containers directly.
It’s essentially no different than updating multiple servers, except one of those servers is then committed to a local container repository.
This also means there’s a snapshot of each update, which can be useful in its own right.
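As a rough sketch of that workflow (the registry path and tag below are made up): the team patching production bumps a tag in a shared compose file,

```yaml
services:
  devtools:
    # Hypothetical internal image, rebuilt by whoever patches the
    # production hosts so the dev environment tracks them.
    image: registry.example.com/ops/devtools:2024.06
```

and developers just run docker compose pull; only the layers that actually changed get downloaded (the git-style diff), and every old tag stays in the registry as a snapshot you can roll back to.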