Retry, Reboot, Reprovision
When it comes to computers, it’s not Reduce, Reuse, Recycle; it’s Retry, Reboot, Reprovision.
Automation exists. A great secret, I know. When it comes to automation, the focus is typically on how you can make things faster. Making tasks faster, or eliminating them altogether, has obvious value. Time is the ultimate scarce resource; opportunity cost is why scarcity is the basis of economics.
What’s covered less, in my experience, is how automation makes your life easier not by saving time on the specific task itself, but through second order effects.
Mop Vac Robots
When you subsidize something, you get more of it. This is true everywhere and always. For a while now I’d been considering getting a robot wet vac for cleaning my floors, but I never pulled the trigger. At first they weren’t good enough. Then they were, but were really expensive. Then they were worth it for a house, but not an apartment. Finally, at some point, I bought one on sale.
Obviously having the device saves me time. I almost never vacuum anymore, and I only mop a floor when something catastrophic has occurred, like cat vomit that I’m not interested in running through my expensive machine. Even then, it’s just a quick once over and you let the robot handle the true final cleanup. But this raises a question: was the robot “worth it”?
In simple terms, yes. I can show over the course of its life it saves X hours and my time is expensive so it easily pays for itself. However, that entirely misses the point in the same way that comparing a car to a horse in cost per mile of goods delivered does. When you don’t have to actually clean a floor yourself and the marginal cost is effectively 0, you clean it more often.
That is where the robot has made a big difference. It cleans the kitchen daily, all other wood flooring every other day, and carpets 3x a week. In a modest sized apartment with 2 people and 2 animals, it is remarkable how much cleaner things are. It’s very noticeable. It also means I’m less uptight about taking my shoes off for chores or when stepping in and out frequently, because the floor is about to get cleaned anyway.
Automated Dev Environments
If you have a complicated dev setup and large teams, the time savings argument for automating environment setup is pretty obvious. We have better tools than ever with Docker and dev containers, but even older solutions like Vagrant still hold up. The point is, you should be able to stand everything up with one click. If you can’t, you’ve failed to meet table stakes to the same degree as not having source control or CI validation on pull requests.
But just like before, there’s so much more that can be accomplished. When setup is automated, environments are cheap and disposable. The marginal cost of creating one is effectively zero. The closest you get to incurring a cost is the latency. In practice though that’s not a real issue. You can have multiple environments, and there’s never a shortage of other work to do for the 5-40* minutes it takes to spin one up.
40* - Plus if it’s that slow, you should look into it! There’s no reason for that to be the case. Perhaps even investigate while waiting for it to provision?
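To make the “one click” bar concrete, here is a minimal sketch of a dev container definition. The file name and fields (`name`, `build`, `postCreateCommand`, `customizations`) are part of the real devcontainer.json format, but the project layout is hypothetical: it assumes your repo has its own `Dockerfile` and a `scripts/bootstrap.sh` that installs dependencies.

```json
{
  "name": "app-dev",
  "build": { "dockerfile": "Dockerfile" },
  "postCreateCommand": "./scripts/bootstrap.sh",
  "customizations": {
    "vscode": {
      "extensions": ["ms-python.python"]
    }
  }
}
```

With something like this checked in at `.devcontainer/devcontainer.json`, a fresh environment is one “Reopen in Container” click away, and so is throwing it out.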
The upshot here is that I frequently find engineers get sucked into fixing and debugging when things go awry. There’s no point to this effort. I appreciate the mentality, but to go down those rabbit holes is to confuse effort for progress. Computers are complicated. Dev environments even more so. There are millions of little pieces. They fray and atrophy with time. Updates get applied, stale files from test runs get left behind.
Eventually a machine will end up in some weird state. Maybe it’s far older than normal, maybe it went through a specific borked upgrade path, maybe it was used heavily and ran out of space, or maybe it has a unique problem caused by the exact combination of all of those.
When this happens, don’t get sucked in fixing it. Just make a new one.
Reduce, Reuse, Recycle for Devs
You’re certainly familiar with Reduce, Reuse, Recycle. So familiar, in fact, that the words are probably meaningless, or at best distill down to “Recycle more”. They’re in that order for a reason though: they’re sorted by descending effectiveness. You ought to begin with the first two R’s because that cuts more waste.
That’s the exact framing I suggest adopting when your machine breaks.
Retry
Try the operation again. Rerun the build, rerun the environment init script. Git operation failed? It was probably just the network dipping for a second or a credential being transiently rejected. Stale caches often won’t clear until you yourself rerun the original command. Give it a smack.
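The first R can even be automated itself. Here’s a minimal sketch of a retry helper in POSIX shell; `flaky` below is a stand-in for your real build or git command, and the attempt counts are arbitrary.

```shell
#!/usr/bin/env sh
# retry: run a command up to N times, sleeping between attempts.
# Usage: retry <max_attempts> <delay_seconds> <command...>
retry() {
  max="$1"; delay="$2"; shift 2
  attempt=1
  while true; do
    "$@" && return 0                 # success: stop retrying
    if [ "$attempt" -ge "$max" ]; then
      echo "retry: giving up after $attempt attempts" >&2
      return 1
    fi
    attempt=$((attempt + 1))
    sleep "$delay"                   # give the network/cache a beat
  done
}

# Example: a stand-in command that fails twice, then succeeds.
tries=0
flaky() {
  tries=$((tries + 1))
  [ "$tries" -ge 3 ]
}

retry 5 0 flaky && echo "succeeded after $tries attempts"
# prints: succeeded after 3 attempts
```

If the smack works on the first try, great; if it never works, the helper gives up and you move on to the next R.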
Reboot (/Reconnect)
Still not working? Restart your IDE, make a new SSH session, reboot the container and maybe the machine too. Sometimes hitting it isn’t good enough and we have to unplug it and plug it back in!
Reprovision
Machine’s still messed up? Not a known issue with a workaround? No one else is having it or it happens extremely infrequently? Don’t. Waste. Even. A. Second. Recycle the environment. Delete it, provision a new one. Go do something else productive with your time. Take a break even. That would be a far greater return on your time investment.
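Concretely, “recycle the environment” should be a single command, not a ritual. A sketch assuming a Docker Compose based environment; substitute your own tool’s destroy/create pair (for Vagrant, `vagrant destroy` then `vagrant up`, and so on).

```shell
#!/usr/bin/env sh
# reprovision: throw the environment away and build a fresh one.
# Assumes a docker compose based dev environment.
reprovision() {
  docker compose down --volumes --remove-orphans  # delete containers AND their state
  docker compose up --detach --build              # rebuild everything from scratch
}
```

Nuking the volumes matters: a “reprovision” that keeps old state around is just a reboot wearing a disguise.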
Don’t go chasing the ghosts in the machine.