Shai-Hulud, Shai-Hulud 2.0, Trivy, LiteLLM, now Axios, and many smaller compromises all bring us to the same realization: existing software supply chains are highly vulnerable.

A common thread across these attacks is that once you download and install a compromised package, the malicious code inside typically tries to steal tokens and other credentials from your machine.
There are a number of ways development machines can be protected, and I discussed some of them in relation to npm recently. However, it is important to point out that no solution is foolproof.
Here is a non-exhaustive list of things that are recommended, and why they may fail:
- Use only packages at least x (e.g., 7) days old. This absolutely should be done, but the problem is that we’d likely soon see an attack involving a time bomb set to an arbitrary date in the future, staying under the radar until then.
- Use --ignore-scripts (or similar approaches for non-npm tools). Again, this absolutely should be done and should even be the default behavior for all tools. But one can imagine malicious logic embedded in the application itself rather than in install-time scripts.
- Use version pinning. Again, great advice overall, but how do you upgrade? And what if the software you pin downloads or uses other dependencies that are unpinned? (I’m thinking about you, npm and GitHub Actions.)
- Egress control. Great idea conceptually, but what if exfiltration is done using a reputable website that you use yourself?
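To make the first two points concrete, here is a minimal sketch of what a package-age cooldown and disabled install scripts could look like. The helper function and the 7-day threshold are my own illustration, not a built-in npm feature; the sketch assumes GNU date on Linux, and the timestamp passed in is a placeholder.

```shell
# Disable install-time lifecycle scripts for this project:
echo "ignore-scripts=true" >> .npmrc

# Hypothetical helper: compute how many days ago a package was published.
# Real publish timestamps can be fetched with: npm view <pkg>@<version> time --json
package_age_days() {
  local published_epoch now_epoch
  published_epoch=$(date -d "$1" +%s)   # $1 = ISO 8601 publish timestamp
  now_epoch=$(date +%s)
  echo $(( (now_epoch - published_epoch) / 86400 ))
}

MIN_AGE=7
age=$(package_age_days "2020-01-01T00:00:00Z")  # placeholder timestamp
if [ "$age" -lt "$MIN_AGE" ]; then
  echo "package too new ($age days old), refusing to install" >&2
fi
```

Of course, as noted above, a cooldown check like this does nothing against a time bomb that only triggers weeks after publication.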
This list can go on. The important point is that no realistic control set can fully eliminate risk in development environments, possibly with the exception of sandboxing: forcing developers to do all development work in sandboxed environments (e.g., VMs) that have no access to sensitive data stored on the main machine.
The problem here is that such an approach kills productivity. Yes, there are startups and open source projects that try to solve this, but at the end of the day, as a developer I’m much more productive when I have all the tools on my machine instead of spending hours trying to figure out how to debug something running in a VM and make my IDE performance bearable in this workflow.
The practical implication is that enforcement of sandboxing becomes nearly impossible. Tight deadlines usually win over sandboxing, and the security posture deteriorates with them.
So my take on this is: instead of putting maximum effort into protecting dev environments, treat them as untrusted. To that end:
- Protect pipelines – and we just described in the ReARM blog how to use an Evidence Platform as a CI/CD security layer.
- Sandbox anything sensitive away from development machines.
I’ll expand on the second point. Instead of sandboxing development environments themselves, I believe the correct way is to move anything sensitive out of them. Here are practical things we do on top of the ReARM blog post mentioned above and the controls I described in the post about npm:
- Don’t store elevated source control tokens on a development machine (you would still need write-scoped tokens, however)
- Sign commits with YubiKeys (or other hardware keys) and set your VCS (Version Control System) to only accept signed commits. For YubiKey specifically, require touch protection for signing. This way, an attacker won’t be able to sign commits on your behalf and any unsigned commits will be rejected.
- Use YubiKeys for SSH. For modern YubiKeys, refer to Yubico’s documentation; for older ones, see my posts here and here. Add touch protection as well: I recently added instructions for this in my Windows post, but the same command works on all platforms.
- Make sure no release tokens are present on development machines.
- The same goes for crypto or any other credentials.
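As a rough sketch of the signed-commits setup above, assuming a FIDO2-capable YubiKey and recent versions of git and OpenSSH (the key file path is illustrative):

```shell
# Generate an SSH key backed by the YubiKey; user presence (touch)
# is required by default for -sk key types, so don't pass -O no-touch-required.
ssh-keygen -t ed25519-sk -f ~/.ssh/id_ed25519_sk

# Configure git to sign every commit with that key via SSH signing:
git config --global gpg.format ssh
git config --global user.signingkey ~/.ssh/id_ed25519_sk.pub
git config --global commit.gpgsign true
```

The rejection of unsigned commits is a server-side setting (e.g., branch protection in your hosting platform), so it has to be configured there rather than in git itself.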
Last but not least is how to configure those sandboxes. One option is a separate laptop. Another is a VM with an encrypted file system. I just dusted off Vagrant and am using it with gocryptfs, but there are other tools as well. And yes, I know the Vagrant license is no longer open source, but it works for me since it’s not something I resell.
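For the VM option, this is roughly what the workflow looks like with Vagrant and gocryptfs; the box name and directory names here are illustrative, and the gocryptfs commands assume a Linux guest with FUSE:

```shell
# On the host: create and start a throwaway VM, then shell into it.
vagrant init ubuntu/jammy64   # box name is illustrative
vagrant up
vagrant ssh

# Inside the VM: keep working data on an encrypted FUSE filesystem.
mkdir -p ~/vault.enc ~/vault
gocryptfs -init ~/vault.enc     # one-time setup; choose a password
gocryptfs ~/vault.enc ~/vault   # mount; do all work under ~/vault
# ...clone repos, build, test under ~/vault...
fusermount -u ~/vault           # unmount when done
```

The encrypted directory adds a second layer: even if the VM image or its disk leaks, the working data stays protected at rest.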
Essentially, if there is nothing to steal, the breach does not matter as much.