> The tools we use to build software are not secure by default, and almost all of the time, the companies that provide them are not held to account for the security of their products.
The companies? More like the unpaid open source community volunteers who the Fortune 500 leech off contributing nothing in return except demands for free support, fixes and more features.
> More like the unpaid open source community volunteers who the Fortune 500 leech off contributing nothing in return except demands for free support, fixes and more features.
People who work on permissively licensed software are donating their time to these Fortune 500 companies. It hardly seems fair to call the companies leeches for accepting these freely given donations.
It's not just time. A lot of devs simply don't have the experience of digging into third-party source code, or understanding how one contributes to open source.
By "a lot of devs" do you mean devs at these companies?
If so I think this is a good point. It's easy to see from any one open source project's perspective how a little help would go a long way. But it's really hard to see from the perspective of a company with a massive code base how you could possibly contribute to the ten gajillion dependencies you use, even if you wanted to.
People will say things like "Why doesn't Foo company contribute when they have the resources?" But from what I've seen, the engineers at Foo would often love to contribute, but no team has the headcount to do it. And acquiring the headcount would require making a case to management that contributing to that open source project is worth the cost of devoting a team to it.
Author of the article here - holistically this isn't just about NPM dependencies, it's about the entire stacks we work with. Cloud vendors provide security, but out of the box they don't provide secure platforms - a lot of this is left up to developers, without security experts - this is dangerous - I have 25 years of experience and I wouldn't want to touch the depths of RBAC.
SaaS products don't enforce good security - I've seen some internally that don't have MFA or EntraID integration because they simply don't have those as features (mostly legacy systems these days, but they still exist).
I'm also an open-source author (I have the most used bit.ly library on npm - and have had demands and requests too), and I'm the only person you can publicly see on our [company github](https://github.com/ikea) - there are reasons for this - but not every company is leeching; rather, there is simply no other alternative.
> Cloud vendors provide security, but out of the box they don't provide secure platforms - a lot of this is left up to developers, without security experts -
A lot of the spread of Shai-Hulud is due to developers having overly broad credentials on NPM, GitHub and elsewhere. It's not that NPM doesn't support scoped credentials, it's that developers don't want to deal with it so it's not the default. There's no reason why, for example, a developer needs a live credential to publish their package when they're just hacking on code.
This is related to the `curl | bash` pattern. Projects like NPM want to make it easy to get started and hard to reach a failure case so they sacrifice well-known security practices during the growth phase.
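To make the scoped-credentials point concrete, npm can already mint restricted tokens today; a minimal sketch using the npm CLI (the CIDR range here is illustrative):

```sh
# A read-only token for day-to-day hacking; it cannot publish
npm token create --read-only

# Optionally restrict where the token may be used from
npm token create --read-only --cidr=10.0.0.0/8

# Review and revoke tokens that are no longer needed
npm token list
npm token revoke <token-id>
```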
Quite often access-based errors are very opaque - for good reason - but when you're new to something it's one of those things that leads you to give up. You want to write code, not spend 3 hours figuring out why your token doesn't work.
Security features will get hacked on later, but again that will cause all kinds of problems because the ecosystem wasn't built for them.
> Quite often access-based errors are very opaque

Yes they are, and it's hard to design good scopes, especially when the project is new.
A better default might just be to have the write permission expire much more quickly than the read permission. E.g. the write token might be valid for an hour and the read token might be valid for 90 days.
That's interesting. I take issue with companies that claim a level of security that doesn't match what they ship, but I never expect them to tell me how to do my job well.
I expect a company to put their current product in as good of a light as they can. They're going to over promise what it can do and show me the easiest "Getting Started" steps they can. It's up to me to dig deeper and understand what they actually do and what the right solution is for my project.
> a lot of this is left up to developers, without security experts - this is dangerous
Although I see where you are coming from, dismissing unaudited libs as dangerous is slightly missing the point. In fact, the world is a safer place for their existence - the value lost by security exploits is insignificant compared to the value protected by the existence of the libs they exploit. Also, I suspect that you could replace "value" with "lives" in the previous sentence.
I remember joining my company right out of college. In the interview we started talking about open source since I had some open source Android apps. I asked if the company contributed back to the projects it used. The answer was no, but that they were planning to. Over a decade later... they finally created a policy to allow commits to open source projects. It's been used maybe 3 times in its first year or so. Nobody has the time and the management culture doesn't want to waste budget on it.
> Nobody has the time

I'd erase that part entirely, as it is not true, from my point of view. My day, like every other person's, has exactly 24 hours. As an employee, part of that time is dedicated to my employer. In return, I receive financial compensation. It's up to them to decide how they want to spend the resources they acquired. So yes, each and every company could, in theory, contribute back to Open Source.
But as there is no price tag attached to Open Source, there is also no incentive. In a highly capitalized world, where shareholder value is worth more than anything else, there are only a few companies that make the right call and act responsibly.
If your finite time at work is filled with business work, then there is no time left to do the open-source work. Seems true to me from an IC and delivery perspective. Company staffing and resource allocation could create the time to do it, but they don't.
> In a highly capitalized world, where shareholder value is worth more than anything else, there are only a few companies that make the right call and act responsibly.
It is not just that. In a well-functioning theoretical free market, no one is going to have time either. The margins are supposed to end up being tight, and the competition is supposed to weed out economic inefficiency. Voluntary pro-social behavior is a competitive disadvantage and an economic inefficiency. So, by design, the companies end up not "having time for that".
You need a world that allows for inefficiency and rewards pro-social behavior. That is not the world we are living in currently.
Working an honest job is pro-social behavior, and it is rewarded. So is quitting your job to work on a side project that ends up being valuable enough for others to pay for. It's just that giving code away for free operates outside that reward structure.
That's such a self-harmful policy. I have a small business and I've been really supportive of both open source and small, paid-for commercial libraries and building blocks that I rely on. Also advocated this successfully at clients I've consulted with. We do a lot of technical vetting before adopting any particular dependency (vs. building out our own) and it just makes sense that we strive to foster the continued existence and excellence of our tools. Considering the incredible value companies get from open source, I have trouble understanding why they wouldn't throw some cash or idle cycles their way. Seemed to work out for the likes of Google while they were undergoing rapid growth.
That's fine. There's no requirement to "contribute back". Respect the license terms and don't go demanding anything unless you have a support contract and don't expect that you can get a support contract. It's fine to just use something as long as you also don't harass the maintainer as if they owed you something.
The annoying part is that often without a corporate policy for contributing, you are doing the work anyway because you need XYZ from the software, just that it lives in a private fork that will never get upstreamed as a result of policy.
Most developers don’t work for software companies. So when you are not shipping software as a product, you and your department are usually a liability. This is important to understand because it helps you frame your approach to upper management as a developer, or to the C-suite as a director of engineering, when it comes to talking about budgets. In my experience, most non-tech corporations will be OK with allocating budget for open source projects; they already do it in other types of non-profit domains. But you need to make a case that goes beyond the ethical reasons or personal motivations.
Technology is insecure all the way down to the hardware. The structural cause of this is that companies aren’t held liable for insecure products, which are cheaper to build.
So companies’ profit motives contribute to this mess not just through the exploitation of open source labor (as you describe) but through externalizing security costs as well.
Isn’t all this stuff with Secure Enclave supposed to address these kinds of things?
It’s my take that over the past ~ decade a lot of these companies have been making things a lot better, Windows even requires secure boot these days as well.
They’re not the same problems. The Secure Enclave protects things like your biometrics, hardware-backed keys (e.g. on a Mac, WebAuthn and iCloud Keychain), and the integrity of the operating system but not every bit of code running as your account. That means that an NPM install can’t compromise your OS to the point that you can’t recover control, but it means the attacker can get everything you haven’t protected using sandbox features.
That’s the path out of this mess: not just trying to catch it on NPM but moving sensitive data into OS-enforced sandboxes (e.g. Mac containers) so every process you start can’t just read a file and get keys, and using sandboxing features in package managers themselves to restrict when new installs can run code and what they can do (e.g. changing the granularity from “can read any file accessible to the user” to “can read a configuration file at this location and data files selected by the user”), and tracking capability changes (“the leftpad update says it needs ~/.aws in this update?”).
We need to do that long-term but it’s a ton of work since it breaks the general model of how programs work that we’ve used for most of the last century.
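In the meantime, a coarse first step along those lines is to stop install-time code execution entirely; it doesn't sandbox a library at runtime, but it removes the easiest hook, using standard npm flags:

```sh
# Skip install scripts for one install, or always
npm install --ignore-scripts
npm config set ignore-scripts true
```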
Is there such a thing as secure hardware that can prevent supply chain attacks, and secure hardware that prevents copying data (in both cases by enabling higher layers to guarantee security)?
Sure. Malware tends not to have physical hands that can touch the machine and any buttons attached to it. Physical ownership should be true ownership, but they're taking that away from you.
I find this perspective harmful to OSS as a whole. It is completely fine to release free software that other companies can use without restrictions, if you desire to do so. It is not meant to be a transaction. You share some, you take some.
It’s also ok to release paid free software, or closed software, restrictive licenses, commercial licenses, and sell support contracts. It’s a choice.
Just because you can do something doesn’t mean you should.
There’s also a lot of pressure on devs not to use licenses that restrict use by large companies. Try adding something to your license that says companies making over $10 million per year in revenue have to pay, and half of the comments on Show HN will be open source warriors either asking why you didn’t use a standard license or telling you that this isn’t open source and you have brought dishonor to your family.
> Just because you can do something doesn’t mean you should.
This implies some kind of fairness/moral contract in a license like MIT. There is none. It’s the closest thing to donating code to the public domain, and entirely voluntary.
There are plenty of standard licenses with similar clauses restricting commercial use, no need to create a custom one.
But indeed, the truth is that a restrictive license will massively reduce the project’s audience. And that is a perfectly fine choice to make.
It is not “their” work anymore (IP rights discussions aside) once they published with an unrestricted license. That’s the point. You do it expecting nothing in return, and do it willingly. Expecting “fairness” is a misunderstanding of the whole spirit of it.
Semantic games with “their work”. An artist who sells a painting can still call it their work, even if someone else owns it. And I suppose the collector who bought it could also call it their work, though that phrasing isn’t usually used.
It comes about because “work” is overloaded to mean both the activity of creating and the product/result of that activity.
Let’s ignore that no one contributes to open source expecting nothing in return.
I can help someone out expecting nothing in return. Then if my situation changes and I need help, but they look at me and say “sorry your help was a gift so I’m not going to return the favor even though I can”. That person is a dick.
The problem is you are taking the act of applying a permissive license as some kind of ceremony that severs open source software from all normal human ideas of fairness. You may view it that way. Most people don’t.
It’s perfectly reasonable to put something out in the world for other people to enjoy and use. And yet still think that if someone makes a billion dollars of it and doesn’t return anything they are displaying bad manners.
> I can help someone out expecting nothing in return. Then if my situation changes
It sounds like you did expect something in return, conditional on your circumstances. Maybe it's good-will or something, but some kind of social insurance in any case.
This is partly getting into questions about whether “pure” altruism is even possible, e.g., is an anonymous donation truly selfless if you do it because it makes you feel good.
But in the example above it’s entirely possible that you helped someone out with no expectation of being paid back. Let’s say you’re rich and the person you helped is a chronic drug addict. You have no expectation of ever needing help and no expectation that the person you helped will ever be in a position to help you.
Let’s say I give a homeless person a dollar. He turns around and uses that dollar to buy a lottery ticket and wins 100 million dollars. Years later, I’m homeless and the former homeless guy walks past me and gives me a lecture about how I should have put conditions on my donation.
In that situation there was no reasonable expectation for anything except as you said maybe good will.
But of course open source developers also expect good will.
Sidestep this debate with one trick - use the GPLv3. No company large enough to have a legal team will be able to use it, you're still squarely within the various definitions, and the FSF basically has to approve.
As a bonus maybe you can get some proprietary software open sourced too.
Are you talking about promoting some software as open source when it's in fact not? Because yes, there's something wrong with that, you shouldn't do it, and people will rightfully react loudly if you try.
People don't complain about proprietary software honestly communicated as that.
This is exactly the kind of thing I’m talking about. Open source has mostly been captured by large corporations because purists refuse to recognize the gradient between proprietary and completely free.
If I license my software as MIT but with an exception that you can’t use it for commercial purposes if you make more than $100 million a year in revenue, that’s a lot closer to open source than proprietary.
We should be normalizing licenses that place restrictions on large corporations.
I think the world would be a much better place if we just changed the definition of open source to include such licenses. We don’t even really need to change the definition because normal everyday use of the term would already include them.
In the case of npm though it is run by a very wealthy company: Microsoft.
But also, most OSS software is provided without warranty. Commercial companies should either be held accountable for ensuring the open source components they use are secure, or pay someone (either the maintainer directly, or a third-party distributor) to verify the security of the component.
Npm is owned by Github, which is owned by Microsoft. They could have put more tooling into making npm better. For example, pnpm requires you to "approve-builds" so that it's only running scripts from dependencies you decide on, and Deno has a bunch of security capabilities to restrict what scripts can and can't do. There are always going to be supply chain attacks, and the biggest package repositories are going to be hit the most. But that doesn't mean that Microsoft couldn't have spent more on building better tooling with better security settings on by default.
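Roughly what those two mitigations look like in practice (the Deno permission list is illustrative):

```sh
# pnpm: dependency build scripts are blocked until explicitly approved
pnpm install
pnpm approve-builds   # interactively choose which packages may run scripts

# Deno: deny-by-default, with capabilities granted per run
deno run --allow-read=. --allow-net=api.example.com main.ts
```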
Well? If you license software the way most FOSS products are licensed, that's a natural result. It is literally putting up a sign saying "free beer."
You can't give permission for them to use the stuff for free and then accuse them of "leeching." If the expectation is contribution in kind, that needs to be in the license agreement.
Consider how many JavaScript developers are completely unemployable without that free software. It might be greater than 95%. That’s why business needs this stuff, because otherwise they might actually have to invest in training.
How many JavaScript developers in the workforce can write original applications without some colossal framework and an army of NPM packages? In 15 years of doing that work those people do not exist, at least statistically, and hiring does not encourage their selection.
Most people doing this work, both in person and online, are extremely sensitive about this. It’s a hard reality to accept that if this free software went away most people doing the work wouldn’t be able to qualify their income in any significant way to their employer.
This sounds like blaming the victim. How do you on one hand call these people engineers, as if they are engineering something, and then on the other hand blame everything else for their inability to perform? That is weird.
It's just a software platform. Would you really blame society for being too harsh if doctors, lawyers, police, teachers cannot do their jobs? It is weird to see so many people blame the web platform for hostility when it's so much less challenging than it used to be.
The most common cause of these frustrations I encountered while working in JavaScript is that developers are educated in something that looks like A, but JavaScript is not A, there is no training for JavaScript/Web, so therefore JavaScript/Web is hostile. As a self-taught developer that never made sense to me.
I mean, you're right about that - but how many construction workers could build a house without having access to pre-cut lumber, pre-sharpened tools, nailguns, power equipment, pre-cast nails, etc., etc?
My neighbor works construction and my son did for a while. They were working on the new Texas Instruments silicon fab. The people that do the actual work with their hands are expected to do just about everything. We are talking about advanced metal work in a place with liquid nitrogen and harmful chemical agents.
The actual engineers just walk around to validate the work conforms to the written plans. That is why these large engineering companies prefer to hire only from labor unions in a location that is extremely anti-union, because the union has the social resources to validate the candidates in ways the employer does not.
Even in that environment there are more experienced people who are 10x producers.
Actually, it's the opposite. When you are no longer compatible with the workforce because you don't want to waste all your time on the same basic literacy things over and over, you start to feel extremely inferior when you cannot get a job.
But the fact the concerns of superiority come up so repeatedly just serves to illustrate how much insecurity is baked into the workforce. Confident people don’t worry about high confidence in other people.
Per a survey I read, the majority of open source is created by people who are paid for it. The unpaid volunteer working full time on something is effectively a myth.
I’ve contributed a huge amount of opensource code over my career - almost all of it entirely unpaid. I don’t know the statistics, but I know many other people who have done the same.
I think there are a lot of high profile opensource projects which are either run by corpos (like React) or have a lot of full time employees submitting code (Linux). But there’s an insanely long tail of opensource projects on npm, cargo, homebrew etc which are created by volunteers. Or by people scraping by on the occasional donation.
npm has been a company for years now. It was initially created as a one-person volunteer project, then they created a company about 10 years ago and eventually sold it to Github, which had been sold to Microsoft. It has spent more time being developed as a paid thing than by unpaid volunteers doing it on the side.
I'm not talking about npm. I'm talking about the 3.1 million libraries hosted on npm. And the ~150k libraries available in rust's cargo, 187k ruby gems, 667k pip packages, and so on. For every React ("brought to you by facebook") there are thousands of tiny projects made for free by volunteers.
There are some mammoth projects where that's true, but the FOSS ecosystem has a very long tail where quite important and powerful libraries are maintained by individuals in their free time.
"unpaid volunteer working full time" also doesn't sound like something that someone would believe. Full time and unpaid rarely go together.
I don’t think that is correct. VS Code developers and the TypeScript team are paid by MS. The core of React is paid for by Meta, or was. The Java language is paid for by Oracle, as are LiberaSuite and MySQL.
Most of the Linux foundation projects, which include Node, are volunteer-run. Most of the Apache foundation software is from volunteers. Most NPM packages are from volunteers. OpenSSL is volunteers.
There is also a big difference between the developers who are employees on salary versus those that receive enough donations to work in open source full time.
> Linux foundation projects, which include Node, are volunteer-run.
The survey found that specifically linux code is dominated by people who are paid for it.
> Most of the Apache foundation software is from volunteers.
Large Apache projects specifically are backed by companies per Apache rules. Each project must have at least three active backing companies. They contribute most of the code.
It depends on the domain. There are a lot of critical utilities in the systems space maintained by volunteers. The “xz” compression library was one recent infamous example where an exhausted volunteer maintainer was social engineered into a supply chain attack that briefly compromised OpenSSH.
Not a lot of applications are being maintained by altruists, but look under the hood in Linux/GNU/BSD and you will find a lot of volunteers motivated by something other than money.
Yes, but even in those domains those projects are in the minority, and in many cases they make it effectively impossible for corporations to legally fund or contribute to them.
Why is it legally impossible to fund or contribute? Do they turn down contributions from paid developers? Do they refuse donations, or just have no mechanism for accepting them? Do they not have any form of commercial services or licence?
I think there are very few projects that do not accept support in any form.
In most cases they need to be able to issue a commercial invoice in a region compatible with company accounting.
For a lot of single developers that's not a thing they're ready or able to do. Those that can, usually have companies established as a revenue source for their OSS project.
> In most cases they need to be able to issue a commercial invoice in a region compatible with company accounting.
The need for this invoice is because companies cannot justify irrational spending. They have no process for gift-giving. There is almost nothing that will make spending on OSS not irrational, unless you're paying for specific bugfixes or customization work. You can't issue an invoice for nothing. How much would the invoice be for?
edit: that being said, please continue to make up any pretense to get OSS contributors paid if that's working for anyone.
A lot of FOSS is developed by people who do it as part of their paid employment, that is what the GP is referring to, not Github sponsorship (which is tiny by comparison).
Question for tanepiper: what would you have Microsoft do to improve things here?
My read of your article is that you don't like postinstall scripts and npx.
I'm not convinced that removing those would have a particularly major impact on supply chain attacks. The nature of npm is that it distributes code that is then executed. Even without npx, an attacker could still release an updated package which, when executed as a dependency, steals environment variables or similar.
And in the meantime, discarding both would break the existing workflows of practically every JavaScript-using developer in the world!
You also mention code signing. I like that too, but do you think that would have a material impact on supply chain attacks given they start with compromised accounts?
The investment I most want to see around this topic is in sandboxing: I think it should be the default that all code runs in a robust sandbox unless there is a very convincing reason not to. That requires investment that goes beyond a single language and package management platform - it's something that needs to be available and trustworthy for multiple operating systems.
The biggest problem with npm is that it is too popular. Nothing else. Even if you "mitigate" some of the risks by removing features like postinstall, it barely does anything at all -- if you actually use the package in any way, the threat is still there. And most of what we see recently could happen to crates.io, pypi etc as well.
It is almost frustrating to see people who don't understand security talk about security. They think they have the best, smartest ideas. No, they don't, otherwise they would have been done a long time ago. Security is hard, really hard.
There are multiple security firms by now that constantly scan updated npm packages for malware. Obviously those companies can only do this after a new package has been published.
Npm could add this as an automated step during publishing.
Sure, there's a manual review needed for anything flagged, but you can easily fix this as well by having something like a trusted contributor program where, let's say, you'd need 5 votes to overrule a package being flagged as malware.
Oh I agree - it's far too late to make major changes. When they took over, they had the opportunity to drive a new roadmap towards a more secure solution.
2FA isn't a solution to security, it's a solution to hinder and dissuade low-effort hackers from compromising accounts - it's still subject to social engineering (like spearphishing).
I tend to agree with your broader point - sandboxing will be the way to go; I've been having that very discussion today. We're also now enforcing CI pipelines with pinned dependencies (which we do with our helm charts, but npm by default will install with ^ semver, and putting it on the developer to disable that isn't good enough). The problem of course is that sandboxing requires the OS vendors to agree on what is common.
This is a riff - not sure how possible this is, but it's not coming from nowhere; it's based on work I did 8 years back (https://github.com/takeoff-env/takeoff) - using a headless OS container image with a volume pointing to the source folder, running the install within the container (so far so good, this is any multi-stage docker build).
The key part would be to then copy the node_modules in the volume _data folder back to the host - this would likely require the OS vendors to provide timely images with each release of their OS to handle binary dependencies, so it is likely a non-starter for OSX.
I don't think pinning deps will help you much, as these incidents often affect transitive dependencies not listed in package.json. package-lock.json is there to protect against automatic upgrades.
I know there are some reports about the lockfile not always working as expected. Some of those reports are outdated info from like 2018 that is simply not true anymore; some of that is due to edge cases like somebody on the team having an outdated version of npm, or installing a package but not committing the changes to the lockfile right away. Whatever the reason, pinned version ranges wouldn't protect against that. Using npm ci instead of npm install would.
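A minimal sketch of that discipline with stock npm:

```sh
# CI: install exactly what package-lock.json records; fail if it drifts
npm ci

# Locally: record exact versions instead of ^ ranges when adding deps
npm config set save-exact true
```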
No, it doesn't solve it - but it might minimise the blast radius - there are so many unmaintained libraries of code that indeed one compromised minor patch on any dependency can become a risk.
That's sort of the thing - all of these measures are just patches on the fundamental problem that npm has just become too unsafe
I have come to use a multi-stage Docker build: one stage to install dependencies and build whatever it is, then a second, clean Docker image into which the dependencies are copied and run.
This helps with localized risk, and some production risk - but not all of it.
NPM packages have become a huge nuisance security-wise.
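A minimal sketch of the multi-stage pattern described above; the base image, build command, and output paths are assumptions, not a recommendation:

```dockerfile
# Stage 1: install and build, with dependency install scripts disabled
FROM node:22-slim AS build
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci --ignore-scripts
COPY . .
RUN npm run build

# Stage 2: clean runtime image that receives only the results
FROM node:22-slim
WORKDIR /app
COPY --from=build /app/dist ./dist
COPY --from=build /app/node_modules ./node_modules
CMD ["node", "dist/index.js"]
```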
Yes, also using multi-stage container - we output signed OCI to our repository and have Rekor and GitHub for SBOM and attestation.
Another huge pet peeve of mine is how hard it is to have a good container pipeline that builds containers without running as root - we tried some of the alternatives but they all had drawbacks - easiest is to just use GitHub Ubuntu images and hope for the best (although I recently saw some improvement in this area we want to investigate).
For a start, maintainers of dependencies with more than 1000 weekly downloads should be forced to use phishing-resistant 2FA like WebAuthn to authenticate updates (ideally hardware security keys, but not strictly necessary), or to sign the code using a registered PGP key (with a significant cooldown and warnings when enrolling new keys, e.g. 72h).
What about: if your password or 2FA changes, your tokens go on a 24-hour cooldown? I think the debug package maintainer even provided his 2FA code to the phishing site. Obviously this doesn't fix the case where they just exfiltrate and use tokens, but there's no fix that solves all of this; there need to be layers. I also think npm should be scanning package updates for malicious code and pumping the brakes on potentially harmful updates for large packages.
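For what it's worth, maintainers can already opt into the stricter mode themselves today; what's missing is registry-side enforcement:

```sh
# Require a second factor for login and for publishes/ownership changes
npm profile enable-2fa auth-and-writes
```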
Every day I feel more and more like Go mod's decision to use the lowest common version of a dependency rather than the highest was pure wisdom. Not only does it prevent code breaking at rest from poor semantic versioning, it's also served to prevent automatic inclusion of supply chain attacks.
npm as designed really aggressively likes to upgrade things, and the culture is basically to always blindly upgrade all dependencies as high as possible.
It's sold as being safer by patching vulnerabilities, but most "vulnerabilities" are very minor or niche, whereas a lot of risk is inherent in a shifting foundation.
Like it or not it's kind of a cultural problem. Recursively including thousands of dependencies, all largely updating with no review is a problem.
The thing I find particularly frightful, and distinctive from the other package managers I regularly use, is that there is zero guarantee that the code a library presents on GitHub has anything to do with its actual content on NPM. You can easily believe you've reviewed an item's code by looking at it on GitHub, but that can have absolutely zero relation to what was actually uploaded to npm. You have to actually review what's been uploaded to npm, as it's entirely disconnected.
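One partial mitigation is to review the published artifact itself; a sketch using standard npm commands (the package name is hypothetical):

```sh
# Download the exact tarball the registry serves, and list its contents
npm pack some-package@1.2.3
tar -tzf some-package-1.2.3.tgz

# Diff two published versions as served by the registry (not the repo)
npm diff --diff=some-package@1.2.2 --diff=some-package@1.2.3
```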
It's a stretch to pin blame on Microsoft. They're probably the reason the service is still up at all (TFA admits as much). In hindsight it's likely that all they wanted from the purchase was AI training material. At worst they're guilty of apathy, but that's no worse than the majority of npm ecosystem participants.
"In hindsight it's likely that all they wanted from the purchase was AI training material."
Microsoft already owned GitHub. I don't see how acquiring npm would make a meaningful difference with respect to training material, especially since npm was already an open package repository which anyone could download without first buying the company.
It’s NOT a stretch to blame Microsoft. How many billions have we spent chasing “AI”? These issues could have been easily solved if we had spent that kind of consideration on them. This has been going on well over a decade.
Microsoft isn’t any better steward than the original teams.
This issue has happened Plenty under Microsoft’s ownership.
Seriously? This is extremely low-hanging fruit that's not being taken care of. You shouldn't be able to take over a software dependency with a phishing email. Requiring simple PGP code signing or even just passkey authentication would eradicate that entire attack vector.
Future attacks would then require a level of access that's already synonymous with "game over" for all intents and purposes (e.g. physical access, malware, or an inside job). It's not bulletproof but it would be many orders of magnitude better than the current situation.
I would contend that they are no worse than the original teams, who also clearly didn't care. Their motivations may have been growth rather than AI training data, but the outcomes were the same.
TBF it does happen to other package managers, too. There were similar attacks on PyPI and Rubygems (and maybe others). However, since npm is the largest one and has the most packages released, updated, and downloaded, it became the primary target. Similar to how computer viruses used to target Windows first and foremost due to its popularity.
Also, smaller package managers tend to learn from these attacks on npm, and by the time the malware authors try to use similar types of attacks on them the registries already have mitigations in place.
Trusted Publishing is a marketing term—a fancy name for OIDC support and temporary auth token issuance. It delegates authenticating the uploader to their identity provider, nothing more.
In a very real sense, it shifts responsibility to someone else. For example, if the uploader was using Google as their identity provider and their Google account was popped, the attacker would be able to impersonate the uploader. So I wouldn’t describe it as establishing a strong trust relationship with the uploader.
This is funny but ultimately a mischaracterization of a popularity contest. Node culture is extreme–perhaps pathological–about using many dependencies to work around the limited standard library but the same kind of attacks happen everywhere people are releasing code. The underlying problem is that once you release something it takes only seconds before someone else can be running your code with full privileges to access their account.
That’s why the joke doesn’t really work: America is a huge outlier for gun violence because we lack structural protections. Australia doesn’t have fewer attacks in proportion to a smaller population, they have a lower rate of those attacks per-capita because they have put rules in place to be less of a soft target.
I think everything you're saying about the difference between school shootings and NPM supply chain attacks is correct, but at the same time "You made a joke about why A is like B, but here's why A and B are actually different, therefore the joke is not funny" is not persuasive. Comedy does not need to be rigorous, the person you're replying to is not arguing that supply chain attacks are like school shootings, therefore open source programmers should do active shooter drills. That would be fallacious reasoning.
It's literally just a joke. If it tickles your fancy, it works for you. If you get lost in the weeds of comparing the socio-political mechanisms of open source to guns, or note that supply chain attacks happen to other package managers, the joke won't work for you.
I assure you, it works just fine for me even though yes I think it would be ridiculous to claim there's anything more to the comparison than, "This thing keeps happening, nobody thinks doing anything about it is worth the bother, so look at that, it keeps happening."
I chuckled, too, but I’m a Python developer and it’s not like this doesn’t happen there either. If you want the shorter version: “laugh after you’ve hardened your update process”.
Among other things, the attack space for npm is just so much larger. We run a large C# codebase (1M+ LOC) and a somewhat smaller TypeScript codebase (~200K LOC). I did a check the other day, and we have one potentially vulnerable nuget dependency for every 10,000 lines of C# code, but one potentially vulnerable npm dependency for about every 115 lines of TS code.
Anyone familiar with The Onion knows this, The Onion themselves repost the exact same thing every time there's a school shooting. Which emphasizes how regularly it happens, and therefore in turn I have no objection to a joke like this becoming copy pasta every time an NPM supply chain attack takes place.
"The issue is actually lack of guns, the way way to prevent this is by having more guns" kind of doubling down.
The issue is the people that use npm and choose to have 48 layers of dependencies without paying for anything. Blaming Microsoft, a company which pays engineers and audits all of its code and dependencies, is a step in the wrong direction on the necessary self-reflection path for npm vulns.
It's a popularity issue; npm is an easy target. I don't see why it wouldn't happen to golang, for example. You just need to take over the git repo and it's over for all users upgrading, just like npm.
"go get" doesn't execute downloaded code automatically; there's no "postinstall" script (there can be a manual "go generate" or "go tool" the user may run)
Go doesn't upgrade existing dependencies automatically, even when adding a new dependency: you need an explicit "go get -u"
You don't use the same tool to both fetch and publish ("go get" vs "git push") so it's less likely a module publisher would get pwned while working on something unrelated.
The Go community tends not to "micropublish" so fewer people have or need commit rights to published go modules.
Go has a decent standard library so there are fewer "missing core functionality" third-party packages that world + dog depends on.
Npm is easier to pwn than Go, Maven, RubyGems, PyPI, CPAN, etc. because the design has more footguns and its community likes it that way
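A small illustration of the first two points (package and module paths are hypothetical):

```sh
# npm: a fresh install can execute install scripts immediately,
# and ^ ranges mean new versions flow in without an explicit choice
npm install some-package

# Go: fetching only downloads and checksum-verifies source; nothing executes,
# and existing deps never move versions without an explicit -u
go get example.com/some/module
go get -u example.com/some/module   # upgrade happens only when you ask
```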
postinstall is a liability for sure, but as soon as you execute untrusted code, it's the same no matter the language. Last week's npm pwn was working like this without a postinstall, and it could be the same with Go. Nothing prevents me from pushing code that would read all your files as soon as you load the library in your code.
I notice you didn't address the other 4 differences. All 5 are about "defence in depth", making things less likely - and conversely, not doing them makes pwning more likely.
I'll add a 6th difference: "go get" downloads source code, not maintainer-provided tarballs. You can't sneak extra things in there that aren't in the code repo.
When your only dependencies are Spring and Apache Commons, which require legal approval in your corporation to use, and when each update requires scrutiny, it's hard to get hit by any supply chain attacks, right?
I think the cooldown approach would make this type of attack have practically no impact anymore: if nobody ever updates to a newly published package version until, say, 2-3 days have gone by, surely there will be enough time for the owner of the package to notice he got pwned.
I've never heard of this. It sounds like a solid default to me. If you _really_ need an update you can override it, but it should remain the default and not allow opting out.
Here’s a one-liner for node devs on MacOS, pin your versions and manually update your supply chain until your tooling supports supply chain vetting, or at least some level of protection against instantly-updated malicious upstream packages.
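For instance, something in this spirit pins exact versions and disables install-time scripts (an illustrative sketch, not necessarily the exact one-liner meant here):

```sh
npm config set save-exact true && npm config set ignore-scripts true
```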
Would love to see some default-secure package management / repo options. Even a 24-hour delayed mirror would be better than what we have today.
The expected secure workflow should not require an elaborate bash incantation, it should be the workflow the tools naturally encourage you to use organically. "You're holding it wrong" cannot be possible.
As for accidentally installing a malicious package in your dev environment: the concern isn’t “what’s already installed”, it’s what’s potentially going to be installed in the future by you or your colleagues.
So, you pin the version and update periodically when security issues arise in your dependencies.
Anyone have a good solution to scan all code in our Github org for uses of the affected packages? Many of the methods we've tried have dead ended. Inability to reliably search branches is quite annoying here.
npm audit will tell you if there are any packages with known vulnerabilities.
https://docs.npmjs.com/cli/v11/commands/npm-audit
I'd imagine it's considerably slower than search, but hopefully more reliable.
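If you go the audit route, typical invocations look like this (the severity threshold is illustrative):

```sh
# Fail CI only on high-severity advisories
npm audit --audit-level=high

# Check registry signatures/attestations of installed packages
npm audit signatures
```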
Have you tried Dependency Track from OWASP? Generate an SBOM from each repo/project and post it via the API to DT, and you have a full overview. You have to hook it up so it is done automatically, because of course stuff will always move.
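A sketch of that pipeline, assuming the CycloneDX npm generator and Dependency-Track's standard BOM upload endpoint (the host, API key, and project UUID are placeholders):

```sh
# Generate a CycloneDX SBOM for the project
npx @cyclonedx/cyclonedx-npm --output-file bom.json

# Post it to Dependency-Track
curl -X POST "https://dtrack.example.com/api/v1/bom" \
  -H "X-Api-Key: $DT_API_KEY" \
  -F "project=$PROJECT_UUID" \
  -F "bom=@bom.json"
```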
Here's a short recap of what you can do right now, because changing the ecosystem will take years, even if "we" bother to try doing it.
1. Switch to pnpm: it's not only faster and more space-efficient, but it also disables post-install scripts by default. Very few packages actually need those to function; most use them for spam and analytics. When you install packages into the project for the first time, it tells you which post-install scripts were skipped, and tells you how to whitelist only those you need. In most projects I don't enable any, and everything works fine. The "worst" projects required allowing two scripts, out of a couple dozen or so. (A config sketch follows after this list.)
They also added this recently, which lets you introduce delays for new versions when updating packages. Combined with `pnpm audit`, I think it can replace the last suggestion of setting up a helper dependency bot, with zero reliance on additional services, commercial or not.
2. If you're on Linux, wrap your package managers in bubblewrap, a lightweight sandbox that blocks access to almost all of your system, including sensitive files like ~/.ssh, and prevents anything running under it from escalating privileges. It's used by flatpak and Steam. A fully working & slightly improved version was posted here previously; a trimmed-down sketch also follows after this list.
I posted the original here, but it was somewhat broken because some flags were sorted incorrectly (mea culpa). I still prefer using a separate cache directory instead of sharing the "global" ~/.cache because sensitive information might also end up there.
3. Set up renovate or any similar bot to introduce artificial delays into your supply chain, but also to fast-track fixes for publicly known vulnerabilities. This suggestion caused some unhappiness in the previous discussion for some reason - I really don't care which service you're using, this is not an ad; just set up something to track your dependencies, because you will forget. You can fully self-host it; I don't use their commercial offering - never have, don't plan to.
4. For those truly paranoid or working on very juicy targets, you can always stick your work into a virtual machine, keeping secrets out of there, maybe with one virtual machine per project.
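To make items 1 and 2 concrete: first, a pnpm-workspace.yaml sketch covering the script whitelist from item 1 and the release-delay setting mentioned in its follow-up (assuming pnpm's recently added minimumReleaseAge; values and package names are illustrative):

```yaml
# pnpm-workspace.yaml
# Only these packages may run their build/post-install scripts:
onlyBuiltDependencies:
  - esbuild
# Ignore versions published less than ~3 days ago (value is in minutes):
minimumReleaseAge: 4320
```

And a trimmed-down bubblewrap wrapper in the spirit of item 2 - a sketch only, assuming a system-wide Node under /usr; paths will need adjusting per distro, and the npm cache becomes ephemeral because $HOME is a tmpfs:

```sh
#!/bin/sh
# Run a package manager with no access to the real $HOME (no ~/.ssh, ~/.aws,
# tokens), only the current project directory writable, no privilege escalation.
exec bwrap \
  --ro-bind /usr /usr \
  --symlink usr/bin /bin --symlink usr/lib /lib --symlink usr/lib64 /lib64 \
  --ro-bind /etc/resolv.conf /etc/resolv.conf \
  --ro-bind /etc/ssl /etc/ssl \
  --proc /proc --dev /dev \
  --tmpfs "$HOME" \
  --bind "$PWD" "$PWD" --chdir "$PWD" \
  --unshare-all --share-net \
  --die-with-parent \
  "$@"
```

Usage: save as sandboxed.sh, then `./sandboxed.sh npm install` (or pnpm).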
You can also use tools like safe-chain, which connects to malware databases and blocks installation of malicious packages. In this case it would have blocked installs roughly 20 minutes after the malware was published, as that was how long it took for it to be added to the malware databases.
https://www.npmjs.com/package/@aikidosec/safe-chain
When we close and reopen VSCode (and some other IDEs), it updates the NPM packages for the installed plugins. Would these mitigation steps (e.g. pnpm) also take care of that?
Bubblewrap seems excellent for Linux uses - on macOS, it seems like sandbox-exec could do some (all?) of what bubblewrap does on Linux. There's no official documentation for SBPL, but there are examples, and I found sandboxtron[0] which was a helpful base for writing a policy to try to contain npm
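For reference, an invocation in that spirit - SBPL is undocumented, so treat this purely as an illustrative sketch (the paths are placeholders):

```sh
# Allow everything except reading common credential directories
sandbox-exec -p '(version 1)
  (allow default)
  (deny file-read* (subpath "/Users/me/.ssh")
                   (subpath "/Users/me/.aws"))' \
  npm install
```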
sandbox-exec is so frustrating. It could be a genuinely excellent solution to a whole bunch of sandboxing problems, except...
1. Documentation is virtually nonexistent. I think that is inexcusable for a security tool!
2. The man page says that it's deprecated, and has said so for around a decade. No news on when they will actually remove it; maybe they never will? Hard to recommend it with that axe hanging over it though.
Absolutely agreed on the lack of documentation, it seems completely insane (I assume this is because they want to reinforce that only Apple should be writing policies - but still no excuse for it)
>Hard to recommend it with that axe hanging over it though.
Given the alternative being no way to limit untrusted tooling at all today, it seems worthwhile using it despite these problems?
There's also a (very slim) chance that if it became central to the security of developers on macOS that Apple would give slightly more consideration to it
Yes definitely worth using it, but I don't know how much time I want to spend integrating it deeply into my own open source projects given its uncertain status.
Yeah I know what you mean... one positive is it looks like Google use it in Chromium[0], so at least Google think the API will stick around for a while (and provides a big platform Apple would break if they discontinued it)
It seems to me like one obvious improvement is for npm to require 2fa to submit packages. The fact that malware can just automatically publish packages without a human having to go through an MFA step is crazy.
My non-solution years ago was to use as few dependencies as possible, vendor node_modules, and then review every line of code changed when I update dependencies.
Not every project and team can do that. But when feasible, it's a strong mitigation layer.
What worked was splitting dependency diff review among the team so it's less of a burden. We pin exact versions and update judiciously.
And as long as you stay some versions behind bleeding edge, you can use time in your favor to catch supply chain attacks before they reach your codebase.
Or are you talking about an approach you utilized in some side projects rather than moderately sized commercial web applications? I don't imagine there are many out there that have fewer than hundreds of dependencies.
The NPM monoculture is the problem. It would be absurd to suggest that all backend engineers use the same build tooling and dependency library, but here we are with frontend. It's just too big of an attack surface.
It would be absurd to make such a suggestion. However, the comparison is not correct. Not all front-end development uses the same build tooling or dependency libraries, or programming language for that matter. Even if you narrow to the typescript ecosystem, it's still not true.
Here is an issue from 2013 where developers are asking to fix the package signing issue. Gone fully ignored because doing so was “too hard”: https://github.com/npm/npm/pull/4016
I think if somebody wants to see library distribution channels tightened up they need to be very specific about what they would like to see changed and why it would be better, since it would appear that the status quo is serving what people actually want - being able to create and upload packages and update them when you want.
> But right now there are still no signed dependencies and nothing stopping people using AI agents, or just plain old scripts, from creating thousands of junk or namesquatting repositories.
This is as close as we get in this particular piece. So what's the alternative here exactly - do we want uploaders to sign up with Microsoft accounts? Some sort of developer vetting process? A curated lib store? I'm sure everybody will be thrilled if Microsoft does that to the JS ecosystem. (/s) I'm not seeing a great deal of difference between having someone's NPM creds and having someone's signing key. Let's make things better but let's also be precise, please.
> But right now there are still no signed dependencies
Considering these attacks are stealing API tokens by running code on developer's machines; I don't see how signing helps, attackers will just steal the private keys and sign their malware with those.
postinstall is running on the developer's machine, from an endpoint security perspective, it's the actual developer performing the malicious actions, their machine, their IP address and their location.
We treat code repositories as public infrastructure, but we don't want to pay for it, so corporations run them with their profit interest in mind. This is the fundamental conflict that I see.
And one solution: more non-profits as the organisations behind them.
Funny how npm is the exact same model as maven, gopkg, cpan, pip, mix, cargo, and a million others.
But only npm started with a desire to monetize it (well, npm and Docker Hub), and in its desire for control didn't implement (or allow the community to implement) basic hygiene.
But, never mind. It's been 2 years since Jia Tan, and the number of such 'occurrences' in the npm ecosystem in the past 10 years is bordering on uncountable at this point.
And yet this hack got through? This amateurish and extremely obvious attempt? The injected function was literally named something like 'raidBitcoinEthWallet' or whatnot.
npm has clearly done absolutely nothing in this regard.
We haven't even gotten to the argument of '... but then hackers will simply use the automated tools themselves and only release stuff that doesn't get flagged'. There's nothing to talk about; npm has done nothing.
Which gets us to:
* Web of trust
This seems to me to be a near perfect way for the big companies that have earned so, so much money using the free FOSS they rely on, to contribute.
They spend the cash to hire a team that reviews FOSS stuff. Entire libraries, sure, but also updates. I think most of them _already do this_, and many will even openly publish issues they found. But they do not publish negative results (they do not publish 'our internal team had a quick look at update XYZ of project ABC and didn't see anything immediately suspicious').
They should start doing that. And repos like npm, maven, CPAN, etcetera should allow either the official maintainer of a library, or anybody, to attach 'assertions of no malicious intent'.
Imagine that npm hosts the following blob of text for NPM hosted projects in addition to the javascript artefacts:
> "I, google dev/security team, hereby vouch for this update in the senses: not-malicious. We have looked at it and did not see anything that we think is suspicious or worse. We make absolutely no promises whatsoever that this library is any good or that this update's changelog accurately represents the changes in it; we merely vouch for the fact that we do not think it was written with malicious intent. We explicitly disavow any legal or financial liability with this statement; we merely stake our good name. We have done this analysis on 2025-09-17 and did it for the artefact with SHA512 hash 1238498123abac. Signed, [public/private key infrastructure based signature, google.com].
And a general rule that google.com/.well-known/vouch-public-key or whatever contains the public key so anybody can check.
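Verifying a vouch could then be a couple of commands anywhere (all names and URLs are hypothetical, per the proposal above):

```sh
# Fetch the vouching org's published key and check the signed statement
curl -s https://google.com/.well-known/vouch-public-key -o google.pem
openssl dgst -sha512 -verify google.pem -signature vouch.sig vouch-statement.txt
```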
Aside from Jia Tan/xz (which always stops any attempt; Jia Tan/xz was so legendary, exactly how the fuck THIS still happens given that massive wakeup call boggles my mind!), every supply chain attack was pretty dang easy to spot; the problem was: Nobody was looking, and everybody configures their build scripts to pick up point updates immediately.
We should update these to 'pick them up after a certain 'vouch' score is reached'. Where everybody can mess with their scoring tables (don't trust google? reduce the value of their vouch to 0).
I think security-crucial 0day fixing updates will not be significantly hampered by this; generally big 0-days are big news and any update that comes out gets analysed to pieces. The vouch signatures would roll in within the hour after posting them.
> The tools we use to build software are not secure by default, and almost all of the time, the companies that provide them are not held to account for the security of their products.
The companies? More like the unpaid open source community volunteers who the Fortune 500 leech off contributing nothing in return except demands for free support, fixes and more features.
> More like the unpaid open source community volunteers who the Fortune 500 leech off contributing nothing in return except demands for free support, fixes and more features.
People who work on permissively licensed software are donating their time to these Fortune 500 companies. It hardly seems fair to call the companies leeches for accepting these freely given donations.
It's not just time. A lot of devs simply don't have the experience of dogging into third party sourcing code or understanding how one contributed to open source.
By "a lot of devs" do you mean devs at these companies?
If so I think this is a good point. It's easy to see from any one open source project's perspective how a little help would go a long way. But it's really hard to see from the perspective of a company with a massive code base how you could possibly contribute to the ten gajillion dependencies you use, even if you wanted to.
People will say things like "Why doesn't Foo company contribute when they have the resources?" But from what I've seen, the engineers at Foo would often love to contribute, but no team has the headcount to do it. And acquiring the headcount would require making a case to management that contributing to that open source project is worth the cost of devoting a team to it.
Author of the article here - holistically this isn't just about NPM dependencies, it's the entire stacks we work with. Cloud vendors provide security, but out of the box they don't provide secure platforms - a lot of this is left up to developers, without security experts - this is dangerous - I have 25 years of experience and I wouldn't want to touch the depths of RBAC.
SaaS products don't enforce good security - I've seen some internally that don't have MFA or EntraID integration because they simply don't have those as features (mostly legacy systems these days, but they still exist).
I'm also an open-source author (I have the most used bit.ly library on npm - and have had demands and requests too), and I'm the only person you can publicly see on our [company github](https://github.com/ikea) - there's reasons for this - but not every company is leeching, rather there is simply no other alternative.
> Cloud vendors provide security, but out of the box they don't provide secure platforms - a lot of this is left up to developers, without security experts -
A lot of the spread of Shai-Hulud is due to s having overly broad credentials on NPM, GitHub and elsewhere. It's not that NPM doesn't support scoped credentials, it's that developers don't want to deal with it so it's not the default. There's no reason why, for example, a developer needs a live credential to publish their package when they're just hacking on code.
This is related to the `curl | bash` pattern. Projects like NPM want to make it easy to get started and hard to reach a failure case so they sacrifice well-known security practices during the growth phase.
I mean quite often access based errors are very opaque, I mean it is for good reason, but when you're new to something it's one of those things that leads you to give up. You want to write code, not spend 3 hours figuring out why your token doesn't work.
Security will get hacked on later, but again that causes all kinds of problems, because the ecosystem wasn't built for it.
> quite often access based errors are very opaque
Yes they are, and it's hard to design good scopes especially when the project is new.
A better default might just be to have the write permission expire much more quickly than the read permission. E.g. the write token might be valid for an hour and the read token might be valid for 90 days.
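A rough sketch of what that split could look like with npm's own token tooling - the read-only flag exists today, while a TTL flag on the CLI is hypothetical (expiry is currently only a feature of granular tokens configured on the website):

    # real: a token that can install but never publish
    npm token create --read-only
    # hypothetical flag, sketching the proposed default: writes expire fast
    npm token create --ttl 1h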
That's interesting. I take issue with companies that claim a level of security that doesn't match what they ship, but I never expect them to tell me how to do my job well.
I expect a company to put their current product in as good a light as they can. They're going to over-promise what it can do and show me the easiest "Getting Started" steps they can. It's up to me to dig deeper and understand what they actually do and what the right solution is for my project.
> a lot of this is left up to developers, without security experts - this is dangerous
Although I see where you are coming from, dismissing unaudited libs as dangerous is slightly missing the point. In fact, the world is a safer place for their existence - the value lost to security exploits is insignificant compared to the value protected by the existence of the libs they exploit. Also, I suspect that you could replace "value" with "lives" in the previous sentence.
I remember joining my company right out of college. In the interview we started talking about open source since I had some open source Android apps. I asked if the company contributed back to the projects it used. The answer was no, but that they were planning to. Over a decade later... they finally created a policy to allow commits to open source projects. It's been used maybe 3 times in its first year or so. Nobody has the time and the management culture doesn't want to waste budget on it.
> Nobody has the time
I'd erase that part entirely, as it is not true from my point of view. My day, like every other person's, has exactly 24 hours. As an employee, part of that time is dedicated to my employer. In return, I receive financial compensation. It's up to them to decide how they want to spend the resources they acquired. So yes, each and every company could, in theory, contribute back to Open Source.
But as there is no price tag attached to Open Source, there is also no incentive. In a highly capitalized world, where shareholder value is worth more than anything else, there are only a few companies that make the right call and act responsibly.
If your finite time at work is filled with business work, then there is no time left to do the open-source work. Seems true to me from an IC and delivery perspective. Company staffing and resource allocation could create the time to do it, but they don't.
> In a highly capitalized world, where shareholder value is worth more than anything else, there are only a few companies that make the right call and act responsibly.
It is not just that. In a well-functioning theoretical free market, no one is going to have time either. The margins are supposed to end up being tight, and the competition is supposed to weed out economic inefficiency. Voluntary pro-social behavior is a competitive disadvantage and an economic inefficiency. So, by design, the companies end up not "having time for that".
You need a world that allows for inefficiency and rewards pro-social behavior. That is not the world we are living in currently.
Working an honest job is pro-social behavior, and it is rewarded. So is quitting your job to work on a side project that ends up being valuable enough for others to pay for. It's just that giving code away for free operates outside that reward structure.
That's such a self-harmful policy. I have a small business and I've been really supportive of both open source and small, paid-for commercial libraries and building blocks that I rely on. I've also advocated this successfully at clients I've consulted with. We do a lot of technical vetting before adopting any particular dependency (vs. building our own), and it just makes sense that we strive to foster the continued existence and excellence of our tools. Considering the incredible value companies get from open source, I have trouble understanding why they wouldn't throw some cash or idle cycles their way. Seemed to work out for the likes of Google while they were undergoing rapid growth.
That's fine. There's no requirement to "contribute back". Respect the license terms and don't go demanding anything unless you have a support contract and don't expect that you can get a support contract. It's fine to just use something as long as you also don't harass the maintainer as if they owed you something.
The annoying part is that often without a corporate policy for contributing, you are doing the work anyway because you need XYZ from the software, just that it lives in a private fork that will never get upstreamed as a result of policy.
Most developers don’t work for software companies. So when you are not shipping software as a product, you and your department are usually a liability. This is important to understand, because it helps you frame your approach to upper management as a developer, or to the C-suite as a director of engineering, when it comes to talking about budgets. In my experience, most non-tech corporations will be OK with allocating budget for open source projects; they already do it in other types of non-profit domains. But you need to make a case that goes beyond the ethical reasons or personal motivations.
Technology is insecure all the way down to the hardware. The structural cause of this is that companies aren’t held liable for insecure products, which are cheaper to build.
So companies’ profit motives contribute to this mess not just through the exploitation of open source labor (as you describe) but through externalizing security costs as well.
Isn’t all this stuff with the Secure Enclave supposed to address these kinds of things?
It’s my take that over the past decade or so a lot of these companies have been making things a lot better; Windows even requires Secure Boot these days.
They’re not the same problems. The Secure Enclave protects things like your biometrics, hardware-backed keys (e.g. on a Mac, WebAuthn and iCloud Keychain), and the integrity of the operating system but not every bit of code running as your account. That means that an NPM install can’t compromise your OS to the point that you can’t recover control, but it means the attacker can get everything you haven’t protected using sandbox features.
That’s the path out of this mess: not just trying to catch it on NPM but moving sensitive data into OS-enforced sandboxes (e.g. Mac containers) so every process you start can’t just read a file and get keys, and using sandboxing features in package managers themselves to restrict when new installs can run code and what they can do (e.g. changing the granularity from “can read any file accessible to the user” to “can read a configuration file at this location and data files selected by the user”), and tracking capability changes (“the leftpad update says it needs ~/.aws in this update?”).
We need to do that long-term, but it’s a ton of work, since it breaks the general model of how programs work that we’ve used for most of the last century.
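To make the capability idea concrete: a purely hypothetical "capabilities" field (nothing like it exists in npm today; package name and versions are placeholders), plus the closest real tool for spotting changes between published versions:

    # hypothetical manifest entry a package could declare:
    #   "capabilities": {
    #     "fs:read": ["./config.json"],   # one fixed path, nothing else
    #     "net": []                       # no network access at all
    #   }
    # a client could refuse updates whose capabilities grow; today the
    # nearest thing is manually diffing the published artifacts:
    npm diff --diff=leftpad@1.0.0 --diff=leftpad@1.0.1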
it's not clear that the solution to this problem is to create several additional layers of barn doors.
Not really, those technologies are basically designed to be able to enforce DRM remotely.
Secure Enclave = store the encryption keys to media in a place where you can't get them
Secure Boot = first step towards remote attestation so they can remotely verify you haven't modified your system to bypass the above
Advertising rules the world.
How is that different?
Is there such a thing as secure hardware that can prevent supply chain attacks (by enabling higher layers to guarantee security) and secure hardware that prevents copying data (by enabling higher layers to guarantee security)?
Sure. Malware tends not to have physical hands that can touch the machine and any buttons attached to it. Physical ownership should be true ownership, but they're taking that away from you.
I find this perspective harmful to OSS as a whole. It is completely fine to release free software that other companies can use without restrictions, if you desire to do so. It is not meant to be a transaction. You share some, you take some.
It’s also ok to release paid free software, or closed software, restrictive licenses, commercial licenses, and sell support contracts. It’s a choice.
Just because you can do something doesn’t mean you should.
There’s also a lot of pressure for devs not to use licenses that restrict use by large companies. Try adding something to your license that says companies making over $10 million per year in revenue have to pay, and half of the comments on Show HN will be open source warriors either asking why you didn’t use a standard license or telling you that this isn’t open source and you have brought dishonor to your family.
> Just because you can do something doesn’t mean you should.
This implies some kind of fairness/moral contract in a license like MIT. There is none. It’s the closest thing to donating code to the public domain, and entirely voluntary.
There are plenty of standard licenses with similar clauses restricting commercial use, no need to create a custom one.
But indeed, the truth is that a restrictive license will massively reduce the project’s audience. And that is a perfectly fine choice to make.
> This implies some kind of fairness/moral contract in a license like MIT.
The license tells you what you are legally allowed to do. It doesn’t supersede basic concepts of fairness.
The average person would say that if you directly make millions off someone else’s work, the fair thing to do is to pay that person back in some way.
Calling someone a leech is just saying that they aren’t following the accuser’s model of fairness. That’s all. There’s no legal definition.
We say things like “my company screwed me over when they fired me right before my RSUs vested” despite that being perfectly legal.
> someone else’s work
It is not “their” work anymore (IP rights discussions aside) once they’ve published it under an unrestricted license. That’s the point. You do it expecting nothing in return, and do it willingly. Expecting “fairness” is a misunderstanding of the whole spirit of it.
Semantic games with “their work”. An artist who sells a painting can still call it their work, even if someone else owns it. And I suppose the collector who bought it could also call it their work, though that phrasing isn’t usually used.
It comes about because “work” is overloaded to mean both the activity of creating and the product/result of that activity.
> expecting nothing in return
Let’s ignore that no one contributes to open source expecting nothing in return.
I can help someone out expecting nothing in return. Then my situation changes and I need help, but they look at me and say, “Sorry, your help was a gift, so I’m not going to return the favor even though I can.” That person is a dick.
The problem is you are taking the act of applying a permissive license as some kind of ceremony that severs open source software from all normal human ideas of fairness. You may view it that way. Most people don’t.
It’s perfectly reasonable to put something out in the world for other people to enjoy and use. And yet still think that if someone makes a billion dollars off it and doesn’t return anything, they are displaying bad manners.
> I can help someone out expecting nothing in return. Then if my situation changes
It sounds like you did expect something in return, conditional on your circumstances. Maybe it's good-will or something, but some kind of social insurance in any case.
This is partly getting into questions about whether “pure” altruism is even possible, e.g., is an anonymous donation truly selfless if you do it because it makes you feel good.
But in the example above it’s entirely possible that you helped someone out with no expectation of being paid back. Let’s say you’re rich and the person you helped is a chronic drug addict. You have no expectation of ever needing help and no expectation that the person you helped will ever be in a position to help you.
Let’s say I give a homeless person a dollar. He turns around and uses that dollar to buy a lottery ticket and wins 100 million dollars. Years later, I’m homeless and the former homeless guy walks past me and gives me a lecture about how I should have put conditions on my donation.
In that situation there was no reasonable expectation of anything except, as you said, maybe good will. But of course open source developers also expect good will.
Sidestep this debate with one trick - use the GPLv3. No company large enough to have a legal team will be able to use it, you're still squarely within the various definitions, and the FSF basically has to approve.
As a bonus maybe you can get some proprietary software open sourced too.
Is there a real reason not to use AGPL? The fact that it makes Google very uncomfortable[1] is a great selling point to me.
[1]: https://opensource.google/documentation/reference/using/agpl...
For the purposes of me being facetious, it's less infectious than v3. But yeah, it would have the same impact on large corps, I think.
> telling you that this isn’t open source
Are you talking about promoting some software as open source when it's in fact not? Because yes, there's something wrong with that, you shouldn't do it, and people will rightfully react loudly if you try.
People don't complain about proprietary software honestly communicated as that.
This is exactly the kind of thing I’m talking about. Open source has mostly been captured by large corporations because purists refuse to recognize the gradient between proprietary and completely free.
If I license my software as MIT but with an exception that you can’t use it for commercial purposes if you make more than $100 million a year in revenue, that’s a lot closer to open source than proprietary.
We should be normalizing licenses that place restrictions on large corporations.
I think the world would be a much better place if we just changed the definition of open source to include such licenses. We don’t even really need to change the definition because normal everyday use of the term would already include them.
In the case of npm though it is run by a very wealthy company: Microsoft.
But also, most OSS software is provided without warranty. Commercial companies should either be held accountable for ensuring the open source components they use are secure, or should pay someone (either the maintainer directly, or a third-party distributor) to verify the security of the component.
Npm is owned by GitHub, which is owned by Microsoft. They could have put more tooling into making npm better. For example, pnpm requires you to "approve-builds" so that it's only running scripts from dependencies you decide on, and Deno has a bunch of security capabilities to restrict what scripts can and can't do. There are always going to be supply chain attacks, and the biggest package repositories are going to be hit the most. But that doesn't mean that Microsoft couldn't have spent more on building better tooling with better security settings on by default.
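To make those two concrete (pnpm's script gating is the feature named above; Deno's permission flags are real, with placeholder paths and hosts):

    # pnpm skips dependency lifecycle scripts by default and makes you opt in:
    pnpm install          # postinstall scripts are not run
    pnpm approve-builds   # interactively whitelist the few that need them
    # Deno flips the default at runtime: nothing is allowed unless granted
    deno run --allow-read=. --allow-net=api.example.com main.ts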
20 of the packages were from CrowdStrike.
Well? If you license software the way most FOSS products are licensed, that's a natural result. It is literally putting up a sign saying "free beer."
You can't give permission for them to use the stuff for free and then accuse them of "leeching." If the expectation is contribution in kind, that needs to be in the license agreement.
Consider how many JavaScript developers are completely unemployable without that free software. It might be greater than 95%. That’s why business needs this stuff, because otherwise they might actually have to invest in training.
> Consider how many JavaScript developers are completely unemployable without that free software.
Can you say more about this?
How many JavaScript developers in the workforce can write original applications without some colossal framework and an army of NPM packages? In 15 years of doing that work I’ve found those people do not exist, at least statistically, and hiring does not encourage their selection.
Most people doing this work, both in person and online, are extremely sensitive about this. It’s a hard reality to accept that if this free software went away, most people doing the work wouldn’t be able to justify their income in any significant way to their employer.
I think that ultimately it’s the fault of the web platform.
With just a bit of retraining those engineers that could not be productive without a ton of npm packages could ship an iPhone app written in Swift.
JS’ standard library is abysmal.
This sounds like blaming the victim. How do you on one hand call these people engineers, as if they are engineering something, and then on the other hand blame everything else for their inability to perform? That is weird.
It’s just a software platform. Would you really blame society for being too harsh if doctors, lawyers, police, or teachers could not do their jobs? It is weird to see so many people blame the web platform for hostility when it’s so much less challenging than it used to be.
The most common cause of these frustrations I encountered while working in JavaScript is that developers are educated in something that looks like A, but JavaScript is not A, there is no training for JavaScript/Web, so therefore JavaScript/Web is hostile. As a self-taught developer that never made sense to me.
I dunno. Would you blame doctors if they were unable to perform in a single hospital and had a verifiably good track record anywhere else?
I mean, you're right about that - but how many construction workers could build a house without having access to pre-cut lumber, pre-sharpened tools, nailguns, power equipment, pre-cast nails, etc., etc?
My neighbor works construction and my son did for a while. They were working on the new Texas Instruments silicon fab. The people that do the actual work with their hands are expected to do just about everything. We are talking about advanced metal work in a place with liquid nitrogen and harmful chemical agents.
The actual engineers just walk around to validate the work conforms to the written plans. That is why these large engineering companies prefer to hire only from labor unions in a location that is extremely anti-union, because the union has the social resources to validate the candidates in ways the employer does not.
Even in that environment there are more experienced people who are 10x producers.
It boils down to him feeling superior to web developers, who are far beneath him and couldn't possibly program with other tools.
Actually, it’s the opposite. When you are no longer compatible with the workforce, because you don’t want to waste all your time on the same basic literacy things over and over, you start to feel extremely inferior when you cannot get a job.
But the fact that concerns about superiority come up so repeatedly just serves to illustrate how much insecurity is baked into the workforce. Confident people don’t worry about high confidence in other people.
Per a survey I read, the majority of open source is created by people who are paid for it. The unpaid volunteer working full time on something is effectively a myth.
I’ve contributed a huge amount of open source code over my career - almost all of it entirely unpaid. I don’t know the statistics, but I know many other people who have done the same.
I think there are a lot of high-profile open source projects which are either run by corpos (like React) or have a lot of full-time employees submitting code (Linux). But there’s an insanely long tail of open source projects on npm, cargo, homebrew etc which are created by volunteers. Or by people scraping by on the occasional donation.
npm has been a company for years now. It was initially created as a one-person volunteer project, then they created a company 10 years ago and eventually sold it to GitHub, which itself had been sold to Microsoft. It has spent more time being developed as a paid thing than by unpaid volunteers doing it on the side.
I'm not talking about npm. I'm talking about the 3.1 million libraries hosted on npm. And the ~150k libraries available in Rust's cargo, 187k ruby gems, 667k pip packages, and so on. For every React ("brought to you by facebook") there are thousands of tiny projects made for free by volunteers.
There are some mammoth projects where that's true, but the FOSS ecosystem has a very long tail where quite important and powerful libraries are maintained by individuals in their free time.
"unpaid volunteer working full time" also doesn't sound like something that someone would believe. Full time and unpaid rarely go together.
I don’t think that is correct. The VS Code developers and the TypeScript team are paid by MS. The core of React is paid for by Meta, or was. The Java language is paid for by Oracle, as are the LiberaSuite and MySQL.
Most of the Linux Foundation projects, which include Node, are volunteer-run. Most of the Apache Foundation software is from volunteers. Most NPM packages are from volunteers. OpenSSL is volunteers.
There is also a big difference between the developers who are employees on salary versus those that receive enough donations to work in open source full time.
Most of those "volunteers" are also developing those projects as part of their paid job, though, as a form of company contribution back to OSS.
> Linux foundation projects, which includes Node are volunteers.
The survey found that specifically linux code is dominated by people who are paid for it.
> Most of the Apache foundation software is from volunteers.
Large Apache projects specifically are backed by companies per Apache rules. Each project must have at least three active backing companies. They contribute most of the code.
> The survey found that specifically linux code is dominated by people who are paid for it.
Yes the kernel code, but the Linux Foundation projects (mentioned in the comment you quote) are MUCH more than the kernel.
See the list on https://www.linuxfoundation.org/projects
It depends on the domain. There are a lot of critical utilities in the systems space maintained by volunteers. The “xz” compression library was one recent infamous example, where an exhausted volunteer maintainer was socially engineered into a supply chain attack that briefly compromised OpenSSH.
Not a lot of applications are maintained by altruists, but look under the hood in Linux/GNU/BSD and you will find a lot of volunteers motivated by something other than money.
It briefly compromised the custom patched Debian version of OpenSSH. The issue had nothing to do with OpenSSH itself.
Yes, but even in those domains those projects are minorities, and in many cases they make it effectively impossible for corporations to legally fund or contribute to them.
Why is it legally impossible to fund or contribute? Do they turn down contributions from paid developers? Do they refuse donations or just have no mechanism for accepting them? Do they not have any form of commercial services or license?
I think there are very few projects that do not accept support in any form.
In most cases they need to be able to issue a commercial invoice in a region compatible with company accounting.
For a lot of single developers that's not a thing they're ready or able to do. Those that can, usually have companies established as a revenue source for their OSS project.
> In most cases they need to be able to issue a commercial invoice in a region compatible with company accounting.
The need for this invoice is because companies cannot justify irrational spending. They have no process for gift-giving. There is almost nothing that will make spending on OSS not irrational, unless you're paying for specific bugfixes or customization work. You can't issue an invoice for nothing. How much would the invoice be for?
edit: that being said, please continue to make up any pretense to get OSS contributors paid if that's working for anyone.
Yeah I’m not buying it. If the corporations wanted to, they would.
I'd be keen to see that survey given how many projects I see with so few GitHub sponsors that I can't see how you'd derive a full time wage.
A lot of FOSS is developed by people who do it as part of their paid employment, that is what the GP is referring to, not Github sponsorship (which is tiny by comparison).
Which survey?
Post the survey please, that's an extraordinary claim
Fool me once, shame on you. Fool me repeatedly again and again, then?
This.
Question for tanepiper: what would you have Microsoft do to improve things here?
My read of your article is that you don't like postinstall scripts and npx.
I'm not convinced that removing those would have a particularly major impact on supply chain attacks. The nature of npm is that it distributes code that is then executed. Even without npx, an attacker could still release an updated package which, when executed as a dependency, steals environment variables or similar.
And in the meantime, discarding both would break the existing workflows of practically every JavaScript-using developer in the world!
You mention 2FA. npm requires that for maintainers of the top 100 packages (since 2022), would you like to see that policy increased to the top 1,000/10,000/everyone? https://github.blog/security/supply-chain-security/top-100-n...
You also mention code signing. I like that too, but do you think that would have a material impact on supply chain attacks given they start with compromised accounts?
The investment I most want to see around this topic is in sandboxing: I think it should be the default that all code runs in a robust sandbox unless there is a very convincing reason not to. That requires investment that goes beyond a single language and package management platform - it's something that needs to be available and trustworthy for multiple operating systems.
Exactly.
The biggest problem with npm is that it is too popular. Nothing else. Even if you "mitigate" some of the risks by removing features like postinstall, it barely does anything at all -- if you actually use the package in any way, the threat is still there. And most of what we see recently could happen to crates.io, pypi etc as well.
It is almost frustrating to see people who don't understand security talk about security. They think they have the best, smartest ideas. No, they don't, otherwise they would have been done a long time ago. Security is hard, really hard.
There are multiple security firms by now that constantly scan updated npm packages for malware. Obviously those companies can only do this after a new package has been published.
npm could add this as an automated step during publishing. Sure, there's a manual review needed for anything flagged, but you can easily fix this as well by having something like a trusted-contributor program where, let's say, you'd need 5 votes to overrule a package being flagged as malware.
Oh I agree - it's far too late to make major changes. When they took over, they had the opportunity to drive a new roadmap towards a more secure solution.
2FA isn't a solution to security; it's a measure to hinder and dissuade low-effort hackers from compromising accounts - it's still subject to social engineering (like spearphishing).
I tend to agree with your broader point - sandboxing will be the way to go; I've been having that very discussion today. We're also now enforcing CI pipelines with pinned dependencies (which we already do with our helm charts - but npm by default will install with ^ semver, and putting it on the developer to disable that isn't good enough). The problem, of course, is that sandboxing requires the OS vendors to agree on what is common.
This is a riff - not sure how possible this is, but it's not coming from nowhere, it's based on work I did 8 years back (https://github.com/takeoff-env/takeoff) - using a headless OS container image with a volume pointing to the source folder, run the install within the container (so far so good, this is any multi-stage docker build)
The key part would be to then copy the node_modules in the volume _data folder back to the host - this would likely require the OS vendors to provide timely images with each release of their OS to handle binary dependencies, so it is likely a non-starter for OSX.
I don't think pinning deps will help you much, as these incidents often affect transitive dependencies not listed in package.json. package-lock.json is there to protect against automatic upgrades.
I know there are some reports about the lockfile not always working as expected. Some of those reports are outdated info from around 2018 that is simply not true anymore; some of it is due to edge cases, like somebody on the team having an outdated version of npm, or installing a package but not committing the changes to the lockfile right away. Whatever the reason, pinned version ranges wouldn't protect against that. Using npm ci instead of npm install would.
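In other words, the standard-npm version of that advice looks like this:

    # in CI: install exactly what the committed lockfile says, and fail
    # if package.json and package-lock.json have drifted apart
    npm ci
    # locally: record exact versions instead of ^ ranges when adding deps
    npm config set save-exact true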
No, it doesn't solve it - but it might minimise the blast radius - there are so many unmaintained libraries of code that indeed one compromised minor patch on any dependency can become a risk.
That's sort of the thing - all of these measures are just patches on the fundamental problem that npm has just become too unsafe
I have come to use a multi-stage Docker build: one stage to install dependencies and build whatever it is, then possibly a second, clean image that the dependencies are copied into and run from.
This helps with localized risk, and some production risk - but not all of it.
NPM packages have become a huge nuisance security-wise.
Yes, we're also using multi-stage containers - we output signed OCI images to our repository and have Rekor and GitHub for SBOM and attestation.
Another huge pet peeve of mine is how hard it is to have a good pipeline for building containers without running as root - we tried some of the alternatives but they all had drawbacks. Easiest is to just use the GitHub Ubuntu images and hope for the best (although I recently saw some improvement in this area that we want to investigate).
For a start, maintainers of dependencies with more than 1000 weekly downloads should be forced to use phishing-resistant 2FA like WebAuthn to authenticate updates (ideally hardware security keys, but not strictly necessary), or sign the code using a registered PGP key (with a significant cooldown and warnings when enrolling new keys, e.g. 72h).
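Some of this can be opted into today; for example, npm already lets an account require a second factor on every publish, though it still accepts phishable TOTP codes rather than mandating WebAuthn-class factors:

    # real npm command: require 2FA for login and for writes/publishes
    npm profile enable-2fa auth-and-writes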
What about: if your password or 2FA changes, your tokens go on a 24-hour cooldown? I think the debug package maintainer even provided his 2FA code to the phishing site. Obviously that doesn't fix the case where they just exfiltrate and use tokens, but there's no fix that solves all of this; there need to be layers. I also think npm should be scanning package updates for malicious code and pumping the brakes on potentially harmful updates for large packages.
I made a list a few years back: https://news.ycombinator.com/item?id=29266992
At the time, I was focusing more on the approach of reducing the number of people you have to trust when you depend on a particular package.
Code signing would help with stolen authentication tokens.
Every day I feel more and more like Go mod's decision to use the minimum version of a dependency that satisfies the requirements, rather than the newest available, was pure wisdom. Not only does it prevent code breaking at rest from poor semantic versioning, it's also served to prevent automatic inclusion of supply chain attacks.
npm as designed really aggressively likes to upgrade things, and the culture is basically to always blindly upgrade all dependencies as high as possible.
It's sold as being safer by patching vulnerabilities, but most "vulnerabilities" are very minor or niche, whereas a lot of risk is inherent in a shifting foundation.
Like it or not it's kind of a cultural problem. Recursively including thousands of dependencies, all largely updating with no review is a problem.
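The contrast in one place (the Go commands are real; the point is that in Go, moving to a newer version is always an explicit act):

    # Go resolves to the minimum version that satisfies all requirements;
    # nothing moves until you ask:
    go list -m all    # show the versions the build will actually use
    go get -u ./...   # explicit, opt-in upgrade
    # npm records "^1.2.3" by default, so a fresh install without a
    # lockfile can silently pull whatever the newest 1.x is that day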
The thing I find particularly frightful, and distinctive from the other package managers I regularly use, is that there is zero guarantee that the code a library presents on GitHub has anything to do with its actual content on npm. You can easily believe you've reviewed an item's code by looking at it on GitHub, but that can have absolutely zero relation to what was actually uploaded to npm. You have to actually review what's been uploaded to npm, as it's entirely disconnected.
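Hence the only trustworthy review is of the published artifact itself, e.g. (standard npm commands, with `left-pad` as a stand-in):

    npm view left-pad dist.tarball   # where the published artifact lives
    npm pack left-pad                # download that exact tarball
    tar -tzf left-pad-*.tgz          # list what was actually published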
It's a stretch to pin blame on Microsoft. They're probably the reason the service is still up at all (TFA admits as much). In hindsight it's likely that all they wanted from the purchase was AI training material. At worst they're guilty of apathy, but that's no worse than the majority of npm ecosystem participants.
"In hindsight it's likely that all they wanted from the purchase was AI training material."
Microsoft already owned GitHub. I don't see how acquiring npm would make a meaningful difference with respect to training material, especially since npm was already an open package repository which anyone could download without first buying the company.
It’s NOT a stretch to blame Microsoft. How many billions have we spent chasing “AI”? These issues could have been easily solved if we had spent that kind of consideration on them. This has been going on for well over a decade.
Microsoft isn’t any better a steward than the original teams.
This issue has happened plenty under Microsoft’s ownership.
Yeah, easily solved.
Would love to hear your genius solutions right here that Microsoft is too dumb to come up with and implement.
Seriously? This is extremely low-hanging fruit that's not being taken care of. You shouldn't be able to take over a software dependency with a phishing email. Requiring simple PGP code signing or even just passkey authentication would eradicate that entire attack vector.
Future attacks would then require a level of access that's already synonymous with "game over" for all intents and purposes (e.g. physical access, malware, or an inside job). It's not bulletproof, but it would be many orders of magnitude better than the current situation.
I would contend that they are no worse than the original teams, who also clearly didn't care. Their motivations may have been growth rather than AI training data, but the outcomes were the same.
> It's a stretch to pin blame on Microsoft. They're probably the reason the service is still up at all.
I reckon that the ecosystem would have been much healthier if NPM had not been kept running without the care it requires.
I did wonder about that. Maybe yeah. It's likely that several no-better forks would have sprung up right away.
By removing the ellipsis in the submission title, the sentiment reads more like "not another meditation" instead of the intended "oh no!: a meditation".
"No Way To Prevent This" Says Only Package Manager Where This Regularly Happens
TBF it does happen to other package managers, too. There were similar attacks on PyPI and RubyGems (and maybe others). However, since npm is the largest one and has the most packages released, updated, and downloaded, it became the primary target. Similar to how computer viruses used to target Windows first and foremost due to its popularity.
Also, smaller package managers tend to learn from these attacks on npm, and by the time the malware authors try to use similar types of attacks on them the registries already have mitigations in place.
PyPI is working towards attestation [0], and already has "Trusted Publisher" [1].
Ruby has had signed gems since v2 [2].
These aren't a panacea. But they do mean an effort has been made.
npm has been talking about maybe doing something since 2013 [3], but ended up doing... Nothing. [4]
I don't think it's fair to compare npm to the others.
[0] https://docs.pypi.org/attestations/producing-attestations/
[1] https://docs.pypi.org/trusted-publishers/
[2] https://docs.ruby-lang.org/en/master/Gem/Security.html
[3] https://github.com/npm/npm/pull/4016
[4] https://github.com/node-forward/discussions/issues/29
NPM has both Trusted Publishing and provenance claims for where packages are built.
https://docs.npmjs.com/trusted-publishers
https://docs.npmjs.com/generating-provenance-statements
Trusted Publishing is relatively new - GA-ed in July https://github.blog/changelog/2025-07-31-npm-trusted-publish...
Trusted Publishing is a marketing term—a fancy name for OIDC support and temporary auth token issuance. It delegates authenticating the uploader to their identity provider, nothing more.
In a very real sense, it shifts responsibility to someone else. For example, if the uploader was using Google as their identity provider and their Google account was popped, the attacker would be able to impersonate the uploader. So I wouldn’t describe it as establishing a strong trust relationship with the uploader.
This is funny but ultimately a mischaracterization of a popularity contest. Node culture is extreme–perhaps pathological–about using many dependencies to work around the limited standard library but the same kind of attacks happen everywhere people are releasing code. The underlying problem is that once you release something it takes only seconds before someone else can be running your code with full privileges to access their account.
That’s why the joke doesn’t really work: America is a huge outlier for gun violence because we lack structural protections. Australia doesn’t have fewer attacks in proportion to a smaller population, they have a lower rate of those attacks per-capita because they have put rules in place to be less of a soft target.
I think everything you're saying about the difference between school shootings and NPM supply chain attacks is correct, but at the same time "You made a joke about why A is like B, but here's why A and B are actually different, therefore the joke is not funny" is not persuasive. Comedy does not need to be rigorous, the person you're replying to is not arguing that supply chain attacks are like school shootings, therefore open source programmers should do active shooter drills. That would be fallacious reasoning.
It's literally just a joke. If it tickles your fancy, it works for you. If you get lost in the weeds of comparing the socio-political mechanisms of open source to guns, or note that supply chain attacks happen to other package managers, the joke won't work for you.
I assure you, it works just fine for me even though yes I think it would be ridiculous to claim there's anything more to the comparison than, "This thing keeps happening, nobody thinks doing anything about it is worth the bother, so look at that, it keeps happening."
I chuckled, too, but I’m a Python developer and it’s not like this doesn’t happen there either. If you want the shorter version: “laugh after you’ve hardened your update process”.
Among other things, the attack space for npm is just so much larger. We run a large C# codebase (1M+ LOC) and a somewhat smaller TypeScript codebase (~200K LOC). I did a check the other day, and we have one potentially vulnerable nuget dependency for every 10,000 lines of C# code, but one potentially vulnerable npm dependency for about every 115 lines of TS code.
Deeply underrated comment. You can peel back the layers of sarcasm like... An onion.
Btw, it's copypasta at this stage
Anyone familiar with The Onion knows this, The Onion themselves repost the exact same thing every time there's a school shooting. Which emphasizes how regularly it happens, and therefore in turn I have no objection to a joke like this becoming copy pasta every time an NPM supply chain attack takes place.
https://duckduckgo.com/?q=site%3Atheonion.com+%22no+way+to+p...
"The issue is actually lack of guns, the way way to prevent this is by having more guns" kind of doubling down.
The issue is the people that use npm and choose to have 48 layers of dependencies without paying for anything. Blaming Microsoft, which is a company that pays engineers and audits all of its code and dependencies, is a step in the wrong direction on the necessary path of self-reflection away from npm vulns.
It's a popularity issue; npm is an easy target. I don't see why it wouldn't happen to Golang, for example. You just need to take over the git repo and it's over for all users upgrading, like with npm.
As far as I remember:
"go get" doesn't execute downloaded code automatically; there's no "postinstall" script (there can be a manual "go generate" or "go tool" the user may run)
Go doesn't upgrade existing dependencies automatically, even when adding a new dependency: you need an explicit "go get -u"
You don't use the same tool to both fetch and publish ("go get" vs "git push") so it's less likely a module publisher would get pwned while working on something unrelated.
The Go community tends not to "micropublish" so fewer people have or need commit rights to published go modules.
Go has a decent standard library so there are fewer "missing core functionality" third-party packages that world + dog depends on.
Npm is easier to pwn than Go, Maven, RubyGems, PyPI, CPAN, etc. because the design has more footguns and its community likes it that way
postinstall is a liability for sure, but as soon as you execute untrusted code, it's the same no matter the language. Last week, the npm pwn was working like this without a postinstall, and the same could happen with Go. Nothing prevents me from pushing code that would read all your files as soon as you load the library in your code.
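Worth underlining: because import-time code runs regardless, disabling lifecycle scripts narrows the window but doesn't close it (the flag below is standard npm):

    # blocks the preinstall/postinstall vector, but not code that runs
    # when the package is later require()d or imported:
    npm install --ignore-scripts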
I notice you didn't address the other 4 differences. All 5 are about "defence in depth", making things less likely - and conversely, not doing them makes pwning more likely.
I'll add a 6th difference: "go get" downloads source code, not maintainer-provided tarballs. You can't sneak extra things in there that aren't in the code repo.
What about Java's Maven, much more popular and longer-lived?
When your only dependencies are Spring and Apache Commons, which requires legal approval in your corporation to use, and each update requires scrutiny, it's hard to get any supply chain attacks, right?
What makes you think Maven is more popular?
I think the cooldown approach would make this type of attack have practically no impact anymore. If nobody ever updates to a newly published package version until, say, 2-3 days have gone by, surely there will be enough time for the owner of the package to notice he got pwned.
Renovate Bot has this setting.
https://docs.renovatebot.com/configuration-options/#minimumr...
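A minimal config using that option (the option name is from the linked docs; the three-day window is an arbitrary choice):

    cat > renovate.json <<'EOF'
    {
      "extends": ["config:recommended"],
      "minimumReleaseAge": "3 days"
    }
    EOF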
I've never heard of this. It sounds like a solid default to me. If you _really_ need an update you can override it, but it should remain the default and not allow opting out.
https://github.com/pnpm/pnpm/issues/9921
the funny thing about this is if everyone has the same cooldown, aren’t we back in the same boat?
sure there are other ways for the package maintainer to notice they were pwned, but often they will not notice.
The cooldown isn't for end users. It is for package maintainers and scanners.
What about cases when the update fixes a security issue? Anybody using this approach would be a target for a few more days.
I know it sounds preposterous but there there are more ways to apply patches than npm pull
Update package versions manually, you say? The audacity!
Here’s a one-liner for node devs on macOS: pin your versions and manually update your supply chain until your tooling supports supply chain vetting, or at least some level of protection against instantly-updated malicious upstream packages.
Would love to see some default-secure package management / repo options. Even a 24-hour delayed mirror would be better than what we have today.
    find . -name package.json -not -path "*/node_modules/*" -exec sh -c '
      for pkg; do
        lock="$(dirname "$pkg")/package-lock.json"
        [ -f "$lock" ] || continue
        tmp="$(mktemp)"
        jq --argfile lock "$lock" "
          .dependencies |= with_entries(.value = \$lock.dependencies[.key].version)
          | .devDependencies |= with_entries(.value = \$lock.dependencies[.key].version // \$lock.devDependencies[.key].version)
        " "$pkg" > "$tmp" && mv "$tmp" "$pkg"
      done
    ' sh {} +
The expected secure workflow should not require an elaborate bash incantation, it should be the workflow the tools naturally encourage you to use organically. "You're holding it wrong" cannot be possible.
? Package lock files from npm/yarn/pnpm automatically lock all your dependencies (including transitive deps)
What does this actually achieve?
Accidentally installing a malicious package in your dev environment: the concern isn’t “what’s already installed”, it’s what’s potentially going to be installed in the future by you or your colleagues.
So, you pin the version and update periodically when security issues arise in your dependencies.
Maybe the same as if "npm config set save-exact true" was enabled when adding the dependencies.
Whether that's so important, I'm not sure.
You can indent every line of a code block on Hacker News by two spaces to have it render as code.
Anyone have a good solution to scan all code in our Github org for uses of the affected packages? Many of the methods we've tried have dead ended. Inability to reliably search branches is quite annoying here.
npm audit - will tell you if there's any packages with known vulnerabilities. https://docs.npmjs.com/cli/v11/commands/npm-audit I'd imagine it's considerably slower than search, but hopefully more reliable.
If you have tens of thousands of repos with branches to match you'll be scanning all year.
Proxy NPM with something like Artifactory which stops the bad package getting back in or ending up in any new builds.
Follow it up with endpoint protection to weed the package out of the local checked out copies and .npm on the individual dev boxes.
Have you tried Dependency-Track from OWASP? Generate an SBOM from each repo/project and post it via the API to DT, and you have a full overview. You have to hook it up so it is done automatically, because of course stuff will always move.
Any junior engineer should be able to solve this with grep in an afternoon.
For several thousand repos? Ensuring none of the 451 package versions have been installed on any branch in any repo? I don't think it's so simple.
pnpm has already implemented a minimum age policy
https://github.com/pnpm/pnpm/issues/9921
Here's a short recap of what you can do right now, because changing the ecosystem will take years, even if "we" bother to try doing it.
1. Switch to pnpm; it's not only faster and more space-efficient, but it also disables post-install scripts by default. Very few packages actually need those to function; most use them for spam and analytics. When you install packages into the project for the first time, it tells you which post-install scripts were skipped and how to whitelist only those you need. In most projects I don't enable any, and everything works fine. The "worst" projects required allowing two scripts, out of a couple dozen or so.
They also added this recently, which lets you introduce delays for new versions when updating packages. Combined with `pnpm audit`, I think it can replace the last suggestion of setting up a helper dependency bot, with zero reliance on additional services, commercial or not (a concrete config sketch follows at the end of this list):
https://pnpm.io/settings#minimumreleaseage
2. If you're on Linux, wrap your package managers in bubblewrap, a lightweight sandbox that will block access to almost all of your system, including sensitive files like ~/.ssh, and prevent anything running under it from escalating privileges. It's used by flatpak and Steam. A fully working & slightly improved version was posted here (and a stripped-down sketch follows at the end of this list):
https://news.ycombinator.com/item?id=45271988
I posted the original here, but it was somewhat broken because some flags were sorted incorrectly (mea culpa). I still prefer using a separate cache directory instead of sharing the "global" ~/.cache because sensitive information might also end up there.
https://news.ycombinator.com/item?id=45041798
3. Set up renovate or any similar bot to introduce artificial delays into your supply chain, but also to fast-track fixes for publicly known vulnerabilities. This suggestion caused some unhappiness in the previous discussion for some reason — I really don't care which service you're using, this is not an ad; just set up something to track your dependencies, because otherwise you will forget. You can fully self-host it; I don't use their commercial offering — never have, don't plan to.
https://docs.renovatebot.com/configuration-options/#minimumr...
https://docs.renovatebot.com/presets-default/#enablevulnerab...
4. For those truly paranoid or working on very juicy targets, you can always stick your work into a virtual machine, keeping secrets out of there, maybe with one virtual machine per project.
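To make points 1 and 2 concrete: the pnpm setting comes from the settings page linked under point 1 (the docs specify the value in minutes), and the bubblewrap invocation is a deliberately stripped-down sketch of the fuller versions linked under point 2, so expect to adjust paths per distro:

    # point 1: refuse versions published less than ~3 days ago
    cat >> pnpm-workspace.yaml <<'EOF'
    minimumReleaseAge: 4320
    EOF

    # point 2: run the install with no access to $HOME beyond the project
    bwrap --unshare-all --share-net \
          --ro-bind /usr /usr --symlink usr/bin /bin --symlink usr/lib /lib \
          --ro-bind /etc /etc --proc /proc --dev /dev --tmpfs /tmp \
          --bind "$PWD" "$PWD" --chdir "$PWD" \
          pnpm install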
You can also use tools like safe-chain which connects to malware databases and blocks installations of malicious packages. In this case it would have blocked installs around 20 minutes after the malware was added as this was how long it took to be added into the malware databases. https://www.npmjs.com/package/@aikidosec/safe-chain
When we close and reopen VSCode (and some other IDEs), it updates the NPM packages for the installed plugins. Would these mitigations steps (e.g. pnpm) also take care of that?
Bubblewrap seems excellent for Linux uses - on macOS, it seems like sandbox-exec could do some (all?) of what bubblewrap does on Linux. There's no official documentation for SBPL, but there are examples, and I found sandboxtron[0] which was a helpful base for writing a policy to try to contain npm
0: https://github.com/lynaghk/sandboxtron/tree/main
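For flavor, a minimal SBPL profile in that vein - syntax pieced together from examples like the repo above, since there's no official documentation, so treat the operation names as best-effort rather than gospel (real policies need further allowances, e.g. mach lookups):

    cat > npm.sb <<'EOF'
    (version 1)
    (deny default)
    (allow process-fork)
    (allow process-exec*)
    (allow file-read*)
    (allow file-write* (subpath (param "PROJECT")))
    (allow network-outbound)
    EOF
    sandbox-exec -D PROJECT="$PWD" -f npm.sb npm install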
sandbox-exec is so frustrating. It could be a genuinely excellent solution to a whole bunch of sandboxing problems, except...
1. Documentation is virtually nonexistent. I think that is inexcusable for a security tool!
2. The man page says that it's deprecated, and has done for around a decade. No news on when they will actually remove it, maybe they never will? Hard to recommend it with that axe hanging over it though.
Absolutely agreed on the lack of documentation, it seems completely insane (I assume this is because they want to reinforce that only Apple should be writing policies - but still no excuse for it)
>Hard to recommend it with that axe hanging over it though.
Given the alternative being no way to limit untrusted tooling at all today, it seems worthwhile using it despite these problems?
There's also a (very slim) chance that if it became central to the security of developers on macOS that Apple would give slightly more consideration to it
Yes definitely worth using it, but I don't know how much time I want to spend integrating it deeply into my own open source projects given its uncertain status.
Yeah I know what you mean... one positive is it looks like Google use it in Chromium[0], so at least Google think the API will stick around for a while (and provides a big platform Apple would break if they discontinued it)
0: https://chromium.googlesource.com/chromium/src/+/refs/heads/...
It seems to me like one obvious improvement is for npm to require 2FA to submit packages. The fact that malware can just automatically publish packages without a human having to go through an MFA step is crazy.
My non-solution years ago was to use as few dependencies as possible. And to vendor node_modules, then review every line of code changed when I update dependencies.
Not every project and team can do that. But when feasible, it's a strong mitigation layer.
What worked was splitting dependency diff review among the team so it's less of a burden. We pin exact versions and update judiciously.
You can't realistically do that when for example you use Jest as your test runner, which alone would add 300 packages.
ESLint would be another culprit, adding 80 packages.
It quickly gets out of hand.
To me it seems like very few projects could use the approach you described.
You usually can. You just gotta be a bit adventurous.
https://github.com/lukeed/uvu is a testing library with almost no dependency.
https://github.com/biomejs/biome is a linter written in Rust which in theory has a smaller attack surface.
And as long as you stay some versions behind bleeding edge, you can use time in your favor to catch supply chain attacks before they reach your codebase.
Well, can you?
Maybe you can.
Or are you talking about an approach you utilized in some side projects rather than in moderately sized commercial web applications? I don't imagine there are many of those out there that have fewer than hundreds of dependencies.
That’s the solution. The whole theory of the many-eyes model is that lots of people will read the code.
You are doing the work. These automatic library installing services seem to have a massive free-rider problem.
The NPM monoculture is the problem. It would be absurd to suggest that all backend engineers use the same build tooling and dependency library, but here we are with frontend. It's just too big of an attack surface.
It would be absurd to make such a suggestion. However, the comparison is not correct. Not all front-end development uses the same build tooling or dependency libraries, or programming language for that matter. Even if you narrow to the typescript ecosystem, it's still not true.
And yet, 99% of front end development uses NPM.
According to https://tsh.io/state-of-frontend#package-manager, it's not quite that high.
The title does not do justice to the public course of thinking, which is more like "Oh no, not again! Anyway, look at this shiny new tool."
Here is an issue from 2013 where developers are asking to fix the package signing issue. It went fully ignored because doing so was “too hard”: https://github.com/npm/npm/pull/4016
Related:
Shai-Hulud malware attack: Tinycolor and over 40 NPM packages compromised
https://news.ycombinator.com/item?id=45260741
I think if somebody wants to see library distribution channels tightened up they need to be very specific about what they would like to see changed and why it would be better, since it would appear that the status quo is serving what people actually want - being able to create and upload packages and update them when you want.
> But right now there are still no signed dependencies and nothing stopping people using AI agents, or just plain old scripts, from creating thousands of junk or namesquatting repositories.
This is as close as we get in this particular piece. So what's the alternative here exactly - do we want uploaders to sign up with Microsoft accounts? Some sort of developer vetting process? A curated lib store? I'm sure everybody will be thrilled if Microsoft does that to the JS ecosystem. (/s) I'm not seeing a great deal of difference between having someone's NPM creds and having someone's signing key. Let's make things better but let's also be precise, please.
> But right now there are still no signed dependencies
Considering these attacks are stealing API tokens by running code on developers' machines, I don't see how signing helps; attackers will just steal the private keys and sign their malware with those.
Could they detect code running from a new IP address or location and ask for a 2FA code?
postinstall runs on the developer's machine; from an endpoint security perspective, it's the actual developer performing the malicious actions - their machine, their IP address, and their location.
What are you talking about? npm keeps having issues that the "status quo" of other platforms doesn't.
We treat code repositories as public infrastructure, but we don't want to pay for them, so corporations run them, with their profit interest in mind. This is the fundamental conflict that I see. And one solution: more non-profits as the organisations behind them.
funny how npm is the exact same model as maven, gopkg, cpan, pip, mix, cargo, and a million others.
but only npm started with a desire to monetize it (well, npm and Docker Hub), and in its desire for control didn't implement (or allow the community to implement) basic hygiene.
The beatings will continue until JS dev culture reforms.
I see two ways to fight supply chain attacks:
* The endless arms race.
But, never mind. It's been 2 years since Jia Tan, and the number of such 'occurrences' in the npm ecosystem in the past 10 years is bordering on uncountable at this point.
And yet this hack got through? This amateurish and extremely obvious attempt? The injected function was literally named something like 'raidBitcoinEthWallet' or whatnot.
npm has clearly done absolutely nothing in this regard.
We haven't even gotten to the argument of '... but then hackers will simply use the automated tools themselves and only release stuff that doesn't get flagged'. There's nothing to talk about; npm has done nothing.
Which gets us to:
* Web of trust
This seems to me to be a near perfect way for the big companies that have earned so, so much money using the free FOSS they rely on, to contribute.
They spend the cash to hire a team that reviews FOSS stuff. Entire libraries, sure, but also updates. I think most of them _already do this_, and many will even openly publish issues they found. But they do not publish negative results (they do not publish 'our internal team had a quick look at update XYZ of project ABC and didn't see anything immediately suspicious').
They should start doing that. And repos like npm, Maven, CPAN, etcetera should allow either the official maintainer of a library, or anybody, to attach 'assertions of no malicious intent'.
Imagine that npm hosts the following blob of text for NPM hosted projects in addition to the javascript artefacts:
> "I, google dev/security team, hereby vouch for this update in the senses: not-malicious. We have looked at it and did not see anything that we think is suspicious or worse. We make absolutely no promises whatsoever that this library is any good or that this update's changelog accurately represents the changes in it; we merely vouch for the fact that we do not think it was written with malicious intent. We explicitly disavow any legal or financial liability with this statement; we merely stake our good name. We have done this analysis on 2025-09-17 and did it for the artefact with SHA512 hash 1238498123abac. Signed, [public/private key infrastructure based signature, google.com].
And a general rule that google.com/.well-known/vouch-public-key or whatever contains the public key so anybody can check.
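Verification would then be bog-standard public-key crypto; here's a sketch with plain openssl, with all URLs and filenames invented to match the scheme above:

    # fetch the vendor's published vouch key
    curl -sO https://google.com/.well-known/vouch-public-key
    # check the vouch statement was really signed by that key
    openssl dgst -sha512 -verify vouch-public-key \
        -signature update.vouch.sig update.vouch.txt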
Aside from Jia Tan/xz (which always stops any attempt; Jia Tan/xz was so legendary, exactly how the fuck THIS still happens given that massive wakeup call boggles my mind!), every supply chain attack was pretty dang easy to spot; the problem was: Nobody was looking, and everybody configures their build scripts to pick up point updates immediately.
We should update these to 'pick them up after a certain 'vouch' score is reached'. Where everybody can mess with their scoring tables (don't trust google? reduce the value of their vouch to 0).
I think security-crucial 0day fixing updates will not be significantly hampered by this; generally big 0-days are big news and any update that comes out gets analysed to pieces. The vouch signatures would roll in within the hour after posting them.
Yes