> First of all, a linear “score” like CVSS just cannot work in cybersecurity. Instead, we should have a system on the attributes of a vulnerability.
This is exactly what CVSS is: a scoring system based on attributes.
> In the first category, we might have attributes such as: Needs physical access to the machine, Needs to have software running on the same machine, even if in a VM, Needs to run in the same VM.
This is exactly what the AV vector in CVSS is.
> In the second category, we might have attributes such as: Arbitrary execution, Data corruption (loss of integrity), Data exfiltration (loss of confidentiality).
This is exactly what impact metrics in CVSS are.
I fear the author has a severe misunderstanding of what CVSS is and where the scores come from. There's even an entire CVSS adjustment section for how to modify a score based on your specific environment. I'd recommend playing around with the calculator a little to understand how the scores work better: https://nvd.nist.gov/vuln-metrics/cvss/v3-calculator
And CVSS4 has added more metrics - including an AT (Attack Requirements) field:
> Are there any conditions necessary for an attack which the attacker cannot influence?
https://nvd.nist.gov/vuln-metrics/cvss/v4-calculator
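To make the "scoring system based on attributes" point concrete, here is a minimal sketch (mine, not from the thread) of how a CVSS v3.1 vector string decomposes into exactly those attributes. It deliberately skips the base-score arithmetic and does no validation against the FIRST specification:

```rust
use std::collections::HashMap;

/// Split a CVSS v3.1 vector string into its named metrics.
/// Illustrative only: real tooling should validate metric names and values.
fn parse_cvss_vector(vector: &str) -> HashMap<String, String> {
    vector
        .split('/')
        .skip(1) // skip the "CVSS:3.1" prefix
        .filter_map(|part| {
            let mut kv = part.splitn(2, ':');
            Some((kv.next()?.to_string(), kv.next()?.to_string()))
        })
        .collect()
}

fn main() {
    // A network-reachable, no-privileges-required, high-impact vulnerability.
    let vector = "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H";
    let metrics = parse_cvss_vector(vector);

    // AV (Attack Vector) covers the access requirements discussed above;
    // C/I/A are the confidentiality, integrity and availability impacts.
    println!("Attack Vector: {:?}", metrics.get("AV")); // Some("N") = Network
    println!(
        "Impact C/I/A: {:?} {:?} {:?}",
        metrics.get("C"),
        metrics.get("I"),
        metrics.get("A")
    );
}
```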
As a pentester who does not love CVSS[0], I found the article explaining how to replace CVSS with CVSS very amusing.
[0] CVSS is often poorly understood and used by internal teams, so for our internal engagements we prefer words like "minor", "medium", "major", "critical" to describe criticality and impact, and "easy", "medium", "hard" to describe exploitation difficulty (which loosely translates to likelihood). The reasoning behind all this is very similar to what CVSS does.
Have you ever stumbled across the PEF/REV method for classifying bugs?
https://www.fincher.org/tips/General/SoftwareDevelopment/Bug...
The essence of it is that "PEF" is from the user's point of view: pain, effort (to work around), frequency. "REV" is from the developer's point of view: risk, effort (to fix), verifiability.
Something that has a low PEF score and a high REV score would not be practical to fix, while something that is high PEF and low REV should be prioritized highly.
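A rough sketch of how that triage rule could be written down, assuming each of the six factors is rated on a small numeric scale; the scale, the field names, and the comparison rule below are illustrative, not part of the method's definition:

```rust
/// Illustrative PEF/REV triage. PEF sums the user-side factors
/// (Pain, Effort to work around, Frequency); REV sums the developer-side
/// factors (Risk of the fix, Effort to fix, Verifiability). A 1-9 scale
/// and the PEF-vs-REV comparison are assumptions made for this sketch.
struct Bug {
    name: &'static str,
    pain: u8,
    workaround_effort: u8,
    frequency: u8,
    fix_risk: u8,
    fix_effort: u8,
    verifiability: u8,
}

impl Bug {
    fn pef(&self) -> u8 {
        self.pain + self.workaround_effort + self.frequency
    }
    fn rev(&self) -> u8 {
        self.fix_risk + self.fix_effort + self.verifiability
    }
}

fn main() {
    let bugs = [
        // High user pain, cheap fix: prioritize.
        Bug { name: "crash on save", pain: 9, workaround_effort: 8, frequency: 7,
              fix_risk: 2, fix_effort: 3, verifiability: 2 },
        // Barely-noticed glitch, risky and expensive fix: probably not worth it.
        Bug { name: "rare glitch in legacy export", pain: 2, workaround_effort: 1,
              frequency: 1, fix_risk: 8, fix_effort: 9, verifiability: 8 },
    ];
    for b in &bugs {
        let verdict = if b.pef() > b.rev() { "prioritize" } else { "defer" };
        println!("{}: PEF={} REV={} -> {}", b.name, b.pef(), b.rev(), verdict);
    }
}
```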
Less than 24 hrs into it, and now we have two problems from this defunding fiasco:
1. All the original problems that exist within CVE.
2. "Let's just reinvent the wheel!"
Yes, you have a dev background, which entitles you to an opinion, and you also have good intentions. This road is noble. However, the crux of this disaster is not technical, it's political. Maybe reinventing the wheel will be a huge success. Maybe it can wear the crown of free and open source for a while. But it's much more likely this fails as things become difficult to maintain, and you become tired, or poor, and are forced to stop, with nobody, or even worse, an enemy (this is an internationally critical database) controlling the database. So let's focus on solving the original funding disaster without jumping to forking and fracturing as a knee-jerk solution.
Okay, so what's your proposal (or any proposal) to fix this funding disaster? As a technical person I don't know how to affect the political process, and am thus disinclined to participate meaningfully. People with political ability and/or power have bigger problems to deal with right now, and also seem unable to affect the current political process. So what options do we have but to try our best, even if those ways are ultimately doomed?
https://euvd.enisa.europa.eu/
This one is funded by the EU and accepts direct submissions. It’s probably the best replacement: state-backed, long-running, and reliable.
My understanding of CVE is superficial at best. I thought it was just an acronym publicly identifying vulnerabilities; I didn't realise there was a political structure behind it all.
While the article presents good food for thought, certification isn't a practical solution to the problem at hand. This database seems like a reasonable alternative.
It is “just” that, but:
- How are numbers assigned?
- How can others find details?
- Who determines when these details are public? (note: full CVE details can be used to exploit critical software)
- If they're not always public, who gets to see them? And who handles that dissemination?
- Who takes care of duplicates?
Lots of work does go into this, even if it’s “just” an identifier.
Just because there is a problem doesn't necessarily mean you personally (or anybody, or any group of people) can solve it. Go ahead and try if you want, but also be wide-eyed about what success looks like, what the chances of it are, and what the costs will be.
Moving to more stable financial markets, away from the United States, would be a start.
>As a technical person I don't know how to affect the political process
Vote.
This is a vapid answer, like "write your senator". Of course I do, every chance I get. It doesn't seem to have meaningfully affected our political process.
It isn't. I'll expand on it. Tell your wife, your cousin, your bartender why you voted. Tell them how this CVE system is critical, and why they should care about voting for people that truly understand this. If you vote and influence others to vote, you're solving the problem fully and directly.
Voting is all but useless in deep-(any color) states though, that's the problem. I mean... yes, it's the decent thing to do, but in the end it's wasted effort in the American political system.
That's only true if you think "vote" means "vote once every 4 years, for the President".
Even if the only political action you ever do is informed voting, if you're voting in every election, knowing as much as you can about the candidates, then you have a real chance of starting to move the needle.
I mean, everyone could just pitch in to support it rather than the US government? I'm sure a few big corps could do it.
> let's focus on solving the original funding disaster
The _original_ funding disaster is that this problem was delegated to the economic machinery of a nation-state, and humanity is presently in the process of evolving beyond nation-states.
Innovation in communication and information archival is an extremely long evolutionary process, persisting across aeons in the case of media like DNA and language, while the trivial shuffling through different varieties of state happens on comparatively extremely short (sometimes century or shorter) scales.
So, any solution that truly addresses the _original_ funding disaster must be future-compatible with an internet in which we've overcome the burden of nation-states.
> humanity is presently in the process of evolving beyond nation-states
No, it isn't. This is some Curtis Yarvin BS.
> So, any solution that truly addresses the _original_ funding disaster must be future-compatible with an internet in which we've overcome the burden of nation-states.
What does this even mean? How do you envision a "future-compatible" CVE database? And what does it have to do with nation states?
So if I understand it correctly, the blog author proposes to create a professional certification and require companies that produce software to have at least one of these certified individuals be responsible for reporting vulnerabilities in the company's software, complete with creating the authorities that issue such certifications, plus training and compliance enforcement.
And all this to fix a broken CVE system? I assume the friction this generates would have a bigger negative impact on the overall ecosystem than the non-optimal CVE system that exists right now.
Not just to fix the broken CVE system, but to fix a lot of things that are broken in our industry.
Getting agreement on a better scoring system for CVEs will be hard enough, assuming it's possible at all given the competing interests.
It makes a top-down imposed set of technical fixes for a lot of things broken in our industry look, at best, like an impossible dream. Anyone claiming they have an oracle that tells you how much effort should be put into QA for any given piece of software is a bullshitter. If you let the bullshitters loose, they will create a quagmire of rules leading to a huge amount of busy work that mainly benefits them.
A huge amount of experimentation is required to figure out what approaches work. Granted, that experimentation isn't happening now. That's why the EU's approach looks like the right one to me. Prevent vendors from shrugging off all liability for defects in their product in their licences, which gives bugs (of all sorts) the potential for a serious financial bite. The severity of the bite is determined largely by the customer: did it hurt so badly that pursuing the vendor in the courts (perhaps via a class action) is worth it? That, IMO, is where severity should be determined. Vendors and bug hunters have their own agendas, which numerous examples have shown seriously compromise their ability to grade bugs. Finally, it leaves software developers free to experiment and invent their own responses. That's far better than handing that responsibility to bureaucrats. There are far more computer engineers out there, and their solutions will be much better at making their products reliable than forcing them to follow some universal set of rules, no matter how well intentioned those rules may be.
Paternalistic interventionism wrapped up in the usual engineering propensity to overestimate our ability to understand and solve political and human problems well outside our immediate expertise.
What could possibly go wrong?
I feel like requiring software "engineers" to be actual capital E Engineers would fix a lot of problems in our industry. You can't build even a small bridge without a PE, because what if a handful of people get hurt? But on the other hand your software that could cause harm to millions by leaking their private info, sure, whatever, some dork fresh out of college is good enough for that.
And in the current economic climate, even principled and diligent SEs might be knowingly putting out broken software because the bossman said the deadline is the end of the month, and if I object, he'll find someone who won't. But if SEs were PEs, they suddenly have standing, and indeed obligation, to push back on insecure software and practices.
While requiring SEs to be PEs would fix some problems, I'm sure it would also cause some new ones. But to me, a world where engineers have the teeth to push back against unreasonable or bad requirements sounds fairly utopian.
I agree completely with you, in principle. The problem is that Engineers don't struggle with a mountain appearing in the middle of the river partway through construction.
It is a significantly broader problem. Processes are nearly always to blame for failure, not disciplines or people. For example, the sales team would need to come on board (don't sell anything that isn't planned or - better - completed), product would have to commit to features well in advance, the c-suite would need to learn how to say "no."
With all of that you would lose the ability to pivot. Software projects would take years before any results could be shown. Just how things used to be. Maybe this can be done without that trade-off, but I'm not aware of any means.
I'm a (relatively new) math teacher. I realized I don't like writing on the whiteboard, so I bought myself a cheap Wacom tablet off eBay. But I couldn't find any existing Wacom-compatible software designed for my use case (teaching in front of a live class of ten-year-olds), so last weekend I "vibe-coded" an app for myself. I just used the app for the first time while teaching today, and it was great.
This codebase is probably terrible, because it was mostly written by AI. I manually edited certain bits, but there are large sections of the codebase I literally haven't looked at.
Is this a problem? The app works well for me!
My point here is, I'd really hate to gatekeep software development to a small group of "licensed" engineers. In fact, I want the opposite: to empower more people to make software for themselves, so they can control their own computers instead of being at the whims of tech giants. (This is also why I dislike iOS so much.)
I do also take your point about safety, but I think we need to acknowledge that not all software is security critical and it doesn't need to be treated in the same way!
> My point here is, I'd really hate to gatekeep software development to a small group of "licensed" engineers. If anything, I want the opposite--to empower more people to make software for themselves, so they can make their computers work for them. (This is why I dislike iOS so much.)
I 100% agree. I wouldn't want to gatekeep software development in general. I would only put the PE requirement on companies that are running a service connected to the internet that collects user data.
Want to make an application that never phones home at all? Go nuts. Want to run a service that never collects any sensitive data? Sure thing! Want to run a service that needs sensitive data to function? Names, addresses, credit card info? Yeah, you're going to need a PE to sign off of that.
Side note, I was a math teacher in a previous life. Congrats on the relatively new career, and thanks for your service.
> Want to make an application that never phones home at all? Go nuts. Want to run a service that never collects any sensitive data? Sure thing! Want to run a service that needs sensitive data to function? Names, addresses, credit card info? Yeah, you're going to need a PE to sign off of that.
Agreed, but I do think a tool like curl makes this a little complicated. To my knowledge, curl itself does not phone home or collect user data, but it's obviously security critical.
...or maybe it's not, now that I think about it. Curl is not end-user software. Maybe when other software uses curl, that software gets a PE sign off. But now this is starting to feel to me like another dumb compliance checkbox system. Is it?
Curl is end-user software when Debian packages it in their repository.
I think end-users should always be empowered to be cavalier with their own cybersecurity. Organizations managing the data of others, however, should be held to a higher standard. If this means that an organization is using curl, they should have a PE responsible for auditing curl for security flaws.
Good job.
What's the plan for when one of your vibecoded app's vulnerabilities is exploited and a stranger's penis appears in front of your class of ten-year-olds? Is "AI did it" going to save your job / keep you off the sex offender registry?
This app doesn't use the internet. I'm sure it could be used as part of some complex exploit chain, but now we're talking about a highly sophisticated attack.
Security decisions are made in the context of a threat model. Who is going to target their bespoke application with this attack and why?
For the same reason people deface vulnerable websites, hijack social media accounts, make prank calls.. just for the lulz
The same company that hires the bossman to push deadlines would just stop hiring "licensed" SEs. Problem of mouthy SEs pushing back: solved.
You would, of course, have to have similar enforcement that goes along with PE.
Then it would be a matter of criminal negligence on the part of the bossman.
I think part of the problem with that is that for physical engineering, there are clear, well-understood, deterministic and enumerable requirements that, as long as you as the engineer understand them and take them properly into account, your bridges and buildings won't fall down.
With software engineering, yes, there are best practices you can follow, and we can certainly do much better than we've been doing...but the actual dangers of programming aren't based on physical laws that remain the same everywhere; they're based on the code that you personally write, and how it interacts with every other system out there. The requirements and pitfalls are not (guaranteed to be) knowable and enumerable ahead of time.
Frankly, what would make a much greater difference, IMNSHO, would be an actual industry-wide push for ethics and codes of conduct. I know that such a thing would be pretty unpopular in a place like Y Combinator (and thus HackerNews), because it would, fundamentally, be saying "put these principles ahead of making the most money the fastest"—but if we could start a movement to actually require this, and some sort of certification for people who join in, which can then be revoked from those who violate it...
If we could get such a cultural shift to take place, it would (eventually) make it much harder for unscrupulous managers and executives to say "you'll ship with these security holes (or without doing proper QA), because if you don't we make less money" and actually have it stick.
I think we're basically describing the same thing. Asking a software engineering process to be the same as a physical engineering process is not realistic. A PE for SEs would look more like a code of ethics and conduct than a PE for say civil engineering.
The key thing to borrow from physical engineering is the concept of a sign off. A PE would have to sign off on a piece of software, declaring that it follows best practices and has no known security holes. More importantly, a PE would have the authority and indeed obligation to refuse to sign off on bad software.
But expecting software to have clear, well-understood, deterministic requirements and follow a physical engineering requirements-based process? Nah. Maybe someday, I doubt in my lifetime.
I think about this a lot and I tend to agree. There's so much misinformation and "ghost in the machine" these days. I wish SWEs went to seek out the truth more. I'm not saying it doesn't happen, I just wish we had more engineering in this field.
>"So yes, I get it: we shouldn’t trust companies, or even FOSS projects, to self-report.
Unless…what if we made penalties so large for not reporting, and for getting it wrong, that they would fall over themselves to do so?"
We know this doesn't work, and author admits as much.
However, the proposed solution is to add another cert into the mix. But it's not clear how this designation would be applied globally, with agreement across the globe on the requirements, punishments, etc. Not to be rude to the author, but it sort of seems like they forgot that not all software is developed in the US. (Not to mention, I really don't want another cert)
Yeah, it seemed to jump the shark a bit here. Professional bodies, certified engineers, taking on liability for Open Source code... there's a LOT going on here...
Here's what'll really happen: no one cares or wants to be a certified professional. Companies don't care about it. We carry on as is...
This is a classic over-engineered solution that nobody wants, to a problem that barely exists. Just add bureaucracy, what could possibly go wrong...??
I do want to be certified. I think it would be a great way to make money building Open Source Software.
> Companies don't care about it. We carry on as is...
If required by law, companies would care.
> This is a classic over-engineered solution that nobody wants to a problem that barely exists.
The sorry state of our industry means the opposite: the problem is big, but lack of teeth means companies can ignore it and externalize the costs.
> Just add burocracy, what could possibly go wrong...??
I'd prefer to create our own bureaucracy, not have governments push one on us, like the Cyber Resilience Act does in the EU.
> I'd prefer to create our own bureaucracy...
So, historically the creation of bureaucracy in the US government included industry professionals to guide the requirements and a public comment period before finalization. This is done because most people in government recognize they are not up to date on the latest industry knowledge.
Destroying everything and creating a new bureaucracy is in absolutely no way better than fixing the original one on updated information.
It seems you may have fallen victim to the very well thought out "government bad!" argument.
Currently being certified has no value, not to Open Source, not to Closed Source, since companies are clearly doing fine without it. So it's hard to see them paying extra for it.
We've had professional certifications for years. Novell, Citrix, Microsoft, SAP, Oracle, they all make money selling certification to naive users. Anyone who's bothered knows they don't really mean much.
But hey, if you think there's demand, set up a body and give it a go. Personally I think it's a waste of time, but if you can get enough companies to care, and enough developers to pay, you'll have a nice business.
Herding cats sounds easier.
> We know this doesn't work, and author admits as much.
Where do I admit this? About fines? Yes, fines don't work.
The difference with my proposal is that companies wouldn't lose a few days' worth of revenue to a fine, they would lose 100% of revenue. That goes from being a "cost of doing business" to an existential threat.
> Not to be rude to the author, but it sort of seems like they forgot that not all software is developed in the US.
I didn't forget. In fact, it's because of worldwide developments that I keep pushing this here in the US. The EU already passed the Cyber Resilience Act [1].
Sure, we may not have things apply globally, but we don't need agreement on the punishments globally. We just need agreement on the certification globally.
We have done global agreements before. ICANN, International Telecommunications Union, etc. ICANN is interesting because it started as US-only and expanded.
[1]: https://en.wikipedia.org/wiki/Cyber_Resilience_Act
>Where do I admit this? About fines? Yes, fines don't work.
Yes, about fines. From your post: "Ah, yes, fines for companies are not enough. I agree."
>they would lose 100% of revenue.
We can't get the government to enforce this when tens of millions of records are leaked publicly; it absolutely will not happen for failure to report a vulnerability. If you have any idea of how to make it happen, please, let's immediately apply it to breaches and then figure out how to apply it to failure to report vulnerabilities.
>We just need agreement on the certification globally.
As far as I am aware, there is no certification (one which is legally required to obtain a job) on the planet that is globally recognized. But I would be happy to be proven wrong here.
>but we don't need agreement on the punishments globally.
Which will end up with some countries not willing to charge 100% loss of revenue, causing a mass exodus of companies from any country which does charge 100%, thus making the solution untenable.
ICANN is an interesting example, but it's not a certification. The scale (and thus administration, compliance, etc.) is very different.
To provide some additional context to OP.
In the CRA, there’s (among others):
- reporting of actively exploited vulns or severe incidents to a national CERT
- reporting obligation of vulns to the provider of that vulnerable code
- mandatory vulnerability disclosure policy (to receive vuln reports)
- obligation to provide security updates and alert customers when a vuln has become known
We’ll see how well this is all followed, but from a security perspective these are all good ideas.
About the fines, there's a second option: make them more frequent, so there's less chance of getting away with (minor) transgressions.
This would require well staffed regulatory bodies. At least for GDPR, I don’t think we have that.
> This idea I had months ago will surely fix all the problems I just started thinking about today.
I very rarely find myself agreeing with some take the author has made, to the point where I almost said I never agree. But I always read anyway, because even though the suggestion is always surface level, it's also always well written and well expressed. I like the help in reasoning through my own thoughts, and his musings always give a good place to start explaining and correcting from.
I hate, with a passion, CVE farmers, because so much of it is noise these days. But everyone complaining^1 so far has completely missed the forest for the trees. The reason everyone still uses CVEs is that the value of having a CVE was never to know the severity. (The difference between unauthenticated remote arbitrary code execution and "might create a partial denial of service in some rare and crafted cases" is 9.9 versus 9.3.) The value has always been the complete lack of ambiguity when discussing some defect with a security implication. You don't really understand something if you can't explain it, and you can't explain it if you don't have the words or names for it. CVE farming is a problem, but everyone uses CVEs because it makes defects easier to understand and talk about without misunderstandings or miscommunication.
I'd love to see whatever replaces CVEs include a superset, where CVEs also have a CRE, with Vulnerability replaced by Risk and assigned only when [handwavey answer about project owner agreement]. That would ideally preserve the value we get from the current system, but would allow the incremental improvement suggested by the original comment this essay is responding to. I would like my CVEs to be exclusively vulns that are significant. But even more than I want that, I don't want to have to argue about where the bar for significant belongs!
No company wants to manage CVEs, and there's nothing that's going to meaningfully change that in the short term. Which means no one is looking for a better CVE system. Everybody wants the devil they know. I have complaints about the CVE system, but I don't want to try to replace it without accounting for how it's used, in addition to how it works (and breaks).
^1: it's still early, and the people rushing to post are often only looking at the surface level. I'm excited to hear deeper, more reasoned thoughts, but that's likely to take more than just 24h.
Related ongoing threads:
CVE Foundation - https://news.ycombinator.com/item?id=43704430
CVE program faces swift end after DHS fails to renew contract [fixed] - https://news.ycombinator.com/item?id=43700607
Funding to Mitre's CVE was just reinstated:
https://www.forbes.com/sites/kateoflahertyuk/2025/04/16/cve-...
Oh gosh that was an interesting read. Became a fan of the author and his irreverence in reading this.
You can't solve people problems with technology.
How is creating a professional certification a technical solution? This sounds like a people solution to a people problem.
I think it might be time for an OpenCVE...
I tried to read this with an open mind, but I think the poster is talking about a lot of problems that are adjacent to CVE (coordinated vulnerability disclosure and vulnerability scoring, primarily) while missing the primary value that CVE provides (a consistent vocabulary to talk about vulnerabilities and a centralized clearing house for distributing vulnerability data) and as a result their proposed solution misses the mark.
The article quotes a lobsters post approvingly:
1. We end up with a system like CVE where submitters are in charge of what's in the database other than egregious cases. This is what MITRE supported as the default unless someone became a CNA, something they've been handing out much more freely over the last few years to address public scrutiny.
2. We end up with a system not like CVE where vendors are in charge of what's a vulnerability. This seems to be what Daniel and others want.
I guess the first problem with this is that the CNA system very much puts vendors in de facto control of what goes in the database. But this description of CVE-like systems is missing the forest for the trees, in that the alternative to CVE is not one of the two scenarios described, but the wild-west situation that existed before CVE, where vulnerability info came from CERT, from Bugtraq/Full Disclosure/etc., and from vendors, often using wildly different language to describe the same thing.
The whitepaper[0] that led to the CVE system described a pretty typical scenario:
> Consider the problem of naming vulnerabilities in a consistent fashion. For example, one vulnerability discovered in 1991 allowed unauthorized access to NFS file systems via guessable file handles. In the ISS X-Force Database, this vulnerability is labeled nfs-guess [8]; in CyberCop Scanner 2.4, it is called NFS file handle guessing check [10]; and the same vulnerability is identified (along with other vulnerabilities) in CERT Advisory CA-91.21, which is titled SunOS NFS Jumbo and fsirand Patches [3]. In order to ensure that the same vulnerability is being referenced in each of these sources, we have to rely on our own expertise and manually correlate them by reading descriptive text, which can be vague and/or voluminous.
That, and a central clearing house, are what is at stake if a system like CVE disappears, and I fail to see how any professional licensing scheme -- unless the licensing body replicated the CVE system or something like it -- would do anything to address that.
parliament32's comment in this thread perfectly addresses the issues with the article's treatment of CVSS, so I'll not rehash that here, other than to say that the actual score output of CVSS is bad and the people who designed it should feel bad.
0 - https://www.cve.org/Resources/General/Towards-a-Common-Enume...
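To illustrate the "consistent vocabulary" value described above, here is a toy sketch using the names from the whitepaper excerpt; the CVE ID is a placeholder, not the real identifier for that NFS bug:

```rust
use std::collections::HashMap;

fn main() {
    // Each tool or advisory used its own name for the same 1991 NFS bug.
    // A shared identifier (a placeholder ID here) lets the records be
    // correlated mechanically instead of by a human reading descriptive text.
    let mut aliases: HashMap<&str, &str> = HashMap::new();
    aliases.insert("nfs-guess", "CVE-1999-XXXX");                           // ISS X-Force
    aliases.insert("NFS file handle guessing check", "CVE-1999-XXXX");      // CyberCop
    aliases.insert("SunOS NFS Jumbo and fsirand Patches", "CVE-1999-XXXX"); // CERT CA-91.21

    // A report from one tool and an advisory from another resolve to the same record.
    let from_scanner = aliases["NFS file handle guessing check"];
    let from_advisory = aliases["nfs-guess"];
    assert_eq!(from_scanner, from_advisory);
    println!("Both sources refer to {from_scanner}");
}
```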
In the age of Rust, I wonder if CVE is even necessary anymore.
I can never tell when someone posts something like this if it is a joke or not.
In case it isn't, there are many weaknesses and vulnerabilities which Rust does not protect from.
Even if they are joking, no matter how hyperbolic or satirical, someone absolutely believes it as true.
I've heard people advocating for Rust because they genuinely believe it prevents all software exploits.
Security comprehension is frighteningly low among SWEs.
Taking this at face value, please see the RustSec advisory database for prima facie evidence that vulnerability enumeration is still necessary:
https://rustsec.org/advisories/
Please don't drag Rust into needless controversies like this. Rust does improve the situation. But not even the language developers will make such bold claims.
To begin with, memory safety errors aren't the only source of security bugs. The simplest example is an SQL injection attack. Besides, almost all Rust programs contain hidden unsafe code. You can check the standard library if you don't believe me. Memory safety in Rust depends on verification of those unsafe interfaces. The chance of having such bugs is pretty low since they are widely reviewed (this is the point of Rust unsafe). But such errors do occur sometimes and are reported from time to time.
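To make the "memory safety isn't everything" point concrete, here is a small illustrative sketch (hypothetical table and column names): perfectly safe Rust that still contains a textbook injection bug no borrow checker will ever flag.

```rust
// Memory-safe Rust, still vulnerable: the query is built by string
// concatenation, so attacker-controlled input becomes SQL syntax.
// (Table and column names are made up; a real program would hand the
// string to a database driver.)
fn find_user_query(username: &str) -> String {
    format!("SELECT id, email FROM users WHERE name = '{}';", username)
}

fn main() {
    // A malicious "username" that breaks out of the string literal.
    let input = "x' OR '1'='1";
    let query = find_user_query(input);
    println!("{query}");
    // Prints: SELECT id, email FROM users WHERE name = 'x' OR '1'='1';
    // The WHERE clause now matches every row. The fix is parameterized
    // queries; memory safety and the borrow checker have no say here.
}
```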