Sustain: Episode 203: What’s wrong with CVEs? Daniel Stenberg of cURL wants you to know

SustainOSS SustainOSS 10/13/23 - Episode Page - 28m - PDF Transcript

Hello, and welcome to Sustain, the podcast where we talk about sustaining open source

for the long haul. Who are we? Where did we come from? Where are we going? Wait a minute,

wait, there's a security issue somewhere? Wait, what level is it? We're going to get

through all of those things today. I'm very excited to talk to my guest today. He has

a familiar name because we've had him on the podcast before. That's right, we have Daniel

Stenberg on from cURL. Daniel, how are you doing? Hello, hello. Good to be here again. Yeah,

it wasn't even that long ago I was here. So yeah. It really wasn't, but a cool thing

came up in the ecosystem and we figured that you're probably the best person to talk about

and address it. And so, in something that's relatively new for Sustain, we don't always

do this. We're talking about current events and not just how you got here and what you think.

So we're talking about CVEs today. CVEs are Common Vulnerabilities and Exposures. They

were launched in September of 1999, which was a while ago. And they don't have the best reputation,

but they're a start. I'm getting some of this from a blog post by Jake Edge, who had a really

good summary of what we're talking about today. But generally, CVEs are one of these ways of

saying, Hey, there's a vulnerability in your code. Can you fix it? And oh, we know there's

a vulnerability. We're fixing it. So on August 25th, the Curl team, which is led by Daniel

Stenberg, received an email. The email said, I want to let you know that there's a recent

curl CVE published and it doesn't look like it was acknowledged by the curl authors; it's

just not mentioned on the curl website: CVE-2020-19909. Daniel, can you tell me a bit about

how this happened? Did this email go directly to you? What was the vulnerability that they're

talking about? So let's start with that. That email was sent to the Curl library mailing

list by one of the contributors who, I guess, recognized that we hadn't announced

this from the curl project. So usually, when we get a security

problem reported, we make sure that we document everything, we request a CVE, or a CVE ID, I

guess is the proper name for it. And then we work with the reporter and we work on a fix

and we publish everything and we make sure that everyone is aware of the CVE. But in

this case, we didn't do that. And it was said to be about Curl. So that's why we were told

about it. Something was wrong. And first, it was recently announced in August and it

had 2020 in the ID, and 2020 is then the year of the flaw, I guess, which sort of told

us that this is something old being announced now. So that alone is a weird signal. Or

I mean, it's not impossible, but sort of it triggers something. Wait a minute, what's

this? And it turns out that someone then just registered, well, asked for a CVE ID for a

flaw someone thought we had back in 2020. It's actually a bug that we fixed. Someone

filed a CVE for it. And when they have filed for or requested a CVE, it's basically

just a bug tracker ID, right? Anyone can submit a request for a CVE. And you don't even

have to have much data or information about it. You can basically, well, Curl, some version

number, suspected integer overflow, and you get a CVE ID for that. And well, sure enough,

that's what's happened, right? It was done anonymously, or at least I don't know who did

it. And it just showed up and then made it public.

So I have a couple of questions. Okay, so this was an older bug, which had already been

fixed, but someone filed a CVE. Now, when you file a CVE, who are you filing with? Is there

a grand registrar?

Yeah, it's actually kind of a complicated system. But MITRE.org are the head organization of

this. They're funded by the American government, basically. So MITRE.org are the keepers of

the database of every CVE ever requested, right? So that's a huge number of entries. And they

have metadata for every such number, right? And they also have sub-organizations, so you can

have what's called a CVE Numbering Authority, a CNA. So you can actually request it from

other authorities. But there's also MITRE, who is the head organization of that. So basically,

when you find a problem in a particular product, you can request a CVE for that particular

product. And depending on the product, you might have to talk to a particular CNA to get

that number. But anyway, for curl, we don't have a dedicated CNA, so you could request it from

MITRE. So you get a number, and you can say, oh, sure, now the bug is public, or the CVE is

public. And the reason you can do it for an old bug is right, because that version might still be

used out there, right? So there might be people still using that old version. So it sort of, it

can be relevant to still get a CVE for that old version, because who knows how many people are

actually still using that. So that concept makes sense. But someone filing this CVE, and then it

appears in the database, and maybe it appearing in the database, isn't that bad either, maybe? It's

an entry in the database somewhere. Why do we care about that? But it becomes an issue for us,

because then suddenly consumers of that database import that information into their

databases. And one of the biggest consumers of that database is NVD, right? National

Vulnerability Database, which is also fun, because national here means, of course, the US again;

it's a US organization. Yeah, but it is used by everyone, right? So that NVD, then they host that data,

and they add a severity score to everything they import into their database. So

that's really interesting. So a severity score. So anyone can report a vulnerability. Basically,

that's what you just said. You can report a vulnerability anonymously, which seems kind of odd to me.

But let's assume that there's good people out there doing the good work of sleuthing around for

real bugs. How do they assign this score? And let's talk in particular about this 2020 one that was

filed. Was it a really bad bug? It was a really silly bug. It was one of those silly, stupid ones,

but it was far from a bad bug. Got it. So it had a low score. So let's start with the actual

bug, because I think it's hilarious. So curl has a retry option. So if you try to download a file

and you get what we call a transient error back, basically a 5xx or something.

It says that maybe the server is currently stupid. Try again later. Yeah, because

that can happen, right? So then curl has a retry option, so you can tell it to retry a few times. And

then we would retry again after a while. So then we have this retry delay that says, don't

retry immediately, retry after this many seconds instead of the regular wait. Okay, and

if you then use that option to say retry after this many seconds, and you use a very large number,

internally, we would convert that to milliseconds by multiplying by 1000. That's what you do. But in

this case, I didn't check the number properly. So if you put in a big enough number, and you

multiply it by 1000, it would not fit in the variable, it would overflow the 64-bit variable.

Basically, since it's written in C, it's actually undefined behavior. But in most cases, it'll

wrap and end up as a very small number. So basically, if you use curl from the

command line and enter a huge number, then it would actually not wait

for 500 million years, it would wait for 32 seconds instead.

Okay, that seems like the kind of thing that could potentially mean a server somewhere might fall

down. But it's really just like not a huge bug.

Exactly. It's a very silly bug. Yes. But yeah, I thought about that actually at the time we got

that because it was actually reported as a security problem back in 2019. And we really

struggled with finding any kind of security angle to this. So we said, no, there's no security

problem, treat it as a regular bug, fix it, make sure that we don't wrap, we don't end up in that

situation. We capped it: you can do 500 million years and not more, or whatever it is.

But anyway, the description for the CVE, when someone created it, said integer overflow in

this version without describing anything more than that. Basically, there's an integer overflow.

I'm sure it was an integer overflow. So when NVD then consumes that, and they get that, oh,

there's a new CVE, we need to set a score for this. How do they do that? There's an integer

overflow in curl. Well, curl talks over networks, right? And an integer overflow could be really

bad. So you crank up all the levers to the maximum, and then you'll get 9.8 out of 10 as a

score. And I'm joking a little bit, but that's actually the way they sort of go. They insist that

they use all available public information to set the score, but they don't look for anything other than

the information they already got, which was basically nothing. So they really make it up, sort of:

yeah, integer overflow in curl. How bad can that be? The worst possible case, probably really bad.

So 9.8 seems a bit ridiculous. An integer overflow in this particular thing, only if it's

retrying, isn't really going to destroy the world. This isn't giving away social security

numbers. It's not causing zero day exploits and other dependencies downstream. It's just saying,

oh, do it at another time. And maybe that's it. So what I'm curious about is, who are the people

who decided, oh, curl, network, okay, bad. Who made those calls? Are they policy people? Are they coders?

Do you have any idea who works at NVD who would set that score?

I have no idea. I've done this dance many times now, where I find that NVD sets a ridiculously

high score for our problems. And I email them and say, hey, how come you did this? It's kind of

stupid because, I think, they do a disservice to the entire ecosystem by doing this,

because it's ridiculous. It doesn't help anyone to do this. So I sort of tried to get them to

stop doing that. I mean, this is open source too. You can just, there's even information, you can

go and look at the code yourself if you want to. And you can ask us, what do you think about it?

Explain the problem to us. And we could help them actually end up with something, but they

don't do that. I assume because they get a billion of these every day, and they have sort of

0.3 seconds for each to make it up. I don't know. But it makes it a really horrible system. They

basically just set that score. Some team, I don't know, even if you email them, they never

respond with a name or anything. They're always just the NVD team. And you get a response

from them. It's the NVD team that says, okay, we can set another score. So if I insist and

email them, like I did this time, I can actually get through to them, and they will go back and try to

find a better number. And in this case, they eventually did.

So do you think that the majority of their gradings are kind of off then if they don't seem to be

that technically able to understand what score vulnerabilities should get?

Absolutely. I see no way they can be relevant. I mean, just out of randomness, of course, they

might hit roughly well at some points in time, but they don't make a good enough effort to

understand the problem to be able to actually set a good score. So they insist that they have

this system where they assume the worst possible. And that's how they set the score. But assuming

the worst possible in every case, it doesn't make it a good score. It just makes it a silly score.

So I want to expand out a bit. Are there any other major vulnerability databases that are relevant

today to an open source project, say working in networking like Curl? Like what other databases

are important besides NVD? I think the GitHub one is sort of up and coming, the one that, from

what I hear, is being used a lot. But I also see them getting severity scores from NVD when

they populate their database. So I think there's also this cross-pollination, sort of. So it's not

necessarily separate; they might get the scores anyway from NVD, even if they can then adjust them.

I'm not sure exactly how they cooperate or not.

That's okay. So this is one of the major databases for security in the world right now, is what

I'm trying to establish, which is really interesting because we're seeing an increase

in the amount of discussion around cybersecurity. Security is important. Shoring up databases is important.

Vulnerabilities are bad. We need to fund the entire ecosystem to make sure that all vulnerabilities

are dealt with okay. But if this crucial part of the ecosystem, which is how you grade vulnerabilities

and how those are reported easily, depending on the technical knowledge of the people who would

be able to appraise a possible vulnerability, if that's flawed, then we have to kind of go back

to the drawing board. Am I missing something in that assessment? No, I think that is entirely

correct. I mean, this isn't anything new, right? This hasn't been working like this

pretty much since forever. But I think, back in the day, we really didn't care that

NVD created a silly score. I mean, why do we care about that? It doesn't really matter to us.

We say one score, they say another score. Okay, traditionally, we haven't

bothered about that because it didn't really affect us. But I think over time, exactly as you say,

CVEs and security in general, there's more emphasis on that. And we think about it more.

And we rely on CVEs much, much more now than we did only five years ago. But the CVEs haven't

improved. But we build entire ecosystems on top of these CVEs much more now. And we lean on them

much more. So we hope that they're good, but they're not. And in my case, I see a lot of problems now

because these CVEs are now used by security scanners everywhere. And they look for these CVEs in

your systems. And then they find these so-called curl security vulnerabilities with really

high scores, because that's what they say are there. And then people get all upset and scared.

And what do we do? We have a 9.8 critical security vulnerability in our product because we use curl.

It's not very good.

So we wanted to get more points of view on this issue. So we reached out to Dan Lorenc,

co-creator of Sigstore, co-founder and CEO at Chainguard. We've had him on the podcast before.

Their mission is to make software supply chains secure by default. So Dan, who are you?

Hey, so my name is Dan Lorenc. I'm a co-founder and CEO of a company called Chainguard that works

in open source and software supply chain security. This week, Daniel Stenberg, the maintainer and

founder of the commonly used curl project, started a bit of an uproar again when he publicly

complained about a very bad CVE entry in the National Vulnerability Database. This is not the

first time Daniel has run into trouble with the NVD, and I doubt this will be the last time.

He's in a pretty unique position as an open source maintainer of a very widely used piece of software

in that he has to deal with it way more often than most people do. I've been following this

space and working in it for probably close to a decade now. And so I've seen a lot of things go

well and a lot of things go poorly in the national vulnerability database. What is your experience

with the NVD? If you're not aware, the NVD is the global database used to track all public

software vulnerabilities. This covers open source tools and libraries, but it also covers proprietary

software as well. It's been around for decades and it hasn't really changed a ton in that time,

which I think is why we're running into a lot more friction with it today. There are problems

with the NVD. It also works pretty well in some cases. A couple of the really bad issues with it

were surfaced in Daniel's report this week, which caused some of the problems that he was

complaining about. One is the CVSS scoring mechanism. That's the way the vulnerabilities are scored in

the NVD. The main scoring system out there today is called CVSS3. There have been some drafts for

CVSS4, but they're not quite live yet inside of the NVD itself. So most of the scores that you

see are related to CVSS3. There are a couple different ways these scores get entered, but in

this case, the case of the vulnerability that Daniel was complaining about, the score is assigned by

the NVD team itself after reviewing the report. Do you believe that a majority of CVEs are misguided

and are CVEs ridiculously broken? There are a lot of problems with the way the

vulnerabilities get reported in the NVD. First of all, it's a global write system. That's the easiest

way to think about it. Anyone can file a CVE against any piece of software. That means the quality

varies dramatically. There are also some weird incentives where people like to brag about getting

CVEs reported to themselves. And the higher the score, the more impressive it looks when you

get one of these on your resume. So that leads to a lot of inflation in the scoring system itself.

The CVE that Daniel was complaining about had a CVSS score of 9.8, which is about as high as

it gets. 10 is technically possible, but stuff like Log4Shell wasn't even a 9.8 in the scoring

system. So there's almost zero chance that this was actually correct. And looking at the CVE itself,

Daniel didn't even think this was a vulnerability at all. So this was from a couple of years ago,

somebody was probably trying to get it onto their resume, reported it with a tendency to overinflate

the impact, and it just got blindly accepted. I think one of the biggest things to improve

the NVD in these cases would be to apply a more stringent filter to things with a high score

and to things that have a high reach. Curl is installed on pretty much every device in the

world, and it's even running in stuff like the space station and in the Mars rover. So it's

ubiquitous software, and reporting a noisy CVE against it has a huge impact and is a huge waste of time

for everyone in the industry. I realize that they can't apply this level of scrutiny to everything,

but there actually aren't that many CVEs issued a year, and there aren't that many at that high

severity, and there aren't that many in ubiquitous libraries. So if they spent a little bit more time,

I think they could dramatically improve the results in the data in the NVD for things that

actually matter to people today. Thank you so much, Dan. Really appreciate it.

So, Daniel, it's not perfect. We just reached out to NIST's NVD and here's what they had to say.

They said, we want to make clear the publication of the CVE is not something we have control over,

that for transparency purposes, our analysis efforts rely on publicly available information,

which does change over time, and that we rely on public insight to ensure that the most recent

information is included for up-to-date assessments. Now, I find this quote really interesting

because, for instance, Marc Deslauriers, I think I pronounced that correctly, at Ubuntu noted that

this CVE is not a security issue and the curl author intends on disputing the CVE, so he marked it

as not affected. That's great. What do you know that NVD doesn't know and what do you think about

their response? Well, they of course say that they don't control the existence of the CVEs,

because, as I said, MITRE handles the CVEs. So the CVE exists, and, well, I tried to get it rejected

with MITRE as well. So, in this case, I contacted MITRE and said, please reject this, it's not a

security problem, but they refuse to do that because they too, I don't know exactly how they

determine this, but they think it's a security problem. They, too, apparently know

more about this than I do. So, it exists, and therefore NVD thinks they should set a severity score

because it exists, and they won't remove it because MITRE hasn't removed it. So, then we just end up in

this argument over what the severity score should be, and I find it interesting. So, when I argued with NVD about

this particular one, I said, it's not a security problem at all, so it shouldn't be there, but

since you won't remove it, you should just put it as low as possible because that would be the

second best. And basically, I got them to do that, and they did that by reading, pretty much,

they quoted, I think, three different links to Reddit commenters who commented on the issue

or on a discussion following up on my blog post, which I think is ridiculous. I mean, come

on, I told you about this, I wrote a long blog post about it and you're quoting Reddit commenters

commenting on the things that I wrote. But, I mean, in the end, I was happy about

it because they downgraded it then from 9.8 to 3.3. So, I guess that was good because now it doesn't

really matter. Now, it's just a 3.3. Well, it's already been patched, right? This is just in older

versions of curl. Yes. Yeah. Okay. Okay. I have another question, and this is silly.

When the CVE was reported on the mailing list and when you talk to NVD about this CVE,

at any point in any of the discussions, did anyone say we would like to fund your work

fixing this vulnerability in this very important project, which has now been marked as unsafe

because it has a CVE? Has anyone offered to fund that work? No. Okay. Not a single one,

but I also think that mostly everyone who's actually involved in curl, we can all

see that this isn't really a security problem. This is a meta problem because of this organization.

It's not an actual problem in curl, and that's also why I like to sort of blog about it and

make a fuss about it because I want everyone who actually cares about it to know that it's

actually not a security problem. And then even if someone else actually still lists this as a

security problem or talks about it as a security problem, I don't have to care about that too

much as long as everyone who actually needs to know, knows. So what do you think we should do instead?

Yeah. That's a really good question because I think as we have now as an ecosystem, we build

so much on this concept of CVEs and using these CVEs and the infrastructure here. So it's not

an easy thing to fix. It's not an easy thing to replace. So I don't have any good answers here

what to do. A lot of people tell me that it'll be better with CVSS version four, which is a new

way to get the score done. But I think that seems very naive to think that it's about how to

calculate the scores because if you don't care about the input, the output will be equally wrong

just maybe slightly differently wrong. And in my case, unfortunately, the short-term answer

for someone like me is to register my own CNA, which is sort of a numbering authority within

the CVE system. So that means that I can be responsible for my own CVE numbers for my products,

which means that I can prevent anonymous users from filing CVEs on my products in the future.

But I think I will do this and I'm sort of in the process of doing it. But I think it's a really

lame fix because it certainly doesn't scale. Everyone cannot do this to just prevent this

silliness. So it's not a fix for the entire universe. It's just a personal thing that I

can do to fix my issues. So if anyone is able to file CVEs, I'm curious what's stopping a CVE

DDOS on open source maintainers? Could I just open a ton of CVEs on a project and then they

would have to deal with them all, which would take all their time, so they couldn't work on the features that

they want? Pretty much. You can do that today. I'm not going to. I'm not a bad actor. You,

as in the bad ones out there, the only restriction here is that system with the CNAs. So if you're

a CNA, and there are hundreds of them, so there are many, they can sort of say that we are responsible

for these products. So they can sort of stop you from doing that. So that's why you don't see

rogue CVEs for known products because they are already taken care of by CNAs that take care

of those products. So they will say, no, that's not a CVE. So therefore, you can only do this for

the smaller projects that don't have CNAs that cover them, basically. So for curl, you can do it;

for Microsoft Word, you cannot. Interesting. But given that the majority of open source software

is run by like one or two people, if that's a smaller project, that means we have a massive

issue in how we think about our infrastructure and how we think about reporting. Yes.

All right. Well, that's fun. Any closing thoughts? I see this a lot. And I've actually already

seen this: I reported another legitimate CVE just days ago, right? And even legitimate CVEs

get a similar problem in that nowadays, everyone sucks in the CVE databases and they scan for

these CVEs. So it is a growing problem, with people seeing things that they think are security

problems everywhere. And a lot of security scanners now scan for them. And so, yeah,

both illegitimate and valid ones are sort of showing up everywhere, for both good and

bad. So I think it's an area that needs some kind of cleaning up. I'm not sure exactly what.

And it's certainly an area that is going to keep causing us headaches going forward until we

do something. And again, I don't even know what the fix is here.

I don't know either. It's also really interesting to me that someone like you who is

strapped for time, part of a very important project that's being used all across the ecosystem,

can be called upon to do work or like have to respond to work that's coming from,

say, the U.S. I mean, the EU could also have this sort of thing. It just always strikes me

as really interesting to see how a national database can be used globally. And that's just part

of the problem we have today with today's infrastructure being shared. And policymakers

going forward are consistently like talking about securities being the major thing. How do we

shore up our digital infrastructure? How do we stop everything going down? But it's kind of

bigger than that because it's not about countries anymore. It's about the entire world.

Oh, absolutely. That's also why I think it doesn't really matter if it's called national or if it's

American or EU because every security problem is going to, of course, affect everyone everywhere.

So it doesn't really matter. Geography is not the question. The idea behind having a unique

ID for a security problem, like a CVE ID, is a good one, but it needs some reinforcement to

actually be a good system all throughout this. Listeners, if you have ideas for how to fix that,

do let us know. You can email me at podcast@sustainoss.org and I'll be happy to take

those and probably file them away because I'm not a security researcher, but still interested in

your thoughts. Daniel, it's been great to have you on. Thank you so much for coming and talking

about this weird issue that we haven't talked about before on the platform of how vulnerabilities get

reported and how they show up and what you have to do with them. I really appreciate your time.

This episode was written and produced by Justin Dorfman. It was also produced by me, Richard

Littauer, and Tina Arboleta. It's edited, with show notes, by Peachtree Sound. We want to thank Dan

Lorenc for his commentary. Also, thank you to NIST for taking the time to respond to questions

addressed in this episode. We do appreciate that. If you want to hear more episodes like this that

are more focused on a particular issue, do let us know at podcast@sustainoss.org again.

And also, please give us a five-star rating on Apple Podcasts, Spotify, wherever you do things;

it helps people discover the show, share this podcast wherever you can. And if you use curl,

throw Daniel a bone. There are ways to donate on the website. You can also donate to this podcast.

Daniel, finally, thank you so much once again. Take care. Good luck, and don't have any more

integer overflows, I guess. Thank you.

Machine-generated transcript that may contain inaccuracies.

Guests

Daniel Stenberg | Dan Lorenc



Panelist

Richard Littauer



Show Notes

Today, we are switching things up and doing something new for this episode of Sustain, where we’ll be talking about current events, specifically security challenges. Richard welcomes guest Daniel Stenberg, founder and lead developer of the cURL project. Richard and Daniel dive into the complexities of Common Vulnerabilities and Exposures (CVEs), discussing issues with how they are reported, scored, and the potential impact on open source maintainers. They also explore the difficulty of fixing the CVE system, propose short-term solutions, and address concerns about CVE-related DDOS attacks. Dan Lorenc, co-founder and CEO of Chainguard, also joins us and offers insights into the National Vulnerability Database (NVD) and suggests ways to improve CVE quality. NVD’s response is examined, and Daniel shares his frustrations and uncertainties regarding the CVE system’s future. Hit download now to hear more!

[00:01:00] Richard explains that they will discuss Common Vulnerabilities and Exposures (CVEs) and mentions that CVEs were launched in September 1999, briefly highlighting their purpose. He mentions receiving an email about a CVE related to the cURL project, which wasn’t acknowledged by the cURL team.

[00:01:50] Daniel explains that the email about the CVE was sent to the cURL library mailing list by a contributor who noticed the issue. He describes the confusion about the old bug being registered as a new CVE.

[00:03:54] Daniel discusses the process of requesting a CVE which involves organizations like MITRE, and he mentions the National Vulnerability Database (NVD) and how it consumes and assigns severity scores to CVEs.

[00:06:21] Richard asks about how NVD assigns severity scores to CVEs, specifically in the case of CVE-2020-19909, and Daniel describes the actual bug in curl, which was a minor issue involving retry delays and not a severe security threat.

[00:09:57] Richard questions who at NVD determines these scores and whether they are policy makers or coders, to which Daniel admits he has no idea and discusses his efforts to address the issue. He expresses frustration with NVD’s scoring system and their lack of communication.

[00:11:18] Daniel and Richard discuss their concerns about the accuracy and relevance of CVE ratings, especially in cases where those assigning scores may not fully understand the technical details of vulnerabilities.

[00:14:37] We now welcome Dan Lorenc to get his point of view on this issue. Dan introduces himself and talks about his experience with the NVD, highlighting some of the issues with CVE scoring and the varying quality of CVE reports.

[00:16:11] Dan mentions the problems with the CVSS scoring and the incentives for individuals to report vulnerabilities with higher scores for personal gain, leading to score inflation. Dan suggests that NVD could improve the quality of CVEs by applying more scrutiny to high-severity and widely used libraries like cURL, which could reduce the noise and waste of resources in the industry.

[00:18:23] Richard presents NVD’s response to their inquiry. Then, Daniel and Richard discuss NVD’s response and the discrepancy between their assessment and that of open source maintainers like Daniel who believe that some CVEs are not valid security issues.

[00:20:44] Richard asks if anyone offered to fund the work to fix vulnerabilities in important open source projects like cURL when a CVE is reported. Daniel replies that no such offers have been made, as most involved in the project recognize that some CVEs are not actual security problems, but rather meta problems caused by the CVE rating system.

[00:21:40] Daniel explains his short-term solution of registering his own CNA (CVE Numbering Authority) to manage CVEs for his products and prevent anonymous users from filing CVEs.

[00:23:04] Richard raises concerns about the potential for a CVE DDOS attack on open source maintainers, overwhelming them with a flood of CVE reports.

[00:24:20] Daniel comments on the growing problem of both legitimate and invalid CVEs being reported, as security scanners increasingly scan for them. Richard reflects on the global nature of the problem, and Daniel emphasizes the importance of having a unique ID for security problems like CVEs.



Links


SustainOSS
SustainOSS Twitter
SustainOSS Discourse
podcast@sustainoss.org
SustainOSS Mastodon
Open Collective-SustainOSS (Contribute)
Richard Littauer Twitter
Richard Littauer Mastodon
Daniel Stenberg Twitter
Daniel Stenberg Mastodon
Daniel Stenberg Website
Dan Lorenc Twitter
National Vulnerability Database
CVE
cURL
Chainguard
Sustain Podcast-Episode 185: Daniel Stenberg on the cURL project
Sustain Podcast-Episode 93: Dan Lorenc and OSS Supply Chain Security at Google


Credits


Produced by Justin Dorfman & Richard Littauer
Edited by Paul M. Bahr at Peachtree Sound
Show notes by DeAnn Bahr Peachtree Sound

Special Guests: Daniel Stenberg and Dan Lorenc.

Support Sustain