Do you actually audit open source projects you download?

The question is simple. I wanted to get a general consensus on whether people actually audit the code of the FOSS or open source software and apps they use.

Do you blindly trust the FOSS community? I'm trying to get a rough idea here. Do you sometimes audit the code? Only for mission-critical apps? Not at all?

Let's hear it!

81 comments
  • All I do is look into the open issues, the community, docs etc. I don't remember auditing the code.

  • I look at whether someone else has audited the code, and even then I simply find libre software trustworthy anyway.

  • I generally look over the project repo and site to see if there's any flags raised like those I talk about here.

    Beyond that, I glance over the codebase, check that it's maintained, and look for certain signs like tests and (for apps with a web UI) the main template files, to see whether care has been taken not to include random analytics or external files by default. I get a feel for the quality of the code and maintenance while doing this. I generally wouldn't do a full audit or anything, though. With modern software it's hard to fully track and understand a project, especially when it relies on many other dependencies. There's always an element of trust, and that's the case regardless of whether it's FOSS or not. It's just that FOSS provides more opportunities for folks to see the code when needed/desired.

    • That's along the lines of what I do as well, but your methods are far more in-depth than mine. I just glance around the documentation, see how active the development is, and get a rough idea of whether the thing is a single-person hobby project or something with a bit more momentum.

      And of course it also depends on whether I'm looking for solutions just for myself or for others, specifically if it's work-related. But full audits? No. There's no way my lifetime would be enough to audit everything I use, and even with infinite time I don't have the skills to do that (which of course wouldn't be an issue if I had infinite time, but I don't see that happening).

  • I do not. But then again, I don’t audit the code of the closed source software I use either.

  • I trust the community, but not blindly. I trust those who have a proven track record, and I proxy that trust through them whenever possible. I trust the standards and quality of the Debian organization, and by extension I trust the packages they maintain and curate. If I have to install something from source outside a major distribution, my trust might be reduced. I might do some cursory research on the history of the project and the people behind it, and I might look closer at the code. Or I might not. A lot of software doesn't require much trust. If a web app running under its own limited user on a well-secured and up-to-date VPS or VM turned out, in the unlikely event, to be a malicious backdoor, it would simply be an annoyance and would be purged. Under its own limited user, there's not much it can do, and it can't really hide. If I'm off the beaten track with something that requires a bit more trust, something security-related, something I'm going to run as root, or something that will be a core part of my network, I'll go further. Maybe I "audit" in the sense that I check the bug tracker and for CVEs to understand how seriously they take potential security issues.

    Yeah, if malicious software I ran, thinking it didn't require much trust, happens to have snuck in a way to use a bunch of 0-day exploits, gets root access, gets into the rest of my network, and starts injecting itself into my hardware persistently, then I'm going to have a really bad day, probably followed by a really bad year. That's a given. It's a risk that is always present. I'm a single guy homelabbing a bunch of fun stuff; I'm no match for a sophisticated and likely targeted nation-state-level attack, and I'm never going to be. If, on the other hand, I get hacked and ransomwared along with 10,000 other people by some compromised project that I trusted a little too much, at least I'll consider myself in good company, give the hackers credit where credit is due, and try to learn from the experience. But I will say they'd better be really sneaky, attack quickly, and be very sophisticated, because I'm not stupid either, and I pay pretty close attention to changes to my network and to any new software I'm running in particular.
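    The "own limited user where it can't really hide" confinement described above can be sketched with systemd's sandboxing directives. This is only one way to do it, and the unit name and binary path here are hypothetical:

    ```ini
    # /etc/systemd/system/somewebapp.service  (hypothetical name and path)
    [Unit]
    Description=Semi-trusted web app confined to its own dynamic user

    [Service]
    # Hypothetical binary location
    ExecStart=/opt/somewebapp/server
    # Run under a transient, unprivileged UID allocated at start
    DynamicUser=yes
    # Block setuid-style privilege escalation
    NoNewPrivileges=yes
    # Mount /usr, /etc and /boot read-only for this service
    ProtectSystem=strict
    # Hide /home entirely
    ProtectHome=yes
    # Give the service its own private /tmp
    PrivateTmp=yes
    # Allow ordinary IPv4/IPv6 sockets only, no raw sockets
    RestrictAddressFamilies=AF_INET AF_INET6
    # Drop all capabilities
    CapabilityBoundingSet=

    [Install]
    WantedBy=multi-user.target
    ```

    With a setup like this, even a backdoored binary is limited to its own throwaway UID and writable state directories, which is exactly why a compromise becomes "simply an annoyance" to purge rather than a system-wide breach.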

  • Depends. For known projects like curl I won't, because I know it's fine, but if it's a new project I've heard about, I do audit the source, and if I don't know the language it's in, I ask someone who does.

  • Yes. It's important to verify the dependencies and perform audits like automated scans on the source code and on packages from repositories like PyPI and npm, which I do at my day job.

    Also, before mirroring data, I look at the source-code level for anything suspicious, like phoning home, obfuscated code, or other red flags.

    Even at home, working on 'hobby projects', I might not have the advantage of advanced source-code scanning tools, but I'm still suspicious, since I know there is also a lot of sh*t out there.

    Even for home projects I limit the number of packages I use. I tend to only use large (in terms of users), proven (lots of stars and already out for a long time), and well-maintained packages (regular security updates, etc.). Then again, without an advanced code-scanning tool it's impossible to fully scan it all, since you still have dependencies on dependencies on dependencies that might have a vulnerability. Or even things as simple as the OpenSSL Heartbleed bug or repository takeovers by evil maintainers. It's inevitable, but you can take precautions.

    TL;DR: I try my best with the tools I have. I can't do more than that. Simple and small projects in C are easier to audit than, for example, a huge framework or packages with tons of new dependencies, especially in languages like Python, Go, and JavaScript/TypeScript. You have been warned.

    Edit: this also means you will need to update your packages often, not only on your distro but also when using packages from npm, PyPI, Go, or PHP Composer. Just writing your code once and deploying it is not sufficient anymore. The chances that you are using some vulnerable packages are very high, and you will need to update them regularly. I think updating is just as important as auditing.
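    The "dependencies on dependencies on dependencies" problem above is easy to see for yourself. Here's a minimal sketch using only the Python standard library to walk a locally installed package's declared dependency tree; `pip` is just an example starting point, and any installed distribution works:

    ```python
    # Walk a package's declared dependency tree with the standard library,
    # to see how quickly transitive dependencies pile up.
    import re
    from importlib import metadata

    def direct_deps(dist_name):
        """Return the names of a distribution's declared requirements."""
        try:
            reqs = metadata.requires(dist_name) or []
        except metadata.PackageNotFoundError:
            return []
        names = set()
        for req in reqs:
            # Skip requirements that only apply to optional extras
            if "extra ==" in req:
                continue
            # Grab the leading package name from the requirement string
            m = re.match(r"[A-Za-z0-9._-]+", req)
            if m:
                names.add(m.group(0))
        return sorted(names)

    def dep_tree(dist_name, seen=None, depth=0):
        """Print the dependency tree and return every name visited."""
        seen = set() if seen is None else seen
        for dep in direct_deps(dist_name):
            key = dep.lower()
            if key in seen:
                continue
            seen.add(key)
            print("  " * depth + dep)
            dep_tree(dep, seen, depth + 1)
        return seen

    if __name__ == "__main__":
        transitive = dep_tree("pip")
        print(f"{len(transitive)} transitive dependencies declared")
    ```

    Every name this prints is code you're implicitly trusting, and every one of them needs the same regular updates the comment above describes.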

  • If I can read it in around an afternoon, and it's not a big enough project that I can safely assume many other people have already done so, then I will!

    But I don't think it qualifies as "auditing"; for now I only have a bachelor's in CS, and I don't know as much as I'd like about cybersecurity yet.

  • I do not, but I sleep soundly knowing there are people that do, and that FOSS lets them do it. I will read code on occasion, if I'm curious about technical solutions or whatnot, but that hardly qualifies as auditing.

  • I do not audit code line by line, bit by bit. However, I do my due diligence in making sure that the code is from reputable sources: I see what other users report, search for any unresolved issues, and so on. I can code on a very basic level, but I do not possess the skill to audit a particular app's code. Beyond my 'due diligence' I rely on the generosity of others who are more knowledgeable than I am and who can spot problems. I have a lot of respect and admiration for dev teams. They produce software that is useful, fun, engaging, and it just works.

  • No. I've skimmed through maybe two things overall, but that's about it. I use too many apps to be able to audit them all, and I don't have the proper skills to audit code anyway; even if I did, I would still have to re-audit after every update or every few years. It's just not worth the effort.

    You're taking a chance whether you use closed or open source software. At least with open source there's the option to look through things yourself, and with a popular project there's a bigger chance of others looking through it.

  • I vet lesser-known projects, but yeah, I do end up just taking credibility for granted with larger projects. I assume that with those projects, the maintainer team with pull access is doing that vetting before they accept a pull request.

  • I implicitly trust FOSS more than closed source, but only because that trust has been earned through millions of FOSS projects.

    On occasion, I will dive deep into a codebase especially if I have a bug and I think I can fix it.

    You can't do this with closed source or even source available code because there is no guarantee that the code you have is the code that's been compiled.

  • If it looks sketchy I'll look at it and not trust the binaries. I'm not going to catch anything subtle, but if it sets up a reverse shell, I can notice that shit.

  • Nah, not really... most of the time I'm at least doing a light metadata check: who the maintainer and main contributors are, whether any trusted folks have starred the repo, how active development is and the release frequency, searching issues for "vulnerability"/"CVE" to see how contributors communicate on those, and the previous CVE track record.

    With real code audits... I could only ever use a handful of programs, let alone entertain the thought of fully auditing the whole Linux kernel before I trust it 😄

    Focusing on "mission critical" apps feels pretty useless IMHO, because it doesn't really matter which of the thousands of programs on your system executes malicious code, no? Sure, the app you use for handling super-sensitive data might be secure and audited... then you get fucked by some obscure compression library silently loaded by a bunch of your programs.
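    A light metadata check like the one described above can even be mechanized into a crude heuristic. This is purely illustrative: the fields, weights, and thresholds below are all made up, and real numbers would come from a forge API such as GitHub's:

    ```python
    # Sketch of a metadata-based "trust" heuristic. All field names and
    # weights are invented for illustration, not a real scoring standard.
    from dataclasses import dataclass

    @dataclass
    class RepoMetadata:
        contributors: int
        days_since_last_commit: int
        open_cves: int        # unpatched CVEs against the current release
        resolved_cves: int    # CVEs fixed in past releases

    def trust_score(repo: RepoMetadata) -> int:
        """Crude 0-100 score; higher means fewer obvious red flags."""
        score = 50
        score += min(repo.contributors, 20)       # more eyes, capped at 20
        if repo.days_since_last_commit > 365:     # stale project?
            score -= 20
        score -= 15 * repo.open_cves              # unpatched issues hurt most
        score += min(repo.resolved_cves, 10)      # fixing CVEs is a good sign
        return max(0, min(100, score))

    active = RepoMetadata(contributors=40, days_since_last_commit=3,
                          open_cves=0, resolved_cves=6)
    stale = RepoMetadata(contributors=1, days_since_last_commit=900,
                         open_cves=2, resolved_cves=0)
    print(trust_score(active), trust_score(stale))
    ```

    A number like this is no substitute for reading code, of course; it just formalizes the gut check of "active project, responsive maintainers, no unpatched CVEs" that the comment describes.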

  • No, I pretty much only look at the number of contributors (more is better).

  • Depends on the project and how long it has been around.

  • I don't because I don't have the necessary depth of skill.

    But I wouldn't say I "blindly" trust anyone who says they're FOSS. I read reviews, and I do what I can to understand who is behind the project. I try to use software (FOSS or otherwise) in a way that minimizes the impact on my system as a whole if something goes south. While I can't audit code meaningfully, I can set up unique credentials for everything and use good network-management practices and other measures to create firebreaks.
