The point is not that you can audit it yourself, it's that SOMEBODY can audit it and then tell everybody about it. Only a single person needs to find an exploit and tell the community about it for that exploit to get closed.
But eventually somebody will look and if they find something, they can just fork the code and remove anything malicious.
Anyways, open source to me is not about security, but about the public "owning" the code. If code is public, all can benefit from it, and we don't have to redo every single crappy little program until the end of time but can instead just use what is out there.
Especially if we are talking about software paid for by taxes. That stuff has to be out in the open (with exceptions for some high-security stuff - I don't expect them to open source the software used in a damn tank, a rocket or a fighter jet)
You can get a good look at a T-bone by sticking your head up a cow's ass but I'd rather take the butcher's word for it.
There are people that do audit open source shit quite often. That is openly documented. I'll take their fully documented word for it. Proprietary shit does not have that benefit.
I had a discussion with a security guy about this.
For software with a small community, proprietary software is safer. For software with a large community, open source is safer.
Private companies are subject to internal politics, self-serving managers, prioritizing profit over security, etc. Open source projects need enough skilled people focused on the project to ensure security. So smaller companies are more likely to do a better job than big ones, and larger open source projects are likely to do a better job than small ones.
This is why you see highly specialized software being run by really enterprise-y companies. It just works better going private, as much as I hate to say it. With more general software, especially utilities like OpenSSL, it's much easier to build a large community and ensure quality.
You don't need to. If it's open source, it's open to billions of people. It only takes one person finding a problem and reporting it to the world.
There are many more benefits to open source:
a. It future-proofs the program (a lot of old software can't run on current setups without modification). Open source means you can compile a program with more recent tooling and dependencies rather than relying on existing binaries built with ancient tooling or dependencies
b. It removes reliance on the developer for packaging. A developer may only produce binaries for Linux, but I can take the source and compile it for macOS or Windows, or for a completely different architecture like ARM
c. It means I can contribute features to the program if they weren't the developer's priority. I can even fork it if the developer doesn't want to merge them into their branch.
You shouldn't automatically trust open source code just because it's open source. There have been cases where something on GitHub contains actual malicious code, but those are typically not very well known or don't have many eyes on them. But in general, open source code has the potential to be more trustworthy, especially if it's very popular and has a lot of eyes on it.
Here are a few things that apparently need to be stated:
Any code that is distributed can be audited, closed or open source.
It is easier to audit open source code because, well, you have the source code.
Closed source software can still be audited using reverse engineering techniques such as static analysis (reading the disassembly), dynamic analysis (using a debugger to walk through the assembly at runtime), or both. (There's a rough sketch of what static analysis can look like after this list.)
Examples of vulnerabilities published by independent researchers demonstrate two things: people are auditing open source software for security issues, and people are in fact auditing closed source software for security issues.
Vulnerabilities published by independent researchers doesn't demonstrate any of the wild claims many of you think they do.
No software of a reasonable size is 100% secure. Closed or open doesn't matter.
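To make the static-analysis point a bit more concrete, here's a minimal sketch using the capstone disassembler's Python bindings. To be clear, capstone and the hard-coded byte string are just my own illustration, not something from the comments above; it decodes a made-up x86-64 function prologue the same way you'd decode bytes pulled out of a closed source binary.

```python
# pip install capstone
from capstone import Cs, CS_ARCH_X86, CS_MODE_64

# Hypothetical bytes lifted from a binary: push rbp / mov rbp, rsp / sub rsp, 0x10 / ret
code = b"\x55\x48\x89\xe5\x48\x83\xec\x10\xc3"

# Disassemble as 64-bit x86, pretending the code was loaded at 0x401000
md = Cs(CS_ARCH_X86, CS_MODE_64)
for insn in md.disasm(code, 0x401000):
    print(f"0x{insn.address:x}:\t{insn.mnemonic}\t{insn.op_str}")
```

A real audit would obviously work on a whole binary with something like Ghidra, plus a debugger for the dynamic side, but the principle is the same: you don't need the source to read what a program actually does, it's just a lot more work.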
Open source software is safe because so few people use it it's not worth a hacker's time to break into it (joking, but of course that doesn't apply to server software)
I really like the idea of open source software and use it as much as possible.
But another "problem" is that you don't know if the compiled program you use is actually based on the open source code, or if the developer merged it with some shady code no one knows about. Sure, you can compile it yourself. But who does that 😉?
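For what it's worth, this is exactly the gap that reproducible builds try to close. A rough sketch of the check, assuming you've both built the binary from source yourself and downloaded the official release (the file names are made up for illustration):

```python
import hashlib

def sha256_of(path: str) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical files: the binary you compiled vs. the one the developer ships.
local = sha256_of("myapp-built-from-source")
official = sha256_of("myapp-official-download")
print("match" if local == official else "MISMATCH: the shipped binary differs from the source build")
```

The catch is that the hashes only match if the project builds deterministically (same toolchain, same flags, no embedded timestamps), which is why distros put real effort into reproducible-builds work instead of leaving it to individual users.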
We trust open source apps because nobody would add malicious code to their app and then release the source code to the public. It doesn't matter if someone actually looks into it or not; having the guts to publish the source code at all earns the developer a lot of trust. If the developer were shady, they would rather hide the source code, or try to, and make it harder for people to find out.
I don't really think auditing is a compelling argument for FOSS. You can hire accredited companies to audit and statically analyse closed source code, and one could argue that marketable software legally has to meet different (and stricter) criteria than FOSS does, due to licensing (MIT, GPL, and BSD are AS-IS licenses).
The most compelling argument for FOSS (for me) is that innovation is done in the open. When innovation is done in the open, more people can be compelled to learn to code, and redundant projects can be minimised (i.e. just contribute to an existing implementation rather than inventing a new one). It simply is the most efficient way to author software.
I'm probably wearing rose-tinted glasses, but the garage and bedroom coders of the past, who developed on completely open systems, moved the whole industry forward at a completely different pace than today.
Did you fabricate that CPU? Did you write that compiler? You gotta trust someone at some point. You can either trust someone because you give them money and it's theoretically not in their interest to screw you (lol) or because they make an effort to be transparent and others (maybe you, maybe not) can verify their claims about what the software is.
I would say the best thing about open source is that if the devs don't have time to look at your request, you can make a PR, and if they won't approve it in time, you can fork it with the fix; that's what lemmy.world did, for example. I have also needed to do just that for a few packages. Also, if the docs are too simplified, you can just check out the code yourself. It has helped many times.
As a packager, I totally relate to this: we generally don't have the resources to follow the upstream development of the projects we rely on, let alone audit all the changes they make between releases.
Open source software still has security advantages — we can communicate directly with the maintainers, backport security fixes and immediately release them to users, fix bugs that affect the distribution, etc. — but I agree that it's not a silver bullet.
IDK why, but this had me imagining someone adding malicious code to a project, but then also being highly proactive with commenting his additions for future developers.
"Here we steal the user's identity and sell it on the black market for a tidy sum. Using these arguments..."
Even audited source code is not safe. Supply-chain attacks are possible. A lot of times, there's nothing guaranteeing the audited code is the code that's actually running.
Heartbleed is the only counterexample anyone needs to know that open source isn't perfect. Intelligence agencies were likely sucking up encrypted traffic because nobody was paying attention to the most commonly used TLS library in the world.
Ha! It's not just whether you know how but whether you actually do it.
I remember one from a few years back, a fairly large project (I don't remember the name though) with a very active community, but no one LOOKED. That's part of the problem.
I think that new 1 billion token AI paper that just came out is going to be auditing all code for us instantly before downloading it. It's going to revolutionize security in open source. Probably a business opportunity there.
Free software has only promised its users the Four Freedoms, which are the freedoms to use, share, modify, and share modified copies of the software. That is not an inherent guarantee that it is more secure.
Even if you yourself don't know how to work with code, you can always enlist the community or a trusted friend to exercise freedoms on your behalf. This is like saying right to repair is meaningless because you don't know how to repair your own stuff.
Very true. There was an issue in one of the Linux communities a while back where someone got away with submitting malicious code. It was eventually discovered and corrected, but it does go to show that bad actors can do some serious damage to open source projects.
I don't use the term "open source". I say free software, because giving someone else control over your computing is unjust. The proprietor of the program has absolute control over how the program works, and you cannot change it or use alternative versions of it.