Robin Seggelmann, who introduced the flaw, was a German graduate student at the time he wrote the code. His current employer, Deutsche Telekom, published a statement that tries to fend off all conspiracy theories. Seggelmann is quoted in the statement, and what he says sounds like a joke to me. He claims that it is an advantage of open source that everyone can check the code of security-relevant software. Yet, in the very next sentence, he admits that the OpenSSL project lacks support from the community, even though millions of people rely on it.
OpenSSL is part of the most popular Linux distributions; Android is also affected by the flaw. Considering that the majority of websites run on Linux machines, we can assume that most of the encrypted Internet traffic was vulnerable. Thus, I find it amazing that the code contribution of a student was just “verified” by one person (Stephen Henson).
On the other hand, if you know that OpenSSL consists of 420,000 lines of code, you can imagine that quite a few developers would be needed to ensure that the code is secure. Of course, it is much more fun to write your own programs than it is to search for flaws in the code of others. You really must have a lot of faith in the open-source ideology if you are willing to dig through the code of others without pay.
However, there are other groups of coders who are much more motivated to find security flaws in open-source software. And those guys follow other ideologies. Some of them believe they are doing it “for their country,” and others only believe in their bank accounts. Decide for yourself which group is more dangerous.
But finding security flaws is tiresome even if you are highly motivated. It is much easier to write the “bug” yourself or, if you don’t have the skills, you pay someone—say, a student who needs to finance his studies—to do it for you. Note that I am not accusing Seggelmann of anything. However, I do think that serious investigations of this case are needed that go far beyond just looking at the code.
If a backdoor is detected, developers can easily claim that the flaw was inadvertently submitted. Proving the opposite is very hard and requires extensive investigations into the social environment of the developer. The problem is that these investigations only happen (if at all) when such a case becomes publicly known.
I think closed-source software is different in that software businesses have a lot to lose if security flaws are detected in their software; if it turns out that a backdoor was deliberately inserted in the code, it could mean the end of the company. Thus, software vendors have a strong interest in double-checking the background of their developers and paying them well to make them less susceptible to bribery. How many companies would accept code from a student they only know through email or online forums? Even more important, only a limited number of developers have access to the source code. This simply reduces the risk of the bad guys gaining access to security-relevant information. Yes, security by obscurity works extremely well.
I have outlined in my post about the TrueCrypt case why I believe that many open-source projects are inherently insecure. Many people think that, just because the source code is publicly available, it is unlikely that someone succeeded in infiltrating the code with backdoors. However, the truth is—and the Heartbleed case demonstrates this excellently—that it is extremely easy to submit code that contains backdoors. The fact that a certain software program is widely used is no protection at all.
The main problem of open source is this: the bad guys skimming through open-source code to find vulnerabilities outnumber the good guys who try to make the code secure. The motivation and, in many cases, the resources of the bad guys exceed those of the good guys, who often work on an open-source project only in their spare time.
Yes, many companies contribute to open-source projects. And one would think that IT behemoths like Google or Amazon, which benefit enormously from open source, would have enough of an interest to ensure that at least the software on which the entire Internet infrastructure depends is maintained properly. Heartbleed demonstrated that this is not the case.
It is not that I don't like open source. The lines you are reading now were created with great open-source software, and I often have a lot of fun adapting the code to my needs. Thus, open source has many benefits. However, knowing that myriad hackers are trying to find security holes in the WordPress code does not make me sleep better. If Google and company invested more in open source and provided projects such as OpenSSL with enough manpower, things would look different.
To answer one of the questions in the title: no, open source as such is not inherently insecure. It depends very much on the project and the people behind it. And, of course, closed source is not safe from backdoors either. At the end of the day, what matters is whom you trust. If you trust the developers behind a certain open-source project, fine. If you don't trust a certain company, that's fine, too. However, you must not put your trust in ideologies. Only trust or mistrust people.
What does Heartbleed mean for you? After you have updated all your systems running OpenSSL, you should change all active passwords that you have sent over HTTPS since January 2012. Next, tell all your users to do the same. And then wait for the helpdesk calls to roll in from users who have forgotten their new passwords. 🙂