
A DECADE AGO someone realised that the locks keeping swathes of the internet secure were not working. OpenSSL, a tool used to encrypt anything from social-media passwords to e-commerce purchases, had a fatal flaw that made the information it was supposed to protect visible to potential hackers. The discovery was unsurprising to anyone who knew about the team behind OpenSSL. The software, used by almost 20% of websites—including those of tech companies making billions of dollars in annual profits—was largely run by two men named Steve, who worked on it in their spare time. Comments in the code contained admissions of potential weaknesses, such as “EEK! Experimental code starts.”
After the flaw, which came to be known as “Heartbleed”, was discovered, tech companies pledged millions of dollars to expand OpenSSL’s team. The hobbyists would become paid staff, better able to secure the web. But last month another hole in the internet’s infrastructure was discovered. A volunteer who for two years had helped run XZ Utils, a piece of software used to compress and decompress data on Linux, an operating system that runs key parts of the internet’s infrastructure, had smuggled a backdoor into the code. It would have let attackers slip past security checks and run commands of their choosing on affected machines. Once again a volunteer-run project had been breached—this time, deliberately. In 2021 Log4j, a tool that records computer errors, suffered a similarly serious (though accidental) vulnerability. Given the frequency with which such flaws occur, why is so much critical software maintained by hobbyists?
In part it is a quirk of history, says Mar Hicks, a historian of technology at the University of Virginia. The internet has been decentralised since its founding: businesses share control with academics and hobbyists. That has always involved an uneasy truce between state and corporate interests, which bankrolled the projects that got the internet started, and volunteer enthusiasts, who maintain much of the technology, adds Ciaran Martin, former chief executive of Britain’s National Cyber Security Centre, the defensive arm of GCHQ, the country’s signals-intelligence agency. Tinkerers and hobbyists form a large part of the open-source movement, a community of internet users who develop free-to-use software and make the underlying code publicly available. Such software is commonly deployed across large parts of internet infrastructure because of its low cost.
Although one person can maintain a small-scale software project, the pressure increases when the technology becomes widely adopted. Life—including paid work—can get in the way. And hobbyists can simply get bored. Or worse: Lasse Collin, the developer who led the maintenance of XZ Utils, warned two years before the recent breach that its upkeep was harming his mental health.
This results in a danger to all internet users. Synopsys, a cyber-security firm, analysed software across 17 industries and found that three-quarters of the code it reviewed contained weak spots that had either been shown to make it vulnerable or had already been exploited by hackers. That is worrying, but hard to rectify. The infrastructure that underpins our digital world is too deeply embedded to tear up and rebuild, says Mr Martin; nor, he reckons, is there a commercial incentive for any one company to do so. Omkhar Arasaratnam, of the Open Source Security Foundation (OpenSSF), a successor to the project set up with big-tech funding after Heartbleed, says the tech industry wants to build ever more skyscrapers, but leaves the plumbing and sewerage to volunteers.
If the tech companies won’t build better infrastructure themselves, there is still a way to improve internet security: pay organisations like OpenSSF enough to employ others to do it. But for that to work, tech companies would need to band together—compelled by governments, if necessary—to provide consistent funding, rather than splurging when something goes wrong and forgetting about the problem when attention subsides.■