Hack, Report, Get Sued: The Accidental Heroes of Cybersecurity
Imagine, if you will, that you discover your neighbour's front door is held shut by nothing more than the optimistic application of a bit of Blu-Tack. Being a decent sort, you knock on their door, explain the situation, and suggest they might want to invest in an actual lock. In a rational world, your neighbour thanks you, fixes the door, and perhaps offers you a cup of tea. In the world of enterprise software and government systems circa 1990 to the present day, your neighbour calls the police, has you arrested for trespassing, and issues a press release about how the real threat to their home security is nosy people like you. Welcome to the history of responsible disclosure.
The Early Days: When 'Hacker' Just Meant 'Clever'
Before the tabloids decided that every person who owned a modem and a hoodie was planning to bring down Western civilisation, the computing community largely operated on something resembling common sense. In the 1960s and 70s, the MIT hacker culture was built on a foundational principle: systems should be open, knowledge should be shared, and if something was broken, you said so. Loudly. Publicly. Often with a certain amount of gleeful showmanship. There were no coordinated disclosure policies because there was barely a software industry to coordinate with.
This cheerful transparency couldn't last, of course. Money got involved. The 1980s brought the commercialisation of software, and with it, the dawning corporate realisation that their products were riddled with security holes so gaping that one could, and often did, drive a metaphorical lorry through them. The instinctive response to this discovery was not, as one might hope, to fix the holes. It was to sue anyone who mentioned them.
The Full Disclosure Wars: Security Through Litigation
By the late 1980s and early 90s, a philosophy had emerged that would come to be known as full disclosure. The logic was admirably simple: if a vulnerability exists, the public deserves to know about it, because the public is the one running the software. If vendors won't fix their bugs, public pressure, and public embarrassment, might convince them to reconsider. The Bugtraq mailing list, launched in 1993, became the cathedral of this movement, a place where researchers posted vulnerabilities in excruciating detail for anyone with an internet connection to read.
Vendors, predictably, were livid. Not because their software was insecure, that they seemed broadly fine with, but because someone was telling people about it. The argument that dominated corporate legal departments throughout the 90s was essentially this: the researcher who found the flaw, documented it, and published it was more culpable than the vendor who shipped dangerously broken code in the first place. This logic would not survive contact with a philosophy degree, but it held up admirably well in cease-and-desist letters.
The Computer Fraud and Abuse Act of 1986 in the United States became the weapon of choice. Vague enough to be deployed against almost anyone who had ever typed a URL they shouldn't have, it turned security research into a legally perilous hobby. The UK's Computer Misuse Act of 1990 followed a similar philosophy. Researchers found themselves in the peculiar position of having to choose between telling nobody about critical flaws, in which case criminals would find them anyway, or telling the vendor and hoping not to be prosecuted for the effort.
The Invention of 'Responsible Disclosure': A Compromise Nobody Was Entirely Happy With
In 2001, Scott Culp of Microsoft, and if you appreciate the irony of Microsoft of all companies moralising about security, do take a moment to savour it, published an essay titled 'It's Time to End Information Anarchy', arguing that the full disclosure crowd were being irresponsible. His position was that researchers should give vendors time to patch before going public. The community, not entirely enamoured with taking ethical guidance from a company whose operating system was functionally a petri dish for malware, pushed back. Hard.
What emerged from this argument was a model called responsible disclosure, later rebranded to the somewhat more palatable coordinated vulnerability disclosure. The concept was this: a researcher finds a flaw, privately notifies the vendor, and gives them a reasonable period to patch. Ninety days became the industry standard, thanks largely to Google's Project Zero formalising it in 2014. If no patch is forthcoming, the researcher publishes regardless. This was, all things considered, a grown-up compromise. Vendors got time to fix things. Researchers got the eventual right to go public. Users got, in theory, some protection.
In practice, vendors used the 90-day window to do everything except fix the vulnerability. They wrote press releases, consulted lawyers, held internal meetings about the PR implications, and occasionally sent the researcher a sternly worded email asking whether they'd considered keeping quiet indefinitely. Some did fix things promptly and professionally. They deserve their moment of acknowledgement, and it will be brief, because they are not the interesting part of the story.
Bug Bounties: Paying People to Find the Holes You Really Should Have Found Yourself
The bug bounty programme was, in retrospect, an inspired act of institutional aikido. Rather than fighting with researchers, companies decided to pay them. Netscape launched arguably the first formal bug bounty in 1995. Mozilla formalised things further in the early 2000s, and by the 2010s, platforms like HackerOne and Bugcrowd had turned vulnerability discovery into something approaching a gig economy for the technically gifted.
The sums involved became genuinely significant. Google has paid out tens of millions of dollars through its Vulnerability Reward Programme. Microsoft, Apple, and most major technology companies followed suit. Governments, bless them, eventually caught up: the US Department of Defence launched a 'Hack the Pentagon' initiative in 2016, which one hopes caused at least one senior official to briefly lie awake at night contemplating the decisions that had led to this point.
Bug bounties did not solve the underlying problem, which is that software is written by humans and humans are, as a species, magnificently inconsistent. What bounties did do was establish a financial incentive for disclosure over exploitation, which is, let's be honest, more reliable than appealing to people's better nature. The dark side of the bounty economy is that it created a legitimate market that exists in uncomfortable proximity to the decidedly illegitimate market of selling vulnerabilities to governments and intelligence agencies. A zero-day that earns a researcher $10,000 from Google might fetch $1,000,000 from a nation-state buyer. The arithmetic of ethics, it turns out, gets complicated when the numbers get large enough.
Seven Hackers Walk Into the Senate...
In May 1998, something genuinely surreal happened in Washington. Seven members of a Boston hacker collective called L0pht Heavy Industries, known by handles including Mudge, Space Rogue, Kingpin, and Weld Pond, climbed into a rented fifteen-passenger van bristling with antennas, drove to Capitol Hill, and testified before the Senate Committee on Governmental Affairs. They were, the committee's staff noted with evident bewilderment, among the first witnesses in the history of federal hearings to be allowed to testify under aliases. The only previous group afforded that privilege had been participants in the Witness Protection Programme. Make of that what you will.
What they told the senators was blunt to the point of cruelty: the internet was catastrophically insecure, government systems were wide open, and any of the seven men sitting before them could take the entire internet down within thirty minutes. Senator Joe Lieberman, apparently overcome by the occasion, compared them to Rachel Carson and suggested they might be modern-day Paul Reveres. Senator Fred Thompson, who chaired the proceedings, nodded gravely and said they would have to do something about it. The Washington Post later described what followed as a tragedy of missed opportunity. The senators did very little, the vulnerabilities largely persisted, and twenty years later four members of L0pht returned to Capitol Hill for an anniversary briefing and delivered essentially the same message. Progress had been made, they acknowledged. The underlying problems had not gone away. Rush Limbaugh, for his part, had described them on radio as 'long-haired nerd computer hackers', which perhaps tells you everything you need to know about America's capacity for nuanced technology policy.
The L0pht hearing was remarkable precisely because it treated researchers as allies rather than suspects, a rarity then, more common now, but still not universal. The group had been meeting with Richard Clarke at the National Security Council before the hearing, vetted and vouched for, which is the intelligence community's way of saying 'we know you could cause chaos, so let's make you feel useful instead.' It was, in its clunky, bipartisan way, a template for how government and the security community might actually talk to each other. That this template took another two decades to even partially catch on is one of the enduring disappointments of the field.
The Rogues' Gallery: When Disclosure Goes Badly
No history of responsible disclosure would be complete without a moment of silence for those who tried to do the right thing and were thoroughly punished for it. Aaron Swartz, whose story is too large and too tragic to be summarised fairly here, faced potential decades in prison for downloading academic articles. Andrew Auernheimer, known as Weev, served time in federal prison for accessing AT&T's website through a URL that was, by any reasonable definition, publicly accessible. The conviction was eventually overturned on a technicality, which is the law's way of saying 'we got the wrong answer but we're not going to admit why.'
Closer to home, the Computer Misuse Act 1990 has produced its own catalogue of head-scratching outcomes. Consider the case of Dan Cuthbert, a professional penetration tester who, on New Year's Eve 2004, donated to a tsunami disaster relief charity website and then, as any security professional with a conscience might, ran a brief check to see whether the site handling his payment details was actually secure. It was not a sophisticated probe. It was the sort of thing a competent professional does reflexively, the digital equivalent of checking whether the lock on a door you've just walked through actually works. British Telecom's intrusion detection system logged the activity, the Metropolitan Police Computer Crime Unit came calling, and in October 2005 Cuthbert was convicted under the Act and fined £400.
The case became a minor cause célèbre in security circles, and rightly so. Cuthbert had donated his own money to charity, checked whether the site was safe, found it wasn't, and received a criminal conviction for his trouble. The charity's donation infrastructure had real vulnerabilities. Nobody prosecuted the people responsible for that. The Act, it turned out, was rather more interested in the person who noticed the problem than the people who created it. This logic, or rather, the complete absence of it, became the defining feature of Computer Misuse Act jurisprudence for the next two decades.
The structural problem with the Act is one that parliamentarians have been wringing their hands about since at least 2004, when the All-Party Parliamentary Internet Group first called for reform, and have continued wringing ever since, with the vim and urgency of people who have absolutely no intention of doing anything about it. The Act makes no meaningful distinction between a criminal exploiting a vulnerability and a researcher identifying one. Both are, technically, committing the same offence. Around 80 per cent of UK cybersecurity professionals report worrying that their legitimate research might land them in court. The CyberUp Campaign, which has spent years lobbying for a statutory defence for good-faith researchers, put it succinctly: the Act is protecting the vulnerabilities, not the public. The government, after years of consultation and review, has largely agreed in principle and done very little in practice. This is, by the standards of Westminster technology policy, practically a triumph.
Where We Are Now: Progress, Of a Sort
The current state of responsible disclosure is, charitably, mature. Less charitably, it is a system that functions adequately when all parties are acting in good faith, and collapses entertainingly when they are not. The ISO/IEC 29147 and 30111 standards now exist to formalise vulnerability disclosure and handling. The EU's Network and Information Security directives have pushed member states toward taking security reporting somewhat more seriously. The UK government's own NCSC published a vulnerability disclosure toolkit that reads, by government document standards, almost like something a human being might voluntarily read.
Organisations like CISA in the US have advocated for legal protections for good-faith security researchers, acknowledging that the alternative, a world where vulnerabilities are discovered by criminals rather than disclosed by researchers, is demonstrably worse for everyone. Some companies have adopted security.txt files, a simple standard for telling researchers how to report vulnerabilities, which represents the industry's way of putting up a politely worded sign that says 'yes, we know this might be broken, please do tell us before you tell everyone else.'
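For those who prefer their politeness machine-readable, a minimal security.txt, as described in RFC 9116 and served from /.well-known/security.txt, looks something like the sketch below. The addresses are illustrative placeholders rather than anyone's actual inbox.

    Contact: mailto:security@example.com
    Expires: 2027-06-30T23:59:00.000Z
    Policy: https://example.com/vulnerability-disclosure-policy
    Acknowledgments: https://example.com/security/hall-of-fame
    Preferred-Languages: en

Only the Contact and Expires fields are required by the standard; everything else is optional, which is perhaps its most realistic concession to corporate enthusiasm.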
The Inconvenient Conclusion
The history of responsible disclosure is ultimately a story about the collision between two groups of people who want, at least nominally, the same thing, secure systems, and who have spent the better part of four decades being spectacularly unhelpful to one another in pursuit of that goal. Vendors spent years treating security researchers as adversaries, which had the predictable effect of making some of them adversarial. Researchers spent years treating vendors as the enemy, which made productive collaboration somewhat challenging.
What the field has slowly, painfully, and occasionally through the medium of high-profile catastrophes learned is that the actual enemy is the vulnerability itself. Not the person who found it. Not the person who reported it. Not even, entirely, the person who shipped the flawed software in the first place, because software is hard and humans make mistakes and sometimes you ship code with a SQL injection flaw in it because you were tired and the deadline was Tuesday.
The security community is, by historical standards, functioning reasonably well. Disclosure timelines are increasingly respected. Legal protections are slowly improving. Bug bounties, for all their imperfections, have created a functioning market for doing the right thing. The occasional researcher still gets prosecuted for finding a flaw that an organisation would rather have pretended didn't exist, but this happens somewhat less frequently than it once did. Whether this constitutes progress or merely the bar being on the floor is, perhaps, a matter of perspective.
The neighbour with the Blu-Tack on their door, in other words, now occasionally says thank you. They still sometimes call the police. But progress, in cybersecurity as in life, is rarely linear, frequently undignified, and almost always slower than it should be. The important thing is that we keep knocking.
Quality cynicism takes research, time, and an alarming number of browser tabs. If you found this useful, a donation would be appreciated, and considerably less dramatic than a Senate hearing. Thank you!
References
Walshe, T. and Simpson, A. (2023). 'Towards a Greater Understanding of Coordinated Vulnerability Disclosure Policy Documents', Digital Threats: Research and Practice, 4(2), pp. 1-36. Available at: https://dl.acm.org/doi/10.1145/3586180 (Accessed: 17 March 2026).
Computer Misuse Act 1990 (c.18). London: HMSO.
Householder, A.D., Wassermann, G., Manion, A. and King, C. (2017). The CERT® Guide to Coordinated Vulnerability Disclosure. CMU/SEI-2017-SR-022. Pittsburgh: Carnegie Mellon University Software Engineering Institute. Available at: https://resources.sei.cmu.edu/asset_files/specialreport/2017_003_001_503340.pdf (Accessed: 17 March 2026).
Oates, J. (2005). 'Tsunami hacker convicted', The Register, 6 October. Available at: https://www.theregister.com/2005/10/06/tsunami_hacker_convicted/ (Accessed: 17 March 2026).
National Security Archive (2019). 'Cybersecurity: When Hackers Went to the Hill — Revisiting the L0pht Hearings of 1998', George Washington University, 9 January. Available at: https://nsarchive.gwu.edu/briefing-book/cyber-vault/2019-01-09/cybersecurity-when-hackers-went-hill-revisiting-l0pht-hearings-1998 (Accessed: 17 March 2026).
Pinsent Masons (2005). '"Regrettable" conviction under Computer Misuse Act', 7 October. Available at: https://www.pinsentmasons.com/out-law/news/regrettable-conviction-under-computer-misuse-act (Accessed: 17 March 2026).
US Senate Committee on Governmental Affairs (1998). Weak Computer Security in Government: Is the Public at Risk? Hearing, 19 May 1998, 342 Dirksen Senate Office Building, Washington DC. Available at: https://www.hsgac.senate.gov/hearings/weak-computer-security-in-government-is-the-public-at-risk-/ (Accessed: 14 March 2026).
Wikipedia (2025). 'L0pht', Wikimedia Foundation. Available at: https://en.wikipedia.org/wiki/L0pht (Accessed: 17 March 2026).