Calls for lawful access mechanisms, the need for much better IT security, and the Trustless Computing Certification Body initiative
(Written with contributions from Nick Kelly, Jovan Golic, and other members of the Advisory Board of the Trustless Computing Association. For comments and shares, please refer to this copy on LinkedIn.)
_______________________________________________________________________________
Lawful access and personal privacy may not be an “either-or” choice or a zero-sum game, but a “both or neither” challenge, solvable only by applying the same ultra-resilient, trustless, democratic safeguards to both.
Last October 11th, the US, UK, Australia, New Zealand, Japan, and India published a joint International Statement: End-To-End Encryption and Public Safety, issuing a call to IT providers, NGOs, and stakeholders to find a solution to the problem of encryption technologies hindering law enforcement and security agencies from detecting and preventing grave crimes by individuals and nation-states. On Nov 6th, a Draft Resolution of the Council of the European Union was revealed, which was very similar to that Statement and literally identical to it in key provisions.
These calls differ in key terms from similar calls over the last decades - starting with US President-elect Joe Biden in 1991 - in that they call on civil society, industry, and academia to find a solution that enables lawful access that is not only legal, but also “necessary and proportionate, and is subject to strong safeguards and oversight."
This novelty raises hopes that new mechanisms, standards, certifications, and bodies could be created - reserved for IT systems for human communications conceived for the highest levels of security - that apply these new, stringent requirements to mechanisms that can both ensure legitimate lawful access and radically raise the level of security and privacy of those systems.
Lawful access until the 90s
Since World War II, the US government has possessed the capability to collect nearly all communications transmitted over the air or over national and international data cables. They have recorded this information via listening posts dotted around the world, by compromising network nodes, by sniffing cables and airwaves, and via more targeted, ad-hoc means as needed.
They have been able to decipher nearly all such collected data thanks to their well-resourced and advanced capabilities in breaking encryption protocols and in corrupting the software and hardware implementations of those protocols. Additionally, for decades they convinced some 130 other nations to use compromised encryption devices made by the Swiss firm Crypto AG - with the exception of some diplomatic communications within the Soviet Bloc, which often used encryption robust enough to resist decryption.
Meanwhile, nearly all private citizens in the US or the West have been unable to protect their phone, fax, paper post, or in-person activities against illegal or unconstitutional governmental spying. It is a historical fact that US security agencies have extensively abused such power against activists, cultural figures, and journalists for decades.
Yet the high marginal cost, and risk of discovery, of surveilling each individual limited the US government's ability to surveil many individuals for long periods. If you were not on a target list of a few thousand people, or if you adopted very stringent safeguards, you could expect to escape illegal government surveillance.
An increase in the use of telecommunications and digital means in the 70s and 80s reduced the marginal cost of such surveillance, increasing law enforcement agencies' ability to enforce the law and fight Soviet communism, but unfortunately also greatly expanding the scale at which they could compromise the civil rights of innocent US and foreign citizens.
Unbreakable encryption for the masses
Everything changed in the early 90s.
New advanced encryption protocols invented in the late 80s eventually made it practical for anyone with access to a PC, an Internet connection, and a resilient, battle-tested, open-source software implementation of those protocols – such as Phil Zimmermann's PGP – to protect their communications even from government abuse.
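To make concrete what such protocols provide, the sketch below illustrates the hybrid public-key scheme that PGP-style tools implement: a fresh symmetric session key encrypts the message, and only the recipient's private key can unwrap that session key, so an interceptor who captures the traffic obtains only ciphertext. This is a minimal, hypothetical example in Python using the third-party `cryptography` library, with modern primitives (RSA-OAEP and AES-GCM) standing in for the ciphers PGP actually used in the 90s.

```python
import os
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# The recipient generates a key pair; only they hold the private key.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=3072)
public_key = private_key.public_key()

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Sender: encrypt the message with a fresh session key,
# then wrap the session key with the recipient's public key.
message = b"for the recipient's eyes only"
session_key = AESGCM.generate_key(bit_length=256)
nonce = os.urandom(12)
ciphertext = AESGCM(session_key).encrypt(nonce, message, None)
wrapped_key = public_key.encrypt(session_key, oaep)

# An interceptor sees only (wrapped_key, nonce, ciphertext), which is
# useless without the private key, assuming the end-points are not compromised.

# Recipient: unwrap the session key and decrypt the message.
recovered_key = private_key.decrypt(wrapped_key, oaep)
assert AESGCM(recovered_key).decrypt(nonce, ciphertext, None) == message
```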
Suddenly the status quo was revolutionized in two ways.
On one side, it created a potential opportunity for citizens to protect themselves against all criminals, adversaries and even government overreach, by using new open-source encryption tools maintained and tested by an international community of privacy activists, beyond the reach and jurisdiction of US export control laws.
On the other, a colossal risk arose for the US and other western public security agencies: they could lose the ability to intercept the communications of enemy states and criminal suspects, even when duly authorized to do so, if those actors used a proper software and hardware implementation of those protocols.
It was then, in 1991, that Joe Biden, as Chairman of the Senate Judiciary Committee, introduced a bill called the Comprehensive Counter-Terrorism Act that stated: "It is the sense of Congress that providers of electronic communications services and manufacturers of electronic communications service equipment shall ensure that communications systems permit the government to obtain the plain text contents of voice, data, and other communications when appropriately authorized by law."
It was Biden's bill, and the looming threat that this newly found strong encryption would be outlawed, that Phil Zimmermann said "led me to publish PGP electronically for free that year." This open technological solution suddenly enabled anyone in the world to communicate securely, shielded even from US remote interception.
In that bill, however, Joe Biden did not specify how those providers should "ensure that communications systems permit the government to obtain the plain text contents of voice, data, and other communications when appropriately authorized by law."
Yet the overly generic nature of those provisions, and the prevailing draft implementations of the bill, led a group of digital rights experts and activists to foresee that ill-conceived implementations of such a law would curtail the freedoms they had just gained for themselves and all citizens through the new encryption protocols and software, while doing nothing to stop criminals.
The fears of those digital rights experts and activists proved to be exactly right.
As the use of algorithmically unbreakable encryption kept spreading, in 1993 the Clinton Administration developed a hardware component, the Clipper Chip, whose insertion into US electronic communication devices and systems was promoted and encouraged (initially on a voluntary basis) to ensure remote access by the government when authorized by due legal process.
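The Clipper design relied on key escrow: each chip's 80-bit unit key was split into two components held by two separate government escrow agents, and both components had to be released under legal authorization before a device's traffic could be decrypted. The sketch below illustrates only that splitting step, in simplified, hypothetical Python; the real scheme also involved a Law Enforcement Access Field transmitted with each call.

```python
import os

def split_unit_key(unit_key: bytes) -> tuple[bytes, bytes]:
    """Split a device's unit key into two escrow shares.
    Either share alone reveals nothing about the key."""
    share_a = os.urandom(len(unit_key))
    share_b = bytes(x ^ y for x, y in zip(share_a, unit_key))
    return share_a, share_b

def recombine(share_a: bytes, share_b: bytes) -> bytes:
    """Both escrow agents must hand over their shares (e.g. under a
    court order) before the unit key can be reconstructed."""
    return bytes(x ^ y for x, y in zip(share_a, share_b))

unit_key = os.urandom(10)                    # Clipper's Skipjack cipher used 80-bit keys
share_a, share_b = split_unit_key(unit_key)  # held by two separate escrow agents
assert recombine(share_a, share_b) == unit_key
```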
Yet the Clipper Chip was conceptually ill-conceived (why would criminals use it rather than foreign alternatives?), was shown to be hackable by independent researchers, and no technical, legal, or procedural safeguards and certifications were put in place or even proposed to sufficiently mitigate the risk of large-scale abuse by governments and criminals. Many experts also rightfully feared that, even if such safeguards were put in place, a future, more authoritarian government might repeal or weaken them - for example, in a time of public security crisis - with enormous risk of a permanent Orwellian state being implemented.
So, thanks to the great work of a long list of US and UK computer scientists and encryption experts, the belief that government-mandated insertion of a technical component into all IT devices could resolve the real dichotomy between personal privacy and public safety died then and there. Fortunately, no concrete proposals for a similar solution have seen the light of day since.
From the Clipper Chip until today
What we are left with is more of the same limbo, wherein neither the US and other western governments nor digital privacy activists have been able to find a way to reconcile the need for digital privacy with that for legitimate digital investigations. Each side has worked to promote one of two very legitimate, constitutional, and democratic mandates: the promotion of civil freedoms on one side, and of law enforcement's capability to prevent and prosecute crimes and counter enemies on the other.
Both sides have been at a loss, and the resulting polarization has done nothing but push any solution catering to both needs further away.
On one side, digital privacy activists developed ever more secure protocols and better-inspected open-source software implementations at the application level. They have been rebuffing regular government calls for lawful access mechanisms in the so-called Crypto Wars. Many of them believe they have been winning those Crypto Wars, since they have prevented an official mandate of backdoors.
Yet, paradoxically, while they were kept busy on that battle front, governments corrupted the entire IT technological stack and its standards, resulting in de-facto surreptitious backdoors in every system, in the form of plausibly deniable bug-doors, with very little accountability and largely accessible to cybercriminals. Whatever price users are willing to pay in cost and inconvenience, they buy only insufficient and marginal privacy, easily defeated by determined governments or criminals, often at scale and at minimal cost in money or discoverability. As a result, digital privacy and civil rights are today orders of magnitude worse off than in the 1990s.
On the other, the US government and its western allies (just like other powerful nations) - in order to maintain access - did the only thing they could. They started breaking all relevant IT technologies and standards in the critical stacks before and after the encryption process - down to the OS, the CPU, and the foundry - to ensure that either the encryption would be compromised, or that undetectable malware could be installed on the device, able to access or modify the information as displayed, read it in clear text in the hardware and software stacks below the application level, or capture it via keyloggers.
The US government kept expanding authorizations for such programs to undermine the IT infrastructure beyond the encryption protocol layer, while its western allies occasionally condemned, but turned a blind eye to, the execution of those programs in their territories, in return for a limited ability to be assisted by, or to directly search through, that global trove of information to protect against internal and external threats.
Meanwhile, every year or two since that initial proposal by Joe Biden in 1991, top government executives from the EU and the US have jointly made generic calls for a solution to the same “going dark” problem, which have never been followed by approved legislation, except recently in Australia, amid much criticism.
Budgets, and more-or-less constitutional legal authorizations, for such breaking of all IT took on a whole new scale after the 9/11 attacks, when tens of billions of dollars were allocated to such activities, as we learned through the Snowden revelations.
Why the status quo is a big problem also for law enforcement and security agencies
The US and other cyber-powerful governments have maintained the ability to remotely hack nearly any device, at low cost and at scale. Yet this has resulted in an IT infrastructure for human communications so complex and vulnerable that the evidence acquired through targeted surveillance is often unreliable and not accepted by the highest courts of western nations, while the security and privacy of our elected officials, businesses, politicians, candidates, and journalists - and the resiliency of our democratic systems - are greatly reduced.
In this new Wild West, intelligence agencies have no choice but to increase their investments and shrewdness in a race to outcompete other nations and well-resourced criminal syndicates as the greatest stockpilers of critical vulnerabilities and exploits in ALL systems - aggressively competing to be the first buyers, inserters, and hoarders of fresh, new, and "plausibly deniable" critical vulnerabilities.
As highlighted by Rami Efrati, former Head of the Cyber Division of the Israeli Prime Minister's Office, during a recent university lecture (from min 9.35), intelligence agencies' legitimate hacking capability is often inconsistent, as a consequence of all IT end-points being broken at multiple levels. Legitimate state hacking of those systems also produces less reliable evidence and intelligence, due to the high probability of concurrent, undetected hacking by multiple other entities, and to the fact that such systems are often designed to make forensic analysis harder rather than easier - so much so that evidence so acquired is structurally contested by the highest civilian courts of Germany, France, and Italy.
Often, US law enforcement or intelligence agencies need to resort to parallel construction to acquire evidence that will stand up in court, at a variable cost in terms of compliance with regulations.
The problem is even more significant because it is becoming ever more apparent that we cannot choose between freedom and public safety. In the process of maximizing their mission, security agencies have not only eliminated the privacy of citizens and activists, but have broken by design even the technologies, standards, and certifications that their own governments rely on for the systems most critical to maintaining a genuinely democratic regime - and therefore, in turn, public safety - favoring the fraudulent, undemocratic emergence of autocratic regimes in western nations.
Examples include the NSA's continued compromising of the US NIST standardization body; the hacking of the US Office of Personnel Management, of western elected officials and heads of state such as Angela Merkel, and of the US Democratic National Committee; the terrible state of electronic electoral voting systems, including in the 2016 and 2020 US presidential elections; and the utter vulnerability of mainstream social media networks, like Facebook, to large-scale hacking and illegal manipulation. And the list goes on.
Can such recent calls hint at a solution to the conundrum?
The above-mentioned International Statement of October 11th, as discussed, differs from previous calls in a very significant way, which opens up a major opportunity for genuine, positive progress on these issues.
The call states that all providers of IT systems should "Enable law enforcement access to content in a readable and usable format where an authorization is lawfully issued, is necessary and proportionate, and is subject to strong safeguards and oversight."
So, in addition to requiring the legality of those government access requests, they add the very notable further requirements of necessity, proportionality, and strong safeguards and oversight. Whatever the authors' intentions may be, the literal meaning of the text explicitly recognizes that current (and future) national legislations do not currently satisfy those additional requirements - but that they should.
The statement does not clarify what kind of entity - governmental or non-governmental, national or international - should be responsible for assessing that a request is “necessary and proportionate”, nor which entity should be responsible for assessing that adequate “safeguards and oversight”, technical and procedural, are in place.
Given the historical record of governmental overreach and of their inability to ensure adequate technical safeguards and oversight, it is unlikely that a national government could be trusted by citizens to define, govern, and implement such "safeguards and oversight".
Also, such “safeguards and oversight”, even if properly or even perfectly implemented, would be of little or no use unless the target IT device or system itself is also subject to the same, or similarly stringent, safeguards and oversight, to mitigate the risk that the IT system is hackable by determined governments or criminals.
This points to the possibility of setting up a new international, non-profit standards-setting and certification body, whose governance is suitable to ensure that adequate "safeguards and oversight" are in place, not only for the mechanisms by which IT providers vet and provide lawful access, but also for the entire life-cycle and supply chain of the target IT systems themselves.
However, this way of ensuring both meaningful levels of privacy and legitimate lawful access can only succeed if applied to target IT systems conceived for the highest security standards, such as those for the most sensitive human communication domains and use cases. These commonly require radically reduced complexity, transparency of the source designs of all critical hardware and software, extreme security review in relation to complexity, and more. These requirements result in much lower performance and fewer features than common IT for human communications.
Therefore, the same extreme “safeguards and oversight” would need to be applied both to IT systems for the most sensitive human use cases and to the mechanisms used to ensure legitimate lawful access to them.
For high-security IT systems, personal privacy and legitimate lawful access, far from being an “either-or” choice or a zero-sum game, are a “both or neither” challenge, whereby each can be solved only by applying to both the same extreme democratic procedural and technical “safeguards and oversight”.
Trustless Computing Certification Body initiative
Since 2015, our Trustless Computing Association, its partners, and its spin-off startup have been building a uniquely accountable, resilient, and independent Trustless Computing Certification Body ("TCCB") – and an initial compliant open ecosystem, computing base, and 2mm-thin human computing device.
The TCCB uniquely aims to achieve radically unprecedented levels of trustworthiness in confidentiality and integrity for the most sensitive IT communication systems for private use (and later governmental use), while concurrently ensuring legitimate lawful access to prevent criminal abuse.
All TCCB-compliant IT services will mandatorily provide, through a Seevik Room, three essential services: a key and data recovery service for all users, to handle cases of a user's death or loss of passwords; radically increased resilience to insider threats; and a mechanism to honor lawful access requests that are legal, constitutional, and respectful of international human rights.
The Seevik Room will reach such levels of trustworthiness by: (a) using, in critical stacks and end-points, only IT security technologies and cryptographic protocols that are state-of-the-art, battle-tested, open, and extremely security-reviewed in relation to their complexity; (b) overseeing critical processes via randomly selected citizens (and possibly elected officials) acting as citizen juries or citizen witnesses; and (c) being continuously standardized and certified by a new international body whose governance will ensure very high citizen accountability, competency, and resilience against state pressures.
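As an illustration only - the TCCB's and Seevik Room's actual mechanisms are those specified in the Position Paper cited below, not this sketch - one well-known cryptographic building block for this kind of multi-party oversight is threshold secret sharing, in which a recovery or access key is split among several custodians (for example, randomly selected citizen jurors) so that no single custodian, nor any minority of them, can reconstruct it. A minimal Shamir-style sketch in Python, with hypothetical parameters:

```python
import random  # illustration only: a real deployment would use the `secrets` module

PRIME = 2**127 - 1  # a Mersenne prime; the shared secret must be smaller than this

def make_shares(secret: int, threshold: int, n_shares: int):
    """Split `secret` so that any `threshold` shares reconstruct it,
    while fewer shares reveal nothing about it."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]
    evaluate = lambda x: sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, evaluate(x)) for x in range(1, n_shares + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over GF(PRIME)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret

# Hypothetical quorum: 5 randomly selected jurors, any 3 of whom can
# jointly authorize a key recovery or a duly vetted lawful access request.
recovery_key = random.randrange(PRIME)
shares = make_shares(recovery_key, threshold=3, n_shares=5)
assert reconstruct(shares[:3]) == recovery_key             # any 3 shares suffice
assert reconstruct([shares[0], shares[2], shares[4]]) == recovery_key
```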
In a Position Paper for the Trustless Computing Certification Body, published in 2018, we detail the technical and organizational safeguards and oversight mechanisms that the TCCB and the Seevik Room would enact. We recount and respond in detail to several academic papers, written by top US and UK cybersecurity experts over the last 30 years, that highlighted the great challenges of “trusted third party” solutions to lawful access, and we explain why those concerns either do not apply to, or are mitigated by, the TCCB, especially when applied only to ultra-high-assurance IT systems, as the TCCB does.
Conclusions
In conclusion, there is no way to mandate lawful access mechanisms for current IT technologies that would be enforceable against international criminals and would not create unacceptable costs and risks for citizens’ privacy.
The main reasons are that: (1) governments have in the past proven unable to implement adequate and sufficiently robust technical and procedural standards that could constitute the necessary “safeguards and oversight”; and (2) incriminating evidence acquired through such means would not be accepted by the highest courts, because current technologies (and security certifications) are simply too broken, too complex, and too forensic-unfriendly in their designs to sufficiently mitigate the risk of such evidence being tampered with, planted, or hidden.
So maybe, after all, “lawful access and personal privacy are not an either-or choice, but a both-or-neither challenge”, as the slogan of our Free and Safe in Cyberspace conference series goes.
The TCCB and the Seevik Room propose a solution to the question of “how” providers should ensure lawful and legitimate access, in a way that maximizes cyber-investigation capability while guaranteeing privacy and security assurance levels much higher than those available today even to the most resourced and expert users.
On one hand, the TCCB would solve the problem that virtually every IT system available today to ordinary citizens is already backdoored, because security is only as strong as its weakest link: at least one critical technical stack of even the best of today's IT systems is already backdoored (or bug-doored, i.e. compromised via plausibly deniable backdoors in the form of critical vulnerabilities that are inserted, or discovered and left undisclosed) and hackable by many mid-level hacking entities.
Many law-abiding rich, powerful, or politically exposed persons would choose to use TCCB-compliant devices for their most sensitive communications, even though these include a formal mechanism to enable lawful access, because they know they would be much better protected than with non-TCCB-compliant IT systems against all hackers, including governmental overreach.
Even western law enforcement and intelligence agencies will significantly benefit overall from the TCCB: even though they, or sub-groups within them, would lose the ability to hack into such devices at will, they will be able to access evidence when authorized by a due process that is both legal and constitutional, and to do so via a social mechanism that is assured to be available, that provides evidence quickly and, most importantly, that produces evidence reliable enough to stand in a court of law up to the highest courts.