If backdoors are already everywhere, and we can’t simply trust the FBI or Apple with them, why not an offline, citizen-jury-based third party?!

There was a great blog post on Forbes on February 16th by Prof. Villasenor on the Apple vs. FBI public “spat”, or whatever it really was, titled If Apple Can Create A Backdoor To The iPhone, Could Someone Else?. He wrote [bold is mine]:

In short, outsiders trying to bypass the iPhone’s security measures would face some very high obstacles. But hackers can be ingenious and determined. And a group or nation-state unconcerned about the resulting legal implications might try to employ any number of methods—including hacking into Apple’s corporate systems, finding a rogue current or former Apple employee, or plain old reverse engineering—to get information that might make it possible to create a backdoor. The court order and Apple’s response also raise an interesting and important question about whether we should rethink the very definition of backdoor. If a product is designed in a way that would make it possible to create a backdoor, even though none has yet been created, does that mean that in some sense a backdoor already exists? And, there is a further point: Opponents of government-mandated backdoors often argue, with good reason, that relying on government goodwill alone would result in far too little privacy protection. But, for some of the same reasons, isn’t relying on company goodwill problematic as well?

Furthermore, even if Apple “wins” this apparent spat, we still have no way to know who, and how many, people or entities actually have critical remote access. Neither does Apple, given that the complexity of its device technology, processes and supply chain is far beyond meaningful verifiability. It is really just a game of posturing between Apple and the FBI, in which one gets good PR and the other gets to keep breaking in without meaningful accountability, while most people and “experts” believe the hype and keep using Apple products as if they were secure enough, exposing critical information about themselves on those devices that they would not expose if they knew better.

For years, many of the greatest experts have told us that “backdoor” systems could only be symmetrical, whereas there are in fact many ways to make them very asymmetrical, although never perfectly so, and possibly not sufficiently so (see “Asymmetric backdoor” on Wikipedia). As we’ve read in the Apple vs. FBI coverage, lawful access may require in-person physical access to the service provider’s facilities under its lawyers’ oversight, to ensure the capability does not spill out; and/or the access capability may consist not only of data and code but also of a unique physical device, which is much harder to spill out.

In another example, the Brazilian state IT agency SERPRO has internal regulations that socio-technically and intrinsically require four state officials from different public agencies to be physically present in a specific hosting room, and to consent, in order to allow unencrypted access to the emails of a state employee following a court order.
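As a purely illustrative aside (a hypothetical Python sketch, not SERPRO’s actual system), such an all-officials-must-consent rule can also be enforced cryptographically rather than only organizationally: the mailbox decryption key is split into four XOR shares, one per official, so that reconstructing it requires every share, while any smaller subset reveals nothing.

```python
import secrets

def split_key(key: bytes, n_shares: int) -> list:
    """Split `key` into n XOR shares; ALL shares are needed to rebuild it."""
    shares = [secrets.token_bytes(len(key)) for _ in range(n_shares - 1)]
    last = key
    for s in shares:
        last = bytes(a ^ b for a, b in zip(last, s))
    return shares + [last]

def reconstruct_key(shares: list) -> bytes:
    """XOR all shares back together; missing even one share leaves the result random."""
    key = bytes(len(shares[0]))
    for s in shares:
        key = bytes(a ^ b for a, b in zip(key, s))
    return key

# Hypothetical scenario: one share per official from a different public agency.
mailbox_key = secrets.token_bytes(32)          # key protecting the employee's mailbox
shares = split_key(mailbox_key, n_shares=4)    # handed to four different officials

assert reconstruct_key(shares) == mailbox_key          # all four present: access is possible
assert reconstruct_key(shares[:3]) != mailbox_key      # any three alone learn nothing useful
                                                       # (with overwhelming probability)
```

An XOR split is the simplest possible scheme in which every single party must cooperate; it is only meant to show that the organizational rule has a direct technical counterpart.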

Many of the most embattled libertarian-oriented e-privacy experts and activists seem to believe that there is a fourth solution, in which we do not need to trust the NSA, Apple or any third party. They say that we should have completely P2P systems in which each individual is the sole guarantor of his or her own freedom, while for law enforcement and national security, well, “tough luck”. They think individuals need not confide in anyone, by assembling their own devices and configuring their own publicly verifiable software, which has supposedly been verified enough by experts in relation to its complexity. We explain in our post Cyber-libertarianism vs. Rousseau’s Social Contract in cyberspace why we cannot avoid having several people in a position to completely compromise such a life-cycle, and why we will therefore need, in any case, a radically trustworthy third party, together with a set of extreme organizational processes, socio-technical systems and open technologies to ensure such trustworthiness. Such a fourth solution is technically and socio-technically impossible, at least for the many decades until a citizen will be able to make his or her own devices in a home basement.

As has emerged in recent years, and as described in the article excerpt above, we have long recognized that undeclared backdoors enabling corporate, state and criminal access are nearly everywhere. We have therefore been working with world-class partners and advisors on one possible conceptual alternative solution, and a related standards proposal, in the form of a radically trustworthy third party: crucial to guarantee rights-respecting lawful access, but also essential to provide meaningful assurance against the provider, insider threats and many other third parties in the life-cycle, as described in our Trustless Computing Initiative:

SERVICE CLASSES & LAWFUL ACCESS: The new standards-setting and certification processes will initially certify only specific service classes of end-to-end IT services aimed at specific market sub-domains, in full compliance with the EU Charter and local constitutions:

A. Pure P2P Communication Service, in which, by design, no one except the user has access to the user’s encryption keys;

B. Hybrid P2P Communication Service, in which the provider voluntarily (i.e. beyond what is required by current laws) makes itself available to evaluate and approve constitutional – no more, no less – lawful access requests, through independent, citizen-accountable processes with extreme safeguards against abuse, instead of through the provider’s attorneys as is done today. One example is the CivicRoom concept, centered on the explicit on-site approval of a jury of five or more randomly sampled citizens at a specific facility, coupled with transparency and independent verification, relative to complexity, of all critical technical and organizational components involved in the process, to a degree that radically exceeds the current state of the art (see the sketch after this list);

C. Targeted Lawful Access Services, including all critical technological components and organizational processes critically involved;

D. Ultra-critical Internet-connected cyber-physical systems and individual endpoints, in complex and dynamic environments, including for IoT, narrow AI and critical infrastructures.
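To make class B more concrete, here is a minimal, hypothetical sketch (not the actual CivicRoom design) of how a jury quorum could be enforced in code as well as by procedure: Shamir secret sharing over a prime field gives each juror one share of an access key, and only a quorum (here five of seven randomly sampled citizens, a threshold chosen purely for illustration) can reconstruct it.

```python
import secrets

# A large prime defining the field; 2**127 - 1 is a Mersenne prime, fine for a sketch.
PRIME = 2**127 - 1

def make_shares(secret: int, threshold: int, n_jurors: int) -> list:
    """Shamir split: embed the secret as the constant term of a random polynomial
    of degree threshold-1, and hand each juror one point (x, p(x))."""
    coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(threshold - 1)]
    def p(x: int) -> int:
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, p(x)) for x in range(1, n_jurors + 1)]

def reconstruct(points: list) -> int:
    """Lagrange interpolation at x = 0; needs at least `threshold` distinct points."""
    secret = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

# Hypothetical quorum: 5 of 7 randomly sampled citizens must consent on-site.
access_key = secrets.randbelow(PRIME)
shares = make_shares(access_key, threshold=5, n_jurors=7)

assert reconstruct(shares[:5]) == access_key   # any 5 jurors together can approve access
assert reconstruct(shares[:4]) != access_key   # 4 or fewer learn nothing about the key
```

Unlike the all-or-nothing XOR split above, a threshold scheme tolerates the absence of some jurors while still making unilateral or small-group access cryptographically impossible.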

For these ideas, we have often been ostracised and attacked by many of the most dogmatic e-privacy activists and experts as mindless, or as sold out to the enemy. Hopefully, the time has come to start a discussion, possibly even through our Free and Safe in Cyberspace global event series.

[Update of Feb 21st] According to a recent detailed analysis by a US forensics expert, Apple was required to create a forensic tool which, under US law, would have to be shared publicly for evaluation and verification. Complying could therefore expose such a tool to reverse engineering, and then to abuse by many possible parties. But not necessarily. Perhaps the design of such a forensic tool (possibly in combination with the next firmware update to all iPhones) could intrinsically require, to a sufficient level of assurance, that anyone trying to gain access have both (a) physical control of the device and (b) possession of a unique physical device (such as a hardware encryption key) that is physically stored with a radically trustworthy third party (or with the provider, possibly, but then we would have to trust them blindly…). All such infrastructure, however, should be devised to standards of verification, relative to complexity, of all critical components that far exceed the current state of the art.
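A minimal sketch of that two-factor idea, under assumptions of our own and not anyone’s actual design: the unlock key only comes into existence when a secret fused into the device is combined with a second secret kept on a hardware token physically stored with the trustworthy third party, so neither physical control of the phone nor possession of the escrowed token is enough on its own.

```python
import hashlib
import hmac
import secrets

def derive_unlock_key(device_secret: bytes, escrow_secret: bytes) -> bytes:
    """Combine the two factors with HMAC-SHA256; changing either input
    yields a completely different key, so both must be present."""
    return hmac.new(device_secret, escrow_secret, hashlib.sha256).digest()

# Hypothetical provisioning step:
device_secret = secrets.token_bytes(32)  # fused into the phone's secure hardware at manufacture
escrow_secret = secrets.token_bytes(32)  # kept on a hardware token stored with the third party

unlock_key = derive_unlock_key(device_secret, escrow_secret)

# Anyone holding only one of the two factors cannot recompute the key:
wrong_guess = secrets.token_bytes(32)
assert derive_unlock_key(device_secret, wrong_guess) != unlock_key
assert derive_unlock_key(wrong_guess, escrow_secret) != unlock_key
```

The point of the construction is that the combining step is one-way: leaking the forensic tool’s code alone, without the escrowed token, would not allow an attacker to recompute the key.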

Rufo Guerreschi