The “trustless” fallacy

Disintermediation — … again

“Trustless” has become something of a dogma in the crypto community.
Let’s start with a look at the roots, that is, the Bitcoin whitepaper. It mentions trust in the following contexts:

  • With legacy payment systems, merchants cannot trust customers not to reverse payments, and thus require more information from them than would otherwise be necessary
  • Intermediaries lead to higher transaction costs (a claim that has not yet been conclusively proven)
  • Typically, trusted central authorities guarantee that there’s no double spend. This gives that entity tremendous power (which implies potential for abuse)
  • In a centralized system, privacy depends on the central entity not leaking private information.

Satoshi’s clear, no-bullshit language and the elegance of the system he envisioned remind me a lot of the story of Joshua Levine (as narrated in this book), who in the 1980s set out to cut out the powerful intermediaries on Wall Street (which back then still meant human brokers) by building an electronic system that would allow ordinary people to interact with each other directly.
The discrepancy between the effects Joshua expected and what actually happened should be a cautionary lesson to everybody in the crypto community who sincerely wants to change the world for the better.

Invisible trust requirements…

Satoshi clearly identified the trust requirements his system set out to eliminate. He probably did not, however, have an equally clear idea of the trust requirements it introduced, because at an abstract level they are not visible.

  • Math: you need to trust the mathematical foundations of the cryptography used by crypto-economic systems (unless you’re one of the few who understand them in depth)
  • Implementation: unless you write your own implementation of a crypto-economic protocol, you need to trust the implementation somebody else (whom you may or may not know) wrote. In theory you could also audit a third-party implementation. In practice that’s hard, and really hard if you account for the possibility of a malicious developer!
    By the way, even if you write your own implementation and make no mistakes, you may still be hit by bugs in a library you use (example).
  • Distribution: a bug-free implementation alone isn’t enough. You also need to trust the distribution channels. Ideally, you were handed a cryptographic fingerprint of the developer up front and have always checked the integrity of downloads. In practice, you probably rely on the app store infrastructure of Google or Apple or on repository admins (and even if you don’t, the developer of your implementation likely does). It may not surprise you that mistakes can happen here too, even for high-profile projects (example).
  • Runtime environment: you may be running Qubes OS or Privacy Machine. Or you may be one of the 99%+ using a more mundane and less secure setup. And we’ve just recently been reminded that we can’t even rely on our hardware to be secure.
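The integrity check mentioned under “Distribution” can be sketched in a few lines of Python. The filename and the expected digest here are placeholders; in reality the fingerprint must reach you over a channel independent of the download itself.

```python
import hashlib

def sha256_of(path: str) -> str:
    """Compute the SHA-256 hex digest of a file, reading it in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Placeholder digest (this particular value is the SHA-256 of an empty file).
EXPECTED = "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"

# Hypothetical usage against a downloaded release artifact:
# if sha256_of("wallet-v1.0.tar.gz") != EXPECTED:
#     raise SystemExit("checksum mismatch; do not run this binary!")
```

Note that this only shifts the trust question: you now trust whoever published the expected digest, and the channel it arrived on.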

Essentially, the requirement to trust third parties has been replaced by the requirement to trust yourself: your technical capabilities (at many levels), your capability not to make mistakes, your capability not to forget. To be fair, those trust requirements are not totally absent in the scenario with intermediaries, but they’re usually reduced by regulation shifting liability to the intermediary unless you act too negligently.

… lead to centralization (again)

What happens in practice when a system tries to shift the full burden of responsibility onto the individual is that many (maybe most) people simply opt out.
Not surprisingly, a lot of people today keep their crypto assets in crypto banks (Coinbase et al.), often without even being aware of it.

Many in the community keep believing that the solution to this problem of re-centralization is to step up efforts to educate people and to improve usability.
In my opinion, while both are important, that’s not enough.

Trust is not binary

We need to break out of the mindset imposed by the trustless dogma.

Trust is not a simple binary concept.
I recently bought a used folding bike from a stranger. He trusted me to ride the bike to an ATM out of his sight, because I had forgotten to bring enough cash with me. He trusted me not to take advantage of the opportunity to steal the bike and disappear; he even declined my offer to leave my phone as collateral.
This ad-hoc trust relation was shaped by many parameters: the prior experiences of both of us, our current mood, the environment in which we met, the social capital of the place where we live, and so on.

Trust needs to be understood as a gradual concept, not something hardcoded.
Two-factor authentication for an application handling secrets or valuables should not be configurable only in a binary way (on or off), but should also be able to depend on context, e.g. the amount to be transferred by a wallet application.
Similarly, a blockchain transaction with zero confirmations isn’t automatically a bad thing to be avoided. It again depends on context: in many cases the usability advantage outweighs the risk.
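As a minimal sketch of what context-dependent 2FA could look like in a wallet, assuming a hypothetical policy object and threshold (both invented for illustration):

```python
from dataclasses import dataclass

@dataclass
class TwoFactorPolicy:
    """Whether a transfer requires a second factor depends on context
    (here: the amount), not on a global on/off switch."""
    threshold: float  # transfers at or above this amount require 2FA

    def requires_second_factor(self, amount: float) -> bool:
        return amount >= self.threshold

policy = TwoFactorPolicy(threshold=100.0)
print(policy.requires_second_factor(25.0))   # small everyday transfer
print(policy.requires_second_factor(500.0))  # large transfer
```

Real policies would weigh more signals (recipient history, device, location), but even an amount threshold already makes trust gradual instead of binary.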

The golden mean

I claim that in order to build applications which are safe and appealing to many, we need not switch from one extreme (everybody trusts a single entity, as with Facebook) to the other (everybody trusts only themselves, as with Bitcoin), but should instead start from trust relations which already exist in the real world and leverage them to build more decentralized (but not atomized), more resilient, and ultimately more natural networks.

There are already tentative developments in that direction. Zeropass, for example, allows key recovery through the distribution of key shares to family and friends. A similar concept was dubbed “social recovery” by the uPort project (mentioned in this blog post).
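The primitive behind this kind of social recovery is threshold secret sharing. The source doesn’t specify which scheme Zeropass or uPort use; the sketch below is a toy implementation of Shamir’s scheme, where a key is split into n shares such that any k of them reconstruct it and fewer reveal nothing:

```python
import random

PRIME = 2**127 - 1  # a Mersenne prime larger than any secret we share here

def split(secret: int, k: int, n: int):
    """Split `secret` into n shares; any k of them recover it."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
    def f(x):
        # evaluate the random degree-(k-1) polynomial at x
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, n + 1)]

def recover(shares):
    """Lagrange interpolation at x = 0 recovers the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        # pow(den, PRIME - 2, PRIME) is the modular inverse of den
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret

# e.g. hand shares to 5 friends; any 3 of them can help you recover the key
shares = split(secret=123456789, k=3, n=5)
assert recover(shares[:3]) == 123456789
assert recover(shares[1:4]) == 123456789
```

The appeal for social recovery is exactly the gradual-trust point above: no single friend can steal the key, and no single friend losing their share locks you out.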

I think there’s much more we can do.
Our digital tools need much better awareness of existing trust relations, and they need to become much better at inferring and visualizing adequate trust levels. Defaults need to become smarter, because good defaults are a core element of usability and safety.

“Forgot password”

Finally, an example for what this could mean:
Vault-type applications (e.g. wallets) need a mechanism which, from a user perspective, works like a “forgot password” button, because that’s what most people are used to and thus also expect.
Designing the process started by that button, and the prerequisites for having that process in place when it’s needed, without requiring an opaque central authority and without introducing worse attack vectors than we have now: that’s one of the challenges we have to tackle.
