June 12, 2018
First of all, serverless is a bit of a misnomer. It doesn’t necessarily mean that there are no servers involved, but that the servers are abstracted away.
One may wonder: isn’t that what cloud is all about?
Well, yes and no. It was a step in that direction. The cloud abstracted away physical servers and gave us virtual instances instead. At least that’s what it was about when approaching peak hype around ten years ago.
By now, cloud providers have quadrupled their offerings (example), conveniently making it much easier to get into a vendor lock-in.
While “the cloud” initially just offered an abstraction of the physical machine, the trending concept of serverless architecture means that even the logical (virtual) machine is disappearing.
Now may be a good moment for a reminder: There is no cloud. It’s just someone else’s Computer.
Which raises the question: who owns that Computer? Who controls it? Who has the political power over it and thus ultimately over the availability of the applications and integrity of the data you may want to deploy onto it?
A while ago, Vitalik authored a very important piece: The Meaning of Decentralization. Key takeaway: it’s not about distributed vs decentralized. It’s about architectural vs logical vs political decentralization.
If you host your application at a commercial cloud provider, that may provide you with great architectural decentralization — at the same time it’s politically super-centralized.
Let’s switch back to the example of the serverless chatroom.
The application is stored on IPFS, executed on the clients and uses WebRTC for (realtime) data transmission.
There’s however this little imperfection: in order for WebRTC to properly function in a NAT-distorted Internet (and no, IPv6 won’t fix that although technically it could), we may need a server anyway.
Quick explanation for those not familiar with it:
Simplified, NAT is a workaround to 2 problems:
1. A lot of software isn’t audited and safe enough to run on devices which are “open to the Internet” (meaning: anybody on the Internet could directly “talk to it”)
2. There’s not enough (v4) IP addresses for all devices connected to the Internet (let alone their current allocation due to historical reasons)
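The effect of NAT on reachability can be sketched in a few lines. This is a toy model, not any real gateway implementation; the class, method names and addresses are all illustrative:

```python
# Toy sketch of a NAT gateway's translation table. All names and
# addresses here are illustrative, not taken from any real stack.

PUBLIC_IP = "203.0.113.7"  # the single public IPv4 address shared by all devices

class Nat:
    def __init__(self):
        self.next_port = 40000
        self.table = {}    # (private_ip, private_port) -> public_port
        self.reverse = {}  # public_port -> (private_ip, private_port)

    def outbound(self, src_ip, src_port):
        """Rewrite a private source address to the public one, reusing mappings."""
        key = (src_ip, src_port)
        if key not in self.table:
            self.table[key] = self.next_port
            self.reverse[self.next_port] = key
            self.next_port += 1
        return PUBLIC_IP, self.table[key]

    def inbound(self, dst_port):
        """Only ports with an existing outbound mapping are reachable from outside."""
        return self.reverse.get(dst_port)  # None -> packet is dropped

nat = Nat()
print(nat.outbound("192.168.0.10", 5000))  # ('203.0.113.7', 40000)
print(nat.inbound(40000))                  # ('192.168.0.10', 5000)
print(nat.inbound(40001))                  # None: unsolicited traffic is dropped
```

The last line is exactly why two devices behind different NATs can’t simply “talk to each other”: until a device has sent something out, there is no mapping through which anything can come in.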
The popularization of NAT had important consequences for the Internet: the original vision of a network of devices which are logically equal (everybody can talk to everybody) was replaced by the client-server model where end-user devices don’t directly talk to each other, but through intermediating servers.
In a way, that’s how we ended up with Facebook (you know, that strange website always asking you to login or register when you’re actually just following a hyperlink and have no intention to add content).
In fact we’ve become so used to that paradigm that many of us may not even be aware that a Facebook-style social network (and many other applications) would technically be absolutely feasible without an intermediary.
We could have pretty much the same user experience without giving up control over the data and communication paths to an opaque, politically centralized intermediary.
Back again to the chatroom:
Because NAT is a reality, it needs a STUN server in order to “just work”. This server is basically a very lightweight intermediary which just helps initiate the connection. Side note: I claim that Skype became a thing mainly because of its early and reliable solution to that problem; we had VoIP before, but it usually didn’t just work.
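How lightweight is “very lightweight”? A STUN query is essentially one tiny packet: the client sends a Binding Request and the server answers with the public address it saw the request come from. As a rough illustration, here is the 20-byte request header as defined by RFC 5389 (header construction only; actually sending it to a server is omitted):

```python
# Sketch: building the header of a STUN Binding Request (RFC 5389).
import os
import struct

MAGIC_COOKIE = 0x2112A442   # fixed value mandated by RFC 5389
BINDING_REQUEST = 0x0001    # message type for a Binding Request

def binding_request(transaction_id=None):
    """Return the 20-byte STUN header for a Binding Request with no attributes."""
    tid = transaction_id or os.urandom(12)  # random 96-bit transaction ID
    # message type (2 bytes), message length (2 bytes, 0: no attributes),
    # magic cookie (4 bytes), transaction ID (12 bytes), all big-endian
    return struct.pack("!HHI", BINDING_REQUEST, 0, MAGIC_COOKIE) + tid

msg = binding_request()
assert len(msg) == 20
```

The server’s response carries the client’s public IP and port (in an XOR-MAPPED-ADDRESS attribute), which is what WebRTC peers exchange to punch through their NATs.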
But even if that STUN component is very lightweight, it needs to run somewhere. How to solve that?
The maker of the chatroom demo application just hardcoded a server by IP and didn’t mention this dirty secret in the description.
An attentive observer on HN noticed it however and identified the IP to belong to a server supposedly set up in 2013 by Mozilla — a server which may or may not be available tomorrow, or next week — or which may collapse if the chatroom becomes too popular.
Wouldn’t it be nice to have a truly public cloud (not just architecturally, but also politically decentralized) which allows us to deploy applications to the Internet and store data on the Internet? You may not care, but for me the answer is: yes, totally!
Ethereum set out to provide a comprehensive framework which allows just that. They labeled it Web 3.0. Take a look at this blog post from 2014 outlining that vision and explaining the three pillars:
Swarm and Whisper are not nearly as known as Ethereum, but they’re being actively developed (and also integrated into projects: example).
What all 3 technologies have in common: they share the same P2P network.
A node participating in this P2P network may offer arbitrary subsets of these services to other peers. In the devp2p protocol used, this is done by specifying each service as a subprotocol. Ethereum (eth), Swarm (bzz) and Whisper (shh) each define their own subprotocol. Even the communication protocol for light clients was defined as a dedicated subprotocol (les).
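The multiplexing idea behind this is simple: peers announce their capabilities by name, and each subprotocol gets its own slice of a shared message-ID space. A simplified sketch (real devp2p RLP-encodes the capability list in its Hello message, and the message counts below are hypothetical, chosen just to show the layout):

```python
# Simplified sketch of devp2p-style capability multiplexing.
# The message counts per subprotocol below are hypothetical.

BASE_PROTOCOL_SPACE = 16  # IDs 0x00-0x0f are reserved for the base p2p protocol

def assign_offsets(capabilities):
    """capabilities: {subprotocol_name: message_count}.
    Returns {subprotocol_name: first_message_id}. Subprotocols share one
    numeric message-ID space, laid out in alphabetical order of their names."""
    offsets, cursor = {}, BASE_PROTOCOL_SPACE
    for name in sorted(capabilities):
        offsets[name] = cursor
        cursor += capabilities[name]
    return offsets

print(assign_offsets({"eth": 17, "shh": 3, "bzz": 8}))
# {'bzz': 16, 'eth': 24, 'shh': 41}
```

Because the layout is derived deterministically from the announced capabilities, both peers compute the same message-ID mapping without negotiating it explicitly.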
What incentives does a network node have to offer these services?
Paradoxically, this is a real issue.
Paradoxical, because — wasn’t the integration of incentive design what made Bitcoin and related projects so special?
The problem is that block rewards are an incentive for actively participating in the consensus protocol, but they’re not an incentive for routing whisper messages or answering requests of light clients etc.
It’s not that those additional services couldn’t come with their own incentive systems built-in — in fact such systems are being built — Swarm being a prime example.
But there’s a limit to how fine-grained an incentive and accounting system can be without creating prohibitive overhead costs.
I also believe that the Ethereum project needs to be careful not to drown in complexity. While funding shouldn’t become an issue for a while (it wasn’t always like that), it’s not the only limiting factor. Complexity is a bitch.
What’s more: complexity in software tends to come at the cost of usability and thereby also adoption.
This discussion in the Ethereum research forum highlights an important point: the cost of running a network node is too high — it’s not even about the direct economic cost, but about the time and cognitive effort needed to keep a node running. Ultimately, it’s about usability.
This is where ARTIS considerably departs from Ethereum:
We don’t believe that ever more micro-accounting, and the increase in complexity that comes with it, is the solution to every incentive-related problem. Neither do we assume that a performant and reliable network will emerge out of pure altruism.
ARTIS is less focused on correctly accounting for every bit somebody may have shifted for somebody else and more focused on building a commons which doesn’t break down due to the tragedy of the commons.
We want to leverage the fantastic technical toolset emerging from the work on crypto-economic systems, combine it with modern economic insights, for example those of Elinor Ostrom, and compose a powerful network which makes serverless architecture on a politically decentralized system an attractive option.
This is where Freenodes — as envisioned by the ARTIS network architecture — come in.
Freenodes are network nodes which participate in consensus passively (validating the chain) and offer a set of services which are needed for decentralized / serverless applications. To list a few possible such services:
and to also help out the demo chatroom application:
So, that’s Freenodes.
Becoming one is permissionless and requires just a stake which shows commitment and creates an economic incentive (via node rewards) to provide reliable services.
In return, Freenode operators provide not only the computing resources, but also the time and cognitive effort to keep their nodes running and up to date.
Depending on the concrete service, usage may be for free, free within some limits for ARTIS members or pay-per-use (with automated accounting).
Since ARTIS members are verified to be unique, the Freenode network can offer some services without the overhead complexity of pay-per-use, yet with protection against leechers.
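One way such leech protection could work (a sketch of the idea, not of any actual ARTIS implementation; the class, quota numbers and member IDs are all made up) is a simple per-member quota, which is only viable because each member resolves to exactly one verified identity:

```python
# Sketch: leech protection via per-member daily quotas instead of
# per-request payments. Illustrative only; not an ARTIS implementation.
import time

class MemberQuota:
    def __init__(self, limit_per_day):
        self.limit = limit_per_day
        self.used = {}  # member_id -> (day_number, requests_used)

    def allow(self, member_id, now=None):
        """Return True if the member still has quota today, consuming one unit."""
        day = int((now if now is not None else time.time()) // 86400)
        last_day, count = self.used.get(member_id, (day, 0))
        if last_day != day:
            count = 0  # a new day resets the counter
        if count >= self.limit:
            return False
        self.used[member_id] = (day, count + 1)
        return True

q = MemberQuota(limit_per_day=2)
assert q.allow("alice") and q.allow("alice")
assert not q.allow("alice")  # third request on the same day is refused
```

With unverified identities this would be trivially defeated by creating new accounts; uniqueness verification is what makes the quota, rather than a payment, the scarce resource.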
We believe that this intermediate layer can greatly facilitate the provision of end-user applications which don’t have to compromise on usability, privacy or freedom, hopefully hitting a sweet spot between what is currently possible with traditional centralized and Bitcoin-/Ethereum-style decentralized systems.