The importance of full-stack openness and verifiability

2025 Sep 24



Special thanks to Ahmed Ghappour, bunnie, Daniel Genkin, Graham Liu, Michael Gao, mlsudo, Tim Ansell, Quintus Kilbourn, Tina Zhen, Balvi volunteers and GrapheneOS developers for feedback and discussion.

Perhaps the biggest trend of this century so far can be summarized by the phrase "the internet has become real life". It started with email and instant messaging. Private conversations that for millennia past were done with mouths, ears, pen and paper, now run on digital infrastructure. Then, we got digital finance - both crypto finance, and digitization of traditional finance itself. Then, our health: thanks to smartphones, personal health tracking watches, and data inferred from purchases, all kinds of information about our own bodies is being processed through computers and computer networks. Over the next twenty years, I expect this trend to take over all kinds of other domains, including various government processes (eventually even voting), monitoring of the public environment for physical and biological indicators and threats, and ultimately, with brain-computer interfaces, even our own minds.

I do not think that these trends are avoidable; their benefits are too great, and in a highly competitive global environment, civilizations that reject these technologies will lose first competitiveness and then sovereignty to those that embrace them. However, in addition to offering powerful benefits, these technologies deeply affect power dynamics, both within and between countries.

The civilizations that gained the most from new waves of technology were not the ones that consumed the technology, but the ones that produced it. Centrally planned equal-access programs to locked-down platforms and APIs can at best provide only a small fraction of this, and fail in circumstances that fall outside of a pre-determined "normal". Additionally, this future involves a lot of trust being put in technology. If that trust is broken (eg. backdoors, security failures), we get really big problems. Even the mere possibility of that trust being broken forces a fallback to fundamentally exclusionary social models of trust ("was this thing built by people I trust?"). This creates incentives that propagate up the stack: the sovereign is he who decides on the state of exception.

Avoiding these problems requires technology across the stack - software, hardware and bio - that has two intertwined properties: genuine openness (ie. open source, including free licensing) and verifiability (including, ideally, directly by end users).


The internet is real life. We want it to become a utopia and not a dystopia.


The importance of openness and verifiability in health

We saw the consequences of unequal access to the technological means of production during Covid. Vaccines were produced in only a few countries, which led to large disparities between when different countries were able to get access to them. Wealthier countries got top-quality vaccines in 2021, others got lower-quality vaccines in 2022 or 2023. There were initiatives to try to ensure equal access, but because the vaccines were designed to rely on capital-intensive proprietary manufacturing processes that could only be done in a few places, these initiatives could only do so much.


Covid vaccine coverage, 2021-23.


The second major issue with vaccines was an opaque science and communications strategy that pretended to the public that they carried literally zero risks or downsides. This was untrue, and ended up contributing greatly to mistrust. Today, that mistrust has spiraled into what feels like a rejection of half a century of science.

In fact, both problems are resolvable. Vaccines like the Balvi-funded PopVax are cheaper to develop, and made with a much more open process, reducing access inequality and at the same time making it easier to analyze and verify their safety and effectiveness. We can go even further in designing vaccines for verifiability first.

Similar issues apply to the digital side of biotech. When you talk to longevity researchers, one of the first things you will universally hear is that the future of anti-aging medicine is personalized and data-driven. To know what medicines and what changes in nutrients to suggest to a person today, you need to know the current condition of their body. This is much more effective if a large amount of data can be digitally collected and processed in real time.


This watch collects 1000x more data about you than Worldcoin. This has upsides and downsides.


The same idea applies to defensive biotech aimed at downside prevention, such as fighting pandemics. The earlier a pandemic is detected, the more likely it is that it can be stopped at the source - and even if it can't, each week gives more time to prepare and start working on countermeasures. While a pandemic is ongoing, there is a lot of value in knowing in what locations people are getting sick, in order to deploy countermeasures in real time. If the average person who gets sick during a pandemic learns this, and self-isolates, within an hour, that implies up to 72x less spread than if they go around infecting others for three days. If we know which 20% of locations are responsible for 80% of the spread, improving air quality there can add further gains. All of this requires (i) lots and lots of sensors, and (ii) the ability for the sensors to communicate in real time to feed information to other systems.
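Where does 72x come from? Under the simplifying assumption that onward transmission scales linearly with time spent unknowingly infectious, the arithmetic is just a ratio of hours:

```python
# Rough arithmetic behind the "up to 72x" figure, assuming onward
# transmission scales linearly with time spent infectious around others.
hours_unaware = 3 * 24  # three days of spreading before learning you are sick
hours_aware = 1         # detected and self-isolating within an hour

print(hours_unaware / hours_aware)  # 72.0
```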

And if we go even further in the "scifi" direction, we get to brain-computer interfaces, which can enable great productivity, help people better understand each other through telepathic communication, and unlock safer paths to highly intelligent AI.

If the infrastructure for biological and health tracking (for individuals and for spaces) is proprietary, then the data goes into the hands of large corporations by default. Those corporations have the ability to build all kinds of applications on top, and others do not. They may offer it via API access, but API access will be limited and used for monopolistic rent extraction, and can be taken away at any time. This means that a small number of people and corporations have access to the most important ingredients for a major area of 21st century technology, which in turn limits who can economically benefit from it.

And on the other hand, if this kind of personal health data is insecure, someone who hacks it can blackmail you over any health issues, optimize the pricing of insurance and healthcare products to extract value from you, and, if the data includes location tracking, know where to wait to kidnap you. In the other direction, your location data (very often hacked) can be used to infer information about your health. If your BCI gets hacked, that means a hostile actor is literally reading (or worse, writing) your mind. This is no longer science fiction: see here for a plausible attack by which a BCI hack can lead to someone losing motor control.

All in all, a huge amount of benefits, but also significant risks: risks that a strong emphasis on openness and verifiability is very well suited to mitigating.

The importance of openness and verifiability in personal and commercial digital tech

Earlier this month I had to fill in and sign a form that was required for a legal function. At the time I was not in the country. A national electronic signing system existed, but I did not have it set up at the time. I had to print out the form, sign it, walk over to a nearby DHL, spend a bunch of time filling in the paper form, and then pay for the form to be express-shipped halfway across the world. Time required: half an hour; cost: $119. On that same day, I had to sign a (digital) transaction to perform an action on the Ethereum blockchain. Time required: 5 seconds; cost: $0.10 (and, to be fair, without the blockchain a signature can be completely free).
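For a sense of what that five-second version looks like in practice, here is a minimal sketch using the eth_account Python library (an illustrative tooling choice; the message text is hypothetical). Note that signing a plain message, as opposed to an on-chain transaction, costs nothing at all:

```python
# Minimal sketch: producing and verifying a digital signature in seconds.
# Library choice (eth_account) and message text are illustrative.
from eth_account import Account
from eth_account.messages import encode_defunct

account = Account.create()  # in practice, a key you already control

message = encode_defunct(text="I agree to the terms of this form.")
signed = Account.sign_message(message, private_key=account.key)

# Anyone can verify the signature against the signer's address, offline.
recovered = Account.recover_message(message, signature=signed.signature)
assert recovered == account.address
```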

These kinds of stories are easy to find in corporate or nonprofit governance, management of intellectual property rights, and much more. For the past decade, they have filled the pitch decks of a significant fraction of all blockchain startups. And on top of this, there is the mother of all use cases of "digitally exercising personal authority": payments and finance.

There is of course a big risk in all this: what if either the software or the hardware gets hacked? This is a risk that the crypto space was early to recognize: the blockchain is permissionless and decentralized, and so if you lose access to your funds, there is no recourse, no uncle in the sky that you can call for help. Not your keys, not your coins. For this reason, the crypto space was early to start thinking about multisig and social recovery wallets, and hardware wallets. In reality, however, there are many situations where the lack of a trusted uncle in the sky is not an ideological choice, but an inherent part of the scenario. In fact, even in traditional finance, the "uncle in the sky" fails to protect most people: for example, only 4% of scam victims recover their losses. In use cases that involve custody of personal data, reverting a leak is impossible even in principle. Hence, we need true verifiability and security - of both the software and, ultimately, the hardware.
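The core of multisig and social recovery is simple M-of-N logic: no single key, and no single compromised guardian, can act alone. A toy sketch, with all names hypothetical - real wallets enforce this on-chain with signature verification, not a Python set:

```python
# Toy M-of-N approval logic behind multisig / social recovery wallets.
# All names are hypothetical; real wallets verify signatures on-chain.

GUARDIANS = {"alice", "bob", "carol", "dave", "erin"}  # N = 5 guardians
THRESHOLD = 3                                          # M = 3 approvals needed

def can_recover(approvals: set[str]) -> bool:
    """Recovery succeeds only if at least M distinct guardians approve."""
    valid = approvals & GUARDIANS  # approvals from non-guardians don't count
    return len(valid) >= THRESHOLD

assert can_recover({"alice", "bob", "carol"})         # 3-of-5: enough
assert not can_recover({"alice", "bob"})              # 2-of-5: not enough
assert not can_recover({"alice", "mallory", "mike"})  # outsiders don't count
```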


One proposed technique for inspecting that computer chips were manufactured correctly.


Importantly, in the case of hardware, the risk that we are trying to prevent goes far beyond "is the manufacturer evil?". Rather, the problem is that there is a large number of dependencies, most of which are closed source, and any one of them being negligent can cause unacceptable security outcomes. This paper shows recent examples of how microarchitecture choices can undermine the side-channel resistance of designs that are provably secure in a model that looks at the software alone. Attacks like EUCLEAK depend on vulnerabilities that are much harder to find because of how many components are proprietary. AI models can have backdoors inserted at training time if they are trained on compromised hardware.
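A software-level cousin of these hardware side channels is easy to show in code: a comparison that exits at the first mismatching byte leaks secret-dependent timing, which is why standard libraries ship constant-time alternatives. Hardware-level leaks (caches, speculation, power) follow the same logic but are invisible in the source, which is exactly why closed microarchitecture is so hard to audit:

```python
import hmac

def naive_check(secret: bytes, guess: bytes) -> bool:
    # Returns at the first mismatching byte, so the running time leaks
    # how many leading bytes of the guess are correct - a timing oracle.
    if len(secret) != len(guess):
        return False
    for s, g in zip(secret, guess):
        if s != g:
            return False
    return True

def constant_time_check(secret: bytes, guess: bytes) -> bool:
    # hmac.compare_digest inspects every byte regardless of mismatches,
    # removing the timing signal (at the software level, at least).
    return hmac.compare_digest(secret, guess)
```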

Another issue in all of these cases is downsides from closed and centralized systems, even if they are perfectly secure. Centralization creates ongoing leverage between individuals, companies or countries: if your core infrastructure is built and maintained by a potentially untrustworthy company in a potentially untrustworthy country, you are vulnerable to pressure (eg. see Henry Farrell on weaponized interdependence). This is the sort of problem that crypto is meant to solve - but it exists in far more domains than just the financial.

The importance of openness and verifiability in digital civic tech

I frequently talk to people of various stripes who are trying to figure out better forms of government that are well suited to their various contexts in the 21st century. Some, like Audrey Tang, are trying to take political systems that are already functional and bring them to the next level, empowering local open-source communities and using mechanisms like citizens' assemblies, sortition and quadratic voting. Others are starting from the bottom: here is a constitution recently proposed by some Russian-born political scientists for Russia, featuring strong guarantees of individual freedom and local autonomy, strong institutional bias toward peace and against aggression, and an unprecedentedly strong role for direct democracy. Others, like economists working on land value tax or congestion pricing, are trying to improve their country's economics.
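Quadratic voting, mentioned above, is a good example of how concrete these mechanisms are: its entire cost rule fits in a few lines. In this sketch (the budget and issue names are hypothetical), casting n votes on one issue costs n² credits, so each marginal vote gets more expensive and voters are nudged to spread credits across the issues they care most about:

```python
# Sketch of the quadratic voting cost rule: n votes cost n^2 credits,
# so marginal votes cost 1, 3, 5, ... credits each. Names hypothetical.

def qv_cost(votes: int) -> int:
    return votes ** 2

BUDGET = 100  # per-voter credit budget
allocation = {"parks": 6, "transit": 7, "housing": 3}

total = sum(qv_cost(v) for v in allocation.values())  # 36 + 49 + 9 = 94
assert total <= BUDGET
```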

Different people may have different levels of enthusiasm for each idea. But one thing that they all have in common is that they all involve high-bandwidth participation, and so any realistic implementation has to be digital. Pen and paper is okay for a very basic record of who owns what, and for elections run once every four years, but not for anything that asks for our input with higher bandwidth or frequency.

Historically, however, security researchers' reception to the idea of things like electronic voting has ranged from skeptical to hostile. Here is a good summary of the case against electronic voting. Quoting from that document:

First of all, the technology is "black box software," meaning that the public is not allowed access into the software that controls the voting machines. Although companies protect their software to protect against fraud (and to beat back competition), this also leaves the public with no idea of how the voting software works. It would be simple for the company to manipulate the software to produce fraudulent results. Also, the vendors who market the machines are in competition with each other, and there is no guarantee that they are producing the machines in the best interest of the voters and the accuracy of the ballots.

There are lots of real-world cases that justify this skepticism.


A critical analysis of Estonian internet voting, 2014.


These arguments apply verbatim in all kinds of other situations. But I predict that as technology progresses, the "let's not do it at all" response will become less and less realistic, across a wide range of domains. The world is rapidly becoming more efficient (for better or worse) due to technology, and I predict that any system that does not follow this trend will become less and less relevant to individual and collective affairs as people route around it. And so we need an alternative: to actually do the hard thing and figure out how to make complicated tech solutions secure and verifiable.

Theoretically, "secure and verifiable" and "open-source" are two different things. It is definitely possible for something to be proprietary and secure: airplanes are highly proprietary technology but on the whole commercial aviation is a very safe way to travel. But what a proprietary model cannot achieve is common knowledge of security - the ability to be trusted by mutually distrusting actors.

Civic systems like elections are one type of situation where common knowledge of security is important. Another is evidence gathering in courts. Recently, in Massachusetts, a large volume of breathalyzer evidence was ruled invalid because information about faults in the tests was found to have been covered up. Quoting the article:

Wait, so were all of the results faulty? No. In fact, there weren't calibration issues with the breathalyzer tests in most of the cases. However, since investigators later found that the state crime lab withheld evidence showing the problems were more widespread than they said, Justice Frank Gaziano wrote that all of those defendants had their due process rights violated.

Due process in courts is inherently a domain where what is required is not just fairness and accuracy, but common knowledge of fairness and accuracy - because if there is not common knowledge that courts are doing the right thing, society can easily spiral into people taking matters into their own hands.

In addition to verifiability, there are also inherent benefits to openness itself. Openness allows local groups to design systems for governance, identity, and other needs in ways that are compatible with local goals. If voting systems were proprietary, then a country (or province or town) that wanted to experiment with a new one would have a much harder time: they would have to either convince the company to implement their preferred rules as a feature, or start from scratch and go through all the work to make it secure. This adds a high cost to innovation in political systems.

A more open-source hacker-ethic approach, in any of these areas, would put more agency in the hands of local implementers, whether they are acting as individuals or as part of governments or corporations. For this to be possible, open tools for building need to be widely available, and the infrastructure and code bases need to be freely licensed to allow others to build on top. To the extent that the goal is minimizing power differentials, copyleft is especially valuable.



A final area of civic tech that will matter in the coming years is physical security. Surveillance cameras have been popping up everywhere over the past two decades, causing many civil liberties worries. Unfortunately, I predict that the recent rise of drone warfare will make "don't do high-tech security" no longer a viable option. Even if a country's own laws do not infringe on your freedom, that means nothing if the country cannot protect you from other countries (or rogue corporations or individuals) imposing their laws on you instead. Drones make such attacks much easier. Ergo, we need countermeasures, which will likely involve lots of counter-drone systems, sensors and cameras.

If these tools are proprietary, data collection will be opaque and centralized. If these tools are open and verifiable, then we have a chance at a better approach: security equipment that provably outputs only a limited amount of data in a limited number of situations and deletes the rest. We could have a digitized physical security future that is more like digital guard dogs than a digital panopticon. One could imagine a world where public monitoring devices are required to be open source and verifiable, and anyone has a legal right to randomly choose a monitoring device in public and take it apart and verify it. University computer science clubs could frequently do this as an educational exercise.
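What might "provably outputs only a limited amount of data" look like concretely? Here is a deliberately simplified sketch (all names hypothetical): raw footage never leaves the device; only coarse alerts do. Of course, for this to mean anything, the property has to be enforced by open, verifiable hardware and firmware, not merely promised in application code:

```python
# Hypothetical "digital guard dog" sensor: emits coarse alerts, never raw data.
from dataclasses import dataclass
import time

@dataclass
class Alert:
    kind: str         # e.g. "drone_detected" - the only data ever emitted
    timestamp: float

class GuardDogSensor:
    def process_frame(self, frame: bytes) -> Alert | None:
        # The raw frame is used locally and never stored or transmitted;
        # the caller gets back at most a coarse Alert.
        if self._detect_threat(frame):
            return Alert(kind="drone_detected", timestamp=time.time())
        return None

    def _detect_threat(self, frame: bytes) -> bool:
        # Toy stand-in for an on-device model.
        return b"DRONE" in frame

sensor = GuardDogSensor()
assert sensor.process_frame(b"...DRONE...") is not None        # coarse alert only
assert sensor.process_frame(b"ordinary street scene") is None  # nothing emitted
```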

The open source and verifiable way

We cannot avoid having digital computer things that are deeply embedded in all kinds of aspects of our (personal and collective) lives. By default, we will likely get digital computer things that are built and run by centralized corporations, optimized for a few people's profit motives, backdoored by their host governments, and where most of the world has no way to participate in their creation or know if they're secure. But we can try to steer toward a better alternative.

Imagine a world where:

This is a world where we have much more safety and freedom and equal access to the global economy than today. But making this world happen requires much more investment in various technologies:


The cybersecurity fatalism of the 00s is wrong: bugs (and backdoors) can be beaten. We "just" have to learn to value security more than other competing goals.



Openness and verifiability on every layer of the stack matter


From here to there

A key difference between this vision and a more "traditional" vision of technology is that it's much more friendly to local sovereignty and to individual empowerment and freedom. Security is done less by scouring the entire world and making sure there are zero bad guys anywhere, and more by making the world more robust at every level. Openness means openness to build upon and improve every layer of technology, not just centrally planned open-access API programs. Verification is not something reserved for proprietary rubber-stamp auditors that may well be colluding with the companies and governments rolling out the technology - it's a right, and a socially encouraged hobby, for the people.

I believe that this vision is more robust, and more compatible with our fractured global twenty-first century. But we do not have infinite time to execute on it. Centralized approaches to security, which involve more centralized data collection and backdoors, and which reduce verification entirely to "was this made by a trusted developer or manufacturer?", are moving forward rapidly. Centralized attempts to substitute for true open access have been going on for decades: it started perhaps with Facebook's internet.org, and it will continue, each attempt more sophisticated than the last. We need to both move quickly to compete with these approaches, and make the public case, to people and institutions, that a better solution is possible.

If we can succeed in this vision, one way to understand the world that we get is that it is a kind of retro-futurism. On the one hand, we get the benefits of much more powerful technologies allowing us to improve our health, organize ourselves in much more efficient and resilient ways, and protect ourselves against threats, both old and new. On the other hand, we get a world that brings back properties that were second-nature to everyone back in 1900: the infrastructure is free for people to take apart, verify and modify to suit their own needs, anyone is able to participate not just as a consumer or an "app builder", but at any layer of the stack, and anyone is able to have confidence that a device does what it says it does.



Designing for verifiability has a cost: many optimizations to both hardware and software that deliver highly-demanded gains in speed come at the cost of making the design more inscrutable or more fragile. Open source makes it more challenging to make money under many standard business models. I believe that both issues are overstated - but this is not something that the world will be convinced of overnight. This leads to a question: what is the pragmatic short-term goal to shoot for?

I will propose one answer: work toward a fully open-source and verification-friendly stack targeted toward high-security, non-performance-critical applications - both consumer and institutional, long-distance and in-person. This would include hardware and software and bio. Most computation that really needs security does not really need speed, and even in the cases where it does, there are often ways to combine performant-but-untrusted and trusted-but-not-performant components to achieve high levels of performance and trust for many applications. It is unrealistic to achieve maximum security and openness for everything. But we can start by ensuring that these properties are available in those domains where they really matter.
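To make the "combine performant-but-untrusted and trusted-but-not-performant components" idea concrete, here is one common pattern, sketched with a toy task: let the fast, untrusted component do the work, and let the slow, trusted component merely verify the result, which is often asymptotically cheaper. The sorting example is illustrative; the pattern generalizes to any task whose results are easier to check than to compute:

```python
# Pattern sketch: fast untrusted compute + cheap trusted verification.
from collections import Counter

def untrusted_sort(data: list[int]) -> list[int]:
    # Stand-in for a fast but untrusted component (e.g. a proprietary
    # accelerator). We use its output only if verification passes.
    return sorted(data)

def trusted_verify(original: list[int], result: list[int]) -> bool:
    # The trusted component never redoes the work; it only checks that the
    # result is ordered and is a permutation of the input - a linear-time
    # job that is realistic for slower, verified hardware.
    ordered = all(a <= b for a, b in zip(result, result[1:]))
    same_elements = Counter(original) == Counter(result)
    return ordered and same_elements

data = [5, 3, 8, 1]
result = untrusted_sort(data)
assert trusted_verify(data, result)  # accept only verified outputs
```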