d/acc: one year later
2025 Jan 05
Special thanks to Liraz Siri, Janine Leger and Balvi volunteers
for feedback and review
About a year ago, I wrote an
article on techno-optimism, describing my general enthusiasm for
technology and the massive benefits that it can bring, as well as my
caution around a few specific concerns, largely centered around
superintelligent AI, and the risk that it may bring about either doom,
or irreversible human disempowerment, if the technology is built in the
wrong ways. One of the core ideas in my post was the philosophy of
d/acc: decentralized and democratic, differential defensive
acceleration. Accelerate technology, but differentially focus
on technologies that improve our ability to defend, rather than our ability
to cause harm, and in technologies that distribute power rather than
concentrating it in the hands of a singular elite that decides what is
true, false, good or evil on behalf of everyone. Defense like in
democratic Switzerland
and historically quasi-anarchist Zomia,
not like the lords and castles of medieval feudalism.
In the year since then, the philosophy and ideas have matured
significantly. I talked about the ideas on 80,000
Hours, and have seen many responses, largely positive and some
critical. The work itself is continuing and bearing fruit: we're seeing
progress in verifiable open-source
vaccines, growing recognition of the value of healthy indoor air,
Community Notes continuing to shine, a breakout year for prediction
markets as an info tool, ZK-SNARKs in government
ID and social media (and securing Ethereum wallets through
account abstraction), open-source imaging tools with
applications in medicine and BCI, and more. In the fall, we had the
first significant d/acc event: "d/acc Discovery Day"
(d/aDDy) at Devcon, which featured a full day of speakers from all
pillars of d/acc (bio, physical, cyber, info defense, plus neurotech).
People who have been working on these technologies for years are
increasingly aware of each other's work, and people outside are
increasingly aware of the larger story: the same kinds of values that
motivated Ethereum and crypto can be
applied to the wider world.
What d/acc is and is not
It's the year 2042. You're seeing reports in the media about a new
pandemic potentially in your city. You're used to these: people get
over-excited about every animal disease mutation, and most of them come
to nothing. The previous two actual potential pandemics were
detected very early through wastewater
monitoring and open-source
analysis of social media, and stopped completely in their tracks.
But this time, prediction markets are showing a 60% chance of at least
10,000 cases, so you're more worried.
The sequence for the virus was identified yesterday. Software updates
for your pocket air tester to allow
it to detect the new virus (from a single breath, or from 15 minutes of
exposure to indoor air in a room) are available already. Open-source
instructions and code for generating a vaccine using equipment that can
be found in any modern medical facility worldwide should be available
within weeks. Most people are not yet taking any action at all, relying
mostly on widespread adoption of air filtering and ventilation to
protect them. You have an immune condition so you're more cautious: your
open-source locally-running personal assistant AI, which handles, among
other tasks, navigation and restaurant and event recommendations, is also
taking into account real-time air tester and CO2 data to only recommend
the safest venues. The data is provided by many thousands of
participants and devices using ZK-SNARKs
and differential
privacy to minimize the risk that the data can be leaked or abused
for any other purpose (if you want to contribute data to these
datasets, there are other personal assistant AIs that verify
formal proofs that these cryptographic gadgets actually work).
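As a rough illustration of the kind of privacy machinery this scenario assumes (the function and parameter names below are hypothetical, not any particular library), venue-level air-quality statistics could be published with differential privacy by clipping each device's contribution and adding calibrated Laplace noise before release; the ZK-SNARK layer mentioned above would complement this by letting devices prove properties of their readings without revealing the raw data.

```python
import math
import random
from collections import defaultdict

def laplace_noise(scale: float) -> float:
    """Draw one sample from a Laplace(0, scale) distribution."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def private_venue_averages(readings, epsilon=0.5, max_co2=5000.0):
    """Publish per-venue average CO2 readings with differential privacy.

    `readings` is a list of (venue_id, co2_ppm) pairs contributed by devices.
    Each reading is clipped to [0, max_co2], so a single device can move a
    venue's sum by at most max_co2; the Laplace noise is scaled accordingly.
    """
    sums, counts = defaultdict(float), defaultdict(int)
    for venue, co2 in readings:
        sums[venue] += min(max(co2, 0.0), max_co2)
        counts[venue] += 1
    out = {}
    for venue in sums:
        noisy_sum = sums[venue] + laplace_noise(max_co2 / epsilon)
        noisy_count = counts[venue] + laplace_noise(1.0 / epsilon)
        out[venue] = noisy_sum / max(noisy_count, 1.0)
    return out

# Toy usage: three device readings across two venues.
print(private_venue_averages([("cafe", 700), ("cafe", 850), ("library", 520)]))
```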
Two months later, the pandemic is gone: it seems like 60% of
people following the basic protocol of putting on a mask if the air
tester beeps and shows the virus present, and staying home if they test
positive personally, was enough to push the transmission rate, already
heavily reduced due to passive heavy air filtering, to below 1. A
disease that simulations show might have been five times worse than
Covid twenty years ago turns out to be a non-issue today.
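For intuition on the arithmetic behind this happy ending, here is a back-of-the-envelope sketch; every number in it is an illustrative assumption about a hypothetical pathogen, not data about any real one. The point is simply that independent layers of defense multiply:

```python
# Each layer of defense scales the effective reproduction number R by
# (1 - coverage * efficacy); stacked layers multiply together.
def effective_r(r0, layers):
    r = r0
    for coverage, efficacy in layers:
        r *= 1.0 - coverage * efficacy
    return r

r0 = 6.0  # assumed basic reproduction number for the hypothetical virus
layers = [
    (0.90, 0.70),  # near-universal air filtering and ventilation
    (0.60, 0.70),  # 60% of people mask when the air tester beeps
    (0.60, 0.65),  # 60% of people stay home after a positive test
]
print(round(effective_r(r0, layers), 2))  # 0.79: below 1, so the outbreak dies out
```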
Devcon d/acc day
One of the most positive takeaways from the d/acc event at Devcon was
the extent to which the d/acc umbrella successfully brought people
together from very different fields, and got them to actually
be interested in each other's work.
Creating events with "diversity" is easy, but making different people
with different backgrounds and interests actually relate to each other
is hard. I still have memories of being forced to watch long operas in
middle school and high school, and personally finding them boring. I
knew that I was "supposed to" appreciate them, because if I did not then
I would be an uncultured computer science slob, but I did not connect
with the content on a more genuine level. d/acc day did not feel like
that at all: it felt like people actually enjoyed learning about very
different kinds of work in different fields.
If we want to create a brighter alternative to domination,
deceleration and doom, we need this kind of broad coalition building.
d/acc seemed to be actually succeeding at it, and that alone shows the
value of the idea.
The core idea of d/acc is simple: decentralized and
democratic differential defensive acceleration. Build
technologies that shift the offense/defense balance toward defense, and
do so in a way that does not rely on handing over more power to
centralized authorities. There is an inherent tie between these two
sides: any kind of decentralized, democratic or liberal political
structure thrives best when defense is easy, and suffers the most
challenge when defense is hard - in those cases, the far more likely
outcome is some period of war of all against all, and eventually an
equilibrium of rule by the strongest.
The core principle of d/acc extends across many domains:
Chart from My
Techno-Optimism, last year
One way to understand the importance of trying to be decentralized,
defensive and acceleration-minded at the same time, is to contrast it
with the philosophy that you get when you give up each one of the three in turn.
Decentralized acceleration, but don't care about the
"differential defensive" part. Basically, be an e/acc,
but decentralized. There are plenty of people who take this
approach, some who label
themselves d/acc but helpfully describe their focus as "OFFENSE", but
also plenty of others who are excited about "decentralized AI" and
similar topics in a more moderate way, but in my view put insufficient
attention on the "defensive" aspect.
In my view, this approach
may avoid the risk of global human dictatorship by the specific tribe
you're worried about, but it doesn't have an answer to the underlying
structural problem: in an offense-favoring environment, there's constant
ongoing risk of either catastrophe, or someone positioning themselves as
a protector and permanently establishing themselves at the top. In the
specific case of AI, it also doesn't have a good answer to the risk of
humans as a whole being disempowered compared to AIs.
Differential defensive acceleration, but don't care about
"decentralized and democratic". Embracing centralized control
for the sake of safety has permanent appeal to a subset of people, and
readers are undoubtedly already familiar with many examples, and the
downsides of them. Recently, some have worried that extreme centralized
control is the only solution to the extremes of future technologies: see
this
hypothetical scenario where "Everybody is fitted with a ‘freedom
tag' – a sequent to the more limited wearable surveillance devices
familiar today, such as the ankle tag used in several countries as a
prison alternative ... encrypted video and audio is continuously uploaded
and machine-interpreted in real time". However, centralized control is a
spectrum. One milder version of centralized control that's usually
overlooked, but is still harmful, is resistance to public scrutiny in
biotech (eg. food,
vaccines),
and the closed source norms that allow this resistance to go
unchallenged.
The risk of this approach is, of course, that the
center is often itself the source of risk. We saw this in Covid, where
gain-of-function research funded by multiple major world
governments may have been the source of the pandemic, centralized
epistemology led to the WHO not
acknowledging for years that
Covid is airborne, and coercive social
distancing and vaccine
mandates led to political backlash that may reverberate for decades. A
similar situation may well happen around any risks to do with AI, or
other risky technologies. A decentralized approach would better address
risks from the center itself.
Decentralized defense, but don't care about
acceleration - basically, attempting to slow down technological
progress, or pursuing economic degrowth.
The
challenge with this strategy is twofold. First, on balance technology
and economic growth have been massively good for humanity, and
any delay to them imposes
costs that are hard
to overstate. Second, in a non-totalitarian world, not advancing is
unstable: whoever "cheats" the most and finds plausibly-deniable ways to
advance anyway will get ahead. Decelerationist strategies can work to
some extent in some contexts: European food being healthier than
American food is one example, the success of nuclear non-proliferation
so far is another. But they cannot work forever.
With d/acc, we want to:
- Be principled at a time when much of the world is becoming tribal,
and not just build whatever - rather, we want to build
specific things that make the world safer and better.
- Acknowledge that exponential technological progress means that
the world is going to get very very weird, and that
humanity's total "footprint" on the universe will only increase. Our
ability to keep vulnerable animals, plants and people out of harm's way
must improve, but the only way out is forward.
- Build technology that keeps us safe without assuming that
"the good guys (or good AIs) are in charge". We do this by
building tools that are naturally
more effective when used to build and to protect than when used to
destroy.
Another way to think about d/acc is to go back to a frame from the
Pirate Party movements in Europe in the late 00s:
empowerment.
The goal is to build a world where we preserve human agency,
achieving both
the negative freedom of avoiding active interference (whether from other
people acting as private citizens, or from governments, or from
superintelligent bots) with our ability to shape our own destinies, and
the positive freedom of ensuring that we have the knowledge and
resources to do so. This echoes a centuries-long classical liberal tradition,
which also includes Stewart Brand's focus on "access
to tools" and John Stuart Mill's emphasis
on education alongside liberty as
key components of human progress - and perhaps, one might add,
Buckminster Fuller's desire to see the process of global problem-solving be participatory
and widely distributed. We can see d/acc as a way of achieving these
same goals given the technological landscape of the 21ˢᵗ century.
The third dimension:
survive and thrive
In my post last year, d/acc specifically focused on the defensive
technologies: physical defense, bio defense, cyber defense and info
defense. However, decentralized defense is not enough to make the world
great: you also need a forward-thinking positive vision for what
humanity can use its newfound decentralization and safety to
accomplish.
Last year's post did contain a positive vision, in two places:
- Focusing on the challenges of superintelligence, I proposed a path
(far from original to me) of how we can have superintelligence without
disempowerment:
- Today, build AI-as-tools rather than
AI-as-highly-autonomous-agents
- Tomorrow, use tools like virtual
reality, myoelectrics
and brain-computer interfaces to create tighter and tighter feedback
between AI and humans
- Over time, proceed toward an eventual endgame where the
superintelligence is a tightly coupled combination of machines and
us.
- When talking about info-defense, I also tangentially mentioned that
in addition to defensive social technology that tries to help
communities maintain cohesion and have high-quality discourse in the
face of attackers, there is also progressive social technology
that can help communities more readily make high-quality judgements: pol.is is one example, and prediction markets are
another.
But these two points felt disconnected from the d/acc argument: "here
are some ideas for creating a more democratic and defense-favoring world
at the base layer, and by the way here are some unrelated ideas for how
we might do superintelligence".
However, I think in reality there are some very important
connections between what I labelled above as "defensive" and "progressive"
d/acc technology. Let's expand the d/acc chart from last year's post, by
adding this axis (also, let's relabel it "survive
vs thrive") to the chart and seeing what comes out:
There is a consistent pattern, across all domains, that the
science, ideas and tools that can help us "survive" in one domain are
closely related to the science, ideas and tools that can help us
"thrive". Some examples:
- A lot of recent anti-Covid research focuses on the
role of viral persistence in the body as a
mechanism for why Long Covid is such a problem. Recently, there are also
signs that viral persistence may
be responsible for Alzheimer's disease - if true, this implies that
addressing viral persistence across all tissue types may be key to
solving aging.
- Low-cost and miniature imaging
tools such as those being built by Openwater can be
powerful for treating microclots,
viral persistence or cancers, and they can also be used for BCI.
- Very similar ideas motivate the construction of social
tools built for highly adversarial environments, such as Community
Notes, and social tools built for reasonably cooperative
environments, such as pol.is.
- Prediction markets are valuable
in both high-cooperation
and high-adversity
environments.
- Zero knowledge proofs and similar technologies for doing computation
on data while preserving privacy both increase the data available for
useful work such as science, and increase privacy.
- Solar power and batteries are great for supercharging
the next wave of clean economic growth, but they are also amazing
for decentralization and physical resilience.
In addition to this, there are also important cross-dependencies
between subject areas:
- BCI is very relevant as an info-defense and collaboration
technology, because it could enable much more detailed
communication of our thoughts and intentions. BCI is not just
bot-to-consciousness: it can also be
consciousness-to-bot-to-consciousness. This echoes Plurality
ideas about the value of BCI.
- A lot of biotech depends on info-sharing, and in
many contexts people will only be comfortable sharing info if they are
confident that it will be used for one application and one application
only. This depends on privacy technology (eg. ZKP, FHE,
obfuscation...)
- Collaboration technology can be used to coordinate
funding for any of the other technology areas.
The
hard question: AI safety, short timelines and regulation
Different people have very different AI timelines. Chart
from Zuzalu in Montenegro, 2023.
The argument against my post last year that I found most compelling
was a critique from the AI safety community. The argument goes: "sure,
if we have half a century until we get strong AI, we can concentrate our
energies and build all of these good things. But actually it's looking
likely we have three year timelines until AGI, and another three years
until superintelligence. And so if we don't want the world to be
destroyed or otherwise fall into an irreversible trap, we can't
just accelerate the good, we also have to slow down the bad,
and this means passing powerful regulations that may make powerful
people upset". In my post last year, I did indeed not call for any
specific strategy to "slow down the bad", beyond vague appeals to not
build risky forms of superintelligence. And so here, it's worth
addressing the question directly: if we are living in the least
convenient world, where AI risk is high and timelines are
potentially five years away, what regulation would I support?
First, the
case for caution around new regulations
Last year, the main proposed AI regulation was the SB-1047
bill in California. SB-1047 required developers of the most powerful
models (those that take over $100M to train, or over $10M in the case of
fine-tunes) to take some safety-testing measures before releasing. In
addition, it imposed liability on developers of AI models if they take
insufficient care. Many detractors argued that the bill was "a
threat to open source"; I disagreed, because the cost thresholds
meant that it affected only the most powerful models: even Llama3 was probably
under the threshold. Looking back, however, I think there was a
larger issue with the bill: like most
regulation, it was overfitted to the present-day situation. The
focus on training cost is proving fragile in the face of new technology
already: the recent state-of-the-art quality Deepseek v3 model was
trained at a cost of only $6
million, and in new models like o1 costs are shifting from training to
inference more generally.
Second, the most likely actors who would actually be responsible for
an AI superintelligence doom scenario are realistically militaries. As
we have
seen in the last half-century of biosecurity (and beyond),
militaries are willing to do scary things, and they can easily make
mistakes. AI military use is advancing rapidly today (see Ukraine,
Gaza).
And any safety regulation that a government passes, by default would
exempt their own military, and corporations that cooperate closely with
the military.
That said, these arguments are not reasons to throw up our hands and
do nothing. Rather, we can use them as a guide, and try to come up with
rules that would trigger these concerns the least.
Strategy 1: Liability
If someone acts in some way that causes legally actionable damage,
they could
be
sued.
This does not solve the problem of risks from militaries and other
"above-the-law" actors, but it is a very general-purpose approach that
avoids overfit, and is often supported by
libertarian-leaning economists
for this exact reason.
The primary targets for liability that have been considered so far
are:
- Users - the people who use the AI
- Deployers - intermediaries who offer AI as a
service for users
- Developers - the people who build the AI
Putting liability on users feels most incentive-compatible. While the
link between how a model is developed and how it ends up being used is
often unclear, the user decides exactly how the AI is used.
Liability on users creates a strong pressure to do AI in what I
consider the right way: focus on building mecha suits for the human
mind, not on creating new forms of self-sustaining intelligent
life. The former responds regularly to user intent, and so
would not cause catastrophic actions unless the user wanted them to. The
latter would have the greatest risk of going off and creating a classic
"AI going rogue" scenario. Another benefit of putting liability as close
to end usage as possible is that it minimizes the risk that liability
will lead to people taking actions that are harmful in other ways (eg.
closed source, KYC and surveillance, state/business collusion to
clandestinely restrict users as with eg. debanking, locking out large
regions of the world).
There is a classic argument against putting liability solely on
users: users may be regular individuals without too much money, or even
anonymous, leaving no one that could actually pay for a catastrophic
harm. This argument can be overstated: even if some users are
too small to be held liable, the average customer of an AI
developer is not, and so AI developers would still be incentivized
to build products that can give their users assurance that they won't
face high liability risk. That said, it is still a valid argument, and
needs to be addressed. You need to incentivize someone in the
pipeline who has the resources to take the appropriate level of care to
do so, and deployers and developers are both easily available targets
who still have
a lot of influence over how safe or unsafe a model is.
Deployer liability seems reasonable. A commonly cited concern is that
it would not work for open-source models, but this seems manageable,
especially since there is a high chance that the most powerful models
will be closed source anyway (and if they turn out to be open, then
while deployer liability does not end up very useful, it also does not
cause much harm). Developer liability has the same concern (though with
open source models there is some speed-bump of needing to
fine-tune a model to cause it to do some originally disallowed thing),
but the same counterargument applies. As a general principle,
putting a "tax" on control, and essentially saying "you
can build
things you don't control, or you can build things you do control,
but if you build things you do control, then 20% of the control has to
be used for our purposes", seems like a reasonable position for legal
systems to have.
One idea that seems under-explored is putting liability on other
actors in the pipeline, who are more guaranteed to be well-resourced.
A very d/acc-friendly option is to put liability on
owners or operators of any equipment that an AI takes over (eg.
by hacking) in the process of executing some catastrophically harmful
action. This would create a very broad incentive to do the hard work to
make the world's (especially computing and bio) infrastructure as secure
as possible.
Strategy 2: Global "soft pause" button on industrial-scale hardware
If I were convinced that we need something more "muscular"
than liability rules, this is what I would go for. The goal would be to
have the capability to reduce worldwide available compute by ~90-99% for
1-2 years at a critical period, to buy more time for humanity to
prepare. The value of 1-2 years should not be understated: a year of
"wartime mode" can easily be worth a hundred years of work under
conditions of complacency. Ways to implement a "pause" have been explored, including
concrete proposals like requiring
registration and verifying
location of hardware.
A more advanced approach is to use clever cryptographic trickery: for
example, industrial-scale (but not consumer) AI hardware that gets
produced could be equipped with a trusted hardware chip that only allows
it to continue running if it gets 3/3 signatures once a week from major
international bodies, including at least one non-military-affiliated.
The signatures would be device-independent (if desired, we could even
require a zero-knowledge proof that they were published on a
blockchain), so it would be all-or-nothing: there would be no practical
way to authorize one device to keep running without authorizing all
other devices.
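As a concrete sketch of what that might look like (every name and parameter below is hypothetical, and real firmware would use a hardened signature scheme rather than the stub shown), the chip's check could be as simple as: refuse to run unless all designated bodies have signed a message that commits only to the current week, not to any particular device.

```python
import time
from dataclasses import dataclass

WEEK_SECONDS = 7 * 24 * 3600

# Public keys of the (hypothetical) signing bodies, burned in at manufacture.
SIGNER_PUBKEYS = [b"intl_body_a", b"intl_body_b", b"non_military_body_c"]

def verify(pubkey: bytes, message: bytes, signature: bytes) -> bool:
    """Stand-in for the chip's built-in signature verification (eg. Ed25519)."""
    raise NotImplementedError("provided by the trusted hardware in practice")

@dataclass
class Authorization:
    week_index: int        # which week this authorization covers
    signatures: list       # one signature per body, in SIGNER_PUBKEYS order

def may_run(auth: Authorization, now: float | None = None) -> bool:
    """Allow compute only if ALL bodies signed this week's message.

    The message commits only to the week index, so a published authorization
    (or a zero-knowledge proof that it was posted on-chain) unlocks every
    compliant device at once - the all-or-nothing property described above.
    """
    now = time.time() if now is None else now
    if auth.week_index != int(now // WEEK_SECONDS):
        return False  # stale or future authorization
    if len(auth.signatures) != len(SIGNER_PUBKEYS):
        return False
    message = b"allow-industrial-ai-compute:week=%d" % auth.week_index
    return all(verify(pk, message, sig)
               for pk, sig in zip(SIGNER_PUBKEYS, auth.signatures))
```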
This feels like it "checks the boxes" in terms of maximizing benefits
and minimizing risks:
- It's a useful capability to have: if we get warning signs that
near-superintelligent AI is starting to do things that risk catastrophic
damage, we will want to take the transition more slowly.
- Until such a critical moment happens, merely having the
capability to soft-pause would cause little harm to
developers.
- Focusing on industrial-scale hardware, and only aiming for 90-99% as
a goal, would avoid some dystopian effort of putting spy chips or kill
switches in consumer laptops or forcing draconian measures on small
countries against their will.
- Focusing on hardware seems very robust to changes in technology. We
have seen across multiple generations of AI that quality depends heavily
on available compute, especially so in the early versions of a new
paradigm. Hence, reducing available compute by 10-100x can easily make
the difference between a runaway superintelligent AI winning or losing a
fast-paced battle against humans trying to stop it.
- The inherent annoyingness of needing to get online once a week for a
signature would create a strong pressure against extending the scheme to
consumer hardware.
- It could be verified with random inspection, and doing it at
hardware level would make it difficult to exempt specific users
(approaches that are based on legally forcing shutdown, rather
than technically, do not have this all-or-nothing property,
which gives them a much greater risk of slippery-sloping into
exemptions for militaries etc)
Hardware regulation is being strongly considered already, though
generally through the frame of export
controls, which inherently have a "we trust our side, but not the
other side" philosophy. Leopold Aschenbrenner has famously advocated
that the US should race
to get a decisive advantage and then essentially
force China to sign a protocol limiting how many
boxes they are allowed to run. To me, this approach seems
risky, and could combine the flaws of multipolar
races and centralization. If we have to limit people, it seems
better to limit everyone on an equal footing, and do the hard work of
actually trying to cooperate to organize that instead of one party
seeking to dominate everyone else.
d/acc technologies in AI risk
Both of these strategies (liability and the hardware pause button)
have holes in them, and it's clear that they are only temporary
stopgaps: if something becomes possible to do on a supercomputer at time
T, it will likely be possible on a laptop at time T + 5 years. And so we
need something more stable to buy time for. Many d/acc
technologies are relevant here. We can look at the role of d/acc tech as
follows: if AI takes over the world, how would it do so?
- It hacks our computers → cyber-defense
- It creates a super-plague → bio-defense
- It convinces us (either to trust it, or to distrust each
other) → info-defense
As briefly mentioned above, liability rules are a naturally
d/acc-friendly style of regulation, because they can very
efficiently motivate all parts of the world to adopt these defenses and
take them seriously. Taiwan has been
experimenting with liability for false advertising recently, which
can be viewed as one example of using liability to encourage info
defense. We should not be too enthusiastic about putting
liability everywhere, and remember the benefits of plain old freedom in
enabling the little guy to participate in innovation without fear of
lawsuits, but where we do want a stronger push to be secure, liability
can be quite flexible and effective.
The role of crypto in d/acc
Much of d/acc goes far beyond typical blockchain topics: biosecurity,
BCI and collaborative discourse tools seem far away from things that a
crypto person normally talks about. However, I think there are some
important ties between crypto and d/acc, particularly:
- d/acc is an extension of the underlying values of
crypto (decentralization, censorship resistance, open global
economy and society) to other areas of technology.
- Because crypto users are natural early adopters, and there is an
alignment of values, crypto communities are natural early users
of d/acc technology. The heavy emphasis on community (both
online and offline, eg. events and popups), and the fact that these
communities actually do high-stakes things instead of just talking to
each other, makes crypto communities particularly appealing incubators
and testbeds for d/acc technologies that fundamentally work on groups
rather than individuals (eg. a large fraction of info defense and bio
defense). Crypto people just do things, together.
- Many crypto technologies can be used in d/acc
subject areas: blockchains for building more robust and decentralized
financial, governance and social media infrastructure, zero knowledge
proofs for protecting privacy, etc. Today, many of the largest
prediction markets are built on blockchains, and they are gradually
becoming more sophisticated, decentralized and democratic.
- There are also win-win opportunities to collaborate on
crypto-adjacent technologies that are very useful to crypto
projects, but are also key to achieving d/acc goals: formal verification, computer
software and hardware
security, and adversarially-robust governance technology. These
things make the Ethereum blockchain, wallets and DAOs more secure and
robust, and they also accomplish important civilizational defense goals
like reducing our vulnerability to cyberattacks, including potentially
from superintelligent AI.
Cursive, an app that uses fully homomorphic encryption
(FHE) to allow users to identify areas of common interest with other
users, while preserving privacy. This was used at Edge City, one of the
many offshoots of Zuzalu,
in Chiang Mai.
In addition to these direct intersections, there is also another
crucial shared point of interest: funding
mechanisms.
d/acc and public goods
funding
One of my ongoing interests is coming up with better mechanisms to
fund public goods: projects that are valuable to very
large groups of people, but that do
not have a naturally accessible business model. My past work on this
includes my contributions to quadratic
funding and its use in Gitcoin
Grants, retro
PGF, and more recently deep
funding.
Many people are skeptical of public goods as a concept. The
skepticism generally comes from two sources:
- The fact that public goods have historically been used as a
justification for heavy-handed central planning and government
intervention in society and economy.
- A general perception that public goods funding lacks rigor and is
run on social
desirability bias - what sounds good, rather than what is good - and
favors insiders who can play the social game.
These are important critiques, and good critiques. However, I would
argue that strong decentralized public goods funding is essential to a
d/acc vision, because a key d/acc goal (minimizing central points of
control) inherently frustrates many traditional business models. It is
possible to build successful businesses on open source - several
Balvi grantees
are doing so - but in some situations it is hard enough that important
projects need extra ongoing support. Hence, we have to do the
hard thing, and figure out how to do public goods funding in a way that
addresses both of the above critiques.
The solution to the first problem is basically credible neutrality
and decentralization.
Central planning is problematic because it gives control to elites who
might turn abusive, and because it often overfits
to the present-day situation and becomes less and less effective
over time. Quadratic funding and similar mechanisms were precisely about
funding public goods in a way that is as credibly neutral and
(architecturally and politically) decentralized as possible.
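For readers who have not seen it, the core of quadratic funding fits in a few lines: a project's total funding is set to the square of the sum of the square roots of its individual contributions, which means many small contributors attract far more matching than one large contributor giving the same total. The sketch below shows the idealized formula; real deployments like Gitcoin Grants cap and rescale the match to fit a fixed matching pool.

```python
import math

def quadratic_funding_match(contributions):
    """Matching amount under the idealized quadratic funding formula:
    total funding = (sum of sqrt(c_i))^2, minus what was paid directly."""
    total = sum(math.sqrt(c) for c in contributions) ** 2
    return total - sum(contributions)

# 100 people giving $1 each attract far more matching than one $100 donor.
print(quadratic_funding_match([1.0] * 100))  # 9900.0
print(quadratic_funding_match([100.0]))      # 0.0
```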
The second problem is more challenging. With quadratic funding, a
common critique has been that it quickly
becomes a popularity contest, requiring projects seeking funding to spend a
lot of effort publicly campaigning. Furthermore, projects that are "in
front of people's eyeballs" (eg. end-user applications) get funded, but
projects that are more in the background (the archetypal "dependency maintained by a guy in
Nebraska") don't get any funding at all. Optimism retro funding
relies on a smaller number of expert badge holders; here, popularity
contest effects are diminished, but social effects of having close
personal ties with the badge holders are magnified.
Deep funding
is my own latest effort to solve this problem. Deep funding has two
primary innovations:
- The dependency graph. Instead of asking each juror
a global question ("what is the value of project A to
humanity?"), we ask a local question ("is project A or project
B more valuable to outcome C? And by how much?"). Humans are notoriously
bad at global questions: in a famous study,
when asked how much money they would pay to save N birds, responders
answered roughly $80 for N=2,000, N=20,000 and N=200,000. Local
questions are much more tractable. We then combine local answers into a
global answer by maintaining a "dependency graph": for each project,
what other projects contributed to its success, and how much?
- AI as distilled
human judgement. Jurors are only each assigned a small
random sample of all questions. There is an open competition through
which anyone can submit AI models that try to efficiently fill in
all the edges in the graph. The final answer is the weighted
sum of models that is most compatible with the jury answers. See here for a code
example. This approach allows the mechanism to scale to a very large
size, while requiring the jury to submit only a small number of "bits"
of information. This reduces opportunity for corruption, and ensures
that each bit is high-quality: jurors can afford to think for a long
time on each question, instead of quickly clicking through hundreds. By
using an open competition of AIs, we reduce the bias from any single
AI training and administration process. Open market of AIs as
the engine, humans as the steering wheel.
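To make the two ideas above a bit more concrete, here is a minimal sketch of the final selection step, using made-up data and a deliberately simplified rule (picking the single best-fitting submission, rather than the weighted combination of models that deep funding actually uses):

```python
import math

# Each submission assigns a credit weight to every (contributor -> outcome)
# edge in the dependency graph. Jurors answer local questions of the form:
# "for outcome C, how many times more valuable was project A than project B?"

def fit_error(submission, juror_answers):
    """Sum of squared log-errors between a submission's implied ratios
    and the jurors' stated ratios (lower is better)."""
    err = 0.0
    for a, b, outcome, stated_ratio in juror_answers:
        implied = submission[(a, outcome)] / submission[(b, outcome)]
        err += (math.log(implied) - math.log(stated_ratio)) ** 2
    return err

def pick_best(submissions, juror_answers):
    """Return the name of the submission most compatible with the jury."""
    return min(submissions, key=lambda name: fit_error(submissions[name], juror_answers))

# Toy data: one juror says that, for the outcome "ethereum", project "geth"
# was about 3x as valuable as project "solc".
submissions = {
    "model_1": {("geth", "ethereum"): 0.6, ("solc", "ethereum"): 0.2},
    "model_2": {("geth", "ethereum"): 0.4, ("solc", "ethereum"): 0.4},
}
juror_answers = [("geth", "solc", "ethereum", 3.0)]
print(pick_best(submissions, juror_answers))  # model_1
```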
But deep funding is only the latest example; there have been other
public goods funding mechanism ideas before, and there will be many more
in the future. allo.expert does a
good job of cataloguing them. The underlying goal is to create a
societal gadget that can fund public goods with a level of accuracy,
fairness and open entry that at least approximates the way that markets
fund private goods. It does not have to be perfect; after all,
markets are far from perfect themselves. But it should be effective
enough that developers working on top-quality open-source projects that
benefit everyone can afford to keep doing so without feeling the need to
make unacceptable compromises.
Today, the leading projects in most d/acc subject areas (vaccines, BCI,
"borderline BCI" like wrist
myoelectrics and eye tracking, anti-aging medicines, hardware, etc.)
are proprietary. This has big downsides in terms of securing public
trust, as we have seen in many
of the above
areas already.
It also shifts attention toward competitive dynamics ("OUR TEAM must win
this critical industry!"), and away from the larger competition of
making sure these technologies come fast enough to be there to protect
us in a world of superintelligent AI. For these reasons, robust public
goods funding can be a strong booster of openness and freedom. This is
another way in which the crypto community can help d/acc: by putting
serious effort into exploring these funding mechanisms and making them
work well within its own context, preparing them for much wider adoption
for open-source science and technology more generally.
The future
The next few decades bring important challenges. There are two
challenges that have recently been on my mind:
- Powerful new waves of technology, especially strong AI, are
coming quickly, and these technologies come with important traps that we
need to avoid. It may take five years for "artificial
superintelligence" to get here, it may take fifty. Either way, it's not
clear that the default outcome is automatically positive, and as
described in this post and the
previous one, there are multiple traps to avoid.
- The world is becoming less cooperative. Many
powerful actors that before seemed to at least sometimes act on
high-minded principles (cosmopolitanism, freedom, common humanity... the
list goes on) are now more openly, and aggressively, pursuing personal
or tribal self-interest.
However, each of these challenges has a silver lining. First,
we now have very powerful tools to do our remaining work more
quickly:
- Present-day and near-future AI can be used to build other
technologies, and can be used as an ingredient in governance (as in deep
funding or info
finance). It's also very relevant to BCI, which can itself provide
further productivity gains.
- Coordination is now possible at much greater scales than
before. The internet and social media extended the reach of
coordination, global finance (including crypto) increased its
power, and now info defense and collaboration tools can
increase its quality, and perhaps soon BCI in its
human-to-computer-to-human form can increase its depth.
- Formal verification, sandboxing (web browsers, Docker, Qubes, GrapheneOS, and much more), secure
hardware modules, and other technologies are improving to the point of
making much better cybersecurity possible.
- Writing any kind of software is significantly easier than it was two
years ago.
- Recent basic research on understanding how viruses work, especially
the simple understanding that the most important form of transmission to
guard against is airborne, is showing a much clearer path for how to
improve bio defense.
- Recent advances in biotech (eg. CRISPR, advances
in bio-imaging) are making
all kinds of biotech, whether for defense, or longevity,
or super-happiness,
or exploring multiple
novel bio hypotheses,
or simply doing
really cool things, much more accessible.
- Advances in computing and biotech together are enabling synthetic
bio tools you can use to adapt, monitor, and improve your health.
Cyber-defense tech such as cryptography makes the personalized dimension
of this much more viable.
Second, now that many principles that we hold dear are no
longer occupied by a few particular segments of the old guard, they can
be reclaimed by a broad coalition that anyone in the world is welcome to
join. This is probably the biggest upside of recent political
"realignments" around the world, and one that is worth taking advantage
of. Crypto has already done an excellent job taking advantage of this
and finding global appeal; d/acc can do the same.
Access to tools means that we are able to adapt and improve our
biologies and our environments, and the "defense" part of d/acc means
that we are able to do this without infringing on others' freedom to do
the same. Liberal
pluralist principles mean we can have a lot of diversity in how this
is done, and our commitment to common humanity goals means it should get
done.
We, humans, continue to be the brightest star. The task ahead of us,
of building an even brighter 21ˢᵗ century that preserves human survival
and freedom and agency as we head toward the stars, is a challenging
one. But I am confident that we are up to it.
d/acc: one year later
2025 Jan 05 See all postsSpecial thanks to Liraz Siri, Janine Leger and Balvi volunteers for feedback and review
About a year ago, I wrote an article on techno-optimism, describing my general enthusiasm for technology and the massive benefits that it can bring, as well as my caution around a few specific concerns, largely centered around superintelligent AI, and the risk that it may bring about either doom, or irreversible human disempowerment, if the technology is built in the wrong ways. One of the core ideas in my post was the philosophy of: decentralized and democratic, differential defensive
acceleration . Accelerate technology, but differentially focus
on technologies improve our ability to defend, rather than our ability
to cause harm, and in technologies that distribute power rather than
concentrating it in the hands of a singular elite that decides what is
true, false, good or evil on behalf of everyone. Defense like in
democratic Switzerland
and historically quasi-anarchist Zomia,
not like the lords and castles of medieval feudalism.
In the year since then, the philosophy and ideas have matured significantly. I talked about the ideas on 80,000 Hours, and have seen many responses, largely positive and some critical. The work itself is continuing and bearing fruit: we're seeing progress in verifiable open-source vaccines, growing recognition of the value of healthy indoor air, Community Notes continuing to shine, a breakout year for prediction markets as an info tool, ZK-SNARKs in government ID and social media (and securing Ethereum wallets through account abstraction), open-source imaging tools with applications in medicine and BCI, and more. In the fall, we had the first significant d/acc event: "d/acc Discovery Day" (d/aDDy) at Devcon, which featured a full day of speakers from all pillars of d/acc (bio, physical, cyber, info defense, plus neurotech). People who have been working on these technologies for years are increasingly aware of each other's work, and people outside are increasingly aware of the larger story: the same kinds of values that motivated Ethereum and crypto can be applied to the wider world.
Table of contents
What d/acc is and is not
It's the year 2042. You're seeing reports in the media about a new pandemic potentially in your city. You're used to these: people get over-excited about every animal disease mutation, and most of them come to nothing. The previous two actual potential pandemics were detected very early through wastewater monitoring and open-source analysis of social media, and stopped completely in their tracks. But this time, prediction markets are showing a 60% chance of at least 10,000 cases, so you're more worried.
The sequence for the virus was identified yesterday. Software updates for your pocket air tester to allow it to detect the new virus (from a single breath, or from 15 minutes of exposure to indoor air in a room) are available already. Open-source instructions and code for generating a vaccine using equipment that can be found in any modern medical facility worldwide should be available within weeks. Most people are not yet taking any action at all, relying mostly on widespread adoption of air filtering and ventilation to protect them. You have an immune condition so you're more cautious: your open-source locally-running personal assistant AI, which handles among other tasks navigation and restaurant and event recommendation, is also taking into account real-time air tester and CO2 data to only recommend the safest venues. The data is provided by many thousands of participants and devices using ZK-SNARKs and differential privacy to minimize the risk that the data can be leaked or abused for any other purpose (if you want to contribute data to these datasets, there's other personal assistant AIs that verify formal proofs that these cryptographic gadgets actually work).
Two months later, the pandemic disappeared: it seems like 60% of people following the basic protocol of putting on a mask if the air tester beeps and shows the virus present, and staying home if they test positive personally, was enough to push the transmission rate, already heavily reduced due to passive heavy air filtering, to below 1. A disease that simulations show might have been five times worse than Covid twenty years ago turns out to be a non-issue today.
Devcon d/acc day
One of the most positive takeaways from the d/acc event at Devcon was the extent to which the d/acc umbrella successfully brought people together from very different fields, and got them to actually be interested in each other's work.
Creating events with "diversity" is easy, but making different people with different backgrounds and interests actually relate to each other is hard. I still have memories of being forced to watch long operas in middle school and high school, and personally finding them boring. I knew that I was "supposed to" appreciate them, because if I did not then I would be an uncultured computer science slob, but I did not connect with the content on a more genuine level. d/acc day did not feel like that at all: it felt like people actually enjoyed learning about very different kinds of work in different fields.
If we want to create a brighter alternative to domination, deceleration and doom, we need this kind of broad coalition building. d/acc seemed to be actually succeeding at it, and that alone shows the value of the idea.
The core idea of d/acc is simple: decentralized and democratic differential defensive acceleration. Build technologies that shift the offense/defense balance toward defense, and do so in a way that does not rely on handing over more power to centralized authorities. There is an inherent tie between these two sides: any kind of decentralized, democratic or liberal political structure thrives best when defense is easy, and suffers the most challenge when defense is hard - in those cases, the far more likely outcome is some period of war of all against all, and eventually an equilibrium of rule by the strongest.
The core principle of d/acc extends across many domains:
Chart from My Techno-Optimism, last year
One way to understand the importance of trying to be decentralized, defensive and acceleration-minded at the same time, is to contrast it with the philosophy that you get when you give up each of the three.
Decentralized acceleration, but don't care about the "differential defensive" part. Basically, be an e/acc, but decentralized. There are plenty of people who take this approach, some who label themselves d/acc but helpfully describe their focus as "OFFENSE", but also plenty of others who are excited about "decentralized AI" and similar topics in a more moderate way, but in my view put insufficient attention on the "defensive" aspect.
In my view, this approach may avoid the risk of global human dictatorship by the specific tribe you're worried about, but it doesn't have an answer to the underlying structural problem: in an offense-favoring environment, there's constant ongoing risk of either catastrophe, or someone positioning themselves as a protector and permanently establishing themselves at the top. In the specific case of AI, it also doesn't have a good answer to the risk of humans as a whole being disempowered compared to AIs.
Differential defensive acceleration, but don't care about "decentralized and democratic". Embracing centralized control for the sake of safety has permanent appeal to a subset of people, and readers are undoubtedly already familiar with many examples, and the downsides of them. Recently, some have worried that extreme centralized control is the only solution to the extremes of future technologies: see this hypothetical scenario where "Everybody is fitted with a ‘freedom tag' – a sequent to the more limited wearable surveillance devices familiar today, such as the ankle tag used in several countries as a prison alternative ... encrypted video and audio is continuously uploaded and machine-interpreted in real time". However, centralized control is a spectrum. One milder version of centralized control that's usually overlooked, but is still harmful, is resistance to public scrutiny in biotech (eg. food, vaccines), and the closed source norms that allow this resistance to go unchallenged.
The risk of this approach is, of course, that the center is often itself the source of risk. We saw this in Covid, where gain-of-function research funded by multiple major world governments may have been the source of the pandemic, centralized epistemology led to the WHO not acknowledging for years that Covid is airborne, and coercive social distancing and vaccine mandates led to political backlash that may reverberate for decades. A similar situation may well happen around any risks to do with AI, or other risky technologies. A decentralized approach would better address risks from the center itself.
Decentralized defense, but don't care about acceleration - basically, attempting to slow down technological progress, or economic degrowth.
The challenge with this strategy is twofold. First, on balance technology and economic growth have been massively good for humanity, and any delay to it imposes costs that are hard to overstate. Second, in a non-totalitarian world, not advancing is unstable: whoever "cheats" the most and finds plausibly-deniable ways to advance anyway will get ahead. Decelerationist strategies can work to some extent in some contexts: European food being healthier than American food is one example, the success of nuclear non-proliferation so far is another. But they cannot work forever.
With d/acc, we want to:
Another way to think about d/acc is to go back to a frame from the Pirate Party movements in Europe in the late 00s: empowerment.
The goal is to build a world where we preserve human agency, achieving both the negative freedom of avoiding active interference (whether from other people acting as private citizens, or from governments, or from superintelligent bots) with our ability to shape our own destinies, and the positive freedom of ensuring that we have the knowledge and resources to. This echoes a centuries-long classical liberal tradition, which also includes Stewart Brand's focus on "access to tools" and John Stuart Mill's emphasis on education alongside liberty as key components of human progress - and perhaps, one might add, Buckminster Fuller's desire to see the process of global solving be participatory and widely distributed. We can see d/acc as a way of achieving these same goals given the technological landscape of the 21ˢᵗ century.
The third dimension: survive and thrive
In my post last year, d/acc specifically focused on the defensive technologies: physical defense, bio defense, cyber defense and info defense. However, decentralized defense is not enough to make the world great: you also need a forward-thinking positive vision for what humanity can use its newfound decentralization and safety to accomplish.
Last year's post did contain a positive vision, in two places:
But these two points felt disconnected from the d/acc argument: "here are some ideas for creating a more democratic and defense-favoring world at the base layer, and by the way here are some unrelated ideas for how we might do superintelligence".
However, I think in reality there are some very important connections between what labelled above as "defensive" and "progressive" d/acc technology. Let's expand the d/acc chart from last year's post, by adding this axis (also, let's relabel it "survive vs thrive") to the chart and seeing what comes out:
There is a consistent pattern, across all domains, that the science, ideas and tools that can help us "survive" in one domain are closely related to the science, ideas and tools that can help us "thrive". Some examples:
In addition to this, there are also important cross-dependencies between subject areas:
The hard question: AI safety, short timelines and regulation
Different people have very different AI timelines. Chart from Zuzalu in Montenegro, 2023.
The argument against my post last year that I found most compelling was a critique from the AI safety community. The argument goes: "sure, if we have half a century until we get strong AI, we can concentrate our energies and build all of these good things. But actually it's looking likely we have three year timelines until AGI, and another three years until superintelligence. And so if we don't want the world to be destroyed or otherwise fall into an irreversible trap, we can't just accelerate the good, we also have to slow down the bad, and this means passing powerful regulations that may make powerful people upset". In my post last year, I did indeed not call for any specific strategy to "slow down the bad", beyond vague appeals to not build risky forms of superintelligence. And so here, it's worth addressing the question directly: if we are living in the least convenient world, where AI risk is high and timelines are potentially five years away, what regulation would I support?
First, the case for caution around new regulations
Last year, the main proposed AI regulation was the SB-1047 bill in California. SB-1047 required developers of the most powerful models (those that take over $100M to train, or over $10M in the case of fine-tunes) to take some safety-testing measures before releasing. In addition, it imposed liability on developers of AI models if they take insufficient care. Many detractors argued that the bill was "a threat to open source"; I disagreed, because the cost thresholds meant that it affected only the most powerful models: even LLama3 was probably under the threshold. Looking back, however, I think there was a larger issue with the bill: like most regulation, it was overfitted to the present-day situation. The focus on training cost is proving fragile in the face of new technology already: the recent state-of-the-art quality Deepseek v3 model was trained at a cost of only $6 million, and in new models like o1 costs are shifting from training to inference more generally.
Second, the most likely actors who would actually be responsible for an AI superintelligence doom scenario are realistically militaries. As we have seen in the last half-century of biosecurity (and beyond), militaries are willing to do scary things, and they can easily make mistakes. AI military use is advancing rapidly today (see Ukraine, Gaza). And any safety regulation that a government passes, by default would exempt their own military, and corporations that cooperate closely with the military.
That said, these arguments are not reasons to throw up our hands and do nothing. Rather, we can use them as a guide, and try to come up with rules that would trigger these concerns the least.
Strategy 1: Liability
If someone acts in some way that causes legally actionable damage, they could be sued. This does not solve the problem of risks from militaries and other "above-the-law" actors, but it is a very general-purpose approach that avoids overfit, and is often supported by libertarian-leaning economists for this exact reason.
The primary targets for liability that have been considered so far are:
Putting liability on users feels most incentive-compatible. While the link between how a model is developed and how it ends up being used is often unclear, the user decides exactly how the AI is used. Liability on users creates a strong pressure to do AI in what I consider the right way: focus on building mecha suits for the human mind, not on creating new forms of self-sustaining intelligent life. The former responds regularly to user intent, and so would not cause catastrophic actions unless the user wanted them to. The latter would have the greatest risk of going off and creating a classic "AI going rogue" scenario. Another benefit of putting liability as close to end usage as possible is that it minimizes the risk that liability will lead to people taking actions that are harmful in other ways (eg. closed source, KYC and surveillance, state/business collusion to clandestinely restrict users as with eg. debanking, locking out large regions of the world).
There is a classic argument against putting liability solely on users: users may be regular individuals without too much money, or even anonymous, leaving no one that could actually pay for a catastrophic harm. This argument can be overstated: even if some users are too small to be held liable, the average customer of an AI developer is not, and so AI developers would still be incentivized to build products that can give their users assurance that they won't face high liability risk. That said, it is still a valid argument, and needs to be addressed. You need to incentivize someone in the pipeline who has the resources to take the appropriate level of care to do so, and deployers and developers are both easily available targets who still have a lot of influence over how safe or unsafe a model is.
Deployer liability seems reasonable. A commonly cited concern is that it would not work for open-source models, but this seems manageable, especially since there is a high chance that the most powerful models will be closed source anyway (and if they turn out to be open, then while deployer liability does not end up very useful, it also does not cause much harm). Developer liability has the same concern (though with open source models there is some speed-bump of needing to fine-tune a model to cause it to do some originally disallowed thing), but the same counterargument applies. As a general principle, putting a "tax" on control, and essentially saying "you can build things you don't control, or you can build things you do control, but if you build things you do control, then 20% of the control has to be used for our purposes", seems like a reasonable position for legal systems to have.
One idea that seems under-explored is putting liability on other actors in the pipeline, who are more guaranteed to be well-resourced. One idea that is very d/acc friendly is to put liability on owners or operators of any equipment that an AI takes over (eg. by hacking) in the process of executing some catastrophically harmful action. This would create a very broad incentive to do the hard work to make the world's (especially computing and bio) infrastructure as secure as possible.
Strategy 2: Global "soft pause" button on industrial-scale hardware
If I were convinced that we need something more "muscular" than liability rules, this is what I would go for. The goal would be to have the capability to reduce worldwide available compute by ~90-99% for 1-2 years at a critical period, to buy more time for humanity to prepare. The value of 1-2 years should not be understated: a year of "wartime mode" can easily be worth a hundred years of work under conditions of complacency. Ways to implement a "pause" have been explored, including concrete proposals like requiring registration and verifying the location of hardware.
A more advanced approach is to use clever cryptographic trickery: for example, industrial-scale (but not consumer) AI hardware that gets produced could be equipped with a trusted hardware chip that only allows it to continue running if it gets 3/3 signatures once a week from major international bodies, including at least one non-military-affiliated. The signatures would be device-independent (if desired, we could even require a zero-knowledge proof that they were published on a blockchain), so it would be all-or-nothing: there would be no practical way to authorize one device to keep running without authorizing all other devices.
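To make the mechanism concrete, here is a minimal sketch (in Python, with hypothetical names of my own; the text above only describes the idea in prose) of the check such a chip could run once a week. The important property is that the signed message commits only to the week number, not to any device identifier, so publishing one set of signatures authorizes every device at once, and withholding them de-authorizes every device at once. The Ed25519 calls come from the `cryptography` package; the trusted-hardware enforcement, the non-military-affiliated signer requirement and the optional zero-knowledge proof of blockchain publication sit outside this sketch.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

def device_may_run(week: int,
                   authority_keys: list[Ed25519PublicKey],
                   signatures: list[bytes],
                   required: int = 3) -> bool:
    """Hypothetical weekly check burned into industrial-scale AI hardware.

    authority_keys[i] and signatures[i] belong to the same international body;
    the chip keeps running only if at least `required` of them signed the
    current week's heartbeat message.
    """
    # Device-independent message: every chip in the world verifies the same
    # bytes, so there is no way to authorize one device but not another.
    message = f"AI-HARDWARE-HEARTBEAT:week={week}".encode()
    valid = 0
    for key, sig in zip(authority_keys, signatures):
        try:
            key.verify(sig, message)  # raises InvalidSignature if the check fails
            valid += 1
        except InvalidSignature:
            pass
    return valid >= required
```

A 3-of-3 rule as described above corresponds to `required=3` with three authority keys; the same structure could just as easily use a larger signer set with a threshold.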
This approach feels like it "checks the boxes" in terms of maximizing benefits and minimizing risks.
Hardware regulation is being strongly considered already, though generally through the frame of export controls, which inherently have a "we trust our side, but not the other side" philosophy. Leopold Aschenbrenner has famously advocated that the US should race to get a decisive advantage and then essentially force China to sign a protocol limiting how many boxes they are allowed to run. To me, this approach seems risky, and could combine the flaws of multipolar races and centralization. If we have to limit people, it seems better to limit everyone on an equal footing, and do the hard work of actually trying to cooperate to organize that instead of one party seeking to dominate everyone else.
d/acc technologies in AI risk
Both of these strategies (liability and the hardware pause button) have holes in them, and it's clear that they are only temporary stopgaps: if something becomes possible to do on a supercomputer at time T, it will likely be possible on a laptop at time T + 5 years. So the stopgaps need something more durable to buy time for, and many d/acc technologies are relevant here. One way to see their role is to ask: if an AI were to take over the world, how would it do so? It would have to get through our cyber, bio and info defenses, and those are exactly the paths that d/acc technologies work to harden.
As briefly mentioned above, liability rules are a naturally d/acc-friendly style of regulation, because they can very efficiently motivate all parts of the world to adopt these defenses and take them seriously. Taiwan has been experimenting with liability for false advertising recently, which can be viewed as one example of using liability to encourage info defense. We should not be too enthusiastic about putting liability everywhere, and remember the benefits of plain old freedom in enabling the little guy to participate in innovation without fear of lawsuits, but where we do want a stronger push to be secure, liability can be quite flexible and effective.
The role of crypto in d/acc
Much of d/acc goes far beyond typical blockchain topics: biosecurity, BCI and collaborative discourse tools seem far away from what a crypto person normally talks about. However, I think there are some important ties between crypto and d/acc.
Cursive, an app that uses fully homomorphic encryption (FHE) to allow users to identify areas of common interest with other users, while preserving privacy. This was used at Edge City, one of the many offshoots of Zuzalu, in Chiang Mai.
In addition to these direct intersections, there is also another crucial shared point of interest: funding mechanisms.
d/acc and public goods funding
One of my ongoing interests is coming up with better mechanisms to fund public goods: projects that are valuable to very large groups of people, but that do not have a naturally accessible business model. My past work on this includes my contributions to quadratic funding and its use in Gitcoin Grants, retro PGF, and more recently deep funding.
Many people are skeptical of public goods as a concept. The skepticism generally comes from two sources: the historical association of "public goods" with heavy-handed central planning, and the perception that public goods funding lacks rigor and rewards whoever plays the social game best rather than whoever does the most valuable work.
These are important critiques, and good critiques. However, I would argue that strong decentralized public goods funding is essential to a d/acc vision, because a key d/acc goal (minimizing central points of control) inherently frustrates many traditional business models. It is possible to build successful businesses on open source - several Balvi grantees are doing so - but in some situations it is hard enough that important projects need extra ongoing support. Hence, we have to do the hard thing, and figure out how to do public goods funding in a way that addresses both of the above critiques.
The solution to the first problem is basically credible neutrality and decentralization. Central planning is problematic because it gives control to elites who might turn abusive, and because it often overfits to the present-day situation and becomes less and less effective over time. Quadratic funding and similar mechanisms were precisely about funding public goods in a way that is as credibly neutral and (architecturally and politically) decentralized as possible.
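To make this concrete, here is a minimal sketch of the standard quadratic funding matching rule; the project names and numbers are made up for illustration, and real rounds such as Gitcoin's add extra details like per-project caps and collusion defenses. Each project's raw match is the square of the sum of the square roots of its individual contributions, minus what was contributed directly, scaled to fit the matching pool.

```python
import math

def quadratic_funding_match(contributions: dict[str, list[float]],
                            matching_pool: float) -> dict[str, float]:
    """Split a matching pool across projects using the quadratic funding rule."""
    # Raw match: (sum of sqrt(individual contribution))^2 minus direct contributions.
    raw = {
        project: sum(math.sqrt(c) for c in donors) ** 2 - sum(donors)
        for project, donors in contributions.items()
    }
    total = sum(raw.values())
    scale = matching_pool / total if total > 0 else 0.0
    return {project: amount * scale for project, amount in raw.items()}

# Broad support beats concentrated support, even when the totals are identical:
example = {
    "broad_support": [1.0] * 100,  # 100 donors giving $1 each
    "single_backer": [100.0],      # 1 donor giving $100
}
print(quadratic_funding_match(example, matching_pool=1000))
# broad_support receives essentially the entire match; single_backer receives ~0.
```

The same property that makes the rule credibly neutral (it rewards breadth of support rather than the size of any single backer) is also what drives the popularity-contest dynamic discussed next.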
The second problem is more challenging. With quadratic funding, a common critique has been that it quickly becomes a popularity contest, requiring project funders to spend a lot of effort publicly campaigning. Furthermore, projects that are "in front of people's eyeballs" (eg. end-user applications) get funded, but projects that are more in the background (the archetypal "dependency maintained by a guy in Nebraska") don't get any funding at all. Optimism retro funding relies on a smaller number of expert badge holders; here, popularity contest effects are diminished, but social effects of having close personal ties with the badge holders are magnified.
Deep funding is my own latest effort to solve this problem.
But deep funding is only the latest example; there have been other public goods funding mechanism ideas before, and there will be many more in the future. allo.expert does a good job of cataloguing them. The underlying goal is to create a societal gadget that can fund public goods with a level of accuracy, fairness and open entry that at least approximates the way that markets fund private goods. It does not have to be perfect; after all, markets are far from perfect themselves. But it should be effective enough that developers working on top-quality open-source projects that benefit everyone can afford to keep doing so without feeling the need to make unacceptable compromises.
Today, the leading projects in most d/acc subject areas (vaccines, BCI, "borderline BCI" like wrist myoelectrics and eye tracking, anti-aging medicines, hardware, etc.) are proprietary. This has big downsides for securing public trust, as we have seen in many of the above areas already. It also shifts attention toward competitive dynamics ("OUR TEAM must win this critical industry!"), and away from the larger challenge of making sure these technologies arrive fast enough to protect us in a world of superintelligent AI. For these reasons, robust public goods funding can be a strong booster of openness and freedom. This is another way in which the crypto community can help d/acc: by putting serious effort into exploring these funding mechanisms and making them work well within its own context, preparing them for much wider adoption in open-source science and technology more generally.
The future
The next few decades bring important challenges, and two in particular have recently been on my mind.
However, each of these challenges has a silver lining. First, we now have very powerful tools to do our remaining work more quickly.
Second, now that many principles that we hold dear are no longer occupied by a few particular segments of the old guard, they can be reclaimed by a broad coalition that anyone in the world is welcome to join. This is probably the biggest upside of recent political "realignments" around the world, and one that is worth taking advantage of. Crypto has already done an excellent job taking advantage of this and finding global appeal; d/acc can do the same.
Access to tools means that we are able to adapt and improve our biologies and our environments, and the "defense" part of d/acc means that we are able to do this without infringing on others' freedom to do the same. Liberal pluralist principles mean we can have a lot of diversity in how this is done, and our commitment to common humanity goals means it should get done.
We, humans, continue to be the brightest star. The task ahead of us, of building an even brighter 21ˢᵗ century that preserves human survival and freedom and agency as we head toward the stars, is a challenging one. But I am confident that we are up to it.