"I support it only if it's open source" should be a more common viewpoint
2025 Aug 12
One concern that we often hear about certain kinds of radical
technologies is the possibility that they will exacerbate power
inequalities because they will inevitably be available only to the
wealthy and powerful.
Here
is a quote from someone concerned about the consequences of life
extension:
"Are some people going to be left behind? Are we going to make
society far more unequal than it is now?" he asked. Tuljapurkar
predicted that the lifespan boom will be confined to wealthy countries,
where citizens can afford anti-aging technology and governments can
afford to sponsor scientific research. This disparity complicates the
current debate over access to healthcare, as the rich become
increasingly distanced from the poor, not only in quality but length of
life.
"Big pharmaceutical companies have a well-established track record of
being very difficult when it comes to making things available to those
who can't pay for them," he said.
If anti-aging technologies are distributed in the unchecked free
market, "it's entirely likely to me that we'll wind up with permanent
global underclasses, countries that will get locked into today's
mortality conditions," Tuljapurkar said ... "If that happens, you get
negative feedback, a vicious circle. Those countries that get locked out
stay locked out."
Here are some equally strong words from
an article worrying about the consequences of human genetic
enhancement:
Early this month, scientists announced that they had edited
genes in a human embryo to remove a disease-causing mutation. The
work was astounding and the answer to prayers of many parents. Who
wouldn't want a chance to prevent what would now be needless suffering
by their children?
But that wouldn't be the end of it. Many parents would want to ensure
their children had the best of advantages through genetic improvement.
Those with means could obtain them. With the ability comes ethical
questions beyond the ultimate safety of such techniques. Expense of
procedures will produce scarcity and aggravate income inequality that
already continues to grow.
Similar viewpoints appear in other technology areas as well.
You can find this theme in many criticisms of new technology. A
somewhat related, but importantly different, theme is that of
technological products being used as a vehicle for data collection,
vendor lock-in, deliberately hidden side effects (eg. modern vaccines
have been criticized this way), and other forms of abuse. Newer
technologies tend to create more opportunities to give someone a thing
without giving them the rights to a thing or the full information about
a thing, and so from this lens too older technologies often seem safer.
This is also a form of technology strengthening the powerful at others'
expense, but it's an issue of manufacturer-against-user power
projection through the technology rather than the concern in
the previous examples, which is inequality of
access.
I personally am very
pro-technology, and if it were a binary option of "go further" vs
"stay where we are", I would gladly push forward on everything except
for a very small list (eg. gain of function research, weapons and
superintelligent AI) despite the risks. This is because on the whole the
benefits - much longer and healthier lives, a more prosperous
society, preserving more human relevance in an era of rapidly improving
AI, maintaining cultural continuity through older generations surviving
as people and not just as memoirs in history books - are much
larger than the downsides (which often end up
overrated).
But what if I put myself into the shoes of someone who is either less
sunny on the positive implications, or more concerned that powerful
people will use new technologies to perpetuate their economic dominance
and exert control, or both? For example, I already feel that way toward
"smart home" stuff - the benefit of being able to talk to the light bulb
is outweighed by not wanting my personal life to be streamed to Google
or Apple. If I had more pessimistic assumptions, I could also see myself
feeling that way toward some media technologies: if they enable powerful
people to broadcast messages more effectively than everyone else, then
they can be used to exert control and drown out others, and for many
such technologies the gains from us having better information or better
entertainment are not large enough to compensate for the way they
reallocate power.
Open source as the third way
One viewpoint that I think is heavily under-valued in these
situations is: supporting a technology being developed only if
it's open
source.
There's a very plausible case that open source accelerates progress:
it makes it much easier for people to build on each other's innovations.
There's also a very plausible case that requiring open source
decelerates progress: it prevents people from using a large class of
potential strategies to make money. But the most interesting
consequences of open source are those that pull in directions unrelated
to the "faster vs slower progress" axis:
- Open source improves equality of access: if
something is open source, it is naturally accessible to anyone in any
country. For physical goods and services, people still have to pay
marginal (per-item) costs, but in a large number of cases,
prices of proprietary products are high because the fixed costs
(eg. NRE, non-recurring engineering)
of coming up with the thing are too high to invite more competition, and
so the marginal costs are often quite low (eg. this is definitely true
in pharma).
- Open source improves equality of access to being a
producer. One criticism of giving people free access to end
products (even unquestionably good ones, like healthcare) is that
it doesn't help those people gain skills and experience and climb up the
global economy into prosperity, which is the best truly reliable
guarantor of lasting access to high-quality life (see eg. Magatte Wade
complaining about this regarding aid to Africa). Open source is not
like this: it is fundamentally about enabling people anywhere in the
world to be producers at all parts of the supply chain, and not just
consumers.
- Open source improves verifiability: if something is
open source (which ideally should include not just the output, but also
the process used to come up with it, make parameter choices,
etc), then it's much easier to verify that what you're getting is what
the provider claims you're getting, and for third parties to do research
to identify hidden downsides.
- Open source removes opportunities for vendor
lock-in. If something is open source, then the manufacturer
cannot render it useless by remotely removing features, or simply by
going bankrupt (eg. see concerns about highly computerized/networked
cars no
longer working if the manufacturer shuts down). You always have the
right to
repair things yourself (or to have a different provider do so).
We can analyze this from the perspective of some of the more radical
technologies that I listed near the beginning of the article:
- If we have proprietary life extension technology, then it may be
only accessible to billionaires and political leaders (I personally
expect that this technology will drop in price quickly, but your opinion
on this may be more skeptical than mine). But if it's open source, then
anyone can go and use it and offer it to others cheaply.
- If we have proprietary human genetic enhancement, then it may be
only accessible to billionaires and political leaders, creating an
overclass. (Again, I personally think such tech will diffuse, but there
will definitely be some delta between what the wealthiest get
and what the average person gets). But if it's open source, the delta
between what the well-connected and powerful get and what everyone else
gets will be much smaller.
- For any biotech in general, an open-science
safety testing ecosystem may well be more effective and honest than a
company endorsing the safety of its own product and getting
rubber-stamped by a pliant regulator.
- If only a few people can go to space, depending on how politics goes
there's some chance one of them will take an entire planet or
moon for themselves. If the technology is more widely distributed, they
will have less opportunity to do so.
- If your smart car is open source, then you can verify that the
manufacturer is not spying on you, and you are not dependent on the
manufacturer to be able to keep using your car.
We can sum the argument up in a chart:

[Chart: the options plotted by expected progress against risk of power concentration]
Note that the "build it only if it's open source" bubble
is wider, reflecting larger uncertainty in just how much progress open
source will lead to and just how many power concentration risks it will
prevent. But even still, on average it's a good deal in a large variety
of situations.
Open source and misuse risk
One major argument against open sourcing powerful technologies that
sometimes gets brought up is the risk of zero-sum behavior and
non-hierarchical forms of abuse. Giving everyone nukes would certainly
end nuke inequality (which is a real problem; we see multiple instances
of powerful states using asymmetry of access to nukes to bully others as
we speak), but it would also almost certainly lead to billions of
deaths. To give an example of negative social consequences without
deliberate harm, giving everyone access to plastic surgery may well lead
to a zero-sum competitive game where everyone spends a lot of resources
and even takes health risks to look more beautiful than everyone else,
but at the end we all get used to the higher levels of beauty and
society ends up not really being much better. Some forms of biotech
could end up having these kinds of effects on a larger scale. Many
technologies (in fact, lots of biotech) are in between these two
extremes.
This is a valid argument for wanting to go the opposite way: "I
support it only if it's carefully controlled by trustworthy
gatekeepers". Gatekeepers could allow positive use cases of a
technology while keeping out negative use cases. Gatekeepers could even
be given a public mandate to ensure non-discriminatory access to
everyone who does not break certain rules. However, I have a strong
default skepticism of this approach. The biggest reason why is that I am
generally skeptical that, especially in the modern world, trustworthy
gatekeepers truly exist. Many of the most zero-sum and risky use cases
of technology are military use cases, and militaries have a poor history
of constraining themselves.
A good example is the Soviet
bioweapons program:
Given his restraint with regard to SDI and nuclear weapons,
Gorbachev's actions related to the Soviets' illicit germ weapons program
are puzzling, noted Hoffman.
When Gorbachev came to power in 1985, the Soviet Union had an
extensive biological weapons program initiated by Brezhnev, despite
being a signatory of the Biological Weapons Convention. In addition to
anthrax, the Soviets also were working on smallpox, plague and
tularemia, but their intentions and targets for such weapons are not
clear.
"Kateyev's papers showed there were multiple Central Committee
resolutions about the biowarfare program issued in the mid- to late-80s.
It's hard to believe these were all signed and issued without
Gorbachev's knowledge," Hoffman said.
"There's even a May 1990 memo to Gorbachev about the biological
weapons program – a memo that still didn't tell the whole story. The
Soviets misled the world and they misled their own leaders."
Oh, and see this
link arguing that this bioweapons program may have ended up being
made available to other countries (!!) after the Soviet collapse.
Other countries have their own large mistakes to answer for. I need
not link to everything that has been uncovered regarding many countries'
participation in gain-of-function research and the risks that it implies
(this
book is good though). In the realm of digital software (eg.
finance), the history of weaponized
interdependence shows how what was meant as abuse prevention easily
slides into one-sided power projection by the operator.
This is another weakness of gatekeepers: by default, they will be
controlled by national governments, and these countries' political
systems may well have an incentive to ensure equality of access
within the country, but there is no powerful entity with a
mandate to ensure equality of access between countries.
To be clear, I am not saying "the gatekeepers are bad too, so let's
have a free for all" (at least, not for gain-of-function research).
Rather, I'm saying two things:
- If something has enough "all-against-all abuse"
risks that you would only feel comfortable seeing it done in a
locked-down way with centralized gatekeepers, consider that the
correct solution may be not doing it at all (and investing in
alternative technologies with better risk profiles)
- If something has enough "power dynamics" risks that
you currently do not feel comfortable seeing it done at all, consider
that the correct solution may be doing it, but doing it open
source so that everyone has a fair chance to understand and
participate.

Note also that "open source" does not imply "free for all". For
example, I would favor geoengineering being done in an open-source and
open-science way. But this is not the same as "anyone can go redirect
any rivers and sprinkle what they want into the atmosphere", and it will
not lead to that in practice: laws and international diplomacy exist,
and such actions are easy to detect, making any agreements quite
enforceable. The value of openness is (i) ensuring that it's
more democratized (eg. usable by many countries instead of just
one), and (ii) increasing the accessibility of information, so people
can more effectively form their own judgements of whether or not what is
being done is effective and safe.
Most fundamentally, I see open source as the strongest possible Schelling
point for how technology can be done with less risk of concentrated
wealth and power and less information asymmetry. One can try to construct
more clever institutions that try to split apart the beneficial and
negative use cases of a technology, but in the modern chaotic world, the
approach that is most likely to stick is a very easily
publicly-understandable guarantee that things are happening in the open
and anyone can go and understand what's going on and participate.
In many cases, these concerns are less important than the extreme value
of making technology go faster (or, in a few cases, the importance of
slowing it down as much as possible, until either countermeasures or
alternative ways of achieving the same goal are available). On the
margin, however, the third option - focusing less on rate of
progress, and more on style of progress, and using a norm of
expecting open source as an easily legible lever to push things in a
better direction - is an underrated approach.
"I support it only if it's open source" should be a more common viewpoint
2025 Aug 12 See all postsOne concern that we often hear about certain kinds of radical technologies is the possibility that they will exacerbate power inequalities because they will inevitably be available only to the wealthy and powerful.
Here is a quote from someone concerned about the consequences of life extension:
Here are some equally strong words from an article worrying about the consequences of human genetic enhancement:
And similar viewpoints in other technology areas:
You can find this theme in many criticisms of new technology. A somewhat related, but importantly different, theme is that of technological products being used as a vehicle for data collection, vendor lock-in, deliberately hidden side effects (eg. modern vaccines have been criticized this way), and other forms of abuse. Newer technologies tend to create more opportunities to give someone a thing without giving them the rights to a thing or the full information about a thing, and so from this lens too older technologies often seem safer. This is also a form of technology strengthening the powerful at others' expense, but it's an issue of manufacturer-against-user power projection through the technology rather than the concern in the previous examples, which is inequality of access.
I personally am very pro-technology, and if it were a binary option of "go further" vs "stay where we are", I would gladly push forward on everything except for a very small list (eg. gain of function research, weapons and superintelligent AI) despite the risks. This is because on the whole the benefits - much longer and healthier lives, a more prosperous and society, preserving more human relevance in an era of rapidly improving AI, maintaining cultural continuity through older generations surviving as people and not just as memoirs in history books - are much larger than the downsides (which often end up overrated).
But what if I put myself into the shoes of someone who is either less sunny on the positive implications, or more concerned that powerful people will use new technologies to perpetuate their economic dominance and exert control, or both? For example, I already feel that way toward "smart home" stuff - the benefit of being able to talk to the light bulb is outweighed by not wanting my personal life to be streamed to Google or Apple. If I had more pessimistic assumptions, I could also see myself feeling that way toward some media technologies: if they enable powerful people to broadcast messages more effectively than everyone else, then they can be used to exert control and drown out others, and for many such technologies the gains from us having better information or better entertainment are not large enough to compensate for the way they reallocate power.
Open source as the third way
One viewpoint that I think is heavily under-valued in these situations is: supporting a technology being developed only if it's open source.
There's a very plausible case that open source accelerates progress: it makes it much easier for people to build on each other's innovations. There's also a very plausible case that requiring open source decelerates progress: it prevents people from using a large class of potential strategies to make money. But the most interesting consequences of open source as those that pull in directions unrelated to the "faster vs slower progress" axis:
We can analyze this from the perspective of some of the more radical technologies that I listed near the beginning of the article:
We can sum the argument up in a chart:
Note that the "build it only if it's open source" bubble is wider, reflecting larger uncertainty in just how much progress open source will lead and just how many power concentration risks it will prevent. But even still, on average it's a good deal in a large variety of situations.
Open source and misuse risk
One major argument against open sourcing powerful technologies that sometimes gets brought up is the risk of zero-sum behavior and non-hierarchical forms of abuse. Giving everyone nukes would certainly end nuke inequality (which is a real problem; we see multiple instances of powerful states using asymmetry of access to nukes to bully others as we speak), but it would also almost certainly lead to billions of deaths. To give an example of negative social consequences without deliberate harm, giving everyone access to plastic surgery may well lead to a zero-sum competitive game where everyone spends a lot of resources and even takes health risks to look more beautiful than everyone else, but at the end we all get used to the higher levels of beauty and society ends up not really being much better. Some forms of biotech could end up having these kinds of effects on a larger scale. Many technologies (in fact, lots of biotech) are in between these two extremes.
This is a valid argument for wanting to go the opposite way: "I support it only if it's carefully controlled by trustworthy gatekeepers". Gatekeepers could allow positive use cases of a technology while keeping out negative use cases. Gatekeepers could even be given a public mandate to ensure non-discriminatory access to everyone who does not break certain rules. However, I have a strong default skepticism of this approach. The biggest reason why is that I am generally skeptical that, especially in the modern world, trustworthy gatekeepers truly exist. Many of the most zero-sum and risky use cases of technology are military use cases, and militaries have a poor history of constraining themselves.
A good example is the Soviet bioweapons program:
Oh, and see this link arguing that this bioweapons program may have ended up being made available to other countries (!!) after the Soviet collapse.
Other countries have large mistakes to answer for themselves. I need not link to everything that has been uncovered regarding many countries' participation in gain-of-function research and the risks that it implies (this book is good though). In the realm of digital software (eg. finance), the history of weaponized interdependence shows how what was meant as abuse prevention easily slides into one-sided power projection by the operator.
This is another weakness of gatekeepers: by default, they will be controlled by national governments, and these countries' political systems may well have an incentive to ensure equality of access within the country, but there is no powerful entity with a mandate to ensure equality of access between countries.
To be clear, I am not saying "the gatekeepers are bad too, so let's have a free for all" (at least, not for gain-of-function research). Rather, I'm saying two things:
Note also that "open source" does not imply "free for all". For example, I would favor geoengineering being done in an open-source and open-science way. But this is not the same as "anyone can go redirect any rivers and sprinkle what they want into the atmosphere", and it will not lead to that in practice: laws and international diplomacy exist, and such actions are easy to detect making any agreements quite enforceable. The value of openness is (i) ensuring that it's more democratized (eg. usable by many countries instead of just one), and (ii) increasing the accessibility of information, so people can more effectively form their own judgements of whether or not what is being done is effective and safe.
Most fundamentally, I see open source as the strongest possible Schelling point for how technology can be done with less risk of concentrating wealth and power and asymmetric information. One can try to construct more clever institutions that try to split apart the beneficial and negative use cases of a technology, but in the modern chaotic world, the approach that is most likely to stick is a very easily publicly-understandable guarantee that things are happening in the open and anyone can go and understand what's going on and participate.
In many cases, these concerns less important than the extreme value of making technology go faster (or, in a few cases, the importance of slowing it down as much as possible, until either countermeasures or alternative ways of achieving the same goal are available). On the margin, however, the third option - focusing less on rate of progress, and more on style of progress, and using a norm of expecting open source as an easily legible lever to push things in a better direction - is an underrated approach.