Option | How to Help


Thank you for considering helping the Option project! Option has excited many people because it’s an ambitious attempt to redesign governance and economics from the ground up. Our guiding question: is it possible to design political economies that are free from:
monopoly on resources and permission (elitism, aristocracy, nepotism, etc)
onerous governance processes (like voting, filing taxes, and applying for permits)
tyranny of the uninformed majority
Many people in web3 have found this to be a promising project due to a growing recognition that good governance systems are desperately needed to enable decentralized autonomous organizations and other collectives to function. But it’s not just web3: we also have a sampling of economists, game theorists, political scientists, journalists, and lawyers, all interested in whether there are alternative ways to partition the problem of “how to make wise decisions collectively” which might bear more fruit than the tired frameworks that have already been tried.


At a high level, there are two tracks you can contribute to, called “permission” and “inference”. You can think of these as efforts to rethink economics and governance, respectively. Work on permission is about designing systems that answer the question, “how do we make it profitable for companies to do good, and only profitable to do good?” Inference, on the other hand, is the sensemaking system. It’s concerned with the question, “what is happening, and what would happen if we did things differently?”
There are many familiar ways to do both of these things. Inference can be done by committees, think tanks, journalists, and people shouting at one another on social media. Work on inference is about aggregating the insights of individuals and groups into a collective view. Permission, meanwhile, is about equipping people with the energy they need, and clearing obstacles from their path, so they can pursue a particular endeavor. So favorable legal environments, grant and investment sources, political representation, social approbation, and supporting infrastructure are all forms of permission. Money is just one type of permission, which lets this framing address economic constraints while also considering broader context.
These two tracks make up the two broad domains of available work in the Option project.


You might be interested in working on inference if you enjoy prediction markets, information theory, causal modeling, algorithm design, or game theory and mechanism design; if you have fun tackling complex problems with vague design constraints; if you’re inspired by the work of Karl Popper, Douglas Hofstadter, Kanjun Qiu, David Deutsch, and Karl Friston; if you’ve ever wondered whether it’s possible to implement a machine learning algorithm as a Proof of Stake–type social system; if you think there’s a better alternative to everyone making Twitter clones; if you think science is the sort of thing that should be designed so that anyone can contribute to it; or if you’re worried that we’ll come to collectively rely on artificial intelligence whose confabulation and discrimination we don’t understand.
For the inference track, we’re interested in contrasting two categories of collective inference mechanisms:
Algorithmic collective inference
Game theoretic collective inference
By “collective inference” we’re talking about tools for multiple parties (who may be adversarial) to come to a shared understanding. Some people also call this sensemaking.
These aren’t perfectly independent categories, but rather general themes of exploration. In the end, both are needed, but there is a particular path of game theoretic inference that we’re interested in exploring.

Algorithmic Collective Inference

Algorithmic collective inference is all about building a core algorithm which evaluates various competing models against data, then uses the ensemble of trustworthy models to identify which data to trust, and uses the ensemble of trustworthy data to figure out which models to trust. It’s like collaborative machine learning, where multiple models compete on the basis of which is most useful.
By algorithmic inference I don’t merely mean machine learning techniques, but rather algorithms for taking input from multiple different players and adjudicating among them.
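To make the circular trust idea concrete, here’s a minimal sketch in the style of truth-discovery algorithms. All names here are my own invention, and this is not any project’s actual algorithm: models and data iteratively score each other, with a datum trusted in proportion to the trust of the models affirming it, and a model trusted in proportion to its agreement with trusted data.

```python
def mutual_trust(votes, iters=50):
    """votes[m][d] in [0.0, 1.0]: model m's confidence that datum d is true.

    Returns (model_trust, data_score), each mapping names to trust values.
    """
    models = list(votes)
    data = sorted({d for v in votes.values() for d in v})
    model_trust = {m: 1.0 for m in models}  # start by trusting everyone equally
    data_score = {}
    for _ in range(iters):
        # A datum is trusted in proportion to the trust of models affirming it.
        for d in data:
            voters = [m for m in models if d in votes[m]]
            total = sum(model_trust[m] for m in voters)
            data_score[d] = sum(model_trust[m] * votes[m][d] for m in voters) / total
        # A model is trusted in proportion to its agreement with trusted data.
        for m in models:
            agreement = [1.0 - abs(votes[m][d] - data_score[d]) for d in votes[m]]
            model_trust[m] = sum(agreement) / len(agreement)
    return model_trust, data_score
```

With two models agreeing and one dissenting, the iteration concentrates trust on the agreeing pair, which is exactly the dynamic (and, as discussed below, the vulnerability) of this family of mechanisms.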
Some examples of this sort of system:
NumerAI, which is like a distributed hedge fund
Digital Gaia, which is building a system for distributed science (hi, author here: I joined their team because, in my opinion, they have the most credible approach to solving algorithmic inference)
Perhaps RLHF fits into this due to the feedback from many users
does work like this using evolutionary algorithms
You might notice that algorithmic inference has a potential flaw: if which data to trust is based on the models, and which models to trust is based on the data, then you could reasonably worry that cabals could capture the system by colluding to manufacture models that fit their data and data that fit their models. This motivates the need for any algorithmic system to consider the incentives of its participants, and brings us naturally to game theoretic inference.

Game Theoretic Collective Inference

Game theoretic inference is about using ideas from game design and mechanism design to invite players into games of inference.
Some examples of this sort of system:
Markets, esp. prediction markets
Consensus Protocols like Proof of Stake and Proof of Work
Legal arbitration processes
Whistleblower bounties and whistleblower protection laws
Twitter’s Community Notes
Wikipedia’s editorial process
Web3 experiments like Golden and Kleros
As you can see, this category has much more tattered boundaries than algorithmic inference. Somehow Wikipedia and whistleblower protection laws are the same sort of thing? The point is that they share a common goal: coming to represent an accurate view of something. Whistleblower protection laws make it more likely that regulators will find out about abuses of workers or laws being broken. Wikipedia uses a byzantine system of service awards, community escalation, and sheer force of will to decide which edits are permissible. But both produce the same general outcome: surfacing useful information.
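Prediction markets, the first item in the list above, can be made concrete with a standard automated market maker: Hanson’s logarithmic market scoring rule (LMSR). This is a well-known mechanism, though the function names below are my own; `q` holds the outstanding shares per outcome and `b` sets the market’s liquidity.

```python
import math

def lmsr_cost(q, b=100.0):
    """Cost function C(q) = b * ln(sum_i exp(q_i / b))."""
    return b * math.log(sum(math.exp(qi / b) for qi in q))

def buy_cost(q, outcome, shares, b=100.0):
    """Price of buying `shares` of `outcome`: C(q') - C(q)."""
    q2 = list(q)
    q2[outcome] += shares
    return lmsr_cost(q2, b) - lmsr_cost(q, b)

def prices(q, b=100.0):
    """Instantaneous prices; these sum to 1 and read as probabilities."""
    z = sum(math.exp(qi / b) for qi in q)
    return [math.exp(qi / b) / z for qi in q]
```

The appeal for collective inference is that the price vector aggregates every trader’s private information into a shared probability estimate, and traders profit only by correcting it.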
You may notice that consensus protocols like Proof of Stake and Proof of Work show up in this list. Proof of Work uses game theory to determine which chain will be chosen, by creating an incentive landscape whose Schelling point hovers over only the longest chain, ensuring that there will tend to be only one chain. And yet, to achieve this, the game theory must account for the algorithmic properties of the SHA-256 hash function and the various message-broadcast algorithms that could be employed. This highlights that in a real system, game theoretic and algorithmic considerations are intimately intertwined.
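As a toy illustration of the algorithmic half, here is the core of Proof of Work mining: search for a nonce whose SHA-256 digest clears a difficulty target. This is a simplified sketch (real protocols compare the digest against a numeric target rather than a hex prefix, and hash a structured block header):

```python
import hashlib

def mine(prev_hash, payload, difficulty=4):
    """Find a nonce whose SHA-256 digest starts with `difficulty` hex zeros."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{prev_hash}|{payload}|{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce, digest
        nonce += 1
```

The game theoretic layer sits on top of this: because finding a nonce is expensive but verifying one is a single hash, honest nodes can cheaply converge on the chain with the most work behind it.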

Inference in Option

From the perspective of Option, there is a particular approach to game theoretic inference that seems alluring and yet is still very much a “butterfly idea”, an idea in such an early stage that if you tried to grab it too tightly you might crush it.
Here’s the general shape:
Make a tree of points, somewhat akin to Kialo, but with only invalidating points (only cons)
Allow players to express their beliefs by staking against their preferred points
Compute a score for each point according to how much people stake on it and its invalidating points
Allow players to indicate the points that would change their mind, give them additional score for doing so
Let players yield their stake to a point that changed their mind and receive a newly minted asset that represents their funding of that information provision
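One naive way the scoring step above could work, purely as illustration (this is not the project’s actual prototype, and the rule here is my own assumption): a point’s score is its stake discounted by the scores of its invalidating children, computed recursively over the tree of cons.

```python
from dataclasses import dataclass, field

@dataclass
class Point:
    stake: float                                  # total stake placed on this point
    cons: list = field(default_factory=list)      # invalidating child points

def score(p: Point) -> float:
    """A point's score is its stake minus the recursive scores of its cons,
    floored at zero so a heavily invalidated point contributes nothing upward."""
    attack = sum(score(c) for c in p.cons)
    return max(p.stake - attack, 0.0)
```

Note the pleasing property this shares with the invalidation framing: a con that is itself invalidated loses its power to invalidate, so rebutting a rebuttal restores the original point’s score.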
The idea here would be to design an incentive landscape that can identify and reward behavior that is sincere and self-critical. This should feel familiar: it’s similar to how what allowed us to accept the General Theory of Relativity was all the ways in which it made statements about how it could be wrong. The fact that those statements weren’t invalidated gave us a sense that the ideas deserved much more credence than we otherwise would have granted.
There are many properties that seem nice about this approach:
It’s not a popularity contest
It rewards falsifiability
It sidesteps the bribery problem
It can be a totally open protocol without gatekeepers
Those are the technical properties I like. But I also think it could just make for a better social and psychological experience than our existing platforms. In principle, this could be a Twitter-like surface into which you cast your musings, but rather than watch them disappear into the ether, where your best hope is that someone will angrily reply, your points could be linked up to a vast epistemic web of points made by other people: not only linking you to points you invalidate, but linking in points that seem like they might invalidate yours. And that alone would be much more satisfying, because it would feel like you were contributing to a great structure of knowledge instead of just trying to emit a viral quip.

Furthermore, much of the toxicity of social media arises because attention is the totalizing measure of value, and for whatever cruel evolutionary reason this means that negativity reigns. Perhaps the alchemy of incentives is sufficient to transform our relationship with the negative. What would it be like to live in an information environment where all the ideas that seem to threaten you are felt as opportunities, both financially and socially?
Idk but I want to try it.
This brings us to the two projects that support the work in this domain of inference:
The Algorithm – there’s an early prototype of an algorithm for assigning scores to a network of invalidating points. But it’s difficult to follow, its properties aren’t well understood, and it has terrible performance. Explore and improve it.
The Game – using what’s known about The Algorithm, find a game (as in game theory) which signal boosts self-invalidating behavior, financially and socially rewards contributors, makes admitting error / being wrong / belief updating feel like an opportunity, and does so without entrenching monied nor powerful players


You may enjoy contributing to the permission track if you’re concerned about wealth inequality and power differentials; if you’re interested in economics, Impact Markets, Retroactive Funding for Public Goods, or cryptocurrencies; if you like to run large-scale simulations or prove properties of complex systems; if you care about how systems and mechanisms manifest as human experiences; if you like smart contract design and video game design; if you wonder why more things can’t be open source; if you think Harberger taxation is cool, or that land taxes make sense but are too hard to implement; or if you wonder what it might look like to build a system based on Rawlsian ideas.
For the permission track, the Option project is primarily interested in Index Wallets; you can read more about them in the Index Wallets article.
Index Wallets are a concrete, promising permission mechanism that shows early signs of offering:
funding for public goods
voluntary taxation
power decentralizing dynamics
Please read it for a deep dive, or check out the various links it hosts for a more accessible introduction.
The article has two major weaknesses:
It considers only a specific worst-case scenario (which is not very realistic) in order to construct its proofs
It assumes that game theoretic reasoning applies to the players of the game. For various reasons (psychology, behavioral economics, information and compute horizons, etc.) it may either be incompatible with human players or just behave differently in practice

This is why we have these two active projects under the permission heading:
Deposing the Dictator – use simulations or analytic approaches to extend the game theory beyond the unrealistic worst-case assumption (the dictator assumption) of the original article
The Index Wallet Game – build a multiplayer game which allows players to use Index Wallets in a simulated economy so as to see how it feels and what dynamics play out. This can then be compared with a control experiment using traditional wallets.

You can learn more about each of these projects here:


Ok, there is one final way you can help. I’m great at teaming up with nerds interested in the mechanics and dynamics of these things. I’m world-class bad at turning the progress we make into public-facing documents that usefully summarize our results at any reasonable rate. So if anyone would like to set up and tend a digital space for conversations about this, put content into an email newsletter, or capture podcasts so as to invite new people into the project, these would all be helpful. Otherwise we’ll continue at my current abysmal pace of communication.

Now go pick a project :)
