Vitalik published a post this week proposing that personal AI agents could solve DAO governance failures. The core idea: most token holders don't vote because they don't have the time or expertise to evaluate every proposal. An LLM trained on your writing, preferences, and stated values could vote on your behalf - automatically, consistently, at scale.
It's a serious proposal from a serious thinker and it deserves serious engagement.
But I've been sitting with it for a few days and I keep coming back to a set of questions the proposal doesn't fully answer. Not because I want to dismiss the idea - the voter apathy problem is real and brutal - but because I think these risks deserve more discussion before the space starts treating AI voting as the default next step.
The problem AI voting is trying to solve is real
Average voting participation in major DAOs sits at 15–25% of token supply. The proposals that do pass often do so with a tiny fraction of circulating tokens actually engaged. This isn't apathy exactly - it's a genuine attention problem. Reading a complex treasury proposal, tracing the calldata, understanding the downstream effects on tokenomics - that's hours of work per proposal, for people who hold tokens in dozens of protocols.
Vitalik Buterin said: "There are many thousands of decisions to make, involving many domains of expertise."
That's accurate. And continuous governance is already here whether we like it or not.
So what's actually worrying me?
1. Who controls the agent is who controls the vote
This is the question I keep coming back to. A personal AI agent is only as good as the system it runs on. If an agent is hosted by a third party, that third party has the ability - in practice if not in principle - to influence what the agent does. This recreates the delegation problem, but worse: delegates at least have reputations and can be publicly pressured. An AI agent's decision process is opaque by default.
Vitalik proposes MPC/TEEs and ZK proofs to address this. Those are real technologies. But they require sophisticated deployment, and most DAOs (and most token holders) are not in a position to verify that an agent is actually running in a trusted execution environment. Trust has to come from somewhere.
2. Compromised agents compress the attack window
Beanstalk lost $180 million in a governance exploit in 2022. The attack worked partly because execution happened faster than the community could respond - a malicious proposal passed and was immediately executed before humans could coordinate a response. Timelocks were specifically introduced to create a buffer: time for humans to notice, alert, and organize a counterresponse.
AI agents voting and executing at machine speed could erode that buffer. A compromised or manipulated agent doesn't sleep, doesn't wait for a Telegram notification, and doesn't need a weekend to organize. If a large share of voting power is delegated to AI systems, and those systems are manipulated via adversarial inputs or data poisoning, the time-to-damage shrinks considerably.
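To make the "time-to-damage" argument concrete, here's a back-of-envelope sketch. All the numbers (a 24-hour human coordination time, the voting-period and timelock values) are illustrative assumptions, not measurements from any real protocol:

```python
# Back-of-envelope sketch of the "time-to-damage" argument. All numbers
# are illustrative assumptions, not measurements from any real protocol.

HUMAN_RESPONSE_H = 24  # assumed time for humans to notice, alert, coordinate

def defenders_can_react(voting_period_h, timelock_h, votes_filled_by_h):
    """Humans can mount a response only if the window between the attack
    becoming visible (the decisive votes landing) and execution exceeds
    their coordination time."""
    visible_for = (voting_period_h - votes_filled_by_h) + timelock_h
    return visible_for >= HUMAN_RESPONSE_H

# Today: 3-day vote, 48 h timelock, attacker's votes land near the deadline.
print(defenders_can_react(72, 48, votes_filled_by_h=71))  # True: 49 h buffer

# Agent-speed governance: if voting periods shrink because "agents don't
# need 3 days", and the timelock is dropped for the same reason:
print(defenders_can_react(6, 0, votes_filled_by_h=5))     # False: 1 h buffer
```

The erosion isn't the agents themselves voting fast - it's the pressure their speed creates to shorten the buffers that were designed for human reaction times.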
3. Coordinated agents that appear independent
This is the Sybil-adjacent risk that doesn't get discussed enough. If multiple token holders delegate to agents trained on similar data, or hosted on similar infrastructure, those agents may vote in coordinated ways without any explicit collusion. The governance outcome looks decentralized - many wallets voted - but the actual decision was made by whoever influenced the underlying model or training data.
You can't detect this by looking at the on-chain vote. Everything looks legitimate.
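A toy simulation makes the point, under purely illustrative assumptions: N wallets delegate to agents, and with some probability each agent simply echoes the opinion of a common underlying model rather than forming an independent view:

```python
import random

# Toy simulation (illustrative assumptions only): n_agents wallets each
# delegate to an agent. With probability `shared`, the agent echoes the
# opinion of the common underlying model; otherwise it forms an
# independent view.

def yes_share(n_agents, shared, model_says_yes, independent_yes_rate=0.5):
    votes = 0
    for _ in range(n_agents):
        if random.random() < shared:
            votes += model_says_yes          # echoes the shared model
        else:
            votes += random.random() < independent_yes_rate
    return votes / n_agents

random.seed(0)
# Fully independent agents: the outcome hovers near the genuine 50/50 split.
print(round(yes_share(10_000, shared=0.0, model_says_yes=1), 2))
# Heavy reliance on one model: whoever tilts that model decides the vote,
# even though 10,000 distinct wallets voted on-chain.
print(round(yes_share(10_000, shared=0.8, model_says_yes=1), 2))
```

On-chain, both runs look identical in structure: ten thousand wallets, ten thousand votes. The correlation lives entirely off-chain, in the shared model.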
4. AI participation doesn't fix low quorum - it changes who sets quorum
If AI voting solves the participation problem, governance quorum thresholds become less meaningful as a protection mechanism. Today, low quorum is a vulnerability (small coalitions can win). But it's also a natural signal: if a proposal can't get 4% of tokens to engage, that's data. High AI participation could mask genuine community disengagement behind apparent consensus.
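The masking effect is simple arithmetic. Using assumed numbers (a 4%-of-supply quorum, 1% genuine human turnout, 30% automated agent turnout):

```python
# Illustrative arithmetic (assumed numbers): how agent turnout can mask
# human disengagement behind a healthy-looking quorum.

TOKEN_SUPPLY = 1_000_000_000
QUORUM = 0.04 * TOKEN_SUPPLY            # a common 4%-of-supply threshold

human_turnout = 0.01 * TOKEN_SUPPLY     # humans who actually read the proposal
agent_turnout = 0.30 * TOKEN_SUPPLY     # delegated agents voting automatically

total = human_turnout + agent_turnout
print(total >= QUORUM)                  # True: quorum comfortably met
print(human_turnout >= QUORUM)          # False: the human signal alone fails
```

The quorum check passes, but the signal it was designed to capture - that enough humans cared to engage - is gone.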
What does this mean in practice for DAOs deploying today?
Some things still hold regardless of what AI agents do:
- Timelocks remain non-negotiable. Any buffer time between a proposal passing and execution is time for humans to catch a manipulation. On OpenZeppelin Governor-based systems (which is what CreateDAO deploys, and what Compound, Uniswap, and most serious DAOs use), a 48–72 hour timelock is a baseline protection. AI voting pressure shouldn't change that calculus - if anything, it should increase it.
- Voting periods matter. A 3-day voting window isn't just for humans who need time to read. It's a circuit breaker that lets the community react to suspicious voting patterns before an outcome is finalized.
- Agent transparency should be a governance norm, not a nice-to-have. If a wallet is delegating to an AI, other token holders arguably deserve to know. This isn't a technical problem - it's a community standard problem.
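The first two of those norms can be checked mechanically before deployment. Here's a minimal sketch of such a pre-deployment sanity check; the field names mirror OpenZeppelin Governor concepts (voting period, timelock delay), but this is an off-chain checklist I'm inventing for illustration, not the contract API:

```python
from dataclasses import dataclass

# Sketch of a pre-deployment sanity check for Governor-style parameters.
# The thresholds encode the baselines argued for above; the class and
# function names are hypothetical, not part of any real tooling.

@dataclass
class GovernanceConfig:
    voting_period_hours: int   # window during which votes can be cast
    timelock_hours: int        # delay between a proposal passing and execution

def audit(cfg: GovernanceConfig) -> list[str]:
    warnings = []
    if cfg.timelock_hours < 48:
        warnings.append("timelock below the 48-72 h human-reaction baseline")
    if cfg.voting_period_hours < 72:
        warnings.append("voting period too short to act as a circuit breaker")
    return warnings

print(audit(GovernanceConfig(voting_period_hours=24, timelock_hours=12)))
```

If AI voting pressure pushes either parameter down, a check like this at least forces the decision to be explicit rather than a quiet config change.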
Where I land
I'm not opposed to AI-assisted governance. The attention problem is real and I don't have a better solution for it. But I think the space is moving fast toward implementation without working through the threat model first.
The optimistic version of this future: AI agents dramatically increase effective participation, surface better-informed votes, and distribute governance power more evenly. The pessimistic version: whoever controls the most widely adopted agent infrastructure quietly becomes the most powerful actor in on-chain governance - with no wallet, no proposal, no transaction that makes that visible.
Would be interested in where others land on this. A few open questions:
- Is there a version of AI voting that doesn't recreate the delegation capture problem in a new form?
- Should DAOs require disclosure when a wallet's votes are AI-generated?
- Does the timelock period need to be reconsidered (lengthened?) specifically in response to AI voting speed?
- Is ZK + TEE actually a sufficient trust root, or does it just move the trust problem upstream to chip manufacturers and cloud providers?
Not rhetorical - genuinely curious what people who've thought about this longer than I have think.