Prediction markets are one of the best tools we’ve ever built for understanding what’s actually happening in the world. When people put real money behind their beliefs, the aggregate signal reflects what they actually think rather than what they’re willing to say. This produces forecasts that consistently outperform polls, pundits, and expert panels. Polymarket called the 2024 election more accurately than every major polling outfit. Kalshi’s markets on Fed rate decisions have been better predictors than the bond market on multiple occasions. When people have money at stake, the bullshit drops fast, and what you’re left with is remarkably close to the truth. The growth of these platforms over the past two years is one of the most important developments in how we process information as a society.

But let’s be honest about what’s also happening. Last December, an anonymous trader on Polymarket bought a large position predicting the ouster of Venezuelan President Maduro. A few days later, the U.S. military captured him, and the trader walked away with over $400,000. In February, an Israeli Air Force reservist was indicted for placing bets using classified information about upcoming strikes on Iran. Around the same time, a political candidate was caught trading on his own election on Kalshi — betting on himself — and a YouTube channel editor was buying contracts tied to his own channel’s upcoming videos before they were published. The CFTC issued its first formal advisory on prediction market insider trading in February 2026. The DOJ’s Southern District of New York recently met with Polymarket to discuss whether existing fraud laws even apply. Congress introduced the Prediction Markets Security and Integrity Act of 2026, which would prohibit trading on material nonpublic information in prediction markets — a bill whose existence tells you that, until now, it wasn’t clear that was illegal.

Whether any of this constitutes a problem or is simply the market doing what markets do — surfacing information, including information that some participants have and others don’t — is a debate I’m not looking to have. Most of the insider trading examples above aren’t fixable with better system design. If a military reservist has classified knowledge about an upcoming strike, that’s a state secrets problem. If a YouTube editor knows what’s in next week’s video, that’s an employment relationship problem (though if you’re betting on what’s going to be in a YouTube video, there’s probably a stronger argument that that’s your problem). Those are information asymmetries that exist in the world, and you can try to prosecute people who exploit them after the fact, but you can’t architecturally prevent someone from knowing something they already know.

Then awards season came around. At the Golden Globes, Polymarket had live odds on screen during the broadcast, and the lines were moving toward the eventual winners before the envelopes opened. Someone knew the results ahead of time and was taking free money. At the Oscars, bets on best picture and best director were shifting in the minutes leading up to the reveals. This is a different kind of problem than the military intelligence cases, because the underlying event isn’t a geopolitical development or a content release schedule. It’s a vote. A group of people marked ballots, someone counted them, and that someone (or someone adjacent to them) had information that the rest of the market didn’t. The prediction market, doing exactly what prediction markets are supposed to do, made that information asymmetry visible and monetizable.

This pattern extends well beyond awards shows. The chair of a board vote sees the tally before reading it out. A union election committee counts behind closed doors. Election officials see returns before they’re certified — and after 2020, a large portion of the electorate believed the presidential election was stolen. You can argue about whether that belief was justified, but the system couldn’t produce a mathematical proof that it wasn’t. It could provide procedures, affidavits, audits, assurances. But not proof. Every system where people vote runs on the assumption that the people running it will behave honestly, but there’s no mechanism to verify that they did, and no way to prove it after the fact.

Prediction markets didn’t create this problem. They diagnosed it. And unlike a military reservist with classified intelligence, or a YouTube editor who knows next week’s upload schedule, the information asymmetry in an election isn’t a fact about the world we have to accept. It’s a design choice. Someone counts first. Someone knows first. That’s not inherent to voting. It’s inherent to how we’ve built voting systems. And you can build them differently — so that the decryption key for the ballots doesn’t exist until the moment everyone is supposed to see the results, and nobody counts because the counting is a deterministic function that anyone can run on public data.

That’s what ZeroVote does.


When a voter opens their ballot link, their browser encrypts their vote to a future round of drand, a distributed randomness beacon operated by Cloudflare, Protocol Labs, and about a dozen other independent organizations. drand publishes a new value every 3 seconds using threshold BLS signatures. The value for any given round is produced collectively by the network at the moment that round arrives, and before that moment, it does not exist. Not on my server, not in a hardware security module, not in an envelope. When a ballot is encrypted to a drand round scheduled for next Tuesday at midnight, decryption before then is not prevented by an access control policy or a promise from the server administrator. It’s prevented by the fact that the required cryptographic value hasn’t been generated yet, and no single party in the drand network can generate it unilaterally.

When the round arrives, drand publishes its signature, and that signature is the decryption key for every ballot encrypted to that round. Everyone in the world gets it at the same instant. A background service decrypts every ballot and runs the tallying algorithm (plurality for single-choice elections, instant-runoff for ranked-choice), deterministically, with no human involvement.

The person who created the election has no more access to the results than a stranger on the internet.

That handles who can see results and when. But timelock encryption alone leaves three gaps.

How do you reject an invalid ballot without reading it? Every ballot includes a zero-knowledge proof — Bulletproofs R1CS with Pedersen commitments on Curve25519 — that proves the vote is well-formed without revealing what it contains. The server can verify that a ballot selects exactly one candidate (or provides a valid ranking) and reject anything malformed, without ever learning what the vote says. The Pedersen commitment blinding factors — the values that could theoretically reverse-engineer the vote — never leave the voter’s device.

How do you know the server didn’t tamper with the ballots it stored? Every accepted ballot goes into a BLAKE3 Merkle tree, and every voter gets an inclusion proof as a receipt. If the server adds, removes, or modifies any ballot after the fact, the root hash changes and every receipt holder can detect it.

How do you keep ballots secret when the server knows who submitted them? For elections that need secret ballots, RSA blind signatures break the link between a voter’s identity and their ballot. The server signs a blinded credential without seeing it, the voter unblinds it and submits their vote anonymously, and the server can verify the credential is legitimate without knowing which voter it belongs to.

Everything — every encrypted ballot, every zero-knowledge proof, every Merkle node, the drand beacon value — is published as public data. I built an open-source verifier, a Rust crate available as both a CLI and a library, that fetches the bulletin board, retrieves the drand beacon, checks its BLS signature, validates every zero-knowledge proof, rebuilds the Merkle tree from scratch, decrypts every ballot, and recomputes the tally. It runs the exact same cryptographic pipeline my server does, on the exact same public inputs. If you don’t trust my verifier, you can read the source and write your own.

This is what it looks like to verify an election:

zerovote-verify --slug oscars-2026 --api https://api.zerovote.app
{
  "beacon_verified": true,
  "zkp_passed": 302,
  "zkp_failed": 0,
  "merkle_root_match": true,
  "tally": { "winner": 5, "totals": [18, 42, 31, 55, 12, 67, 29, 48] }
}

You create an election, you email ballot links, people vote in their browser, and results appear at the scheduled time with a mathematical proof that they’re correct. The person running the election can’t see votes early, can’t see partial tallies, can’t tamper with the bulletin board without detection, and can’t produce results any sooner than anyone else. The cryptographic tools to build this have been available for years — I just put them into a web app that a union chapter or a board of directors or an awards committee can use without understanding any of the underlying math.

For prediction markets, a ZeroVote election is an event you can price without worrying about who has inside access to the count, because nobody does. The verification pipeline is deterministic, runs on public inputs, and is available as a Rust library you can embed directly in a settlement system. A market on a ZeroVote election can settle at the exact moment drand publishes the beacon — same instant, same math, same answer, regardless of who runs it. No centralized oracle. No API to poll. No announcement to wait for.

But this isn’t really about prediction markets. It’s about every vote that’s ever been counted in a back room by someone the rest of us had to trust. Board votes, union elections, awards shows, general elections — the same design flaw, over and over. Someone counts first.

ZeroVote is live. The verifier is open source. The API is public.