Hackathon Prototype

Funding open‑source AI with cryptographic proofs of contribution

There’s no sustainable way to fund community AI models, development is costly, and it’s hard to prove who really contributed. We use Pedersen Vector Commitments (PVC) to make model validation, blending, and payouts verifiable.

What’s broken today

  • No good funding mechanisms for open‑source AI — small teams bear the cost without upside.
  • High costs to develop, train, and evaluate models — bounties exist, but trust is manual.
  • No way to prove model validation or attribute value to people helping with inference — contributions are invisible.

What we’re proposing

  • On‑chain contests funded with stablecoins.
  • Off‑chain model training; submit a PVC commitment instead of raw weights.
  • Deterministic scoring → weights for aggregation and payout.
  • Homomorphic PVC check proves the blended model used the stated proportions.
  • Payouts split proportionally to verified contribution — no trusted coordinator.

Why PVC (Pedersen Vector Commitments)?

PVCs commit to a vector of model weights with strong properties:
  • Binding: after submission, weights can’t be changed.
  • Hiding: weights stay secret until reveal (or are never revealed, if only proofs are needed).
  • Homomorphic: a weighted sum of commitments equals a commitment to the weighted‑sum model — the key to provable blending and fair payouts (sketched below).
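
A minimal sketch of these properties, using a toy multiplicative group modulo a small prime rather than the BN254 curve points used on‑chain. Every parameter and helper name below is an illustrative assumption, not the production construction.

# Toy Pedersen vector commitment over the order-q subgroup of Z_p* (p = 2q + 1).
p, q = 2039, 1019        # small safe prime and subgroup order (toy values)
G = [4, 9, 25]           # one generator per weight coordinate (small squares mod p)
H = 49                   # independent generator for the blinding factor

def commit(weights, r):
    # Multiplicative form of C = r·H + Σ w[j]·G_j.
    c = pow(H, r % q, p)
    for g_j, w_j in zip(G, weights):
        c = c * pow(g_j, w_j % q, p) % p
    return c

# Homomorphic: a weighted sum of commitments commits to the weighted-sum vector.
w1, r1 = [3, 5, 7], 11
w2, r2 = [2, 8, 1], 13
a1, a2 = 4, 6            # public blend coefficients
lhs = pow(commit(w1, r1), a1, p) * pow(commit(w2, r2), a2, p) % p
w_blend = [(a1 * x + a2 * y) % q for x, y in zip(w1, w2)]
r_blend = (a1 * r1 + a2 * r2) % q
assert lhs == commit(w_blend, r_blend)   # Σ αᵢ·Cᵢ equals Commit(Σ αᵢ·wᵢ, Σ αᵢ·rᵢ)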

How it works

1. Create contest

Post a dataset (IPFS) and goal metric. Fund a stablecoin pool.

Contest(name, datasetCID, goalError)
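
A hypothetical sketch of what that contest record could hold; the field names beyond the call above (in particular the pool balance) are assumptions.

from dataclasses import dataclass

@dataclass
class Contest:
    name: str
    dataset_cid: str     # IPFS CID of the public evaluation dataset
    goal_error: float    # target metric submissions try to beat
    pool: int            # stablecoin pool, in the token's smallest unit

contest = Contest("image-denoise-1", "bafy-example-cid", 0.05, 10_000 * 10**6)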

2. Submit models (PVC)

Participants train off‑chain and submit a commitment Cᵢ and error eᵢ. No raw weights revealed.

Cᵢ = rᵢ·H + Σ wᵢ[j]·Gⱼ
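
A participant-side sketch, reusing the toy commit() helper from the PVC section above; the fixed-point scaling and the return shape are assumptions.

import secrets

SCALE = 10_000   # fixed-point scale: real-valued weights -> integers before committing

def make_submission(model_weights, error):
    w = [round(x * SCALE) for x in model_weights]   # quantize the trained weights
    r = secrets.randbelow(q)                        # random blinding factor
    C = commit(w, r)                                # Cᵢ = rᵢ·H + Σ wᵢ[j]·Gⱼ
    return {"commitment": C, "error": error}, (w, r)   # (w, r) stays private until reveal

public_part, opening = make_submission([0.12, -0.40, 0.91], error=0.083)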

3. Blend & verify

Compute weighted sums off‑chain; verify on‑chain the PVC equality that proves the blend proportions.

Commit(W_sum, R_sum) == Σ αᵢ · Cᵢ

Proportional payouts: the same αᵢ are used to split the pool, pᵢ = Pool × αᵢ / Σα. Because the equality holds, payouts match the exact blending proportions.
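
A coordinator-side sketch of blending, verification, and payout, continuing the toy code above. The inverse-error mapping from scores to αᵢ is an illustrative assumption; the equality check and the payout split follow the description above.

def blend_verify_and_pay(submissions, openings, pool):
    # Illustrative score -> weight mapping: lower error means a larger αᵢ (assumption).
    alphas = [round(1_000 / s["error"]) for s in submissions]

    # Off-chain: weighted sums of the revealed weight vectors and blinding factors.
    dim = len(openings[0][0])
    W_sum = [sum(a * w[j] for a, (w, _) in zip(alphas, openings)) % q for j in range(dim)]
    R_sum = sum(a * r for a, (_, r) in zip(alphas, openings)) % q

    # On-chain check: Commit(W_sum, R_sum) == Σ αᵢ·Cᵢ (a product of powers in this toy group).
    rhs = 1
    for a, s in zip(alphas, submissions):
        rhs = rhs * pow(s["commitment"], a, p) % p
    assert commit(W_sum, R_sum) == rhs, "blend does not match the stated proportions"

    # Payouts reuse the same coefficients: pᵢ = Pool × αᵢ / Σα.
    total = sum(alphas)
    return [pool * a // total for a in alphas]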

Who benefits?

Open‑source builders

Monetize contributions without giving up IP; receive verifiable, proportional rewards.

Companies & researchers

Run fair, auditable contests. Blend top submissions and pay automatically, with public proofs.

Community validators

Extend the mechanism to evaluation or inference helpers with PVC/zk proofs for transparent attribution.

Roadmap

  • Today: Commit–reveal with PVC, off‑chain federated averaging, on‑chain homomorphic check, proportional payouts.
  • Next: ZK validation of accuracy without revealing models; Merkle commitments for large vectors.
  • Future: Proof‑of‑inference attribution — reward nodes that verifiably contribute inference work.

FAQ

What problem does this actually solve?

Open‑source AI lacks sustainable incentives. By turning model improvement into cryptographically verifiable contributions, you can fund progress while keeping evaluation and payouts transparent.

Is the model public?

Commitments are public; raw weights can stay private until winners are selected, or never be revealed at all if only the blended commitment is needed. The homomorphic check ensures integrity either way.

How are payouts determined?

Scores (e.g., error rate) deterministically map to weights αᵢ. The same weights are used for the blended model and to split the pool — cryptographic equality ties them together.
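
A worked toy example of that mapping, assuming inverse-error weighting (the exact scoring rule is a protocol design choice, not something the cryptography fixes):

errors = [0.10, 0.05, 0.20]                           # three submissions' error rates
alphas = [1 / e for e in errors]                      # [10.0, 20.0, 5.0]
payouts = [7_000 * a / sum(alphas) for a in alphas]   # pool of 7,000 -> [2000.0, 4000.0, 1000.0]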

What runs on-chain?

Contest creation, commitment storage, and a single homomorphic equality check for the blended result. Training and evaluation remain off‑chain.

Enter the Interactive Demo
No wallet required in the demo. The on‑chain version uses BN254 precompiles.