Product · AI provenance

Themisra.

An AI provenance protocol: hash any prompt/output pair, quorum-attest it on-chain, and verify it anywhere. Model-agnostic.

Themisra is the first of two native Ligate products. It wraps any LLM interaction in a cryptographic receipt — model ID, prompt hash, output hash, temperature, timestamp, wallet, quorum signature — anchored on Celestia, verifiable in under 200ms.
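The receipt above is, at its core, a small structured record built around two hashes. A minimal sketch of how a client might assemble one, in Python; the field names, serialization, and SHA-256 choice are illustrative assumptions, not Themisra's actual wire format:

```python
import hashlib
import time

def sha256_hex(text: str) -> str:
    """Hex-encoded SHA-256 of a UTF-8 string."""
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

def build_receipt(model_id: str, prompt: str, output: str,
                  temperature: float, wallet: str) -> dict:
    """Assemble a provenance receipt (illustrative field names).
    The quorum signature is added by the attestation network,
    not by the client, so it starts out empty here."""
    return {
        "model_id": model_id,
        "prompt_hash": sha256_hex(prompt),
        "output_hash": sha256_hex(output),
        "temperature": temperature,
        "timestamp": int(time.time()),
        "wallet": wallet,
        "quorum_signature": None,  # filled in by attesters
    }

receipt = build_receipt("claude-3", "What is provenance?",
                        "Provenance is...", 0.7, "0xABC")
```

Because only hashes are recorded, anyone holding the original prompt and output can later recompute them and check the pair against the on-chain record.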

Note
Most of this page is an outline. The detailed sub-pages (API reference, SDK usage, integration recipes) land as the devnet matures.

Who uses it

  • AI providers who want verifiable output attestation as a feature
  • Enterprises under AI compliance mandates (EU AI Act, internal audit)
  • Prompt engineers who want to prove authorship and earn royalties on reuse
  • Content platforms that need provenance on AI-generated media
  • Agent developers who need audit trails for autonomous actions

Two integration paths

1. Native chat

Users chat with any supported AI model directly on Themisra. Every message is auto-attested. Gas is paid in $LGT (~$0.0009 per message). Runs at themisra.xyz.

2. External proof API

For existing apps: submit a prompt + output from ChatGPT, Claude, or any OpenAI-compatible endpoint, and Themisra returns a receipt. No migration, no switching models. Drop-in.
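What a client submission might look like, sketched below. The endpoint URL, payload shape, and the choice to submit hashes rather than raw text are all assumptions for illustration; consult the API reference once it lands. The request is constructed but deliberately not sent:

```python
import hashlib
import json
import urllib.request

ATTEST_URL = "https://api.themisra.xyz/v1/attest"  # hypothetical endpoint

def attestation_request(prompt: str, output: str,
                        model_id: str) -> urllib.request.Request:
    """Build (but do not send) a proof-API request for an AI
    interaction that happened elsewhere. This sketch submits only
    hashes, keeping the raw prompt and output on the client."""
    body = json.dumps({
        "model_id": model_id,
        "prompt_hash": hashlib.sha256(prompt.encode("utf-8")).hexdigest(),
        "output_hash": hashlib.sha256(output.encode("utf-8")).hexdigest(),
    }).encode("utf-8")
    return urllib.request.Request(
        ATTEST_URL,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = attestation_request("Summarize this doc", "The doc covers...", "gpt-4o")
# urllib.request.urlopen(req) would then return the receipt JSON
```

No migration is needed because the interaction itself still runs against your existing provider; Themisra only notarizes the result after the fact.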

Feature surface

Themisra ships 32 features across 5 categories. This doc captures only the core ten; the full feature list lives in the roadmap and the whitepaper.

Core infrastructure

  • Prompt Registry — hash + timestamp on-chain
  • AI Model Agnostic — Claude, GPT, Gemini, Llama, Mistral, any API
  • Native Chat Interface — like ChatGPT but provable
  • External Proof API — notary for AI interactions done elsewhere
  • Sovereign SDK Rollup on Celestia
  • $LGT Token — gas, staking, royalties
  • Confidential Prompts — ZK proofs via SP1 / RISC Zero (Phase 2)
  • Cross-chain Bridges — Hyperlane to Solana, Ethereum, any chain
  • Privy Wallet Auth — email sign-in, no MetaMask required
  • API Gateway — middleware between apps and AI models

Next up

The API reference and SDK integration guides are the next sections we're writing. For now, the fastest path to a working integration is:

  1. Install the CLI (Getting started)
  2. Claim $LGT from the faucet
  3. Run ligate attest with your prompt + output
  4. Verify the receipt with ligate verify <id>
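Step 4's verification amounts to recomputing the two hashes and comparing them against the stored receipt. A local sketch of that check, using the same assumed receipt layout as above; a real verifier would additionally check the quorum signature and the Celestia inclusion proof, which are omitted here:

```python
import hashlib

def sha256_hex(text: str) -> str:
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

def verify_receipt(receipt: dict, prompt: str, output: str) -> bool:
    """Recompute the prompt/output hashes and compare them to the
    receipt. Signature and data-availability checks are omitted."""
    return (receipt["prompt_hash"] == sha256_hex(prompt)
            and receipt["output_hash"] == sha256_hex(output))

receipt = {
    "prompt_hash": sha256_hex("hello"),
    "output_hash": sha256_hex("world"),
}
print(verify_receipt(receipt, "hello", "world"))     # True
print(verify_receipt(receipt, "hello", "tampered"))  # False
```

Any single-character change to the prompt or output produces a different hash, so a mismatch immediately flags tampering.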