White paper

DecAIHub: A Verifiability Registry for Decentralized Artificial Intelligence

Version 1.0

Abstract

The intersection of Artificial Intelligence (AI) and blockchain technologies has generated a rapidly expanding ecosystem, accompanied by pervasive "AI-washing" and "onchain-washing"—the strategic overstatement of AI capabilities and blockchain dependency. The lack of standardized, verifiable disclosures creates severe information asymmetry, forcing market participants to rely on marketing narratives rather than substantive evidence. DecAIHub proposes a rigorous verifiability registry designed to bridge this gap. By introducing a structured "Project Passport" and a tiered evidence framework, DecAIHub systematically separates verifiable facts from informational noise. This whitepaper outlines the architectural principles of the registry, detailing our taxonomy, evidence classification, necessity assessment, and the mathematical foundation of the DecAI Fit index. Analogous to how cryptographic proofs secure network states, our framework secures the informational state of the AI-crypto ecosystem, offering a robust defense against manipulation and a reliable screening mechanism for stakeholders.

1. Introduction

The decentralized artificial intelligence (DeAI) space suffers from an acute verifiability crisis. As the market for AI-crypto projects expands, the term "Decentralized AI" has frequently degraded from a set of verifiable technological properties into a hollow marketing narrative. Projects routinely claim profound AI integration and strict blockchain necessity without providing the underlying artifacts required to substantiate these assertions. This environment of unconstrained self-reporting amplifies information asymmetry between project insiders and external stakeholders—including investors, researchers, and regulators.

The core problem is not merely a lack of information, but a lack of comparable, verified information. Existing market aggregators and decentralized finance (DeFi) trackers rely on metadata that reflects commercial positioning rather than technical substance. Consequently, the ecosystem is polluted by "AI-washing"—where trivial API wrappers are marketed as proprietary AI models—and "token-washing," where tokens are retrofitted into ecosystems without a genuine functional requirement.

To resolve this, the ecosystem requires an authoritative clearinghouse for verifiable claims. DecAIHub operates as a registry of verifiability rather than a traditional encyclopedia. It is built on the premise that claims without verifiable artifacts are indistinguishable from marketing noise. Our approach shifts the evaluation paradigm from "trusting the whitepaper" to "verifying the evidence trail," providing a fast, strict, and comparable signal of a project's true alignment with the Decentralized AI thesis.

2. The Verifiability Framework

At the heart of the DecAIHub architecture is the "Project Passport"—a unified, structured entity that serves as the baseline unit of verification (analogous to a transaction in a blockchain network). The Passport standardizes disparate project data into a machine-readable data contract encompassing normalized metadata, canonical links, and, crucially, an Evidence Layer.

To systematically separate signal from noise, DecAIHub employs a tiered evidence classification system that ranks sources based on their independent verifiability:

  • Tier-1 (Primary Evidence - Highest Weight): Artifacts that are directly and independently verifiable. This includes on-chain facts (verified smart contracts, blockchain explorer data), official public repositories with active release histories (e.g., GitHub), formal technical documentation, and published third-party security audits. Tier-1 sources are the fundamental building blocks of trust in the registry.
  • Tier-2 (Contextual Evidence - Medium Weight): Secondary sources that provide contextual support but cannot serve as independent proof of technological claims. These include data from established aggregators (CoinGecko, CoinMarketCap), ecosystem catalogues, and on-chain analytics dashboards.
  • Tier-3 (Informational Noise - Low Weight): Tertiary sources such as social media posts, influencer promotions, press releases, and unverified community claims. These sources are largely excluded from the formal verification scoring process.
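The tiered classification above can be sketched in code. The following is a minimal illustration; the concrete source-type categories and the mapping rule are assumptions for demonstration, not the registry's published schema:

```python
from enum import IntEnum

class EvidenceTier(IntEnum):
    """Evidence tiers per the framework (lower value = stronger evidence)."""
    TIER_1 = 1  # primary: on-chain facts, repos, docs, audits
    TIER_2 = 2  # contextual: aggregators, catalogues, dashboards
    TIER_3 = 3  # informational noise: social media, press releases

# Illustrative source-type -> tier mapping (names are hypothetical).
SOURCE_TIERS = {
    "verified_contract": EvidenceTier.TIER_1,
    "github_repo": EvidenceTier.TIER_1,
    "security_audit": EvidenceTier.TIER_1,
    "technical_docs": EvidenceTier.TIER_1,
    "aggregator_listing": EvidenceTier.TIER_2,
    "analytics_dashboard": EvidenceTier.TIER_2,
    "social_post": EvidenceTier.TIER_3,
    "press_release": EvidenceTier.TIER_3,
}

def strongest_tier(source_types):
    """Return the strongest (lowest-numbered) tier among a project's
    sources; unrecognized or absent sources default to Tier-3."""
    tiers = [SOURCE_TIERS.get(s, EvidenceTier.TIER_3) for s in source_types]
    return min(tiers, default=EvidenceTier.TIER_3)
```

Defaulting unknown sources to Tier-3 mirrors the framework's conservatism: anything not independently verifiable is treated as noise.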

The framework further distinguishes between Architecturally Embedded Transparency (AET)—signals forced by the technology, such as on-chain transaction logs—and Voluntarily Produced Transparency (VPT), which requires deliberate organizational investment, such as security audits and comprehensive documentation. Because credible signaling occurs predominantly on the voluntary margin, VPT artifacts serve as the most reliable indicators of project substance. Through conservative adjudication protocols, any claim lacking Tier-1 backing defaults to "unverified," bounding false-positive risk and hardening assessments against adversarial tactics such as selective disclosure and ambiguity injection.

3. Taxonomy and Scope

Defining the boundaries of the Decentralized AI ecosystem is a prerequisite for systematic screening. The AI-crypto domain exhibits extreme structural ambiguity and label co-occurrence; projects frequently span multiple categories, operating simultaneously as infrastructure, compute providers, and DeFi protocols.

DecAIHub resolves this through a strictly normalized, multi-label taxonomy that categorizes projects into core functional segments (e.g., Infrastructure, AI/Agents, AI/Compute, AI/Data, AI/Inference). However, our diagnostic analysis reveals that structural label ambiguity is distinct from documentation-quality uncertainty. A segment may have fluid boundaries but rigorous documentation (e.g., AI/Compute), or it may be structurally simple but severely lacking in verifiable evidence (e.g., Meme tokens).

Therefore, the scope of DecAIHub's verification explicitly targets the intersection of AI Reality and On-chain Necessity. We define a valid Decentralized AI project not merely by its self-assigned labels, but by its ability to demonstrate:

  1. AI Reality: The existence of a genuine AI capability—whether model inference, training, data labeling, or autonomous agents—supported by technical architectures, benchmarks, or reproducible demos.
  2. On-chain Necessity: A demonstrable requirement for blockchain infrastructure that extends beyond token issuance, typically encompassing trustless settlement, verifiable compute, censorship resistance, or decentralized state management.

By enforcing these boundaries, DecAIHub filters out both conventional AI projects that gratuitously issue tokens and traditional crypto projects that superficially adopt AI terminology. The resulting taxonomy provides a robust scaffold for the subsequent calculation of the DecAI Fit index.

4. Evidence and Verification

Just as Proof-of-Work relies on computational expenditure to establish consensus, DecAIHub relies on an Economy of Evidence to establish verifiability. In a trustless environment, project quality is signaled through costly, hard-to-fake artifacts rather than mere assertions.

Our framework introduces a two-stage screening process that evaluates the "evidence gap"—the discrepancy between asserted claims and Tier-1 verifiable artifacts. This gap is structurally assessed by differentiating between:

  • Architecturally Embedded Transparency (AET): Disclosures that exist by default due to the underlying technology, such as on-chain data accessible via public block explorers. These are non-discretionary and structurally low-cost.
  • Voluntarily Produced Transparency (VPT): Disclosures that require deliberate and costly organizational effort, such as formal security audits, actively maintained GitHub repositories, and comprehensive technical documentation.
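The evidence-gap assessment described above can be sketched as follows. The artifact names and the simple counting rule are illustrative assumptions, not the registry's published algorithm; the key point is that only costly VPT artifacts offset claim intensity, while non-discretionary AET artifacts do not:

```python
# Hypothetical VPT / AET artifact categories (names are assumptions).
VPT_ARTIFACTS = {"security_audit", "active_repo", "technical_docs"}
AET_ARTIFACTS = {"block_explorer_data", "onchain_logs"}

def evidence_gap(claimed_intensity, artifacts):
    """claimed_intensity: number of distinct technological claims asserted.
    Returns how many claims exceed the project's costly (VPT) artifacts;
    AET artifacts are excluded because they exist by default and carry
    no signaling cost."""
    vpt_count = len(set(artifacts) & VPT_ARTIFACTS)
    return max(0, claimed_intensity - vpt_count)

def is_low_verifiability(claimed_intensity, artifacts):
    """Flag projects whose narrative outruns their voluntary disclosures."""
    return evidence_gap(claimed_intensity, artifacts) > 0
```

Under this sketch, a project asserting three major claims while producing only a security audit and block-explorer data carries a gap of two, since the explorer data is AET and does not count.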

By measuring the gap between narrative claim intensity and VPT artifacts, the registry systematically flags "low-verifiability risk" projects. The verifiability framework employs conservative adjudication protocols—where ambiguous or conflicting evidence defaults to "unverified"—thus defending against adversarial manipulation strategies like selective disclosure, ambiguity injection, and claim inflation.

5. On-chain & Token Necessity

A defining characteristic of AI-crypto projects is the compound necessity claim: asserting that both blockchain infrastructure and a native token are indispensable to the system. DecAIHub unpacks these assertions through explicit verification testing.

On-chain Necessity & The Replace-the-Chain Test: We evaluate whether a project uses blockchain for substantive logic (e.g., decentralized compute verification, state management) beyond mere token issuance. The ultimate standard is the "Replace-the-Chain" test: would removing the blockchain component break the project's core functionality or trust model? Projects that claim pervasive on-chain necessity but cannot provide verifiable evidence fall into the "Claimed-Critical" risk zone, a strong indicator of onchain-washing.

Token Utility: Similarly, the framework assesses whether a native token serves a required economic or governance function (e.g., staking, network access, incentive distribution) or if it is functionally replaceable by existing mechanisms like stablecoins or base-layer assets (e.g., ETH). A high "token verifiability deficit" reveals token-washing, where the token serves primarily as a speculative financing vehicle rather than a functional necessity.
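The two necessity tests can be combined into a simple classifier sketch. All field names below are illustrative assumptions; the logic follows the Replace-the-Chain and token-replaceability tests described above:

```python
from dataclasses import dataclass

@dataclass
class NecessityProfile:
    # Field names are hypothetical placeholders for verification inputs.
    claims_onchain_critical: bool  # project asserts the chain is indispensable
    tier1_onchain_evidence: bool   # Tier-1 artifacts back that assertion
    has_token: bool
    token_replaceable: bool        # role could be served by ETH/stablecoins

def washing_flags(profile):
    """Return washing-risk flags per the compound necessity tests."""
    flags = []
    if profile.claims_onchain_critical and not profile.tier1_onchain_evidence:
        flags.append("claimed-critical")  # likely onchain-washing
    if profile.has_token and profile.token_replaceable:
        flags.append("token-washing")     # token as financing vehicle
    return flags
```

A project asserting chain criticality without Tier-1 backing, whose token is functionally replaceable, would trip both flags.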

6. Governance and Open Data

To maintain systemic integrity, DecAIHub adheres to a formal data governance structure that bridges the gap between project narratives and verifiable reality. The registry's design enforces minimum traceability components (e.g., documentation, block explorers, repositories) to mitigate the risks of both false positives and false negatives in project evaluation.

Crucially, DecAIHub is committed to radical transparency and Open Data, adhering to FAIR (Findable, Accessible, Interoperable, and Reusable) data principles. The foundational unit of our platform—the Project-Card Schema—is formalized as a machine-readable JSON data contract, comprising three core modules:

  1. Passport: Normalized metadata describing the project's infrastructure, AI components, and token models.
  2. Links: Canonical URLs routing to primary sources.
  3. Evidence: The structured assessment table explicitly connecting claims to Tier-1/2/3 artifacts.
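A minimal Project-Card instance illustrates the three-module contract. Only the top-level modules (passport, links, evidence) and the tier concept come from the schema above; every nested field name and value here is a hypothetical placeholder, not the published schema:

```python
import json

# Illustrative Project-Card; nested keys are hypothetical placeholders.
card = {
    "passport": {
        "name": "ExampleNet",
        "segments": ["AI/Compute", "Infrastructure"],
        "has_token": True,
    },
    "links": {
        "docs": "https://docs.example.invalid",
        "repo": "https://github.com/example/examplenet",
    },
    "evidence": [
        {"claim": "decentralized inference", "tier": 1,
         "artifact": "verified smart contract"},
        {"claim": "cheapest compute on the market", "tier": 3,
         "artifact": "social media post"},
    ],
}

# The contract is machine-readable: serialize and round-trip as JSON.
serialized = json.dumps(card, indent=2)
restored = json.loads(serialized)
```

The evidence module makes the claim-to-artifact linkage explicit, so a downstream screener can discount the Tier-3 marketing claim without discarding the Tier-1 contract evidence.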

To empower the broader research and analytical community, the underlying methodologies and data are open-sourced. Our research repository, containing the full verification schema and related methodologies, is available on GitHub (https://github.com/DecAIHub/papers), and the foundational project-card dataset has been deposited on Zenodo (https://zenodo.org/records/18900950). This open-data infrastructure enables independent researchers, investors, and regulators to replicate our analyses, conduct cross-ecosystem comparisons, and build custom screening tools atop the DecAIHub framework.

7. The DecAI Fit Metric (Calculations)

The ultimate output of the DecAIHub verification process is the DecAI Fit metric—a composite index (scaled 0 to 6) representing the project's alignment with decentralized AI principles, constrained strictly by evidence quality. The index is designed to resist inflation and incorporates explicitly defined governance caps.

7.1 Component Scores

The evaluation is divided into four underlying components, each scoring between 0 and 3:

  1. AI Score ($a$): Evaluates the depth of AI integration based on binary flags (intent declaration, functional description, demo availability, benchmarks, and architecture documentation). $a = \min(3, A_0 + A_1 + A_2 + A_3 + A_4)$
  2. On-chain Score ($o$): Evaluates blockchain dependency using flags for contract existence, substantive on-chain logic, and the "Replace-the-Chain" criticality test. $o = \min(3, O_1 + O_2 + O_3)$
  3. Token Score ($t$): Evaluates token utility (payment, staking, governance) minus a penalty for replaceability. Projects without a token receive a neutral score of 2. $t = \text{clamp}_{[0,3]}(T_1 + T_2 + T_3 - T_4)$
  4. Evidence Score ($e$): Measures the presence of Tier-1 evidence supporting the aforementioned components. $e = \min(3, E^{AI}_{T1} + E^{OC}_{T1} + E^{TK}_{T1})$

7.2 Base Calculation and Governance Caps

The preliminary sum of the components is $S = a + o + t + e \in \{0, \dots, 12\}$. The Base Score ($B$) is then calculated as: $$B = \max\left(1, \lceil S/2 \rceil\right) \in \{1, \dots, 6\}$$

To ensure that mathematical aggregation does not obscure fundamental verifiability gaps, the Base Score is subjected to sequential governance caps. Each rule can only lower the final score:

  • R0 (AI-Reality Zero-Forcing): If a project demonstrates no verifiable AI capabilities ($a = 0$), the final score is immediately forced to 0. A project without AI cannot be classified as Decentralized AI.
  • R1 (Informational Noise Cap): If a project relies exclusively on Tier-3 evidence, its score is capped at 2.
  • R2 (Missing Core Evidence Cap): If Tier-1 evidence is missing for either the AI or On-chain dimensions, the score is capped at 3.
  • R3 (Token Verifiability Cap): If a token exists but lacks Tier-1 evidence for its utility, the score is capped at 4.
  • R4 (AI-Reality Cap): The final score cannot exceed a ceiling tied to the project's foundational AI score: if $a = 1$, the cap is 3; if $a = 2$, the cap is 5.
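The base calculation and the sequential caps R0–R4 can be implemented directly from the formulas above. The boolean flag names are illustrative; the arithmetic follows Section 7:

```python
import math

def decai_fit(a, o, t, e,
              tier3_only, tier1_ai, tier1_onchain,
              has_token, tier1_token):
    """Compute the DecAI Fit index (0-6) from component scores (each 0-3)
    and the evidence flags driving caps R0-R4. Flag names are assumptions.
    Each rule can only lower the score, never raise it."""
    if a == 0:                            # R0: AI-reality zero-forcing
        return 0
    S = a + o + t + e                     # preliminary sum, 0..12
    score = max(1, math.ceil(S / 2))      # base score B, 1..6
    if tier3_only:                        # R1: informational-noise cap
        score = min(score, 2)
    if not (tier1_ai and tier1_onchain):  # R2: missing core evidence cap
        score = min(score, 3)
    if has_token and not tier1_token:     # R3: token verifiability cap
        score = min(score, 4)
    if a == 1:                            # R4: AI-reality cap
        score = min(score, 3)
    elif a == 2:
        score = min(score, 5)
    return score
```

For example, a fully verified project with all components at 3 reaches 6, while the same component scores backed only by Tier-3 evidence collapse to 2 under R1, illustrating how the caps dominate raw aggregation.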

This structural conservatism ensures that high scores (5 and 6) are exclusively reserved for projects that provide comprehensive, multi-dimensional, Tier-1 verified disclosures.

8. Conclusion

The Decentralized AI ecosystem holds profound technological potential, yet it is currently undermined by rampant narrative inflation and inadequate disclosure norms. DecAIHub addresses this systemic vulnerability by establishing a rigorous, evidence-aware verifiability framework. By operationalizing concepts like "onchain-washing" and implementing the strictly capped DecAI Fit metric, the registry filters out speculative noise and highlights projects backed by demonstrable engineering reality.

DecAIHub is not a financial advisory platform; it is a specialized informational clearinghouse. As the ecosystem matures, the requirement for independent, empirical verification will only intensify. The methodologies and open-data schema outlined in this whitepaper provide the foundational infrastructure required to restore trust, transparency, and accountability to the intersection of artificial intelligence and blockchain.

References

The empirical and methodological foundations of the DecAIHub framework are derived from a series of seven comprehensive scientific articles (A–G). The primary datasets, classification rules, project-card schemas, and adversarial verification models detailed in these studies are open-sourced for public review and replication.

  • DecAIHub Research Repository: Full technical methodologies, replication scripts, and structured data contracts (JSON schemas) are available at: https://github.com/DecAIHub/papers
  • Project-Card Dataset: The foundational census of 845 AI-blockchain project profiles, complete with tiered evidence linkage, is publicly deposited on Zenodo at: https://zenodo.org/records/18900950