How it works
Arbiter AI combines the latest advancements in AI and blockchain to create a unique on-chain experience. Under the hood, the game is powered by an interplay between on-chain and off-chain components, each serving a crucial role in ensuring the game's smooth operation, fairness, and security.
The overall workflow of Arbiter AI can be summarized as follows:
A user deposits at least 10 ArbAI tokens into the Arbiter Smart Contract to initiate a new round of the game.
The Arbiter Smart Contract emits an event, alerting the off-chain agents to commence the conversational challenge.
The user engages in a three-prompt conversational challenge with Chat-GPT3.5, wherein they must convincingly demonstrate their worthiness to be rewarded with ArbAI tokens.
After the three prompts are consumed, the off-chain agents collect the user's inputs and pass them to the custom machine learning model for evaluation.
The machine learning model, which has been converted into a ZKML neural network using EZKL, processes the user's inputs and generates a "worthiness score."
The off-chain agents generate a zero-knowledge proof of computation, proving the correctness of the model's output without revealing the weights used in the computation.
The zero-knowledge proof and the worthiness score are submitted to the Arbiter Smart Contract for on-chain verification using Neural's verify precompile.
If the zero-knowledge proof is valid and the worthiness score exceeds the game's predetermined tolerance threshold, the user is rewarded with 20 ArbAI tokens, representing a successful round.
The following diagram illustrates the flow of information during the process described above.
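In addition to the diagram, the sketch below walks through the same lifecycle in code. It is a minimal, illustrative Python outline: the Round structure and the score_fn, prove_fn, and verify_fn callables are hypothetical stand-ins for the worthiness model, the EZKL prover, and Neural's verify precompile; only the 10-token deposit, the three-prompt limit, the threshold check, and the 20-token reward are taken from the workflow above.

```python
# Minimal sketch of one Arbiter AI round. All names here (Round, score_fn,
# prove_fn, verify_fn) are illustrative assumptions; only the deposit size,
# the three-prompt limit, the threshold check, and the reward come from the
# workflow described above.
from dataclasses import dataclass, field
from typing import Callable

DEPOSIT_ARBAI = 10   # tokens a user locks to open a round
REWARD_ARBAI = 20    # tokens paid out when a round succeeds
MAX_PROMPTS = 3      # prompts available in the conversational challenge


@dataclass
class Round:
    player: str
    prompts: list[str] = field(default_factory=list)
    worthiness_score: float | None = None
    proof: bytes | None = None


def play_round(
    player: str,
    user_prompts: list[str],
    threshold: float,
    score_fn: Callable[[list[str]], float],         # ZKML worthiness model
    prove_fn: Callable[[list[str], float], bytes],  # EZKL proof of that computation
    verify_fn: Callable[[bytes], bool],             # Neural's verify precompile
) -> int:
    """Return the reward (in ArbAI) for one round of the game."""
    round_ = Round(player=player)

    # The deposit locks DEPOSIT_ARBAI tokens on-chain and the contract emits
    # an event that wakes the off-chain agents (not modelled here).

    # Three-prompt conversational challenge with the LLM.
    round_.prompts = list(user_prompts[:MAX_PROMPTS])

    # The worthiness model scores the collected inputs...
    round_.worthiness_score = score_fn(round_.prompts)
    # ...and a zero-knowledge proof of that computation is produced with EZKL.
    round_.proof = prove_fn(round_.prompts, round_.worthiness_score)

    # The contract checks the proof and the threshold before paying out.
    if verify_fn(round_.proof) and round_.worthiness_score >= threshold:
        return REWARD_ARBAI
    return 0
```

In the real game, the deposit, verification, and payout all happen inside the Arbiter Smart Contract; the sketch only mirrors the order of operations.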
The on-chain components of Arbiter AI form the backbone of the game's decentralized infrastructure, leveraging the power of the Neural blockchain.
Arbiter Smart Contract: This contract acts as the central authority, managing the state of the game, processing user inputs, and coordinating the responses from the off-chain agents. It serves as a trustless arbiter, ensuring that the rules of the game are enforced fairly and transparently. The Arbiter Smart Contract emits events at various stages of the game, acting as triggers for the off-chain agents to respond and fulfill their respective roles. Additionally, the Arbiter Smart Contract handles the crucial tasks of token transfers and scoring, ensuring that users' deposits are securely managed and that rewards are distributed accurately based on the outcome of each round.
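Concretely, the contract surface an off-chain client sees might resemble the ABI fragment below. The function and event names (startRound, RoundStarted, settleRound) and their arguments are hypothetical stand-ins rather than the deployed ABI, but they capture the deposit, event, and settlement pattern just described.

```python
# Hypothetical ABI fragment for the Arbiter Smart Contract, in the Python
# structure web3.py accepts. startRound, RoundStarted, and settleRound are
# illustrative names, not the deployed interface.
ARBITER_ABI = [
    {   # user locks 10 ArbAI tokens and opens a round
        "type": "function",
        "name": "startRound",
        "stateMutability": "nonpayable",
        "inputs": [{"name": "deposit", "type": "uint256"}],
        "outputs": [{"name": "roundId", "type": "uint256"}],
    },
    {   # emitted on deposit; the trigger the off-chain agents listen for
        "type": "event",
        "name": "RoundStarted",
        "anonymous": False,
        "inputs": [
            {"name": "roundId", "type": "uint256", "indexed": True},
            {"name": "player", "type": "address", "indexed": True},
        ],
    },
    {   # agents submit the worthiness score plus ZK proof for verification
        "type": "function",
        "name": "settleRound",
        "stateMutability": "nonpayable",
        "inputs": [
            {"name": "roundId", "type": "uint256"},
            {"name": "worthinessScore", "type": "uint256"},
            {"name": "proof", "type": "bytes"},
        ],
        "outputs": [],
    },
]
```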
EZKL Precompile: One of the unique aspects of Arbiter AI is its use of the Neural verify precompile instead of the traditional Halo2Verifier Solidity file generated by EZKL. This approach is particularly beneficial for larger machine learning models, whose generated Solidity verifiers can exceed the size and gas limits of a deployed contract. The Neural verify precompile allows Arbiter AI to leverage the power of zero-knowledge proofs, verifying the correctness of the computations performed by the machine learning model on-chain without revealing the private inputs or weights used in the model. This not only ensures the integrity of the scoring process but also preserves the privacy and confidentiality of the machine learning model's inner workings.
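For illustration only, the call below shows what proof verification through a precompile can look like from a client's perspective. The precompile address, calldata layout, and return convention are assumptions, not documented values for Neural's verify precompile; with the stock EZKL flow, the same check would instead be an external call to a deployed Halo2Verifier contract.

```python
# Hypothetical illustration of proof verification through a precompile, from a
# client's point of view. The address, calldata layout, and return convention
# are assumptions and do not document Neural's actual verify precompile.
from web3 import Web3

ASSUMED_VERIFY_PRECOMPILE = "0x0000000000000000000000000000000000000100"


def verify_via_precompile(w3: Web3, proof: bytes, public_inputs: bytes) -> bool:
    # eth_call against the assumed precompile address; here a return ending in
    # 0x01 is taken to mean "proof valid". In the stock EZKL flow this would
    # instead be a call to a deployed Halo2Verifier contract.
    result = w3.eth.call({
        "to": ASSUMED_VERIFY_PRECOMPILE,
        "data": proof + public_inputs,  # assumed encoding: proof bytes then inputs
    })
    return len(result) > 0 and result[-1] == 1
```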
While the on-chain components provide the decentralized infrastructure and ensure the game's fairness and transparency, the off-chain components are responsible for powering the AI capabilities that drive the core gameplay experience.
Off-Chain Agents: The off-chain agents play a critical role in facilitating the communication and coordination between the on-chain and off-chain components of Arbiter AI. These agents continuously monitor the Arbiter Smart Contract for specific events, acting as intermediaries between the blockchain and the AI systems. When a user deposits the required ArbAI tokens to start a round, the Arbiter Smart Contract emits an event, triggering the off-chain agents to initiate the conversational challenge. The agents relay the user's prompts to Chat-GPT3.5, facilitating the dynamic interaction between the user and the AI system. Once the three prompts are consumed, the off-chain agents collect the user's inputs and pass them to the custom machine learning model, which has been converted into a ZKML neural network using EZKL, for evaluation. The model processes these inputs and generates a "worthiness score." The off-chain agents then generate a zero-knowledge proof of the computation, demonstrating the correctness of the model's output without revealing the private inputs or weights used in it. Finally, the off-chain agents submit the zero-knowledge proof and the worthiness score to the Arbiter Smart Contract, triggering the on-chain verification process using Neural's verify precompile.
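Put together, an off-chain agent reduces to an event-driven loop. The sketch below assumes web3.py v6-style bindings (argument names may differ in other versions) and reuses the hypothetical ARBITER_ABI, RoundStarted, and settleRound names from the contract sketch above; the conversational challenge and the score-and-prove step are passed in as callables because the exact EZKL invocation varies between releases.

```python
# Sketch of an off-chain agent loop, assuming web3.py v6-style bindings and
# the hypothetical ARBITER_ABI / RoundStarted / settleRound names from the
# contract sketch. run_challenge and score_and_prove are injected so the loop
# stays agnostic to the exact LLM and EZKL invocations.
import time
from typing import Callable

from web3 import Web3


def agent_loop(
    rpc_url: str,
    contract_address: str,
    abi: list,
    run_challenge: Callable[[int], list[str]],                  # three-prompt challenge
    score_and_prove: Callable[[list[str]], tuple[int, bytes]],  # worthiness score + ZK proof
    poll_seconds: int = 5,
) -> None:
    w3 = Web3(Web3.HTTPProvider(rpc_url))
    arbiter = w3.eth.contract(address=contract_address, abi=abi)

    # React to the event the contract emits when a user's deposit opens a round.
    round_filter = arbiter.events.RoundStarted.create_filter(from_block="latest")
    while True:
        for event in round_filter.get_new_entries():
            round_id = event["args"]["roundId"]

            # Relay the user's prompts to the LLM, then score them with the
            # ZKML worthiness model and produce the EZKL proof.
            prompts = run_challenge(round_id)
            score, proof = score_and_prove(prompts)

            # Submit score + proof; the contract verifies via the precompile.
            # (Assumes w3.eth.default_account is set to the agent's key.)
            tx_hash = arbiter.functions.settleRound(round_id, score, proof).transact()
            w3.eth.wait_for_transaction_receipt(tx_hash)

        time.sleep(poll_seconds)
```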
Chat-GPT3.5: At the forefront of the off-chain ecosystem is the powerful Chat-GPT3.5, a state-of-the-art large language model (LLM). This advanced AI system is responsible for generating intelligent and contextual responses during the conversational challenge phase of the game. When users engage in the three-prompt conversational challenge, their inputs are processed by Chat-GPT3.5, which leverages its vast knowledge base and natural language processing capabilities to generate thoughtful and meaningful responses. This dynamic interaction between the user and the AI system forms the core gameplay experience, as users must employ their linguistic skills, reasoning abilities, and persuasiveness to convince the AI of their worthiness.
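A minimal version of the conversational challenge, assuming the official openai Python package (v1-style client) and the gpt-3.5-turbo model, could look like the following; the system prompt is only an illustration of how the "prove your worthiness" framing might be set up.

```python
# Sketch of the three-prompt conversational challenge, assuming the openai
# Python package (v1-style client). The system prompt is illustrative; the
# production agents presumably frame the challenge in their own words.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def run_challenge(user_prompts: list[str]) -> list[str]:
    """Feed up to three user prompts to GPT-3.5 and return its replies."""
    messages = [{
        "role": "system",
        "content": "You are the arbiter. The user has three prompts to "
                   "convince you they deserve a reward of ArbAI tokens.",
    }]
    replies = []
    for prompt in user_prompts[:3]:
        messages.append({"role": "user", "content": prompt})
        response = client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=messages,
        )
        reply = response.choices[0].message.content
        messages.append({"role": "assistant", "content": reply})
        replies.append(reply)
    return replies
```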
Worthiness Model: While Chat-GPT3.5 handles the conversational aspect of the game, a custom-built machine learning model plays a crucial role in evaluating the users' inputs and determining their overall "worthiness score." This model was trained on a publicly available dataset, using text vectorization and standardization to quantify and sanitize its inputs. The model's architecture consists of five layers, and its training process employed a binary cross-entropy loss function to optimize its performance. Once training was complete, the model was converted to the ONNX format, a widely adopted open standard for machine learning models. This conversion was essential for the subsequent integration with EZKL, a tool for ZK-ifying machine learning models. Using EZKL, the ONNX model was transformed into a ZKML neural network, enabling the model's computations to be verified using zero-knowledge proofs. This preserves the privacy of the model's inputs and weights while ensuring the integrity and verifiability of the scoring process on-chain.
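The sketch below reconstructs that pipeline under stated assumptions: a five-layer feed-forward network in PyTorch (the real layer sizes, input dimension, and vectorizer are not specified), binary cross-entropy as the loss, and export to ONNX as the hand-off point to EZKL. The EZKL conversion itself is shown only in comments, because the exact Python-binding signatures differ between releases.

```python
# Illustrative reconstruction of the worthiness model pipeline. Layer widths,
# the 512-dimensional input, and all file names are assumptions; only the
# five-layer architecture, binary cross-entropy loss, and ONNX export come
# from the description above.
import torch
import torch.nn as nn


class WorthinessModel(nn.Module):
    """Five-layer feed-forward scorer over vectorized, standardized prompt text."""

    def __init__(self, n_features: int = 512):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Linear(n_features, 256), nn.ReLU(),
            nn.Linear(256, 128), nn.ReLU(),
            nn.Linear(128, 64), nn.ReLU(),
            nn.Linear(64, 16), nn.ReLU(),
            nn.Linear(16, 1),               # five Linear layers in total
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.sigmoid(self.layers(x))  # worthiness score in [0, 1]


model = WorthinessModel()
loss_fn = nn.BCELoss()  # binary cross-entropy, as described above
# ... training loop over the vectorized public dataset omitted ...

# Export to ONNX, the hand-off format for EZKL.
dummy_input = torch.zeros(1, 512)
torch.onnx.export(
    model, dummy_input, "worthiness.onnx",
    input_names=["features"], output_names=["score"],
)

# EZKL then turns the ONNX graph into a ZKML circuit. The calls below follow
# the ezkl Python bindings, but argument names and order vary between
# releases, so they are left as comments rather than executed here:
#   ezkl.gen_settings("worthiness.onnx", "settings.json")
#   ezkl.compile_circuit("worthiness.onnx", "worthiness.compiled", "settings.json")
#   ezkl.setup("worthiness.compiled", "vk.key", "pk.key")
```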