ONNX

The NeuralOnnx precompile is available at address 0x00000000000000000000000000000044. It enables atomic on-chain inference of any small ONNX model. The intended use case is for smart contracts to store an ONNX model and run inference on it atomically. The precompile supports input/output data in all formats that ONNX Runtime supports. This includes:

  • uint8

  • int8

  • uint16

  • int16

  • uint32

  • int32

  • uint64

  • int64

  • float32

  • float64

Using atomic inference is easy. You will need the following:

  • Your ONNX model, encoded as hex

  • The input and output types of your model

  • Off-chain or Solidity pre-inference logic to transform your input into a format the precompile understands

  • Off-chain or Solidity post-inference logic to transform the model output into your desired format

Then you can call the precompile as follows:

  • If neither the input nor the output type is a float: Infer{InputType}{OutputType}({inputType}[] input, bytes model) public returns ({outputType}[])

  • If both the model input and output types are floats: Infer{InputType}{OutputType}(int[] input, bytes model, uint8 precision) public returns (int[])

  • If only the model input is a float: Infer{InputType}{OutputType}(int[] input, bytes model, uint8 precision) public returns ({outputType}[])

  • If only the model output is a float: Infer{InputType}{OutputType}({inputType}[] input, bytes model, uint8 precision) public returns (int[])

Precision represents the number of decimal places, between 0 and 18, and is applied to both the input and the output of Infer functions that use floats.

An example of InferFloat32 is shown below. If you are interested in how the other types are implemented, have a look at the neural client source code.

    /*
        ...
    */
func (c *NeuralOnnx) InferFloat32(gas *big.Int, input []*big.Int, model []byte, precision uint8) ([]*big.Int, error, *big.Int) {
	p, err := validatePrecision(precision)
	if err != nil {
		return nil, err, nil
	}
	// Decode the fixed-point integers into float32 model input.
	inputF, err := toFloats32(input, p)
	if err != nil {
		return nil, err, nil
	}
	// Run the ONNX model over the decoded input.
	out, err := customonnx.CustomClasifier[float32, float32](inputF, model)
	if err != nil {
		return nil, err, nil
	}
	// Re-encode the float32 output as fixed-point integers.
	output := toBigArray(out, p)

	return output, nil, nil
}
pragma solidity >=0.4.21 <0.9.0;

interface NeuralOnnx {
    function InferFloat32(
    /*
        ...
    */
        int[] calldata data,
        bytes calldata model,
        uint8 precision
    ) external returns (int[] memory);
}
