
TensorCommitments: Lightweight Verifiable Inference for Language Models

Abstract

Most large language models (LLMs) run on external clouds: users send a prompt, pay for inference, and must trust that the remote GPU executes the LLM without any adversarial tampering. This paper asks how to achieve verifiable LLM inference, where a prover (the service) must convince a verifier (the client) that an inference was run correctly without rerunning the LLM. Existing cryptographic approaches are too slow at LLM scale, while non-cryptographic ones require a strong verifier GPU. The authors propose TensorCommitments (TCs), a tensor-native proof-of-inference scheme. TCs bind the LLM inference to a commitment, an irreversible tag that breaks under tampering, organized in a novel multivariate Terkle Tree structure. For LLaMA2, TCs add only 0.97% prover and 0.12% verifier overhead over inference while improving robustness to tailored LLM attacks by up to 48% over the best prior method (TOPLOC, ICML'25), at matched or lower cost.

Introduction and Motivation

As LLMs move from chatbots to decision-makers, we increasingly have to trust remote inference runs that we cannot see or replay. Current deployments provide almost no traceability of the intermediate activations or hidden states of these billion-parameter models; only the final text is visible, and re-running the full model just to check every answer is prohibitively expensive. As LLM usage scales to persistent, tool-using, and multi-agent systems, the gap between the impact of a single faulty inference and our ability to cheaply verify it is widening. This motivates the central question: can we design a practical mechanism that allows lightweight clients to check whether a complex LLM inference was executed correctly, without rerunning the model or exposing its internal states? Existing works on verifiable LLMs only partially address this and run into sharp trade-offs between scalability, privacy, and verifier cost. Cryptographic systems based on zero-knowledge proofs can certify end-to-end correctness of an inference while hiding model parameters, but by compiling every tensor operation into a large constraint system, they lead to minutes of prover run-time per query. Learning-based works like SVIP train auxiliary models to detect perturbed outputs, but their guarantees are statistical rather than cryptographic.

Background: Vector Commitments and Merkle Trees

The paper builds on two primitives. Vector Commitments (VCs) bind a vector $\mathbf{v} = (v_1, \ldots, v_d) \in \mathbb{F}^d$ to a single group element $C \in G$ via univariate polynomial interpolation: given evaluation points $\Omega = \{\omega_1, \ldots, \omega_d\}$, a polynomial $f \in \mathbb{F}[X]$ of degree $< d$ satisfying $f(\omega_i) = v_i$ is computed, and $C := g^{f(\tau)}$ is output using a structured reference string $\text{srs} = (g^{\tau^0}, \ldots, g^{\tau^{d-1}})$. Opening at position $i$ produces a proof $\pi_i := g^{h_i(\tau)}$ where $h_i(X) := (f(X) - v_i)/(X - \omega_i)$, and the verifier checks via a bilinear pairing $e(C \cdot g^{-v_i}, g) \stackrel{?}{=} e(\pi_i, g^{\tau - \omega_i})$. Merkle Trees (MTs) use hash functions to commit to data vectors with $O(\log_B n)$ proof size for branching factor $B$; Verkle Trees (VTs) replace hashes with VCs for smaller proofs. The key bottleneck for LLMs is that existing schemes are fundamentally vector-centric: they encode long flat vectors with univariate polynomials, so the polynomial degree and interpolation cost grow with every layer and token.
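To make the algebra concrete, here is a minimal Python sketch of the opening identity behind the pairing check, computed in the clear over a prime field. It is only an illustration of the math: the real scheme never reveals $f(\tau)$, publishing only $g^{f(\tau)}$ and checking the same identity inside a pairing, and all function names here are ours, not the paper's.

```python
# Toy sketch of the VC opening identity f(tau) - v_i = h_i(tau) * (tau - w_i),
# computed in the clear over a prime field (no elliptic curves or pairings).

P = 2**61 - 1  # Mersenne prime standing in for the field F

def poly_mul_linear(poly, root):
    """Multiply a coefficient list (low-to-high) by (X - root) mod P."""
    out = [0] * (len(poly) + 1)
    for k, c in enumerate(poly):
        out[k + 1] = (out[k + 1] + c) % P
        out[k] = (out[k] - root * c) % P
    return out

def interpolate(omegas, values):
    """Lagrange interpolation: the unique f with deg < d and f(w_i) = v_i."""
    n = len(omegas)
    coeffs = [0] * n
    for i in range(n):
        basis, denom = [1], 1
        for j in range(n):
            if j != i:
                basis = poly_mul_linear(basis, omegas[j])
                denom = denom * (omegas[i] - omegas[j]) % P
        scale = values[i] * pow(denom, P - 2, P) % P  # divide via Fermat inverse
        for k, c in enumerate(basis):
            coeffs[k] = (coeffs[k] + scale * c) % P
    return coeffs

def peval(coeffs, x):
    """Horner evaluation of a coefficient list (low-to-high) at x, mod P."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % P
    return acc

def quotient(coeffs, root):
    """Synthetic division: h(X) = (f(X) - f(root)) / (X - root)."""
    n = len(coeffs)
    q = [0] * (n - 1)
    q[n - 2] = coeffs[n - 1]
    for k in range(n - 2, 0, -1):
        q[k - 1] = (coeffs[k] + root * q[k]) % P
    return q

omegas, vector = [1, 2, 3, 4], [7, 11, 13, 42]
f = interpolate(omegas, vector)
i, tau = 2, 123456789        # tau is the trapdoor; public only as powers g^{tau^k}
h = quotient(f, omegas[i])   # the opening-proof polynomial h_i
# The pairing check e(C*g^{-v_i}, g) = e(pi_i, g^{tau - w_i}) verifies exactly:
assert (peval(f, tau) - vector[i]) % P == peval(h, tau) * (tau - omegas[i]) % P
```

Because $f(\omega_i) = v_i$, the division by $(X - \omega_i)$ is exact, and the final assertion is precisely the relation the verifier's pairing equation enforces in the exponent.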

TensorCommitments: Multivariate Polynomial Scheme

The key technical insight is that commitment schemes are conceptually well matched to verifiable inference: if each hidden state and parameter block is bound to a succinct commitment, a remote run can be verified with a few proofs rather than by replaying the full model. However, existing constructions encode tensors as flat vectors, destroying the multi-dimensional structure. TensorCommitments are tensor-native generalizations of VCs that commit to a multivariate polynomial. For a tensor $T$ of order $m$ and shape $\mathbf{d} = (d_1, \ldots, d_m)$, TC interpolates $T$ into a multivariate polynomial $f_T \in \mathbb{F}[X_1, \ldots, X_m]$ with degree $< d_j$ in each variable $X_j$. The commitment is computed as:

$$C_T = g^{f_T(\tau_1, \ldots, \tau_m)} \in G$$

using per-axis trapdoors $\tau_1, \ldots, \tau_m$. Opening at a challenge point $\boldsymbol{\omega} = (\omega_1, \ldots, \omega_m)$ produces proofs $\pi_{\omega_i}$ by iteratively dividing the polynomial by $(X_i - \omega_i)$, yielding quotients whose evaluations at the trapdoors serve as opening proofs. The verifier checks via pairings. The critical advantage is that, for fixed data, multivariate interpolation becomes substantially more efficient than univariate as dimensionality increases: moving from 1D to 2D cuts runtime from 4.1 s to 0.125 s (over a 30× speedup), with further reductions as the dimension grows.
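A small sketch of why the per-axis structure is cheaper: a 2D grid can be interpolated and evaluated with one small Lagrange basis per axis (cost roughly $d_1^2 + d_2^2 + d_1 d_2$) instead of flattening it into one univariate polynomial of degree $d_1 d_2 - 1$. The grid sizes and function names below are illustrative, not the paper's.

```python
# Tensor-product interpolation sketch: evaluate the unique polynomial
# f_T(X1, X2) with degree < d_j per axis through T[i][j] = f_T(w1[i], w2[j]),
# using one univariate Lagrange basis per axis instead of a flattened vector.
from fractions import Fraction

def lagrange_basis(points, x):
    """Values L_i(x) of the Lagrange basis over `points`, as exact fractions."""
    vals = []
    for i, wi in enumerate(points):
        num, den = Fraction(1), Fraction(1)
        for j, wj in enumerate(points):
            if j != i:
                num *= (x - wj)
                den *= (wi - wj)
        vals.append(num / den)
    return vals

def eval_tensor_poly(T, w1, w2, x1, x2):
    """f_T(x1, x2) = sum_ij T[i][j] * L_i(x1) * M_j(x2): separable per axis."""
    L = lagrange_basis(w1, Fraction(x1))
    M = lagrange_basis(w2, Fraction(x2))
    return sum(T[i][j] * L[i] * M[j]
               for i in range(len(w1)) for j in range(len(w2)))

T = [[3, 1, 4], [1, 5, 9]]   # a 2x3 "activation tile"
w1, w2 = [0, 1], [0, 1, 2]   # per-axis evaluation grids
# Interpolation property: f_T hits every tensor entry on the grid.
assert all(eval_tensor_poly(T, w1, w2, w1[i], w2[j]) == T[i][j]
           for i in range(2) for j in range(3))
# An off-grid point stands in for the trapdoors (tau1, tau2); in the real
# scheme this evaluation happens only in the group exponent.
print(eval_tensor_poly(T, w1, w2, 7, 11))
```

The separable evaluation is the point: each axis contributes its own small basis, so the cost is governed by the per-axis sizes $d_j$ rather than their product.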

Terkle Trees: Tensor-Native Authentication

Terkle Trees (TTs) extend TCs from a single inference to authenticate an entire LLM dialogue. Unlike Merkle Trees, which flatten data into one dimension, TTs organize evolving states as tensors: each internal node $u$ stores a commitment $C_u := \text{Com}_{\text{TC}}(\mathbf{Z}_u)$ to a tensor-shaped grid of its $B$ children. New states and their proofs reuse the structure of previous ones instead of forcing a full recomputation. The root acts as a single global commitment for the entire dialogue: the prover opens only a few root-to-leaf paths selected by a layer policy, and the verifier checks consistency from the leaves to the root. Terkle Trees match MTs' prover cost while verifying up to 1416× and 14× faster than MTs and VTs respectively, structurally aligning with modern LLM workloads while keeping verification of the evolving state succinct and scalable.
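The tree and path logic can be sketched as follows. This is our simplification: SHA-256 stands in for the tensor commitment $\text{Com}_{\text{TC}}$, and the incremental-update and layer-policy machinery is omitted, so only the B-ary commit/open/verify skeleton is shown.

```python
# Toy B-ary authentication tree in the spirit of a Terkle Tree: each internal
# node commits to the grid of its B children. SHA-256 is a placeholder for
# the tensor commitment Com_TC; only the tree/path structure is illustrated.
import hashlib

B = 4  # branching factor

def commit(children):
    """Placeholder for Com_TC over a node's child grid."""
    return hashlib.sha256(b"|".join(children)).digest()

def build(leaves):
    """Return all levels, leaves first; short groups are padded with b""."""
    levels = [leaves]
    while len(levels[-1]) > 1:
        cur = levels[-1]
        nxt = [commit(cur[i:i + B] + [b""] * (B - len(cur[i:i + B])))
               for i in range(0, len(cur), B)]
        levels.append(nxt)
    return levels

def open_path(levels, idx):
    """Sibling groups along the leaf-to-root path for leaf `idx`."""
    proof = []
    for level in levels[:-1]:
        g = (idx // B) * B
        group = level[g:g + B] + [b""] * (B - len(level[g:g + B]))
        proof.append((idx % B, group))
        idx //= B
    return proof

def verify_path(root, leaf, proof):
    """Recompute commitments bottom-up and compare against the root."""
    cur = leaf
    for pos, group in proof:
        if group[pos] != cur:
            return False
        cur = commit(group)
    return cur == root

# Ten dialogue states, e.g. one per token step.
leaves = [hashlib.sha256(f"state-{t}".encode()).digest() for t in range(10)]
levels = build(leaves)
root = levels[-1][0]
assert verify_path(root, leaves[7], open_path(levels, 7))
```

Appending a new state only touches the $O(\log_B n)$ nodes on its path, which is the structural property the incremental reuse in TTs exploits.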

Layer Selection Algorithm

LLMs have many blocks, and verifying the whole model (e.g., 70B parameters) is prohibitive. Layer-wise sensitivity to noise is highly non-uniform: only a subset of layers induces large shifts in the output. The paper therefore designs a robustness-aware layer selection scheme. For each block $i$, the weight correlation matrix $X_i = W_i^\top W_i \in \mathbb{R}^{d_i \times d_i}$ is formed, its eigenvalues are sorted, and a power-law tail $p(\Lambda) \propto \Lambda^{-\hat{\alpha}_i}$ is fitted via a Hill estimator. A smaller $\hat{\alpha}_i$ (heavier tail) empirically corresponds to layers that strongly affect the output. These scores are normalized, weighted by parameter size, and inverted to produce a benefit score $\nu_i$. Layer selection is then formulated as an integer linear program maximizing the total verified robustness $\sum_{k,i} \nu_i \gamma_{k,i}$ over binary selection variables $\gamma_{k,i}$, subject to budget, no-overlap, and contiguity constraints. A dynamic program solves this in $O(M\mathcal{L})$ time.
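The two ingredients can be sketched in a few lines. The Hill estimator is the standard tail-index estimator; the scoring weights and the single-window search below are our simplification of the paper's ILP (which allows multiple windows indexed by $k$), so treat this as a shape of the computation rather than the exact algorithm.

```python
# Sketch of robustness-aware layer scoring plus a simplified selection search.
# Hill estimation is standard; the benefit weighting and the single contiguous
# window are our assumptions, not the paper's exact ILP formulation.
import math

def hill_alpha(eigvals, k):
    """Hill tail-index estimate from the k largest eigenvalues."""
    s = sorted(eigvals, reverse=True)
    return k / sum(math.log(s[i] / s[k]) for i in range(k))

def benefit_scores(alphas, sizes):
    """Invert alpha (smaller alpha => larger benefit), weight by layer size,
    and normalize so the scores sum to one."""
    inv = [1.0 / a for a in alphas]
    total = sum(w * s for w, s in zip(inv, sizes))
    return [w * s / total for w, s in zip(inv, sizes)]

def select_layers(nu, budget):
    """Exhaustive search over one contiguous window of at most `budget`
    layers, maximizing total benefit. Returns (start, end) half-open."""
    best, best_win = 0.0, (0, 0)
    for start in range(len(nu)):
        acc = 0.0
        for end in range(start, min(start + budget, len(nu))):
            acc += nu[end]
            if acc > best:
                best, best_win = acc, (start, end + 1)
    return best_win

# Synthetic per-layer eigenvalue spectra with different tail heaviness.
spectra = [[i ** -1.2 for i in range(1, 50)],
           [i ** -0.8 for i in range(1, 50)],
           [i ** -2.0 for i in range(1, 50)],
           [i ** -1.0 for i in range(1, 50)]]
alphas = [hill_alpha(s, k=10) for s in spectra]
nu = benefit_scores(alphas, sizes=[1.0, 1.0, 1.0, 1.0])
window = select_layers(nu, budget=2)  # most beneficial contiguous window
```

The paper's dynamic program generalizes this to several non-overlapping windows while keeping the $O(M\mathcal{L})$ cost linear in the number of layers.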

Experimental Evaluation

TensorCommitments are benchmarked against four methods on LLaMA 2-13B inference on an A100 with a 10.165 s base inference. zkLLM is a fully cryptographic approach requiring 23.1 GB of prover GPU memory and 3950 ms verification time. SVIP is a learned inspector requiring 261 minutes of model training. TOPLOC (ICML'25) uses lightweight late-layer top-$k$ activation hashing with 81 ms verification. TensorCommitments achieve 0 GB verifier GPU utilization (vs. 71.32 GB for TOPLOC), 12 ms verification time, a 2 B commitment per token, and 96.02% attack detection accuracy, the highest among all methods. Under tailored attacks, TC improves median detection over TOPLOC by 12% under noise injection and 48% under prompt tampering. TOPLOC misses "slow-burn" manipulations (early-layer noise, low-rank weight edits, prompt tampering with entity/adjective swaps) because it checks only a last-layer top-$k$ activation signature; TC, by contrast, cryptographically verifies selected layers without exposing full activations.

Conclusion

TensorCommitments introduce a tensor-native commitment scheme and Terkle Trees for verifiable LLM inference. By committing directly to activation tensors via multivariate interpolation, TC reduces prover overhead to 0.97% and verifier overhead to 0.12% of a single forward pass, while avoiding any model re-execution. Coupled with a robustness-aware layer selector, TC improves robustness to tailored attacks by up to 48% over the strongest prior method at matched or lower cost. The work demonstrates that practical, lightweight verifiable inference is achievable for production-scale LLMs, opening the door to trustworthy AI inference in cloud, multi-agent, and safety-critical settings.
