For the past few years, zero-knowledge proofs on blockchains have been useful for two key purposes: (1) scaling compute-constrained networks by processing transactions off-chain and verifying the results on mainnet; and (2) protecting user privacy by enabling shielded transactions, viewable only to those who possess the decryption key. Within the context of blockchains, it’s clear why these properties are desirable: a decentralized network like Ethereum can’t increase throughput or block size without untenable demands on validator processing power, bandwidth, and latency (hence the need for validity rollups), and all transactions are visible to anyone (hence the demand for on-chain privacy solutions).
But zero-knowledge proofs are also useful for a third class of capabilities: efficiently verifying that any kind of computation (not just those within an off-chain instantiation of the EVM) has run correctly. This has implications far beyond blockchains.
Advances in systems that leverage the ability of zero-knowledge proofs to succinctly verify computation are making it possible for users to demand the same degree of trustlessness and verifiability assured by blockchains from every digital product in existence, most crucially from machine learning models. High demand for blockchain compute has incentivized zero-knowledge proof research, yielding modern proving systems with smaller memory footprints and faster proving and verification times — making it possible to verify certain small machine learning algorithms on-chain today.
We’ve all by now likely experienced the potential of interacting with an extremely powerful machine learning product. A few days ago, I used GPT-4 to help me create an AI that consistently beats me at chess. It felt like a poetic microcosm of all of the advances in machine learning that have occurred over the past few decades: it took the developers at IBM twelve years to produce Deep Blue, a model running on a 32-node IBM RS/6000 SP computer and capable of evaluating nearly 200 million chess positions per second, which beat the chess champion Garry Kasparov in 1997. By comparison, it took me a few hours – with minimal coding on my part – to create a program that could triumph over me.
Admittedly, I doubt the AI I created would be able to beat Garry Kasparov at chess, but that’s not the point. The point is anyone playing around with GPT-4 has likely had a similar experience gaining superpowers: with little effort, you can create something that approaches or surpasses your own capabilities. We are all researchers at IBM; we are all Garry Kasparov.
Obviously, this is thrilling and a bit daunting to consider. And for anyone working in the crypto industry, the natural impulse (after marveling at what machine learning can do) is to consider potential vectors of centralization, and ways those vectors can be decentralized into a network that people can transparently audit and own. Today’s models are made by ingesting an enormous amount of publicly available text and data, but only a small number of people control and own those models. More specifically, the question isn’t “will AI be tremendously valuable,” the question is “how do we build these systems in such a way that anyone interacting with them will be able to reap their economic benefits and, if they so desire, ensure that their data is used in a way that honors their right to privacy?”
Recently, there has been a vocal effort to pause or mitigate the advancement of major AI projects like ChatGPT. Halting progress is likely not the solution here: it would be better to push for models that are open-source, and in cases where model providers want their weights or data to be private, to secure them with privacy-preserving zero-knowledge proofs that are on-chain and fully auditable. Today, the latter use-case around private model weights and data is not yet feasible on-chain, but advances in zero-knowledge proving systems will make it possible in the future.
A chess AI like the one I built using ChatGPT feels relatively benign at this point: a program with a fairly uniform output, which doesn’t use data that violates valuable intellectual property or infringes on privacy. But what happens when we want assurance that the model we’re told is running behind an API is indeed the one that ran? Or if I wanted to ingest attested data into a model that lives on-chain, with assurance that the data is indeed coming from a legitimate party? And what if I wanted assurance that the “people” submitting data were in fact people, and not bots seeking to sybil-attack my network? Zero-knowledge proofs, with their ability to succinctly represent and verify arbitrary programs, are a way to do this.
It’s important to note that today, the primary use-case for zero-knowledge proofs in the context of machine learning on-chain is to verify correct computation. In other words, zero-knowledge proofs, and more specifically SNARKs (Succinct Non-Interactive Arguments of Knowledge), are most useful for their succinctness properties in the ML context. Zero-knowledge proofs protect the privacy of the prover (and of the data it processes) from a prying verifier; they do not, on their own, let an untrusted party compute over inputs that remain hidden from it. Privacy-enhancing technologies like Fully-Homomorphic Encryption (FHE), Functional Encryption, or Trusted Execution Environments (TEEs) are more applicable for letting an untrusted prover run computations over private input data (exploring those more deeply falls outside the scope of this piece).
Let’s take a step back and understand at a high level the kinds of machine learning applications you could represent in zero knowledge. (For a deeper dive on ZK specifically, see our piece on improvements in zero-knowledge proving algorithms and hardware, Justin Thaler’s work on SNARK performance here and here, or our zero-knowledge canon.) Zero-knowledge proofs typically represent programs as arithmetic circuits: using these circuits, the prover generates a proof from public and private inputs, and the verifier mathematically checks that the output of the statement is correct — without learning anything about the private inputs.
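To make those roles concrete, here is a minimal sketch of the prover/verifier interface in Python. Everything in it is illustrative: `prove` and `verify` are stand-ins for a real proving system backend, and the “proof” carries no actual cryptography.

```python
from dataclasses import dataclass
from typing import Callable

# A "circuit" here is just a predicate over public and private inputs.
# Real systems compile the program into arithmetic constraints over a
# finite field before anything is proven.
Circuit = Callable[[dict, dict], bool]

@dataclass
class Proof:
    # In a real SNARK this is a few-kilobyte cryptographic object;
    # here it is a placeholder that reveals nothing by construction.
    blob: bytes

def prove(circuit: Circuit, public: dict, private: dict) -> Proof:
    # A real prover evaluates the circuit over the witness and emits a
    # succinct argument; this stub just checks the claim locally.
    assert circuit(public, private), "witness does not satisfy the circuit"
    return Proof(blob=b"succinct-argument-placeholder")

def verify(circuit: Circuit, public: dict, proof: Proof) -> bool:
    # A real verifier runs in time roughly independent of circuit size
    # and learns nothing about the private inputs.
    return isinstance(proof, Proof)

# Toy statement: "I know weights w such that w * x == y."
circuit = lambda pub, priv: priv["w"] * pub["x"] == pub["y"]
proof = prove(circuit, {"x": 3, "y": 12}, {"w": 4})
print(verify(circuit, {"x": 3, "y": 12}, proof))  # True
```

The examples below all reduce to some variation of this pattern: decide what is public, what is private, and what statement the proof should establish.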
We’re still at a very early stage of what is computationally practical to verify using zero-knowledge proofs on-chain, but improvements in algorithms are expanding the realm of what is feasible. Here are five ways zero-knowledge proofs can be applied in machine learning.
1. Model Authenticity: You want assurance that the machine learning model some entity claims has been run is indeed the one that ran. Examples include a case where a model is accessible behind an API, and the purveyor of a particular model has multiple versions – say, a cheaper, less accurate one, and a more expensive, higher-performance one. Without proofs, you have no way of knowing whether the purveyor is serving you the cheaper model when you’ve actually paid for the more expensive one (e.g., the purveyor wants to save on server costs and boost their profit margin).
To do this, you’d want separate proofs for each instantiation of a model. A practical way to accomplish this is through Dan Boneh, Wilson Nguyen, and Alex Ozdemir’s framework for functional commitments, a SNARK-based zero-knowledge commitment scheme that allows a model owner to commit to a model, which users can input their data into and receive verification that the committed model has run (a toy sketch of this commit-and-prove pattern follows below). Some applications built on top of Risc Zero, a general-purpose STARK-based VM, are also enabling this. Other research conducted by Daniel Kang, Tatsunori Hashimoto, Ion Stoica, and Yi Sun has demonstrated that it’s possible to verify valid inference on the ImageNet dataset with 92% accuracy (which is on par with the highest-performing non-ZK verified ImageNet models).
But just receiving proof that the committed model has run is not necessarily enough. A model may not accurately represent a given program, so one would want the committed model to be audited by a third party. Functional commitments allow the prover to establish that it used a committed model, but they don’t guarantee anything about the model that has been committed. If we can make zero-knowledge proofs performant enough for proving training (see example #4, below), we could one day start to get those guarantees as well.
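Here is a toy sketch of that commit-and-prove pattern, using a plain hash commitment as a stand-in for the SNARK-based functional commitment (the real scheme proves the inference itself; this sketch only shows what the commitment pins down):

```python
import hashlib
import json

def commit_to_model(weights: list[float]) -> str:
    # Stand-in for a cryptographic commitment: the owner publishes this
    # digest once, before serving any queries against the model.
    return hashlib.sha256(json.dumps(weights).encode()).hexdigest()

def run_model(weights: list[float], x: list[float]) -> float:
    # Toy "model": a dot product standing in for real inference.
    return sum(w * xi for w, xi in zip(weights, x))

# Owner: commit to the expensive model's weights and publish the digest.
expensive_weights = [0.9, -0.3, 1.7]
commitment = commit_to_model(expensive_weights)

# Serving: alongside each output, the owner would attach a proof that
# "inference ran with weights matching `commitment`". A quietly swapped,
# cheaper model could not satisfy that statement:
cheap_weights = [0.5, -0.1, 1.0]
assert commit_to_model(cheap_weights) != commitment  # swap is detectable

y = run_model(expensive_weights, [1.0, 2.0, 3.0])
print(y, commitment[:16])
```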
2. Model Integrity: You want assurance that the same machine learning algorithm is being run on different users’ data the same way. This is useful in areas where you don’t want arbitrary bias applied, like credit scoring decisions and loan applications. You could use functional commitments for this as well: you would commit to a model and its parameters, and allow people to submit data. The output would verify that the model ran with the committed parameters on each user’s data. Alternatively, the model and its parameters could be made public and the users themselves could prove that they applied the appropriate model and parameters to their own (authenticated) data. This might be especially useful in the medical field, where certain information about patients is required by law to remain confidential. In the future, this could enable a medical diagnosis system that is able to learn and improve from real-time user data that remains completely private.
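A minimal sketch of the first approach, again with a hash commitment standing in for the real functional commitment; the point is that every user’s result is checkable against one published digest:

```python
import hashlib
import json

def commit(params: dict) -> str:
    # One commitment, published once, covering the scoring parameters.
    return hashlib.sha256(json.dumps(params, sort_keys=True).encode()).hexdigest()

def score(params: dict, applicant: dict) -> float:
    # Toy credit-scoring rule; a real model would be far richer.
    return params["w_income"] * applicant["income"] + params["w_debt"] * applicant["debt"]

params = {"w_income": 0.002, "w_debt": -0.005}
published = commit(params)

# Every applicant's result would ship with a proof tied to the same
# commitment, so no one can be scored under quietly different parameters.
for applicant in [{"income": 50_000, "debt": 10_000},
                  {"income": 80_000, "debt": 40_000}]:
    result = {"score": score(params, applicant), "commitment": commit(params)}
    assert result["commitment"] == published
    print(result)
```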
3. Attestations: You want to integrate attestations from external verified parties (e.g., any digital platform or piece of hardware that can produce a digital signature) into a model or any other kind of smart contract running on-chain. To do this, you would verify the signature using a zero-knowledge proof, and use the proof as an input in a program. Anna Rose and Tarun Chitra recently hosted an episode of the Zero Knowledge podcast with Daniel Kang and Yi Sun where they explored recent advancements in this field.
Specifically, Daniel and Yi recently released work on ways to verify that images taken by cameras with attested sensors were subject to transformations like cropping, resizing, or limited redactions – useful in cases where you want to prove that an image wasn’t deepfaked but did undergo some legitimate form of editing. Dan Boneh and Trisha Datta have also done similar work around verifying provenance of an image using zero-knowledge proofs.
But, more broadly, any digitally attested piece of information is a candidate for this form of verification: Jason Morton, who is working on the EZKL library (more on this in the following section), calls this “giving the blockchain eyes.” Any signed endpoint (e.g., Cloudflare’s SXG service, third-party notaries) produces digital signatures that can be verified, which could be useful for proving provenance and authenticity from a trusted party.
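Here is a hedged sketch of the attestation pattern using Ed25519 signatures via Python’s `cryptography` library. In the on-chain version, the signature check itself would be expressed as circuit constraints and only a proof of it would be posted; this sketch performs the check directly:

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Attesting party (e.g., a camera sensor or a notary) signs the data.
signer = Ed25519PrivateKey.generate()
reading = b'{"sensor": "cam-01", "image_sha256": "ab12..."}'
signature = signer.sign(reading)
public_key = signer.public_key()

# Consumer: verify the attestation before feeding the data to a model
# or smart contract. In the ZK version this verification happens inside
# the circuit, so the chain sees only a succinct proof of it.
try:
    public_key.verify(signature, reading)
    attested = True
except InvalidSignature:
    attested = False

print("attested input accepted:", attested)
```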
4. Decentralized Inference or Training: You want to perform machine-learning inference or training in a decentralized way, and allow people to submit data to a public model. To do this, you might deploy an already-existing model on-chain, or architect an entirely new network, and use zero-knowledge proofs to compress the model. Jason Morton’s EZKL library is creating a method for ingesting ONNX and JSON files and converting them into ZK-SNARK circuits (a sketch of this flow follows below). A recent demo at ETH Denver showed that this can be used in applications like an image-recognition-based on-chain scavenger hunt: the creator of the game uploads a photo and generates a proof of the image, players upload candidate images, and the verifier checks whether a player’s image sufficiently matches the proof generated by the creator. EZKL can now verify models of up to 100 million parameters, implying that it could be used to verify ImageNet-sized models (which have 60 million parameters) on-chain.
Other teams, like Modulus Labs, are benchmarking different proof systems for on-chain inference; Modulus’s benchmarks covered models of up to 18 million parameters. On the training side, Gensyn is building a decentralized compute system where users can input public data and have their models trained by a decentralized network of nodes, with verification of the correctness of training.
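To ground the inference flow, here is a hedged sketch of how a small model might travel from PyTorch into a SNARK circuit, loosely following EZKL’s ONNX-based workflow. The export step is real PyTorch; the EZKL step names in the comments are assumptions about its API, so consult the library’s documentation for the actual interface:

```python
# Export a small PyTorch model to ONNX, the format EZKL ingests.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
dummy_input = torch.randn(1, 4)
torch.onnx.export(model, dummy_input, "classifier.onnx")

# From here, an EZKL-style workflow (names illustrative, not exact)
# compiles the ONNX graph into a circuit and produces proofs:
#   settings = gen_settings("classifier.onnx")   # pick circuit parameters
#   circuit  = compile_circuit("classifier.onnx", settings)
#   pk, vk   = setup(circuit)                    # proving / verifying keys
#   proof    = prove(circuit, pk, input_data)    # off-chain, expensive
#   verify(vk, proof, public_inputs)             # cheap; can run on-chain
```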
5. Proof of Personhood: You want to verify that someone is a unique person without compromising their privacy. To do this, you would create a method of verification – for example, biometric scanning, or a method for submitting government ID in an encrypted manner. Then you would use zero-knowledge proofs to check that someone has been verified, without revealing any information about that person’s identity, whether that identity is fully identifiable or pseudonymous (like a public key). A toy sketch of this pattern appears after this example.
Worldcoin is doing this through their proof-of-personhood protocol, a way to ensure sybil-resistance by generating unique iris codes for users. Crucially, private keys created for the WorldID (and the other private keys for the crypto wallet created for Worldcoin users) are completely separate from the iris code generated locally by the project’s eye-scanning orb. This separation completely decouples biometric identifiers from any form of users’ keys that could be attributable to a person. Worldcoin also permits applications to embed an SDK that allows users to log in with their WorldID, and it leverages zero-knowledge proofs for privacy by allowing an application to check that a person has a WorldID without enabling individual user tracking (for more detail, see this blogpost).
This example is a way of combating weaker, more malicious forms of artificial intelligence with the privacy-preserving properties of zero-knowledge proofs, so it’s quite different from the other examples listed above (e.g., proving that you are a real human, not a bot, without revealing any information about yourself).
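Here is that toy sketch of the nullifier pattern used by proof-of-personhood systems built on constructions like Semaphore (which WorldID uses). It is purely illustrative and hash-based, with no actual zero-knowledge component:

```python
import hashlib

def nullifier(identity_secret: bytes, app_id: bytes) -> str:
    # A one-way, per-application identifier: the same person always maps
    # to the same value within one app (blocking sybils), but values
    # can't be linked across apps or inverted to recover the identity.
    return hashlib.sha256(identity_secret + b"|" + app_id).hexdigest()

seen: set[str] = set()

def register(identity_secret: bytes, app_id: bytes) -> bool:
    # In a real system the user also supplies a ZK proof that
    # `identity_secret` belongs to a verified, unique human.
    n = nullifier(identity_secret, app_id)
    if n in seen:
        return False  # second registration by the same person: rejected
    seen.add(n)
    return True

alice = b"alice-identity-secret"
print(register(alice, b"my-dapp"))  # True
print(register(alice, b"my-dapp"))  # False: sybil attempt detected
```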
Breakthroughs in proving systems that implement SNARKs have been key drivers in putting many machine learning models on-chain. Some teams are making custom circuits in existing architectures (including Plonk, Plonky2, Air, and more). On the custom circuit side, Halo 2 has become a popular backend, used both by Daniel Kang et al. in their work and by Jason Morton’s EZKL project. Halo 2’s prover times are quasilinear, proof sizes are usually just a few kilobytes, and verifier times are constant. Perhaps more importantly, Halo 2 has strong developer tooling. Other teams, like Risc Zero, are aiming for a generalized VM strategy. And others are creating custom frameworks using Justin Thaler’s super-efficient proof systems based on the sum-check protocol.
Proof generation and verifier times depend, in absolute terms, on the hardware generating and checking the proofs, as well as on the size of the circuit being proven. But the crucial thing to note here is that regardless of the program being represented, the proof size will always be relatively small, so the burden on the verifier checking the proof is constrained. There are, however, some subtleties here: for proof systems like Plonky2, which use a FRI-based commitment scheme, proof size may increase (unless the proof is wrapped at the end in a pairing-based SNARK like Plonk or Groth16, whose proofs don’t grow with the complexity of the statement being proven).
The implication here for machine learning models is that once you have designed a proof system that accurately represents a model, the cost of actually verifying outputs will be quite cheap. The considerations developers have to weigh most carefully are prover time and memory: representing models in a way that they can be proven relatively quickly, with proof sizes ideally around a few kilobytes. To prove the correct execution of machine learning models in zero knowledge, you need to encode the model architecture (layers, nodes, and activation functions), parameters, constraints, and matrix multiplication operations, and represent them as circuits. This involves breaking these properties down into arithmetic operations that can be performed over a finite field.
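To make that encoding concrete, here is a toy sketch of a single dense layer expressed as arithmetic over a prime field, with fixed-point scaling standing in for floating point. The field size and scale factor are arbitrary illustrative choices; real circuits use fields of roughly 254 bits and handle rescaling and comparisons with dedicated constraint gadgets:

```python
P = 2**31 - 1    # small prime field for illustration only
SCALE = 2**8     # fixed-point scale factor

def to_field(x: float) -> int:
    # Encode a real number as a field element via fixed-point scaling.
    return round(x * SCALE) % P

def dense_layer(weights, biases, inputs):
    # y_j = relu(sum_i w_ji * x_i + b_j), entirely in field arithmetic.
    # Products of two scaled values carry SCALE**2, so the bias is
    # rescaled by SCALE to match; a real circuit would rescale outputs.
    outputs = []
    for w_row, b in zip(weights, biases):
        acc = (b * SCALE) % P
        for w, x in zip(w_row, inputs):
            acc = (acc + w * x) % P
        # ReLU is non-arithmetic: in a circuit it costs extra constraints
        # (range checks / bit decomposition). Here, treat the upper half
        # of the field as "negative" and clamp it to zero.
        outputs.append(acc if acc < P // 2 else 0)
    return outputs

w = [[to_field(0.5), to_field(-0.25)]]   # one output neuron, two inputs
b = [to_field(0.1)]
x = [to_field(1.0), to_field(2.0)]
print(dense_layer(w, b, x))  # ~0.1 * SCALE**2, i.e. 0.5*1 - 0.25*2 + 0.1
```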
The area is still nascent, and accuracy and fidelity may suffer in the process of converting a model into a circuit. When a model is represented as an arithmetic circuit, the aforementioned model parameters, constraints, and matrix multiplication operations may need to be approximated and simplified. And when arithmetic operations are encoded as elements in the proof’s finite field, some precision might be lost (or the cost to generate a proof without these optimizations with current zero-knowledge frameworks would be untenably high). Additionally, the parameters and activations of machine learning models are typically encoded as 32-bit floating-point numbers, but zero-knowledge proofs today can’t represent 32-bit floating point operations in the necessary arithmetic circuit format without massive overheads. As a result, developers may choose to use quantized machine learning models, whose 32-bit floating-point weights have been converted to 8-bit integers. These types of models lend themselves to representation as zero-knowledge proofs, but the model being verified might be a crude approximation of the higher-quality initial model.
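Here is a small illustration of that float32-to-int8 conversion and the rounding error it introduces. The scheme shown (symmetric, per-tensor) is one common choice among several:

```python
import numpy as np

rng = np.random.default_rng(0)
weights = rng.standard_normal(6).astype(np.float32)   # float32 weights

scale = np.abs(weights).max() / 127.0                 # map into int8 range
q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
dequantized = q.astype(np.float32) * scale            # what the circuit "sees"

print("original:   ", weights)
print("dequantized:", dequantized)
print("max error:  ", np.abs(weights - dequantized).max())
```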
At this stage, it’s admittedly a game of catch-up: as zero-knowledge proofs become more optimized, machine learning models grow in complexity. There are a number of promising areas for optimization already: proof recursion could reduce overall proof size by allowing proofs to be used as inputs for the next proof, unlocking proof compression. There are emerging frameworks too, like Linear A’s fork of Apache’s Tensor Virtual Machine (TVM), which provides a transpiler for converting floating-point numbers into zero-knowledge-friendly integer representations. And finally, we at a16z crypto are optimistic that future work will make it much more reasonable to represent 32-bit integers in SNARKs.
Zero-knowledge proofs scale through compression: SNARKs allow you to take an enormously complex system (a virtual machine, a machine learning model) and mathematically represent it so that the cost of verifying it is less than the cost of running it. Machine learning, on the other hand, scales through expansion: models today get better with more data, parameters, and GPUs/TPUs involved in the training and inference process. Centralized companies can run servers at essentially unbounded scale, charging a monthly fee for API calls to cover their costs of operation.
The economic realities of blockchain networks operate almost in the inverse: developers are encouraged to optimize their code to make it computationally feasible (and inexpensive) to run on-chain. This asymmetry has a tremendous benefit: it has created an environment where proof systems need to become more efficient. We should be pushing for ways to demand the same benefits blockchains provide – namely, verifiable ownership and a shared notion of truth – in machine learning as well.
While blockchains have incentivized the optimization of zk-SNARKs, every field of computing will benefit.
Acknowledgements: Justin Thaler, Dan Boneh, Guy Wuollet, Sam Ragsdale, Ali Yahya, Chris Dixon, Eddy Lazzarin, Tim Roughgarden, Robert Hackett, Tim Sullivan, Jason Morton, Peiyuan Liao, Tarun Chitra, Brian Retford, Daniel Kang, Yi Sun, Anna Rose, Modulus Labs, DC Builder.
Elena Burger is a deal partner at a16z crypto, with a focus on games, NFTs, web3 media, and decentralized infrastructure. Prior to joining the team, she spent four years as an equities analyst at Gilder, Gagnon, Howe, and Co. She has a Bachelor’s degree from Barnard College, Columbia University, where she majored in history.
The views expressed here are those of the individual AH Capital Management, L.L.C. (“a16z”) personnel quoted and are not the views of a16z or its affiliates. Certain information contained in here has been obtained from third-party sources, including from portfolio companies of funds managed by a16z. While taken from sources believed to be reliable, a16z has not independently verified such information and makes no representations about the current or enduring accuracy of the information or its appropriateness for a given situation. In addition, this content may include third-party advertisements; a16z has not reviewed such advertisements and does not endorse any advertising content contained therein.
This content is provided for informational purposes only, and should not be relied upon as legal, business, investment, or tax advice. You should consult your own advisers as to those matters. References to any securities or digital assets are for illustrative purposes only, and do not constitute an investment recommendation or offer to provide investment advisory services. Furthermore, this content is not directed at nor intended for use by any investors or prospective investors, and may not under any circumstances be relied upon when making a decision to invest in any fund managed by a16z. (An offering to invest in an a16z fund will be made only by the private placement memorandum, subscription agreement, and other relevant documentation of any such fund and should be read in their entirety.) Any investments or portfolio companies mentioned, referred to, or described are not representative of all investments in vehicles managed by a16z, and there can be no assurance that the investments will be profitable or that other investments made in the future will have similar characteristics or results. A list of investments made by funds managed by Andreessen Horowitz (excluding investments for which the issuer has not provided permission for a16z to disclose publicly as well as unannounced investments in publicly traded digital assets) is available at https://a16z.com/investments/.
Charts and graphs provided within are for informational purposes solely and should not be relied upon when making any investment decision. Past performance is not indicative of future results. The content speaks only as of the date indicated. Any projections, estimates, forecasts, targets, prospects, and/or opinions expressed in these materials are subject to change without notice and may differ or be contrary to opinions expressed by others. Please see https://a16z.com/disclosures for additional important information.