Worldcoin Use-Case
- IrisCode self-custody and upgradeability.
- Making the Orb trustless: provide proof that the fraud filters were applied.
Deep learning networks
layer | output shape | #parameters | #ops
-------------------- | --------------- | --------------- | ---------------
conv 32x5x5x3 | (116, 76, 32) | 2400 | 21158400
max-pool | (58, 38, 32) | 0 | 282112
relu | (58, 38, 32) | 0 | 70528
conv 32x5x5x32 | (54, 34, 32) | 25600 | 47001600
max-pool | (27, 17, 32) | 0 | 58752
relu | (27, 17, 32) | 0 | 14688
flatten | (14688,) | 0 | 0
full 1000x14688 | (1000,) | 14689000 | 14688000
relu | (1000,) | 0 | 1000
full 5x1000 | (5,) | 5005 | 5000
normalize | (5,) | 0 | 6
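For reference, the table's shapes, parameter counts, and op counts follow from simple arithmetic. A minimal Rust sketch that reproduces the key rows (the 120×80×3 input shape is inferred from the first row; valid no-padding convolutions and 2×2 max-pooling are assumed, and helper names are illustrative):

```rust
// Shape/parameter/op counts for the layers in the table above.
fn conv(h: usize, w: usize, c_in: usize, k: usize, c_out: usize) -> ((usize, usize, usize), usize, usize) {
    let (oh, ow) = (h - k + 1, w - k + 1);    // valid convolution shrinks by k-1
    let params = c_out * k * k * c_in;        // one k×k×c_in filter per output channel
    let ops = oh * ow * c_out * k * k * c_in; // multiplies per output element
    ((oh, ow, c_out), params, ops)
}

fn main() {
    // conv 32x5x5x3 on a (120, 80, 3) input
    let ((h, w, c), p, o) = conv(120, 80, 3, 5, 32);
    println!("conv     ({h}, {w}, {c})  {p} params  {o} ops"); // (116, 76, 32), 2400, 21158400

    // max-pool 2x2: no parameters, about one op per input element
    println!("max-pool ({}, {}, {c})  0 params  {} ops", h / 2, w / 2, h * w * c); // 282112 ops

    // relu: one comparison per element
    println!("relu     ({}, {}, {c})  0 params  {} ops", h / 2, w / 2, (h / 2) * (w / 2) * c); // 70528 ops

    // fully connected 1000x14688: a weight per (input, output) pair plus a bias per output
    let (n_in, n_out) = (14688usize, 1000usize);
    println!("full     ({n_out},)  {} params  {} ops", n_in * n_out + n_out, n_in * n_out); // 14689000, 14688000
}
```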
- Linear operations
  - Matrix multiplications: A⋅x
  - Convolutions
  - Biases: x+b
- Pooling
  - Max pooling: maxᵢ xᵢ
  - Sum/Avg pooling: ∑ᵢ xᵢ
- Activation functions:
  - ReLU: max(0, x)
  - PReLU: max(α⋅x, x) with 0 ≤ α < 1
  - Logistic: (1 + exp(−x))⁻¹
  - Polynomial: x² + x
- Argmax: argmaxᵢ xᵢ
- Softmax: exp(xᵢ), rescaled such that ∥x∥₁ = 1
- Normalize: rescale such that the 2-norm ∥x∥₂ = 1.
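In a circuit the linear operations collapse into large multiply-accumulate sums; the cost concentrates in the non-linear ops, each of which needs comparisons or lookups. A minimal Rust sketch of those non-linear ops over i64 fixed-point values (names are illustrative; only PReLU shows the fixed-point renormalization):

```rust
fn relu(x: i64) -> i64 {
    x.max(0)
}

// α is itself a 32.32 fixed-point value, so the product α⋅x needs the
// same 2³² renormalization as any other fixed-point multiplication.
fn prelu(x: i64, alpha_fix: i64) -> i64 {
    if x >= 0 { x } else { (((alpha_fix as i128) * (x as i128)) >> 32) as i64 }
}

fn max_pool(window: &[i64]) -> i64 {
    window.iter().copied().max().expect("non-empty window")
}

// Index of the largest element (ties resolve to the last occurrence).
fn argmax(xs: &[i64]) -> usize {
    (0..xs.len()).max_by_key(|&i| xs[i]).expect("non-empty input")
}

fn main() {
    assert_eq!(relu(-5), 0);
    assert_eq!(max_pool(&[-3, 7, -1, 5]), 7);
    assert_eq!(argmax(&[1, 9, 4]), 1);
    // α = 0.5 and x = -1.0 in 32.32 fixed point give -0.5
    assert_eq!(prelu(-(1i64 << 32), 1i64 << 31), -(1i64 << 31));
}
```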
Relevant ML evaluation work
Generally: lots of similarities with embedded implementations: use fixed point to avoid floating-point math, the model is held constant, and precomputation is possible.
Differences: fixed-point numbers in ZK can be large (very, very large in some proof systems), simple comparisons are expensive, and inverses are trivial (i.e., the ZK development trade-offs we are all familiar with).
Prior ZK-ML work
- Justin Thaler (2013). "Time-Optimal Interactive Proofs for Circuit Evaluation" https://eprint.iacr.org/2013/351
- Pengtao Xie, Misha Bilenko, Tom Finley, Ran Gilad-Bachrach, Kristin Lauter, Michael Naehrig (2014). "Crypto-Nets: Neural Networks over Encrypted Data" https://arxiv.org/abs/1412.6181
- Nathan Dowlin, Ran Gilad-Bachrach, Kim Laine, Kristin Lauter, Michael Naehrig, John Wernsing (2016). "CryptoNets: Applying Neural Networks to Encrypted Data with High Throughput and Accuracy" https://www.microsoft.com/en-us/research/publication/cryptonets-applying-neural-networks-to-encrypted-data-with-high-throughput-and-accuracy/
- Zahra Ghodsi, Tianyu Gu, Siddharth Garg (2017). "SafetyNets: Verifiable Execution of Deep Neural Networks on an Untrusted Cloud" https://arxiv.org/abs/1706.10268
- Payman Mohassel and Yupeng Zhang (2017). "SecureML: A System for Scalable Privacy-Preserving Machine Learning" https://eprint.iacr.org/2017/396
- Jian Liu, Mika Juuti, Yao Lu, and N. Asokan (2017). "Oblivious Neural Network Predictions via MiniONN transformations" https://eprint.iacr.org/2017/452
- Seunghwa Lee, Hankyung Ko, Jihye Kim, and Hyunok Oh (2020). "vCNN: Verifiable Convolutional Neural Network based on zk-SNARKs" https://eprint.iacr.org/2020/584
- Ramy E. Ali, Jinhyun So, A. Salman Avestimehr (2020). "On Polynomial Approximations for Privacy-Preserving and Verifiable ReLU Networks" https://arxiv.org/abs/2011.05530
- Boyuan Feng, Lianke Qin, Zhenfei Zhang, Yufei Ding, and Shumo Chu (2021). "ZEN: An Optimizing Compiler for Verifiable, Zero-Knowledge Neural Network Inferences" https://eprint.iacr.org/2021/087
- Tianyi Liu, Xiang Xie, and Yupeng Zhang (2021). "zkCNN: Zero Knowledge Proofs for Convolutional Neural Network Predictions and Accuracy" https://eprint.iacr.org/2021/673
- Chenkai Weng, Kang Yang, Xiang Xie, Jonathan Katz, and Xiao Wang (2021). "Mystique: Efficient Conversions for Zero-Knowledge Proofs with Applications to Machine Learning" https://eprint.iacr.org/2021/730 (slides)
- @liaopeiyuan (2021). "zk-ml/demo" https://github.com/zk-ml/demo
- @socathie (2022). "CircomLib-ML" https://github.com/socathie/circomlib-ml (GitCoin Grant Proposal)
- @horacepan @sunfishstanford @henripal (2022). "ZK-MNIST" https://github.com/0xZKML/zk-mnist https://0xparc.org/blog/zk-mnist
- @recmo (2022). "proto-neural-zkp" https://github.com/worldcoin/proto-neural-zkp
Fixed point
Represent a real number a by the integer â = ⌊2³²⋅a⌋.
A product then carries a doubled scale, â⋅b̂ ≈ 2⁶⁴⋅(a⋅b), so it must be renormalized: a⋅b is represented by ⌊â⋅b̂ / 2³²⌋.
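A minimal Rust sketch of this encoding and the renormalizing multiplication, using i128 for the intermediate product (illustrative only; in a circuit the shift becomes a range-checked division, which is where the cost lives):

```rust
const FRAC_BITS: u32 = 32;

// â = ⌊2³²⋅a⌋
fn to_fix(a: f64) -> i64 {
    (a * (1u64 << FRAC_BITS) as f64).floor() as i64
}

// â⋅b̂ carries a 2⁶⁴ scale, so shift back down by 2³² to represent a⋅b.
fn mul_fix(a: i64, b: i64) -> i64 {
    (((a as i128) * (b as i128)) >> FRAC_BITS) as i64
}

fn main() {
    let (a, b) = (to_fix(1.5), to_fix(-2.25));
    assert_eq!(mul_fix(a, b), to_fix(-3.375)); // 1.5 × -2.25 = -3.375
}
```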
Strategy ideas
- Low-level, optimized implementations of matrix multiplications and convolutions.
- Recursion around layers: vectors passed between layers are smallish, so they can be committed to.
- Target 16-bit fixed point so we can use PLookup for ReLU.
- Dot products without renormalization.
- Flip max-pool and ReLU 😁 (they commute since max is monotone; see the sketch after this list).
- Clever algorithms to compute convolutions (from simple tricks to FFT-based).
- Efficient non-zk convolution algorithms: https://arxiv.org/abs/1903.01521 http://people.ece.umn.edu/users/parhi/SLIDES/chap8.pdf
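The max-pool/ReLU flip works because max is monotone, so the two orders compute the same values and whichever side is cheaper in the circuit can be picked; pooling first means 4× fewer ReLUs for a 2×2 window. A quick Rust check of the identity (function names are ours):

```rust
fn pool_then_relu(window: &[i64]) -> i64 {
    window.iter().copied().max().unwrap().max(0) // one ReLU per window
}

fn relu_then_pool(window: &[i64]) -> i64 {
    window.iter().map(|&x| x.max(0)).max().unwrap() // one ReLU per element
}

fn main() {
    for window in [[-3i64, 7, -1, 5], [-4, -2, -9, -1], [0, 0, 3, 2]] {
        assert_eq!(pool_then_relu(&window), relu_then_pool(&window));
    }
}
```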
From: https://2π.com/22/zk-ml/#deep-learning-networks