Worldcoin Use-Case

Deep learning networks

               layer |    output shape |     #parameters |            #ops
-------------------- | --------------- | --------------- | ---------------
       conv 32x5x5x3 |   (116, 76, 32) |            2400 |        21158400
            max-pool |    (58, 38, 32) |               0 |          282112
                relu |    (58, 38, 32) |               0 |           70528
      conv 32x5x5x32 |    (54, 34, 32) |           25600 |        47001600
            max-pool |    (27, 17, 32) |               0 |           58752
                relu |    (27, 17, 32) |               0 |           14688
             flatten |        (14688,) |               0 |               0
     full 1000x14688 |         (1000,) |        14689000 |        14688000
                relu |         (1000,) |               0 |            1000
         full 5x1000 |            (5,) |            5005 |            5000
           normalize |            (5,) |               0 |               6
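
To make the table concrete, here is a minimal sketch (assuming a 120x80x3 input image, valid-padding convolutions, 2x2 max-pooling, and counting one multiply per kernel tap per output element; the final normalize layer is omitted) that reproduces the shapes, parameter counts, and op counts above. These assumptions are inferred from the numbers in the table, not stated in the source.

```python
# Reproduce the layer table: output shape, #parameters, #ops per layer.

def conv(shape, filters, k):
    h, w, c = shape
    out = (h - k + 1, w - k + 1, filters)          # valid padding
    params = filters * k * k * c
    ops = out[0] * out[1] * filters * k * k * c    # one multiply per kernel tap per output
    return out, params, ops

def max_pool(shape):
    h, w, c = shape
    return (h // 2, w // 2, c), 0, h * w * c       # one comparison per input element

def relu(shape):
    n = 1
    for d in shape:
        n *= d
    return shape, 0, n                             # one op per element

def full(shape, units):
    (n,) = shape
    return (units,), units * n + units, units * n  # weights + biases; one multiply per weight

shape = (120, 80, 3)                               # assumed input resolution
layers = [
    ("conv 32x5x5x3",   lambda s: conv(s, 32, 5)),
    ("max-pool",        max_pool),
    ("relu",            relu),
    ("conv 32x5x5x32",  lambda s: conv(s, 32, 5)),
    ("max-pool",        max_pool),
    ("relu",            relu),
    ("flatten",         lambda s: ((s[0] * s[1] * s[2],), 0, 0)),
    ("full 1000x14688", lambda s: full(s, 1000)),
    ("relu",            relu),
    ("full 5x1000",     lambda s: full(s, 5)),
]
for name, layer in layers:
    shape, params, ops = layer(shape)
    print(f"{name:>20} | {str(shape):>15} | {params:>10} | {ops:>10}")
```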

Relevant ML evaluation work

Generally: lots of similarities with embedded implementations: use fixed-point arithmetic to avoid floating-point math, the model is held constant, and precomputation is possible.

Differences: fixed-point numbers in ZK can be large (very, very large in some proof systems), simple comparisons are expensive, and inverses are trivial (i.e. the ZK development trade-offs we are all familiar with).
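
A minimal sketch of that cost asymmetry, using a toy verifier over a prime field (the field choice and helper names are assumptions, not from the source): verifying an inverse takes one multiplication constraint, while verifying a comparison forces a prover-supplied bit decomposition costing roughly one boolean constraint per bit plus a recomposition check.

```python
# BN254 scalar field modulus, used here only as a representative prime.
P = 21888242871839275222246405745257275088548364400416034343698204186575808495617

def check_inverse(x: int, x_inv: int) -> bool:
    # One constraint: x * x_inv == 1 (mod P). The prover supplies x_inv as a hint.
    return (x * x_inv) % P == 1

def check_less_than(a: int, b: int, bits: list[int], n: int = 64) -> bool:
    # The prover supplies the (n+1)-bit decomposition of a - b + 2^n as a hint.
    # Cost: n+1 booleanity constraints, one recomposition constraint, one bit read.
    shifted = (a - b + (1 << n)) % P
    boolean = len(bits) == n + 1 and all(bit in (0, 1) for bit in bits)
    recomposed = sum(bit << i for i, bit in enumerate(bits)) == shifted
    return boolean and recomposed and bits[n] == 0   # a < b iff the top bit is 0

# Example: prove 3 < 10 by exhibiting the bit decomposition of 3 - 10 + 2^64.
hint = [(3 - 10 + 2**64) >> i & 1 for i in range(65)]
assert check_less_than(3, 10, hint)
assert check_inverse(5, pow(5, -1, P))
```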

Prior ZK-ML work

Fixed point

$\hat{a} = \lfloor 2^{32} \cdot a \rfloor$

$\widehat{a \cdot b} = \left\lfloor 2^{-32} \cdot \hat{a} \cdot \hat{b} \right\rfloor$
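
A minimal sketch of this encoding in plain Python integers (in an actual proof system the same arithmetic lives in a large prime field): a real value is scaled up by $2^{32}$, and a product of two encoded values carries a $2^{64}$ factor, so it is rescaled once by $2^{-32}$.

```python
import math

SCALE = 2**32

def encode(x: float) -> int:
    # \hat{x} = floor(2^32 * x)
    return math.floor(x * SCALE)

def decode(a: int) -> float:
    return a / SCALE

def fp_mul(a: int, b: int) -> int:
    # \widehat{x*y} = floor(2^-32 * \hat{x} * \hat{y})
    return (a * b) >> 32

x_hat, y_hat = encode(3.25), encode(1.5)
print(decode(fp_mul(x_hat, y_hat)))  # 4.875
```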

Strategy ideas


From: https://2π.com/22/zk-ml/#deep-learning-networks