
AI research directions: what are the prospects for adversarial attack research?

2019-12-03

This is a great question. Let me summarize the open problems in this area.

The lack of deployed industrial applications was covered in another answer, so here I instead want to point out some research problems through which adversarial examples could find practical use. [If you're interested, I'd love to collaborate!]

Problem 1: Why do we need adversarial robustness?

Adversarially robust neural networks appear to be more interpretable.

There is an ICLR 2019 paper that uses l0-norm adversarial examples for interpretability; see also:

Tianyuan Zhang, Zhanxing Zhu. Interpreting Adversarially Trained Convolutional Neural Networks. 36th International Conference on Machine Learning (ICML), 2019.

Santurkar S, Tsipras D, Tran B, et al. Computer Vision with a Single (Robust) Classifier[J]. arXiv preprint arXiv:1906.09453, 2019.

The robust/non-robust feature framework defined by Madry's group is very interesting. Stepping back to a linear classifier, l_inf adversarial training is equivalent to the lasso, so it can serve as a form of feature selection.
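To make the linear case concrete, here is the standard one-line derivation (a sketch, assuming labels y ∈ {±1} and a loss ℓ that is non-increasing in the margin):

$$\max_{\|\delta\|_\infty \le \epsilon} \ell\!\left(y\, w^\top (x+\delta)\right) \;=\; \ell\!\left(y\, w^\top x - \epsilon \|w\|_1\right),$$

since by Hölder's inequality the inner maximum is attained at $\delta = -\epsilon\, y\, \mathrm{sign}(w)$. So l_inf adversarial training of a linear model is just ordinary training with an l1 penalty on the margin, which is exactly the lasso-style term that induces feature selection.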

So could adversarial examples be used for false discovery rate control, or for causal discovery?

Bernhard Schölkopf's review Causality for Machine Learning, posted on arXiv last week, asks whether adversarial vulnerability might be a causality problem. [This was already raised in an ICML 2017 talk; why has nobody worked on it... probably because there is no application...]

One workable application along these lines: can adversarial examples help us with domain adaptation? Seen from this angle, the split batch-norm trick in Xie C, Tan M, Gong B, et al. Adversarial Examples Improve Image Recognition[J]. arXiv preprint arXiv:1911.09665, 2019 is a standard domain adaptation trick [confirmed with the AdaBN author Naiyan Wang]; a sketch follows.
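A minimal sketch of that split-BN idea in PyTorch (my own illustration, not the authors' code): keep one set of batch-norm statistics for clean inputs and another for adversarial inputs, while every other weight is shared.

```python
import torch.nn as nn

class DualBN2d(nn.Module):
    """BatchNorm with separate statistics (and affine parameters) for clean
    vs. adversarial minibatches; all other network weights are shared."""
    def __init__(self, num_features):
        super().__init__()
        self.bn_clean = nn.BatchNorm2d(num_features)
        self.bn_adv = nn.BatchNorm2d(num_features)

    def forward(self, x, adversarial=False):
        # Route the batch to the BN of its "domain", exactly like routing
        # source/target batches in AdaBN-style domain adaptation.
        return self.bn_adv(x) if adversarial else self.bn_clean(x)
```

Seen this way, clean and adversarial images are just two domains, and the two BNs absorb the domain gap in the feature statistics.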

Likewise, this year's NeurIPS feature-scattering adversarial training (https://arxiv.org/abs/1907.10764) is essentially the transfer-learning trick of penalizing an IPM (integral probability metric) between the features of two domains, as in the sketch below.
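Feature scattering itself uses an optimal-transport distance; as a simpler illustration of an IPM-style penalty between two feature "domains", here is a linear-kernel MMD (my own minimal sketch):

```python
import torch

def linear_mmd(feat_clean, feat_adv):
    """Linear-kernel MMD, the simplest member of the IPM family: squared
    distance between the mean feature embeddings of the two 'domains'
    (clean vs. adversarial feature batches, each of shape [batch, dim])."""
    return (feat_clean.mean(dim=0) - feat_adv.mean(dim=0)).pow(2).sum()
```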

[P.S.: feature-scattering adversarial training can be broken by switching the attack; see the discussion thread of this ICLR submission: https://openreview.net/forum?id=Syejj0NYvr&noteId=Syejj0NYvr]

In fact, the domain adaptation view of adversarial training was proposed quite early:

Song C, He K, Wang L, et al. Improving the Generalization of Adversarial Training with Domain Adaptation[J]. arXiv preprint arXiv:1810.00740, 2018. (ICLR 2019)

This year's ICLR also has a submission with 8/8/8 reviews that uses causality to handle adversarial examples:

https://openreview.net/forum?id=Hkxvl0EtDH

The current state of the art is most likely:

Zhang H, Yu Y, Jiao J, et al. Theoretically principled trade-off between robustness and accuracy[J]. arXiv preprint arXiv:1901.08573, 2019.
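This is the TRADES objective: trade the natural loss off against a KL term that pulls predictions on perturbed inputs toward predictions on clean inputs. A simplified PyTorch sketch of the loss (the inner PGD search that produces x_adv by maximizing the same KL term is omitted; beta is the trade-off weight):

```python
import torch.nn.functional as F

def trades_loss(model, x, x_adv, y, beta=6.0):
    """Simplified TRADES objective: CE on clean inputs plus
    beta * KL(p(x) || p(x_adv)); x_adv is assumed to come from an inner
    maximization of that same KL term (omitted here)."""
    logits, logits_adv = model(x), model(x_adv)
    natural = F.cross_entropy(logits, y)
    robust = F.kl_div(F.log_softmax(logits_adv, dim=1),
                      F.softmax(logits, dim=1),
                      reduction='batchmean')  # = KL(p(x) || p(x_adv))
    return natural + beta * robust
```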

Problem 2: Where do the biggest difficulties in achieving adversarial robustness lie?

Difficulty 1: overfitting

Both experiments and theory find that adversarial training overfits easily:

Schmidt L, Santurkar S, Tsipras D, et al. Adversarially robust generalization requires more data[C]//Advances in Neural Information Processing Systems. 2018: 5014-5026.

One current idea is to fix this with semi-supervised training; four papers on this idea were posted to arXiv within three days, so I will let you explore them yourself.

Of course, theoretically Madry's model is too much of a toy. Why adversarial training overfits so easily (Madry's explanation is that the robust model is restricted to robust features, but shouldn't a smaller feature space make generalization easier?), and whether the blame lies with the training algorithm, the data, or the model, remains an open question.

People also ask what the lower bounds look like; there have not been many attempts here either:

Bhagoji A N, Cullina D, Mittal P. Lower Bounds on Adversarial Robustness from Optimal Transport[C]//Advances in Neural Information Processing Systems. 2019: 7496-7508.

Lower Bounds for Adversarially Robust PAC Learning

On characterizing generalization bounds for adversarial robustness:

Yin D, Ramchandran K, Bartlett P. Rademacher complexity for adversarially robust generalization[J]. arXiv preprint arXiv:1810.11914, 2018.

Improved Sample Complexities for Deep Networks and Robust Classification via an All-Layer Margin

And of course there is this year's COLT best student paper, though I admit I did not fully understand it...

VC classes are Adversarially Robustly Learnable, but Only Improperly

Omar Montasser, Steve Hanneke, Nathan Srebro

There is also theory arguing that augmenting with off-manifold data (adversarial examples being one instance) can hurt generalization. [Why data augmentation helps generalization is itself an interesting question.]

https://openreview.net/pdf?id=ByxduJBtPB

Difficulty 2: training speed

Because of the tendency to overfit, plus the fact that the network needs to memorize all the adversarial examples, the capacity has to be large, so current work uses 10x-wide ResNets; the adversarial attack itself is not fast either, so the whole pipeline is very slow (see the sketch below). As a result, everyone outside the big companies is still playing on CIFAR. Making it easy to scale to large datasets, and pinning down what the ImageNet benchmark numbers actually are [reported numbers are currently all over the place], would both be very meaningful.
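The cost structure is visible in a bare-bones Madry-style PGD training sketch (my own simplification): each of the K attack steps costs a full forward+backward pass, so one robust minibatch is roughly K+1 times the price of a clean one.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """l_inf PGD: every loop iteration is one extra forward+backward pass,
    which is where the ~10x cost of adversarial training comes from."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()  # gradient ascent step
        x_adv = x + (x_adv - x).clamp(-eps, eps)      # project onto the l_inf ball
        x_adv = x_adv.clamp(0, 1).detach()            # stay in valid pixel range
    return x_adv

# One robust training step: attack first, then descend on the adversarial loss.
# x_adv = pgd_attack(model, x, y); F.cross_entropy(model(x_adv), y).backward()
```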

I have made many attempts on this front myself.

Of course, some argue that it simply cannot be accelerated...

Bubeck S, Price E, Razenshteyn I. Adversarial examples from computational constraints[J]. arXiv preprint arXiv:1805.10204, 2018.

Some directions:

Using differential-privacy-style randomization for certification:

Cohen J M, Rosenfeld E, Kolter J Z. Certified adversarial robustness via randomized smoothing[J]. arXiv preprint arXiv:1902.02918, 2019.
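The recipe in Cohen et al. is conceptually simple enough to fit in a few lines. A bare-bones sketch (my simplification: the paper replaces the raw Monte Carlo estimate of pA with a Clopper-Pearson lower confidence bound, and uses separate samples for selection and estimation):

```python
import torch
from scipy.stats import norm

@torch.no_grad()
def certify(model, x, sigma=0.25, n=1000, num_classes=10):
    """Randomized smoothing sketch: classify by majority vote over Gaussian-
    noised copies of x. By Cohen et al.'s theorem (with the pB = 1 - pA
    simplification), the smoothed classifier is robust within l2 radius
    sigma * Phi^{-1}(pA)."""
    noisy = x.unsqueeze(0) + sigma * torch.randn(n, *x.shape)
    votes = model(noisy).argmax(dim=1).bincount(minlength=num_classes)
    top = votes.argmax().item()
    p_a = min(votes[top].item() / n, 1 - 1e-6)  # paper uses a confidence bound here
    radius = sigma * norm.ppf(p_a) if p_a > 0.5 else 0.0  # abstain if no clear majority
    return top, radius
```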

... (more to be added)

Semidefinite programming can also be used to certify that a neural network is safe against a class of adversaries.

Convergence of adversarial training:

Wang Y, Ma X, Bailey J, et al. On the Convergence and Robustness of Adversarial Training[C]//International Conference on Machine Learning. 2019: 6586-6595.

Gao R, Cai T, Li H, et al. Convergence of Adversarial Training in Overparametrized Neural Networks[C]//Advances in Neural Information Processing Systems. 2019: 13009-13020.

Also, Tuo Zhao's Inductive Bias of Gradient Descent based Adversarial Training on Separable Data.

The relation between adversarial examples and the curse of dimensionality

Is it because the data dimension is too high, so the curse of dimensionality kicks in? Picture a high-dimensional cube as a hedgehog, with almost all of its volume out near the spiky corners: wouldn't such data be especially easy to attack? A back-of-the-envelope calculation below makes this concrete.
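A quick illustration (my own numbers): a perturbation that is imperceptible per pixel in l_inf is enormous in l2 once the dimension is high, because the cube's corner sits sqrt(d) away from its center.

```python
import math

d = 3 * 224 * 224          # ImageNet input dimension
eps = 8 / 255              # a standard "imperceptible" l_inf budget
print(eps * math.sqrt(d))  # l2 length of the corner perturbation: about 12.2
```

Compare that to typical l2 attack budgets of roughly 0.5 to 3 on the same images: the high-dimensional geometry hands the adversary an enormous amount of room.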

DRO

Why would this be related to domain adaptation?

Because adversarial training is connected to DRO (distributionally robust optimization):

Sinha A, Namkoong H, Duchi J C. Certifying Some Distributional Robustness with Principled Adversarial Training[C]. Sixth International Conference on Learning Representations (ICLR), 2018.
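The connection in one formula: for a fixed dual multiplier γ ≥ 0 (the Lagrangian relaxation used in the paper), the worst-case risk over a Wasserstein ball is bounded by a per-example adversarial objective,

$$\sup_{P:\,W_c(P,P_0)\le\rho} \mathbb{E}_P[\ell(\theta;Z)] \;\le\; \gamma\rho + \mathbb{E}_{Z\sim P_0}\Big[\,\sup_{z}\big(\ell(\theta;z)-\gamma\,c(z,Z)\big)\Big],$$

with equality at the optimal γ. The inner supremum is an adversarial search penalized by the transport cost c instead of constrained to a norm ball, so adversarial training is exactly the γ-penalized form of DRO, and distances between distributions are precisely what domain adaptation manipulates.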

There are of course also applications in NLP:

Distributionally robust language modeling. Yonatan Oren, Shiori Sagawa, Tatsunori Hashimoto, Percy Liang. Empirical Methods in Natural Language Processing (EMNLP), 2019.

For this direction, take a look at our school's new assistant professor this year: https://thashim.github.io/

To wrap up: whether a field still has things worth doing is not something you can tell from glancing at a paper. If you only skim two or three papers, of course you will conclude the field is shallow, especially its CVPR papers; but if you dig deeply, there is still a lot to be done.