
AI research directions: what are the prospects of adversarial attack research?

2019-12-03

This is a great question, so let me summarize the open problems in this field.

The lack of industrial deployment was covered in another answer; here I instead want to list some research problems that could give adversarial examples real application scenarios [if you're interested, let's collaborate!!!]

Question 1: Why do we need adversarial robustness?

Adversarially robust neural networks appear to be more interpretable.

There is an ICLR 2019 paper that uses l0-norm adversarial examples for interpretability.

Tianyuan Zhang, Zhanxing Zhu. Interpreting Adversarially Trained Convolutional Neural Networks. 36th International Conference on Machine Learning (ICML 2019).

Santurkar S, Tsipras D, Tran B, et al. Computer Vision with a Single (Robust) Classifier[J]. arXiv preprint arXiv:1906.09453, 2019.

The robust vs. non-robust feature framework defined by Madry's group is very interesting. Going back to a linear classifier, l∞ adversarial training is equivalent to lasso, so it can act as a form of feature selection.
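To make the lasso connection concrete, here is a one-line derivation. This is my sketch for a generic margin-based loss such as the logistic loss (the precise equivalences in the literature are stated per loss), and it shows where the l1 penalty on the weights comes from:

```latex
% For a linear classifier f(x) = w^\top x and a loss \ell that is
% decreasing in the margin y\,w^\top x, the worst-case l_\infty
% perturbation shrinks the margin by exactly \epsilon \|w\|_1:
\min_{\|\delta\|_\infty \le \epsilon} y\, w^\top (x + \delta)
    = y\, w^\top x - \epsilon \|w\|_1
\;\;\Longrightarrow\;\;
\max_{\|\delta\|_\infty \le \epsilon} \ell\!\left(y\, w^\top (x+\delta)\right)
    = \ell\!\left(y\, w^\top x - \epsilon \|w\|_1\right)
```

So adversarial training on a linear model optimizes the clean loss with an l1 penalty folded into the margin, which is why it behaves like lasso-style feature selection.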

So can adversarial examples be used for false discovery rate control or causal discovery?

Bernhard Schölkopf posted a review on arXiv last week, Causality for Machine Learning, which raises the question of whether adversarial vulnerability is caused by a lack of causality [an ICML 2017 talk already brought this up; why has nobody worked on it... probably because there is no application...]

A related application: can adversarial examples help us with domain adaptation? From this angle, the trick of splitting into two BNs in Xie C, Tan M, Gong B, et al. Adversarial Examples Improve Image Recognition[J]. arXiv preprint arXiv:1911.09665, 2019 is a standard domain adaptation trick [ask the AdaBN author Naiyan Wang]; see the sketch below.
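As a quick illustration of that trick (my sketch, not the paper's code; all names here are made up), clean and adversarial mini-batches are routed through separate BatchNorm layers so each "domain" keeps its own normalization statistics, much like AdaBN-style domain adaptation:

```python
import torch.nn as nn

class DualBatchNorm2d(nn.Module):
    """Keep separate BatchNorm statistics for the clean and
    adversarial 'domains' of a two-BN network."""

    def __init__(self, num_features):
        super().__init__()
        self.bn_clean = nn.BatchNorm2d(num_features)
        self.bn_adv = nn.BatchNorm2d(num_features)

    def forward(self, x, adversarial=False):
        # Route the batch through the BN matching its domain, so clean
        # and adversarial statistics never mix.
        return self.bn_adv(x) if adversarial else self.bn_clean(x)
```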

This includes this year's NeurIPS feature-scattering adversarial training (https://arxiv.org/abs/1907.10764), which is essentially the transfer learning move of penalizing an IPM (integral probability metric) between the features of the two domains.

[P.S. Feature-scattering adversarial training can be broken by switching the attack; see the discussion thread of this ICLR submission: https://openreview.net/forum?id=Syejj0NYvr&noteId=Syejj0NYvr]

Actually, this domain adaptation view was proposed quite early:

Song C, He K, Wang L, et al. Improving the Generalization of Adversarial Training with Domain Adaptation[J]. arXiv preprint arXiv:1810.00740, 2018. (ICLR 2019)

This year's ICLR also has a submission scored 8/8/8 that uses causality for adversarial examples:

https://openreview.net/forum?id=Hkxvl0EtDH

The current SOTA is most likely:

Zhang H, Yu Y, Jiao J, et al. Theoretically principled trade-off between robustness and accuracy[J]. arXiv preprint arXiv:1901.08573, 2019.
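For context, TRADES trades off natural accuracy against a KL term between predictions on clean and perturbed inputs. Here is a minimal sketch of the loss, assuming x_adv has already been generated (in the paper, by PGD on this same KL term); this follows the commonly seen open-source form rather than being the official code:

```python
import torch.nn.functional as F

def trades_loss(model, x, y, x_adv, beta=6.0):
    """Sketch of the TRADES objective: cross-entropy on clean inputs
    plus beta times KL(p_clean || p_adv)."""
    logits = model(x)
    logits_adv = model(x_adv)
    natural = F.cross_entropy(logits, y)
    # F.kl_div(log q, p) computes KL(p || q), so this is
    # KL(clean prediction || adversarial prediction).
    robust = F.kl_div(F.log_softmax(logits_adv, dim=1),
                      F.softmax(logits, dim=1),
                      reduction="batchmean")
    return natural + beta * robust
```

The hyperparameter beta controls the robustness/accuracy trade-off the paper's title refers to; beta=6.0 is just a commonly reported setting, not a canonical constant.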

Question 2: Where are the biggest difficulties in achieving adversarial robustness?

Difficulty 1: overfitting

Both experimentally and theoretically, adversarial training is found to overfit easily:

Schmidt L, Santurkar S, Tsipras D, et al. Adversarially robust generalization requires more data[C]//Advances in Neural Information Processing Systems. 2018: 5014-5026.

One current idea is to use semi-supervised training to fix this; four papers with this idea were posted to arXiv within three days, so please explore them yourself.

Of course, Madry's theoretical model is too toy. Exactly why adversarial training overfits so easily (Madry's explanation is that a robust model can only use robust features, but with a smaller feature space shouldn't it generalize more easily?), and whether the training algorithm, the data, or the model is to blame, remain unsolved mysteries.

At the same time, people ask what the lower bounds look like; there are not many attempts here either:

Bhagoji A N, Cullina D, Mittal P. Lower Bounds on Adversarial Robustness from Optimal Transport[C]//Advances in Neural Information Processing Systems. 2019: 7496-7508.

Lower Bounds for Adversarially Robust PAC Learning

On characterizing generalization bounds for adversarial robustness:

Yin D, Ramchandran K, Bartlett P. Rademacher complexity for adversarially robust generalization[J]. arXiv preprint arXiv:1810.11914, 2018.

Improved Sample Complexities for Deep Networks and Robust Classification via an All-Layer Margin

And of course there is this year's COLT best student paper, although I could not fully understand it....

VC classes are Adversarially Robustly Learnable, but Only Improperly

Omar Montasser, Steve Hanneke, Nathan Srebro

There is also theory arguing that using off-manifold data (adversarial examples being one instance) for data augmentation can hurt generalization [why data augmentation helps generalization is itself an interesting question]:

https://openreview.net/pdf?id=ByxduJBtPB

Difficulty 2: training speed

Because adversarial training overfits easily and the network needs to memorize all the adversarial examples, the capacity has to be large, so people now use 10x-wide ResNets. Adversarial attacks themselves are not fast either, so the whole pipeline is very slow, which means everyone except big companies is still playing on CIFAR. How to let everyone scale easily to large datasets, and what the benchmark numbers on ImageNet should be [reported numbers are all over the place right now], are both meaningful questions. The PGD inner loop sketched below is where most of the time goes.
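A minimal l∞ PGD sketch (my code; the eps/alpha/steps values are common CIFAR defaults, not canonical): each adversarial training step pays `steps` extra forward/backward passes per mini-batch on top of the usual update, which is why the whole thing is roughly an order of magnitude slower than standard training:

```python
import torch

def pgd_attack(model, loss_fn, x, y, eps=8/255, alpha=2/255, steps=10):
    """Projected gradient descent in the l_inf ball of radius eps,
    assuming inputs live in [0, 1]."""
    # Random start inside the eps-ball.
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = loss_fn(model(x_adv), y)             # extra forward pass
        grad = torch.autograd.grad(loss, x_adv)[0]  # extra backward pass
        # Ascend the loss along the gradient sign, then project back
        # into the eps-ball and the valid pixel range.
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()
```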

I have made many attempts at this myself.

Of course, some people argue that it simply cannot be sped up...

Bubeck S, Price E, Razenshteyn I. Adversarial examples from computational constraints[J]. arXiv preprint arXiv:1805.10204, 2018.

Some directions

Using differential privacy to do certification:

Cohen J M, Rosenfeld E, Kolter J Z. Certified adversarial robustness via randomized smoothing[J]. arXiv preprint arXiv:1902.02918, 2019.
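A rough sketch of the randomized smoothing idea in that paper: classify many Gaussian-noised copies of x, take the majority class, and convert its probability into a certified l2 radius sigma * Phi^{-1}(p). This is my illustration only; the paper's actual CERTIFY procedure uses a Clopper-Pearson confidence bound and an abstain rule, which I omit here:

```python
import torch
from scipy.stats import norm

def smoothed_predict(model, x, sigma=0.25, n=1000, num_classes=10):
    """Monte Carlo estimate of the smoothed classifier
    g(x) = argmax_c P(model(x + N(0, sigma^2 I)) = c)."""
    with torch.no_grad():
        noisy = x.unsqueeze(0) + sigma * torch.randn(n, *x.shape)
        preds = model(noisy).argmax(dim=1)
    counts = torch.bincount(preds, minlength=num_classes)
    top = counts.argmax().item()
    p_hat = min(counts[top].item() / n, 1 - 1e-6)  # avoid ppf(1) = inf
    # Certified l2 radius; only meaningful when p_hat > 1/2
    # (otherwise the method should abstain).
    radius = sigma * norm.ppf(p_hat)
    return top, radius
```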

...to be added

Semidefinite programming can also be used to certify that a neural network is safe against a class of adversaries.

Convergence of training

Wang Y, Ma X, Bailey J, et al. On the Convergence and Robustness of Adversarial Training[C]//International Conference on Machine Learning. 2019: 6586-6595.

Gao R, Cai T, Li H, et al. Convergence of Adversarial Training in Overparametrized Neural Networks[C]//Advances in Neural Information Processing Systems. 2019: 13009-13020.

Tuo Zhao (趙拓)'s Inductive Bias of Gradient Descent based Adversarial Training on Separable Data

The relationship between adversarial examples and the curse of dimensionality

Is it because the data dimension is too high? Under the curse of dimensionality, a high-dimensional cube looks like a hedgehog, with almost all of its volume in the spiky corners; would that not make it especially easy to attack adversarially? A quick back-of-the-envelope check of this intuition is below.
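My back-of-the-envelope numbers (using the standard CIFAR-10 l∞ setting as the example): a perturbation that is tiny in every coordinate still travels a long l2 distance when the dimension is large:

```latex
% An l_\infty perturbation of size \epsilon per coordinate has
% l_2 length \epsilon \sqrt{d}, which grows with dimension d.
\|\delta\|_2 = \epsilon \sqrt{d}, \qquad
d = 3 \times 32 \times 32 = 3072,\; \epsilon = 8/255
\;\Longrightarrow\; \|\delta\|_2 \approx 1.74
```

So in input space the attacker gets a large l2 budget essentially for free, even though each pixel barely moves.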

DRO

Why would this be related to domain adaptation?

Because adversarial training is connected to DRO (distributionally robust optimization):

Sinha A, Namkoong H, Duchi J C. Certifying Some Distributional Robustness with Principled Adversarial Training[C]//International Conference on Learning Representations. 2018.
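Concretely, that paper's Wasserstein DRO problem and the Lagrangian relaxation it actually optimizes look roughly like this (my paraphrase of the formulation, not a verbatim quote):

```latex
% Robustness over a Wasserstein ball of radius \rho around the data
% distribution P_0 ...
\min_\theta \; \sup_{P \,:\, W_c(P, P_0) \le \rho} \mathbb{E}_P\big[\ell(\theta; Z)\big]
% ... relaxes, for a fixed penalty \gamma, to a pointwise adversarial
% problem, i.e. adversarial training with cost-penalized perturbations:
\;\rightsquigarrow\;
\min_\theta \; \mathbb{E}_{Z \sim P_0}\Big[\, \sup_{z} \big\{ \ell(\theta; z) - \gamma\, c(z, Z) \big\} \Big]
```

The inner sup is exactly an adversarial example search, which is the sense in which adversarial training certifies some distributional robustness.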

Of course there are also applications in NLP:

Distributionally robust language modeling. Yonatan Oren, Shiori Sagawa, Tatsunori Hashimoto, Percy Liang. Empirical Methods in Natural Language Processing (EMNLP), 2019.

For this direction you can check out my school's new assistant professor this year: https://thashim.github.io/

To wrap up: whether this field still has things worth doing is not something you can tell by glancing at a paper. If you only skim two or three papers you will of course think the field is shallow, especially given the CVPR papers in this area, but dig deeper and there is still a lot to be done.