RT @quocleix@twitter.com

A surprising result: We found that smooth activation functions are better than ReLU for adversarial training and can lead to substantial improvements in adversarial robustness.
arxiv.org/abs/2006.14536

🐦🔗: twitter.com/quocleix/status/12
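The paper keeps the adversarial-training recipe fixed and only swaps ReLU for a smooth activation. Below is a minimal PyTorch sketch of that idea, not the authors' code: a tiny illustrative network trained with standard L-inf PGD, where the activation is the only thing varied. The architecture, epsilon, and step sizes are my own assumptions for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def make_net(activation: nn.Module) -> nn.Sequential:
    # Tiny MLP for illustration; the activation is the only thing we vary.
    return nn.Sequential(
        nn.Flatten(),
        nn.Linear(28 * 28, 256), activation,
        nn.Linear(256, 10),
    )

relu_net = make_net(nn.ReLU())    # baseline: piecewise linear, kink at 0
smooth_net = make_net(nn.SiLU())  # smooth alternative (Swish); softplus or GELU also work

def pgd_attack(model, x, y, eps=0.3, alpha=0.01, steps=10):
    # Standard L-inf PGD; smooth activations give better-behaved gradients here.
    x_adv = x + torch.empty_like(x).uniform_(-eps, eps)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv + alpha * grad.sign()
        # Project back into the eps-ball around x and the valid pixel range.
        x_adv = torch.max(torch.min(x_adv, x + eps), x - eps).clamp(0.0, 1.0)
    return x_adv.detach()

def adv_train_step(model, optimizer, x, y):
    # One adversarial-training step: train on PGD examples instead of clean ones.
    model.train()
    x_adv = pgd_attack(model, x, y)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```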
