RT @quocleix@twitter.com
A surprising result: We found that smooth activation functions are better than ReLU for adversarial training and can lead to substantial improvements in adversarial robustness. http://arxiv.org/abs/2006.14536
🐦🔗: https://twitter.com/quocleix/status/1277668867066621952
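
The claim is easy to see concretely: adversarial training crafts its attacks with gradient steps, and ReLU's gradient jumps from 0 to 1 at zero, while a smooth activation's gradient varies continuously. Below is a minimal NumPy sketch of that contrast (not from the original post), using softplus as an illustrative smooth replacement; the linked paper evaluates several such smooth activations.

```python
import numpy as np

# ReLU and its gradient: the gradient jumps from 0 to 1 at x = 0,
# so it is not continuous there.
def relu(x):
    return np.maximum(x, 0.0)

def relu_grad(x):
    return (x > 0).astype(float)

# Softplus, one smooth stand-in for ReLU (an illustrative choice
# here, not necessarily the paper's preferred activation). Written
# as logaddexp(0, x) for numerical stability. Its gradient is the
# sigmoid, which is continuous everywhere.
def softplus(x):
    return np.logaddexp(0.0, x)

def softplus_grad(x):
    return 1.0 / (1.0 + np.exp(-np.asarray(x, dtype=float)))

xs = np.array([-0.01, 0.0, 0.01])
print(relu_grad(xs))      # [0. 0. 1.]          -- jump at zero
print(softplus_grad(xs))  # [0.4975 0.5 0.5025] -- varies smoothly
```

Because both the inner attack and the outer training step differentiate through the activation, a continuous gradient gives both a more informative signal, which is the intuition behind the robustness gains reported in the paper.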