Natural adversarial examples are unmodified, real-world images that consistently confuse classifiers. The new dataset contains 7,500 images, which we personally labeled over several months.
Paper: arxiv.org/abs/1907.07174
Dataset and code: github.com/hendrycks/natu…