Measuring abstract reasoning in neural networks - our latest #ICML2018 paper - takes inspiration from human IQ tests to explore abstract reasoning and generalisation in deep neural networks, by Felix Hill, Tim Lillicrap, and co-authors
Interesting that DeepMind is working on an IQ-like test to measure abstract reasoning capabilities. I've been working on a very similar benchmark for the past six months (taking a more formal approach). Good to see more action in that space.
Our latest work on ‘Measuring abstract reasoning in neural networks’ has just been published at #icml2018. As always, it was a privilege to collaborate with Felix Hill, Tim Lillicrap, and co-authors. Paper: Blog post:
Tomorrow Felix Hill will present our #icml2018 work: Measuring abstract reasoning in neural networks. Paper: Blog:
Wow, I thought the press release was bad - talking about reasoning along the lines of Archimedes and volume (for a paper about recognizing patterns in dots) - and then I read the start of the *actual paper*. PR: Paper:
Interpolation is easier for machines than extrapolation. A commendably clearly written paper on the future of machine reasoning.
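The interpolation-vs-extrapolation point above can be illustrated with a toy sketch (this is my own hypothetical example, not code or data from the paper): fit a simple model on inputs from a bounded training range, then compare its error at a held-out point inside that range with its error at a point well outside it.

```python
import numpy as np

# Toy illustration (assumption: a cubic polynomial standing in for a learned
# model, and y = sin(x) standing in for the underlying regularity).
rng = np.random.default_rng(0)
x_train = rng.uniform(0, np.pi, 200)   # training inputs confined to [0, pi]
y_train = np.sin(x_train)

coeffs = np.polyfit(x_train, y_train, deg=3)  # least-squares cubic fit

def error_at(x):
    """Absolute error of the fitted model against the true function."""
    return abs(np.polyval(coeffs, x) - np.sin(x))

interp_err = error_at(np.pi / 2)  # inside the training range
extrap_err = error_at(2 * np.pi)  # well outside the training range

# The fit is accurate where training data exists, and degrades sharply beyond it.
assert interp_err < extrap_err
```

The same qualitative gap is what the paper probes in a far richer setting: models generalise well when test regimes overlap the training distribution, and much worse when they must extrapolate to held-out attribute values or relation types.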
Measuring abstract reasoning in neural networks