Modern deep learning is a story of learned features outperforming (then replacing!) hand-designed algorithms. But we still use hand-designed loss functions and optimizers. Here is a big step towards learned optimizers outperforming existing optimizers:
We have a new paper on learned optimizers! We used thousands of tasks (and a lot of compute 😬) to train general-purpose learned optimizers that perform well on never-before-seen tasks, and can even train new versions of themselves. 1/8
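To make "learned optimizer" concrete, here is a minimal sketch, assuming a toy linear-regression inner task and a tiny per-parameter MLP update rule. Both are hypothetical illustrations, not the paper's actual architecture, which is far larger and meta-trained across thousands of tasks:

```python
# Minimal sketch of a learned optimizer (illustrative, not the paper's method):
# a small MLP maps per-parameter (gradient, momentum) features to updates,
# and is meta-trained by differentiating through a short unrolled inner loop.
import jax
import jax.numpy as jnp

def init_meta_params(key, hidden=8):
    # Weights of the update-rule MLP: 2 input features -> 1 update per parameter.
    k1, k2 = jax.random.split(key)
    return {
        "w1": 0.1 * jax.random.normal(k1, (2, hidden)),
        "b1": jnp.zeros(hidden),
        "w2": 0.1 * jax.random.normal(k2, (hidden, 1)),
        "b2": jnp.zeros(1),
    }

def learned_update(meta, grad, mom):
    # Apply the MLP elementwise to each parameter's (grad, momentum) pair.
    feats = jnp.stack([grad.ravel(), mom.ravel()], axis=-1)  # (n_params, 2)
    h = jnp.tanh(feats @ meta["w1"] + meta["b1"])
    step = (h @ meta["w2"] + meta["b2"]).reshape(grad.shape)
    return 0.01 * step  # small output scale keeps early meta-training stable

def inner_loss(theta, x, y):
    # Toy inner task: linear regression.
    return jnp.mean((x @ theta - y) ** 2)

def meta_loss(meta, theta0, x, y, steps=10, beta=0.9):
    # Inner-task loss after `steps` updates from the learned optimizer.
    theta, mom = theta0, jnp.zeros_like(theta0)
    for _ in range(steps):
        g = jax.grad(inner_loss)(theta, x, y)
        mom = beta * mom + (1 - beta) * g
        theta = theta - learned_update(meta, g, mom)
    return inner_loss(theta, x, y)

key = jax.random.PRNGKey(0)
k_meta, k_x = jax.random.split(key)
meta = init_meta_params(k_meta)
x = jax.random.normal(k_x, (32, 4))
y = x @ jnp.array([1.0, -2.0, 0.5, 3.0])  # ground-truth weights of the toy task
theta0 = jnp.zeros(4)

# Meta-training: plain gradient descent on the meta-loss. The paper does this
# over thousands of tasks rather than the single toy task used here.
meta_grad_fn = jax.jit(jax.grad(meta_loss))
for _ in range(200):
    g = meta_grad_fn(meta, theta0, x, y)
    meta = jax.tree_util.tree_map(lambda p, gp: p - 0.01 * gp, meta, g)

print("inner loss after meta-training:", meta_loss(meta, theta0, x, y))
```

The key idea is that the outer gradient flows through the entire unrolled inner training run, so the MLP's weights are tuned to make optimization itself go well, rather than to solve any one task.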
Neat to see a mention of AGI in the 'broader impacts' section of this paper on Learned Optimizers. Writing the paper up for Import AI - learning to learn has become learning how to learn tools that learn how to learn efficient training.
3. Foundational research seems to be progressing very well and not slowing down (see, e.g., 'A new backpropagation-free deep learning algorithm' or 'Learning to learn').