When Recurrent Models Don't Need to be Recurrent
The Berkeley Artificial Intelligence Research Blog
When and why can feed-forward networks replace recurrent neural networks without a loss in performance?
by John Miller (BAIR)
The post asks why recurrent models don't outperform feed-forward ones in practice, and conjectures an answer: recurrent models trained in practice are effectively feed-forward.
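One way to make this conjecture concrete: if the recurrence is stable (a contraction in the hidden state), the network forgets the distant past exponentially fast, so truncating the input to the last k steps — a finite, feed-forward computation — barely changes the final hidden state. A minimal numpy sketch of that intuition, assuming a vanilla tanh RNN whose recurrent weight matrix has spectral norm below 1 (all names and constants here are illustrative, not taken from the post):

```python
import numpy as np

rng = np.random.default_rng(0)
d, T, k = 16, 200, 40  # hidden size, sequence length, truncation window

# Hypothetical stable RNN: rescale W_h so the recurrence is a contraction
W_h = rng.normal(size=(d, d))
W_h *= 0.5 / np.linalg.norm(W_h, 2)   # spectral norm 0.5 < 1
W_x = 0.1 * rng.normal(size=(d, d))

def run(xs, h0):
    """Unroll the recurrence h_t = tanh(W_h h_{t-1} + W_x x_t)."""
    h = h0
    for x in xs:
        h = np.tanh(W_h @ h + W_x @ x)
    return h

xs = rng.normal(size=(T, d))

h_full = run(xs, np.zeros(d))        # full recurrence over all T inputs
h_trunc = run(xs[-k:], np.zeros(d))  # feed-forward view: only the last k inputs

# Since tanh is 1-Lipschitz, the gap shrinks roughly like 0.5**k
gap = np.linalg.norm(h_full - h_trunc)
print(gap)
```

With a contraction factor of 0.5 and a window of k = 40, the truncated state is numerically indistinguishable from the full one; with spectral norm at or above 1, no such guarantee holds, which is the regime where recurrence genuinely pays off.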