Our team has released a GPT-2 style model for dialog: "DialoGPT: Large-Scale Generative Pre-training for Conversational Response Generation." Draft paper: arXiv:1911.00536. GitHub:
MS fine-tuned GPT-2 on 30GB of Reddit comments to make a strong conversational model. The model is up, but they couldn't sanitize the output, so there's no GPT-2 decoder in the repo. First case of "decode at your own risk"? 🤔 DialoGPT abs: arXiv:1911.00536
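Since the released repo ships weights but (per the tweet above) no decoding script, here is a minimal sketch of how one might generate a reply with the Hugging Face transformers library. The checkpoint name "microsoft/DialoGPT-medium" and the sampling settings are assumptions for illustration, not the authors' withheld decoder.

```python
# Minimal decoding sketch using the Hugging Face `transformers` library.
# Assumptions (not from the tweets above): the "microsoft/DialoGPT-medium"
# checkpoint name and the nucleus-sampling settings below.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-medium")
model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-medium")

# DialoGPT joins dialogue turns with the EOS token, so end the user
# turn with EOS and let the model generate the bot turn after it.
prompt = "Does money buy happiness?" + tokenizer.eos_token
input_ids = tokenizer.encode(prompt, return_tensors="pt")

output_ids = model.generate(
    input_ids,
    max_length=200,          # cap on prompt + response length
    do_sample=True,          # sample instead of greedy decoding
    top_p=0.95,              # nucleus sampling
    pad_token_id=tokenizer.eos_token_id,
)

# Everything after the prompt tokens is the model's reply.
reply = tokenizer.decode(output_ids[0, input_ids.shape[-1]:],
                         skip_special_tokens=True)
print(reply)
```

Because turns are just EOS-joined text, multi-turn chat amounts to concatenating the previous turns (each ending in EOS) before calling generate again.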
DialoGPT: Large-Scale Generative Pre-training for Conversational Response Generation. (arXiv:1911.00536v1 [cs.CL]) #NLProc