Update: we've extracted and analyzed 428,799,920 citation statements targeting 31,024,840 different papers. #openscience
Thank you for an amazing conference and for selecting us as the 2019 Innovation in Publishing awardee! We're truly humbled, grateful, and excited to dig in and work with the community towards making research more reliable! #alpsp19
What if there were a systematic way to find out if a science paper has been contradicted? They're developing one using deep learning #AI, now with 8 million publications and 280M citation statements
We've added classification tallies to our search results page! Now you can see if a paper has been supported or contradicted before clicking into the report. Try it out!
Happy July! 305,972,156 is the latest count of total citations we've extracted from millions of scientific articles. You'll soon see 63,715,449 more citation statements on !
Are you a PhD student with some extra time who wants to learn about academic publishing and startups, and help make better? Email us at [email protected]! We're looking for more interns! #openscience #scicomm #phdchat
Citations =/= endorsements. uses machine learning to determine whether citations support, contradict, or merely mention a study, giving a better indication of impact. It even has a neat Chrome extension. Give it a try at: #phdchat
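For readers curious what such a classifier might look like in spirit, here is a minimal, hypothetical sketch using scikit-learn. The toy training sentences and the bag-of-words model are invented stand-ins; scite's actual deep-learning pipeline is not public and certainly looks different:

```python
# Minimal sketch of a citation-statement classifier (NOT scite's actual model).
# TF-IDF + logistic regression stand in for the deep-learning system described
# above; the training sentences below are invented toy examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_sentences = [
    "Our results confirm the findings of Smith et al. (2015).",
    "These data are consistent with the mechanism proposed in [12].",
    "In contrast to Jones et al., we observed no significant effect.",
    "Our measurements contradict the values reported in [7].",
    "Samples were prepared as described by Lee et al. (2018).",
    "Several studies have examined this question [3-5].",
]
train_labels = [
    "supporting", "supporting",
    "contradicting", "contradicting",
    "mentioning", "mentioning",
]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(train_sentences, train_labels)

print(clf.predict(["Our findings agree with the model of Doe et al."]))
# -> ['supporting'] on this toy data; a real system needs far more labeled data.
```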
We classify citations as supporting, contradicting, or mentioning. We also show where those citations come from (Intro, Methods, Results, Discussion, or other). Want to see how a protocol was done in 10 different papers? Try !
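As an illustration only, a citation statement record carrying this classification-plus-section scheme, and the per-paper tallies shown on the search results page, could be represented roughly as below. All field names are invented for the example; scite's internal schema is not public:

```python
# Hypothetical record for one extracted citation statement, plus a per-paper
# tally like the one described above. Field names are illustrative only.
from collections import Counter
from dataclasses import dataclass

@dataclass
class CitationStatement:
    citing_doi: str      # paper the statement appears in
    cited_doi: str       # paper being cited
    text: str            # the citation sentence itself
    classification: str  # "supporting" | "contradicting" | "mentioning"
    section: str         # "intro" | "methods" | "results" | "discussion" | "other"

statements = [
    CitationStatement("10.1000/a", "10.1000/x", "Consistent with [1]...", "supporting", "results"),
    CitationStatement("10.1000/b", "10.1000/x", "Unlike [1], we found...", "contradicting", "discussion"),
    CitationStatement("10.1000/c", "10.1000/x", "Prepared as in [1].", "mentioning", "methods"),
]

# Classification tallies for one cited paper, as on a search results page:
tally = Counter(s.classification for s in statements)
print(dict(tally))  # {'supporting': 1, 'contradicting': 1, 'mentioning': 1}

# Pulling every Methods-section citation of a paper, e.g. to compare protocols:
methods_cites = [s.text for s in statements if s.section == "methods"]
print(methods_cites)
```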
A lot of scientific claims being made at #CollisionConf. Check if they are supported or contradicted at
If you haven't seen us already, you might be interested in checking out . We've ingested/analyzed 8M full-text articles and are currently processing ~300k PDFs a day. The technical challenges you mention are indeed very difficult, but not insurmountable.
I would say this is an issue, but the fact that it's so easily caught makes it tolerable to some extent. A much larger issue (imo) is that there's no easy way to tell how reliable any scientific publication is. Trying to change that at
1970 vs. 2019. Vision & Reality. #openscience
Interesting: vs. IF seems to show (caveats noted) that society journals receive more positive citations on average #alpsp19
Next up is Josh Nicholson, co-founder and CEO of , a deep learning platform that evaluates the reliability of scientific claims by citation analysis. #ALPSP19
Indeed, hence my "sentiment". By the way, steps are being taken on this front
Hi // #EBMLive2019: I'm interested in citation context analysis in clinical trials. Come talk to me! A good place to start: a tool we developed that tells you HOW, not just HOW OFTEN, an article has been cited!
(Brief break from Brexit focus.) Yuri Lazebnik kindly pointed me to , which I first heard of on . I'm afraid the example he sent makes me doubt the ability of text mining to identify which papers do and don't support prior ones.
do something like this, and a few years ago Springer experimented with it, but it didn't get very far.