From GeekWire:
In the battle for AI supremacy, the best defense is a good offense.
That’s the philosophy behind Grover, a neural network application that is fighting disinformation by creating its own fake news.
“We have to be one step ahead of potential misuse,” said Yejin Choi, a University of Washington professor and lead author on the project, which was a collaboration between researchers at UW and the Allen Institute for Artificial Intelligence (AI2).
Grover’s remarkably realistic news-making engine mimics the style and tone of specific publications and authors. The same model also works as a detector: it distinguished human-written news from machine-written news 92 percent of the time, researchers said. Rather than looking for a single word that may be out of place, the model identifies statistical patterns in the overall text to decide whether a piece of writing is fake.
Read the Complete Article
Direct to Grover Prototype
See Also: Defending Against Neural Fake News (Preprint)
Preprint by the Grover team.