The Internet and social media make information easier to find than ever before. From pranks to parody to propaganda, we are exposed to misinformation every day.
Companies such as Facebook and Google have pledged to fight back with artificial intelligence, developing algorithms to detect falsehoods automatically. But will it work?
Dr. Daniel Lowd says the answer is complicated. He will speak at The Belfry on Tuesday, April 23 as part of the 2018-19 Frontiers in Science series, sponsored by the Sisters Science Club.
Science news is especially vulnerable to misinformation, a problem compounded by public unfamiliarity with scientific processes, the inherent uncertainty of research findings, and journalistic tendencies to overstate results.
Artificial intelligence is at least part of the answer to detecting fake information, abusive language and spam, Dr. Lowd contends. But humans are going to have to do their part, he says.
Just last week, Facebook announced a comprehensive new policy to stop the spread of fake news. But the company admits that it could never hire or partner with enough human beings to monitor all of the news published on its platform.
Staying even one step ahead of determined “fakers” is a major challenge for artificial intelligence software as well.
“Neural networks can find patterns in text and graphs,” Dr. Lowd explains, “but artificial intelligence itself can be fooled.”
For example, some algorithms consider how many people follow a Twitter account, but such followers can be bought. “Likes” for false Facebook posts can also be paid for and mass-produced, lending crowd credibility to inaccurate claims. Reputable news organizations may then cite those tweets and likes as indicators of public opinion.
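To make that weakness concrete, here is a minimal sketch of a follower-count credibility heuristic of the kind Dr. Lowd describes. The function name, weights, and thresholds are purely illustrative assumptions, not any platform's actual algorithm:

```python
# Hypothetical credibility heuristic: an account's claims are weighted
# by follower and like counts. Illustrative only; not real platform code.

def credibility_score(follower_count: int, like_count: int) -> float:
    """Naive score: more followers and likes look more 'credible'."""
    return 0.7 * min(follower_count / 10_000, 1.0) \
         + 0.3 * min(like_count / 1_000, 1.0)

# A genuine account with a modest organic audience...
print(credibility_score(follower_count=800, like_count=40))        # ~0.07

# ...versus the same claims boosted with 50,000 bought followers and
# mass-produced likes: the heuristic is trivially gamed.
print(credibility_score(follower_count=50_000, like_count=5_000))  # 1.0
```

The point of the sketch is that any score built from purchasable signals inherits the price of those signals, which is exactly the loophole fakers exploit.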
Machine learning depends on large amounts of data, and that data is produced by humans. In a recent, widely published article, Dr. Lowd recounted the story of “Tay,” a bot Microsoft built to engage Twitter users in meaningful conversation.
“Instead, trolls flooded the bot with hateful and abusive messages,” Dr. Lowd wrote. “As the bot analyzed that text, it began to reply in kind — and was quickly shut down.”
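A toy sketch of that failure mode, under the assumption of a bot that naively treats every incoming message as training data (this is an illustration of data poisoning in general, not Microsoft's actual system):

```python
# Toy illustration of the poisoning failure behind "Tay": a bot that
# learns from every message, unfiltered, will echo whatever a flood of
# trolls teaches it. Hypothetical design, not Microsoft's architecture.
import random
from collections import Counter

class NaiveChatBot:
    def __init__(self):
        self.phrases = Counter({"Nice to meet you!": 1})

    def learn(self, message: str) -> None:
        # Every user message becomes training data, with no moderation.
        self.phrases[message] += 1

    def reply(self) -> str:
        # Replies are sampled in proportion to what it has seen most.
        phrases, weights = zip(*self.phrases.items())
        return random.choices(phrases, weights=weights)[0]

bot = NaiveChatBot()
for _ in range(1000):          # a coordinated flood of abusive input...
    bot.learn("<abusive message>")
print(bot.reply())             # ...now dominates what the bot says back
```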
Compounding the problem of fake information is the rapid development of easily learned software that can falsify images and videos with alterations that non-experts rarely detect.
An associate professor in the Department of Computer and Information Science at the University of Oregon,
Dr. Lowd earned his Ph.D. at the University of Washington. A prolific researcher with interests in machine learning, data mining, and artificial intelligence, he collaborates with international colleagues and students.
Dr. Lowd’s lecture, “Algorithms and Artificial Intelligence: Science Takes on Fake News,” starts at 7 p.m. at The Belfry, with an introduction by Jim Cornelius, editor in chief of The Nugget.
Social hour begins at 6 p.m. with light fare, beer, and wine available. Admission is $5; teachers and students are admitted free. The Belfry is located at 302 E. Main Ave., Sisters.
In addition to the evening lecture, Dr. Lowd will meet with Sisters Middle School students enrolled in computer programming classes taught by Wes Estvold.
The Frontiers in Science lecture series will conclude on Tuesday, May 7 with Dr. Robert Collins speaking on “The Brain Makes the Mind,” which argues that even “free will is fake news.”
For more information on these topics and on the Sisters Science Club: