
Confidence that Americans can spot fake bots plummets to all-time low

WHY THIS MATTERS IN BRIEF

If people can no longer tell fake content or “people” from what’s real, then there’s the threat that trust in each other, and in society and government, could break down.

 

How times have changed. Just a few years ago, before the arrival of fake Chinese news anchors, very real fake-news hysteria, and increasingly lifelike avatars and virtual bloggers snagging millions of followers and cold, hard cash, Americans were uncomfortably amused by the presence of social bots but confident they could tell tofu from prime rib. In a new study, though, only about half, 47 percent, of people who had heard about bots were very or somewhat confident they could recognise these kinds of accounts on social media. In the earlier study, a more substantial 84 percent expressed confidence in spotting made-up news.

 


This is how the Pew Research Center explained the then-and-now:

“About half of those who have heard of bots (47%) are very or somewhat confident that they can recognise them, and just 7% are very confident. About four-in-ten (38%) are not very confident, and 15% say they are not at all confident. This stands in contrast to the confidence Americans had in their ability to detect made-up news: In a December 2016 survey, 84% of Americans were very or somewhat confident in their ability to recognise made-up news.”

So, fast forward to November 2018 and it appears that most Americans cannot distinguish between a comment written by a human and one delivered by an automated bot. And they are not amused: you would need to scroll all the way south to find social bots’ numbers in a popularity poll.

Galen Stocking, a computational social scientist, and Nami Sumida, a research analyst, wrote the article reporting on the survey and its findings.

The new survey by the Pew Research Center explored Americans’ attitudes toward automated accounts on social media platforms and found that many think social bots have a negative impact on how people stay informed. Opposition was apparent toward any organisation or individual using bots to share false information. Majorities also opposed a celebrity using bots to gain more social media followers and a political party using bots to share information that favours or opposes a candidate.

 


“About two-thirds of Americans have heard about social media bots, most of whom believe they are used maliciously.”

The two authors wrote: “About eight-in-ten of those who have heard of bots (81%) think that at least a fair amount of the news people get from social media comes from these accounts, including 17% who think a great deal comes from bots. And about two-thirds (66%) think that social media bots have a mostly negative effect on how well-informed Americans are about current events, while far fewer (11%) believe they have a mostly positive effect.”

Stocking and Sumida defined what they mean by social media bots – “accounts that operate on their own, without human involvement, to post and interact with others on social media sites.”

Shannon Liao, writing for The Verge, noticed something interesting about the naysayers: they could not be categorised by age or by political persuasion; those who disliked bots crossed both lines.

“Regardless of whether a person is a Republican or Democrat or young or old, most think that bots are bad. And the more that a person knows about social media bots, the less supportive they are of bots being used for various purposes, like activists drawing attention to topics or a political party using bots to promote candidates.”

The Pew Research Center’s survey drew on responses from 4,581 U.S. adults polled in July and August.

 


One question begs for closer analysis: why would any percentage, however small, find something positive about bots, about being lied to, about information that is not free of paid agendas or automated word strings? As one commenter on The Verge put it, this might simply be human nature: if the plant favours the team, celebrity, or public official we root for, then we like to agree with the propaganda, plain and simple. Also, one must not confuse being “lied to” with the automated messages government agencies use to post emergency updates for our safety.

In addition, an issue that begs for closer analysis is that, no matter the flavour of the lie, we do not like a lie, but the propaganda playing field today is layered and confusing. Site visitors rebelling against phony-sounding comments are quick to brand them as coming from “bots” when they may in fact be written by human opportunists running fake accounts just to prop up their employers, friends, or idols. Because these comments come from humans, they do not fit easily into the definition of automated accounts.

Will more and more bots be good homework for our ability to ferret out truth from propaganda and get savvier with the times? After all, as The Atlantic reported last year: “Overall, bots – good and bad – are responsible for 52 percent of web traffic, according to a new report by the security firm Imperva, which issues an annual assessment of bot activity online. The 52-percent stat is significant because it represents a tip of the scales since last year’s report.”

Source: Pew Research Center
