Biased AIs are becoming a problem, and this startup is fighting back


The rise of the “Algorithmic Society,” coupled with biased AIs and machines, means that bias, if left unchecked, can increasingly change the course of people’s lives as well as their viewpoints, with potentially life-changing and harmful consequences.


Today there is a huge amount of debate about the creation and emergence of biased Artificial Intelligence (AI) agents, and while the topic is now snowballing into a major issue, we mustn’t forget that much of the content we see and read has always been biased in one way or another, as the CNN versus Fox debate arguably shows. Nor should we forget that there’s an entire industry built on bias, and I’m not talking about the press industry. I’m talking, of course, about lobbying.


Bias is a natural part of everyday life, and it’s been with us ever since we started walking upright, if not before. Simply put, we all have different viewpoints, some extreme and others not so extreme, and we see them play out every day. But the problem of bias becomes more acute when AI is involved, because we are increasingly living in what I call an “Algorithmic Society,” one where a biased algorithm can make the difference between getting a mortgage or not, being stopped and searched by the police or not, being released from prison early or not, or being shown different types of content in your newsfeeds. And all that’s just for starters. What happens, for example, if a biased AI judge presides over your trial, like the one that was recently trialled in Europe?

As the debate rages on about how best to tackle “algorithmic bias,” with some people advocating training our AIs on more varied datasets, and others suggesting using platforms like Google DeepMind’s Psychlab to analyse and assess an AI’s “human-like” qualities, another startup, called Knowhere News, is trying to combine machine learning with human journalists to deliver the facts of individual news stories with ice-like indifference.

Here’s how it works. First, the site’s AI chooses a story based on what’s popular on the internet right now. Once it picks a topic, it looks at more than a thousand news sources to gather more details. Left-leaning sites, right-leaning sites – the AI looks at them all.
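In pseudocode terms, that selection-and-gathering step might look something like the sketch below. This is purely illustrative, since Knowhere hasn’t published its pipeline; every name here (`pick_trending_topic`, `gather_coverage`, the data shapes) is a hypothetical stand-in:

```python
from collections import Counter

def pick_trending_topic(headlines):
    """Pick the most-mentioned topic from (topic, headline) pairs.

    A toy stand-in for trend detection; Knowhere's actual
    selection logic is not public.
    """
    counts = Counter(topic for topic, _ in headlines)
    topic, _ = counts.most_common(1)[0]
    return topic

def gather_coverage(topic, sources):
    """Collect every article on the topic from every source,
    left-leaning and right-leaning alike."""
    return [article for src in sources for article in src.get(topic, [])]
```

The key point the sketch captures is that no source is filtered out at this stage: balance comes from reading everything, not from pre-selecting outlets.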


Then the AI writes its own “impartial” version of the story based on what it finds, sometimes in as little as a minute. This take on the news contains the most basic facts, with the AI striving to remove any potential bias. The AI also takes into account the “trustworthiness” of each source, something Knowhere’s co-founders pre-emptively determined. This ensures a site with a stellar reputation for accuracy isn’t overshadowed by one that plays a little fast and loose with the facts.
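One simple way such trustworthiness weighting could work, sketched purely as a guess since Knowhere’s scoring method isn’t public, is to give each source a pre-assigned trust score and keep a claim only if the combined trust of the sources reporting it clears a threshold. All names and numbers below are hypothetical:

```python
def weighted_claim_score(reporting_sources, trust):
    """Score a claim by summing the pre-assigned trust of each
    source reporting it, so a high-accuracy outlet outweighs a
    fast-and-loose one."""
    return sum(trust.get(source, 0.0) for source in reporting_sources)

def select_facts(claims, trust, threshold=1.0):
    """Keep only claims whose combined source trust clears the
    threshold; these become the 'impartial' story's basic facts."""
    return [claim for claim, sources in claims.items()
            if weighted_claim_score(sources, trust) >= threshold]
```

Under this toy scheme, a claim reported only by a low-trust outlet is dropped, while the same claim corroborated by a reputable one survives.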

For some of the more political stories, the AI produces two additional versions labelled “Left” and “Right.” Those skew pretty much exactly how you’d expect from their headlines:

  • Impartial: “US to add citizenship question to 2020 census”
  • Left: “California sues Trump administration over census citizenship question”
  • Right: “Liberals object to inclusion of citizenship question on 2020 census”

Some controversial but not necessarily political stories also receive “Positive” and “Negative” spins:

  • Impartial: “Facebook scans things you send on messenger, Mark Zuckerberg admits”
  • Positive: “Facebook reveals that it scans Messenger for inappropriate content”
  • Negative: “Facebook admits to spying on Messenger, ‘scanning’ private images and links”

Even the images used with the stories occasionally reflect the content’s bias. The “Positive” Facebook story features CEO Mark Zuckerberg grinning, while the “Negative” one has him looking like his dog just died.


Knowhere’s AI isn’t putting journalists out of work either, allegedly.

Editor-in-chief and co-founder Nathaniel Barling told Motherboard that a pair of human editors review every story. This ensures you feel like you’re reading something written by an actual journalist, and not a Twitter chatbot. Those edits are then fed back into the AI, helping it improve over time. Barling himself then approves each story before it goes live. “The buck stops with me,” he said.

This human element could be the tech’s major flaw, though. As we’ve seen with other AIs, they tend to take on the biases of their creators, so Barling and his editors will need to be as impartial as humanly possible to ensure the AI retains its impartiality.

Knowhere just raised $1.8 million in seed funding, so investors clearly think it has the potential to change how we get our news. But will it be able to reach enough people, and, more importantly, the right people, to really matter?


Impartiality is Knowhere’s selling point, so if it sounds like a site you’d want to visit, you’re probably someone who already values impartiality in news. The trouble with bias, whether human or AI-based, is that some people are perfectly happy existing in an echo chamber, getting their news from a source that reflects what they already think, where everything they read reinforces their viewpoint and skews it further over time. And if you’re a news source with a big readership, it’s likely you don’t want to alienate your audience, right? So you keep feeding the same comfortable readers the same biased stories, and the cycle continues ad infinitum.

Bias isn’t something we are ever going to be able to get rid of entirely, but as algorithms play a greater role in our lives, from selecting the adverts and content we see, to controlling and influencing the products and services we do, or do not, have access to, ensuring that we can detect, and where necessary remove, bias from these systems is an increasingly important area of study.
