
New study finds people do more social loafing when working alongside robots


Working alongside AI and robots could make us all more productive and even smarter – or we could let them do all the work and get stupider. It’s all down to you and your attitude!



Much like kids working alongside an iPad, people tend to pay less attention to tasks when working alongside a robot, according to a new study that found evidence of “social loafing” – where team members work less hard if they think others will cover for them.




Researchers at the Technical University of Berlin said people come to see robots as part of their team. Where they think a colleague – or the technology – performs particularly well, or where they think their own contribution would not be appreciated, people tend to take a more laid-back approach, the scientists suggested.

“Teamwork is a mixed blessing,” said Dietlind Helene Cymek, the first author of the study, which appears in the journal Frontiers in Robotics and AI.

“Working together can motivate people to perform well but it can also lead to a loss of motivation because the individual contribution is not as visible. We were interested in whether we could also find such motivational effects when the team partner is a robot.”

The team tested their hypothesis by asking a cohort of workers to check the quality of a series of completed tasks; half of the workers were told the tasks had been performed by a robot. While they did not work directly with the robot, named Panda, those participants had seen it and could hear it operating.




The workers were all asked to check circuit boards for errors. The researchers monitored their activity by blurring the board images the workers received, revealing an image only once a worker actively opened it to inspect it.

At first, the researchers found no statistical difference between the two groups – those told they were working with a robot and those who were not – in the time spent inspecting the circuit boards or in the area searched for errors.

However, when the researchers investigated the participants’ error rates, they found those working with Panda were catching fewer defects after they had seen the robot had successfully flagged many errors. They said this could reflect a “looking but not seeing” effect, where people engage less once they feel a colleague or resource is reliable.

While participants – who were asked to rate their own performance – thought they were paying an equivalent amount of attention, the researchers felt that subconsciously they had begun to assume Panda had picked up defects well.




“It is easy to track where a person is looking, but much harder to tell whether that visual information is being sufficiently processed at a mental level,” said Dr Linda Onnasch, a senior author of the study.
