
Turns out robots can throw shade at humans, and when they do, it makes us sad and unproductive.
So say researchers at Carnegie Mellon University, who have released the results of a student-led study from the university's Robotics Institute.
The whole thing worked like this: Each of the study's 40 subjects played a game called "Guards and Treasures" against the robot, Pepper, 35 times. The game, classified as a Stackelberg game, pits "leaders" against "followers": a designated leader moves first based on a predetermined strategy, and subsequent players have to respond to that strategy. Still with us? Good.
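For a rough feel of how a Stackelberg game plays out, here's a minimal sketch in Python. It is not from the study, and the payoff numbers are made up purely for illustration; the point is just the structure: the leader commits to a move first, and the follower picks its best response to that commitment.

```python
# Toy Stackelberg (leader-follower) game.
# The payoff tables below are invented for illustration only; they are not
# taken from the CMU study or the "Guards and Treasures" game.

# Rows = leader's strategies, columns = follower's responses.
LEADER_PAYOFF = [
    [3, 1],   # leader plays strategy 0
    [2, 4],   # leader plays strategy 1
]
FOLLOWER_PAYOFF = [
    [2, 3],   # follower's payoffs when leader plays strategy 0
    [4, 1],   # follower's payoffs when leader plays strategy 1
]

def follower_best_response(leader_move: int) -> int:
    """The follower observes the leader's committed move and maximizes its own payoff."""
    payoffs = FOLLOWER_PAYOFF[leader_move]
    return max(range(len(payoffs)), key=lambda move: payoffs[move])

def leader_best_commitment() -> tuple[int, int]:
    """The leader picks the commitment that gives it the best payoff,
    anticipating the follower's best response to each option."""
    best = max(
        range(len(LEADER_PAYOFF)),
        key=lambda move: LEADER_PAYOFF[move][follower_best_response(move)],
    )
    return best, follower_best_response(best)

if __name__ == "__main__":
    leader_move, follower_move = leader_best_commitment()
    print(f"Leader commits to {leader_move}, follower responds with {follower_move}")
    # With these made-up numbers: the leader commits to strategy 1, the follower
    # responds with strategy 0, yielding payoffs of 2 (leader) and 4 (follower).
```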
Researchers typically use this type of game to study "defender-attacker interaction in research on security games." But for this study, they were able to "explore the uses of game theory and bounded rationality in the context of robots." That's a mouthful, we know. But what it essentially means is they were testing how humans and robots interact in a non-cooperative environment. While playing each game, the students would receive either praise or taunts from Pepper.