In the late 90s and into the early 2000s there was a growth in popularity of consumer robotics. Robotic pets like Sony’s Aibo, Sega Toys’ Poo-chi, or Tiger Electronics’ Furby. Medical robots like PETS Robots for paediatric rehabilitation, NurseBot for assisting elderly patients or Roball as a potential educational tool for autistic children. Honda’s ASIMO and NASA’s Robonaut were probably two of the most advanced robots at the time.
With robotics starting to move from labs and factories into homes, schools and hospitals, it became more important that these (and future) robots were not only useful but also intuitive and purposeful to interact with.
We are an inherently social species. Our social intelligence is a powerful tool for interacting with and understanding the behaviour of some of the most complex things in our world – living creatures and other people.
Social Robotics aims to understand and leverage that intelligence as well as some innate psychological traits we possess.
Anthropomorphism refers to our tendency to see human-like characteristics and motivations in non-human things such as animals, technology, and objects. There’s so much wrapped up in it from a psychological and philosophical perspective that it really deserves a post all to itself. I’ve tried to pick a few highlights from existing research:
Heider & Simmel (1944) created an experiment where viewers were asked to describe what they observed in a simple animation (the video above). Despite the lack of common human social cues – facial expressions, body language and speech – many were quick to see a story unfold containing characters with emotions, intentions and beliefs.
Premack & Premack (1995) reported that infants interpreted the spontaneous motion of objects as being the result of an internal cause, and that they believed those objects did so intentionally, exhibiting mental states such as perception and desire.
Reeves & Nass (1996) found that we often apply a social model in order to try to explain and predict the behaviour of complex non-living things where the underlying mechanisms generating the behaviour are not easily understood.
Not to labour the point, but watch this video of Vector’s reaction to being picked up. How would you describe the reaction?
Anthropomorphism helps us to understand an unfamiliar entity in terms of what we know best – ourselves. Right or wrong we leverage our knowledge of our own social and mental models in an effort to make complex behaviour more understandable, intuitive and predictable.
Defining a ‘Social Robot’
Some of the initial definitions of social robots are highlighted below.
Duffy et al. (1999) made a distinction between “social robots” and “societal robots”:
A social robot is a “physical entity embodied in a complex, dynamic and social environment sufficiently empowered to behave in a manner conducive to its own goals and those of its community”.
A societal robot is a robot that is integrated into “the human environment or society”.
They defined a four layer architecture:
- Physical – the robot has a form in its environment, it is embodied.
- Reactive – fundamental “reflex behaviours”. Any motor and sensory information is processed into “events” and transferred to the deliberative layer.
- Deliberative – a “Belief-Desire-Intention” architecture that converts “events” into beliefs the robot uses to maintain an “up to date model of its own current perceived situation”. The robot analyses its current model and communicates via its social layer or physically acts via the reactive layer.
- Social – using an artificial language called “Teanga” the robot communicates with others.
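To make the layering concrete, here is a very loose sketch of how those four layers could fit together in code. All class names, the event format and the toy “greet” policy are my own illustration, not Duffy et al.’s actual implementation (which used the Teanga language and a full BDI engine):

```python
from dataclasses import dataclass

@dataclass
class Event:
    """Sensor/motor information packaged by the reactive layer."""
    kind: str
    data: dict

class ReactiveLayer:
    """Reflex behaviours: turns raw readings into events for deliberation."""
    def sense(self, reading: dict) -> Event:
        return Event(kind=reading["type"], data=reading)

class DeliberativeLayer:
    """BDI-style: events become beliefs in a model of the current situation."""
    def __init__(self):
        self.beliefs = {}

    def update(self, event: Event) -> None:
        # Keep an up-to-date model of the perceived situation.
        self.beliefs[event.kind] = event.data

    def decide(self) -> str:
        # Toy policy: greet when a person is believed to be present.
        return "greet" if "person_detected" in self.beliefs else "idle"

class SocialLayer:
    """Communication with other agents (Duffy et al. used 'Teanga')."""
    def communicate(self, intention: str) -> str:
        return f"<msg intention={intention}>"

class Robot:
    """Physical layer: the embodied robot wiring the layers together."""
    def __init__(self):
        self.reactive = ReactiveLayer()
        self.deliberative = DeliberativeLayer()
        self.social = SocialLayer()

    def step(self, reading: dict) -> str:
        event = self.reactive.sense(reading)       # reactive -> event
        self.deliberative.update(event)            # event -> belief
        return self.social.communicate(self.deliberative.decide())

robot = Robot()
print(robot.step({"type": "person_detected", "distance": 1.2}))
# → <msg intention=greet>
```

The point of the sketch is only the flow of information: sensing becomes events, events become beliefs, and beliefs drive either physical action or social communication.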
Breazeal, C. (2003) defined four subclasses of social robot:
- Socially evocative – designed to encourage people to anthropomorphize it in order to interact with it, even though the robot does not truly reciprocate socially.
- Social interface – models human social behaviour at the interface level through human-like social cues and communication modalities (speech, facial expressions, gestures etc) through pre-canned or reflexive actions and response.
- Socially receptive – actually “benefits” or learns from interactions with humans, using cognitive modelling to alter its own internal state (for example, to learn a gesture demonstrated by a human). These robots are “socially passive”, responding to interactions rather than proactively engaging with others.
- Sociable – attempts to model human-like social behaviour as closely as possible. The robot has its own internal goals and motivations, and seeks to engage with people in a social manner both to benefit the person (e.g. complete a task) and to improve its own performance. It maintains social and cognitive models of the people interacting with it, and uses those models to better understand them.
Fong, T., Nourbakhsh, I., & Dautenhahn, K. (2003) gave the following definition:
“Social robots are embodied agents that are part of a heterogeneous group: a society of robots or humans. They are able to recognize each other and engage in social interactions, they possess histories (perceive and interpret the world in terms of their own experience), and they explicitly communicate with and learn from each other”
And appended three more classes to Breazeal’s original list:
- Socially situated – the robot is in a social environment it can perceive and react to.
- Socially embedded – a) situated in a social environment and interacting with other agents and humans, b) structurally coupled with their social environment and c) at least partially aware of human interaction structures, such as turn-taking in conversation.
- Socially intelligent – robots that show aspects of human style social intelligence, based on deep models of human cognition and social competence.
Bartneck, C. and Forlizzi, J. (2004)
“A social robot is an autonomous or semi-autonomous robot that interacts and communicates with humans by following the behavioral norms expected by the people with whom the robot is intended to interact.”
They defined a five part design-centric framework where the form of the robot balances the “needs of people, the capabilities of technology and the context of use into a single product”.
- Form – abstract, biomorphic or anthropomorphic.
- Modality – uni-modal to multi-modal communication channels such as visual, auditory, haptic etc.
- Social norms – these can be defined by the interactions between people and so it’s reasonable that they be used to define interactions between robots and people.
- Autonomy – the technological capability to act without direct input from people.
- Interactivity – the ability to respond to interactions with people.
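Since each of those five properties sits on a spectrum rather than being an on/off switch, the framework lends itself to a simple design-record sketch. The field names follow the framework, but the value types, the example values and the robot described are my own illustration, not anything from Bartneck and Forlizzi’s paper:

```python
from dataclasses import dataclass

@dataclass
class SocialRobotDesign:
    """One point in Bartneck & Forlizzi's five-part design space (illustrative)."""
    form: str            # "abstract", "biomorphic" or "anthropomorphic"
    modalities: tuple    # communication channels, e.g. visual, auditory, haptic
    social_norms: tuple  # norms drawn from the people it is designed for
    autonomy: float      # 0.0 (fully teleoperated) .. 1.0 (fully autonomous)
    interactivity: float # 0.0 (unresponsive) .. 1.0 (rich two-way interaction)

# A rough, hypothetical placement of a pet-like desk robot in the design space:
desk_pet = SocialRobotDesign(
    form="biomorphic",
    modalities=("visual", "auditory", "haptic"),
    social_norms=("pet-like play", "responds when handled"),
    autonomy=0.8,
    interactivity=0.7,
)
```

Thinking of a robot as a record like this captures what makes the framework practical: a designer picks a position on each axis to suit the intended users and context, rather than solving social behaviour in general.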
This is a very high level overview of some of the early work in the field of social robotics. It was difficult to pull it all together in my own head without writing for pages and pages.
Breazeal and Fong et al. were very focused on computationally reproducing and modelling actual human social and psychological processes in a way that would work in any social situation.
Bartneck and Forlizzi’s framework operated more on a spectrum, kind of like a character creation screen in a video game. They reduced some of the complexity by stating it was reasonable to design a social robot based on the people and social environment it was meant to interact with rather than be able to cope with any social situation.
What’s common to all the definitions is the requirement for embodiment, being socially situated and the need for an internal model the robot maintains to distinguish itself from others.
I hope it’s clear how complex and interdisciplinary the field of social robotics really is. The Venn diagram would have overlaps of mechanical engineering, psychology, programming, animation, sociology, interaction design and probably several more. Aside from some of the interesting questions it generates like “what’s the minimum requirement to be considered social?” there are other practical problems like navigation (“how would an assistive care robot make its way through a hospital corridor?”) and speaker recognition (“who said what in a group?”).
If you want to get in touch about anything I’ve written here, please use the contact form, I’d be delighted to hear from you. Bear in mind that, as a non-academic, I have very limited access to published research, so if you feel I’ve made a mistake or missed out something important, be gentle! :-P
Heider, F., Simmel, M. (1944), ‘An Experimental Study of Apparent Behavior’, The American Journal of Psychology, Vol. 57, No. 2, pp. 243-259.
Premack, D., Premack, A. (1995), ‘Origins of human social competence’, The Cognitive Neurosciences, pp. 205-218.
Reeves, B., Nass, C. (1996), ‘The Media Equation’, CSLI Publications.
Dautenhahn, K., Billard, A. (1999), ‘Bringing up robots or—the psychology of socially intelligent robots: from theory to implementation’.
Duffy, B.R., Rooney, C.F.B., O’Hare, G.M.P., O’Donoghue, R.P.S. (1999), ‘What is a Social Robot?’.
Breazeal, C. (2003), ‘Toward sociable robots’.
Fong, T., Nourbakhsh, I., Dautenhahn, K. (2003), ‘A survey of socially interactive robots’.
Bartneck, C., Forlizzi, J. (2004), ‘A Design-Centred Framework for Social Human-Robot Interaction’.