In a digitalised era where artificial intelligence and technology are becoming ever more prevalent, an interesting area of development is our relationship with robots, in particular social robots. In a blog post, Moodley (2017) defines social robots as “robots for which social interaction plays a key role.” Think, for example, of R2-D2 or C-3PO from the Star Wars franchise, or, for a more real-world example, robots like Sony’s AIBO and PARO the therapeutic robot.
Fong, Nourbakhsh and Dautenhahn (2003, p. 145) identify some key characteristics of social robots:
- express and/or perceive emotions
- communicate with high-level dialogue
- learn/recognise models of other agents
- establish/maintain social relationships
- use natural cues (gaze, gestures)
- exhibit distinctive personality and character
- may learn/develop social competencies
Coeckelbergh (2009, p. 217) suggests that, with the continued development of such robots (pet robots, toy robots and sex robots), there is a strong possibility of a near future in which owning these robots is as common as having a mobile phone or internet access. If such a future is indeed on the horizon, an important question to consider is: what are the societal impacts of interactions with such robots?
Human expectations of a robot companion
The first thing to consider is individuals’ preconceived ideas, if any, of what they expect from social robots. Dautenhahn et al. (2005, p. 1) find that many individuals want humanlike communication in a natural and intuitive way, though humanlike appearance is not a necessity. The same research found that individuals prefer highly predictable behaviour that can be controlled, and feel disappointed and disengaged when a robot does not meet that expectation (p. 3).
Ray, Mondada and Siegwart (2008, p. 3818) present findings that individuals tend to see robots as machines first, rather than as artificial beings or something ‘alive’, and envision robots as something that can and should improve their quality of life. In this instance, it seems individuals prefer a robot to be an assistant rather than a friend.
Dautenhahn et al. (2005, p. 5) also find that, despite some apprehension about actually owning a robot companion, most individuals are receptive to interactions with robots. Ray, Mondada and Siegwart (2008, p. 3819) express similar sentiments, stating that individuals are open to the idea of robots being in their cities and homes in the future.
From here, we can conclude that humans largely look forward to having relationships with robots, but already hold clear ideas about the boundaries that should exist.
Emotional responses to human-robot interaction (HRI)
However, the thing about human nature is that, despite such perceived boundaries, humans tend to form relationships through emotional connection with non-human things; we often form connections with animals, plants or even objects. This stems from our tendency to attribute human characteristics to non-human things, also known as anthropomorphism. Designers of artificial agents exploit this by creating social robots that display social cues and give off the impression of having these human characteristics (Lakatos et al., 2014, p. 2).
This video by The Verge highlights how the design of many social robots focuses not so much on looking like a human as on behaving like one. Friedman, Kahn and Hagman (2003, p. 274) suggest that robotic technologies further blur the boundaries of who or what can possess feelings, establish emotional connections or engage in companionship.
Lee et al. (2006, p. 756) also note that humans naturally respond to anthropomorphic cues that relate to fundamental human characteristics; individuals:
- automatically respond socially to these robots
- are swayed by the artificial human characteristics of machines
- ignore, or fail to process, the fact that the machine is not human
Lakatos et al. (2014, p. 26) conclude that humans readily interact with these social robots on the basis of their expressed emotional behaviour.
Friedman, Kahn and Hagman’s (2003) research on Sony’s AIBO found that the robot dog psychologically engaged users through their interactions with it: many participants reported feelings of attachment and described seeing AIBO much as they would a real dog. The study notes that it is “not saying that AIBO owners believe literally that AIBO is alive, but rather that AIBO evokes feelings as if AIBO was alive” (p. 277). Participants also expressed rage and discomfort at the thought of someone mistreating AIBO. Riek and Howard (2014, p. 2) likewise note that users often develop strong psychological and emotional bonds with their therapy robots (e.g. PARO).
This goes to show that, despite our preconceived boundaries around what robots should be, design-facilitated interaction with a robot can shift those boundaries. Individuals might think they prefer a robot assistant in a very clinical sense, yet when these robots show humanlike characteristics there is an almost subconscious shift towards treating them as friends.
Hancock et al. (2011, p. 523) conclude that the way a robot performs (behaves) is one of the biggest factors shaping perceived trust in a human-robot interaction.
A look at some current social robots
Jibo
- marketed as a ‘family robot’
- shows a very humanlike sense of humour
- talking to Jibo is conversational rather than ‘robotic’
- shown to have many practical benefits for everyday life
- Jibo is ‘one of the family’, not just a piece of technology
Tapia
- introduced in a very human, almost childlike manner: ‘Hi I’m Tapia and this is the story of me and my family’
- again, a very conversational voice that expresses emotion: ‘get home early today, take some rest’
- shown to have many practical benefits for everyday life
- sings along with you, much like a friend
- “it’s been two weeks since you last called mum, maybe you should stop quarrelling with each other”: a very human way of interacting, with a lot of emotional pull
Aibo
- even the commentator gushes about how cute it looks
- responds to things much as a real dog would
- ‘learns and grows with owners’
- remembers what makes its owner happy
- develops a unique personality depending on its interactions with the owner
- there are ways to take a break from Aibo and ‘shut it down’
These three social robots fit many of the characteristics outlined earlier in this post, and their marketing strategies clearly capitalise on their very humanlike qualities.
A moral dilemma?
As can be seen from the above videos, social robots have the capacity to be helpful and improve our quality of life.
However, in all those videos the social robots are not marketed merely as tools or machines to help us. They are branded as ‘one of the family’, a companion rather than just another device. We are meant to become emotionally attached to these robots.
The interactions we have with these robots are far more similar to those we have with other people or animals than to those we have with technological devices like phones and laptops. And as the research above shows, designers actively seek to make these interactions as close as possible to human-to-human interaction.
One reason for concern, though, is how these interactions with social robots might impact society. De Graaf (2016, p. 591) captures this thought with a great question: “how will we share our world with these new social technologies, and how will a future robot society change who we are, how we act and interact—not only with robots but also with each other?”
Levy (2009, pp. 210-211) argues for the ethical treatment of robots: humans treat each other ethically because we are aware of one another’s consciousness, so if a robot exhibits behaviour (recall the characteristics of social robots above) that appears to be the product of consciousness, even an artificial one, we should treat it ethically.
Levy (2009, p. 214) goes on to say that if we as a society recognise and give moral consideration to beings such as animals, then once robots exhibit characteristics similar to those of animals, we should accord robots similar value; doing otherwise would demean our humanity. Levy claims that how we treat robots will soon become an example of how we can and should treat other human beings.
However, Coeckelbergh (2009, p. 218) argues that we should instead focus on how humans perceive robots and what robots do to us as social and emotional beings, shifting the premise of ethics away from ‘robot rights’ and towards how robots might facilitate human lives now and into the future.
I think that as we move into a future where artefacts like Jibo, Tapia and Aibo become more common and widespread, we will be dealing with a very confusing future. Our initial conception of robots is that they are obviously machines, with a clear degree of separation between human and robot. However, because of our anthropomorphising tendencies, we get attached to the very humanlike qualities of a social robot.
This creates a dissonance between what we think and how we behave. For example, if you are frustrated with a social robot because it doesn’t understand what you’re saying, would it be okay to curse or scream vulgarities at it?
If we take Levy’s approach, it would be unethical of us to do that, because it goes against how we would treat any other being we regard with moral consideration. Or, even if we did shout, we would apologise for that behaviour. We might also consider what impression shouting at the robot gives to our children.
If we follow Coeckelbergh’s logic, it shouldn’t be much of an issue, because no real, tangible or emotional harm comes to the robot from hearing it. Furthermore, an apology in this instance would do more for how we feel about ourselves than to remedy any hurt or anger the robot might feel.
But if we develop technology that has robots responding to our negative actions in a way similar to humans (e.g. a robot getting angry with us and shutting down, or a robot expressing distress), will that make us feel uncomfortable? Or would it be beneficial for how we treat robots into the future?
Ojha and Williams (2016) have researched a model for ethical ways in which robots can respond to human behaviour. If robots are to respond ethically to us, should humans also have a system of ethics for treating robots?
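To make that idea concrete, here is a minimal sketch, in Python, of what an ethically guided emotional-response filter might look like. To be clear, this is not Ojha and Williams’s actual model: the Appraisal class, the intensity cap and the rule itself are all invented purely for illustration.

```python
# A toy sketch of an ethically guided emotional response, loosely inspired
# by the idea in Ojha & Williams (2016). All names and rules here are
# hypothetical, invented for illustration only.

from dataclasses import dataclass

@dataclass
class Appraisal:
    """The robot's raw emotional appraisal of a human action."""
    emotion: str      # e.g. "anger", "distress", "joy"
    intensity: float  # 0.0 (none) to 1.0 (maximum)

def ethical_response(appraisal: Appraisal,
                     max_negative_intensity: float = 0.4) -> Appraisal:
    """Dampen socially harmful expressions before the robot displays them.

    A simple rule: negative emotions may be acknowledged but never expressed
    above a fixed intensity cap, so the robot can signal displeasure at being
    shouted at without escalating the interaction.
    """
    negative = {"anger", "distress", "disgust"}
    if appraisal.emotion in negative and appraisal.intensity > max_negative_intensity:
        return Appraisal(appraisal.emotion, max_negative_intensity)
    return appraisal

if __name__ == "__main__":
    raw = Appraisal(emotion="anger", intensity=0.9)  # the user just screamed at the robot
    shown = ethical_response(raw)
    print(f"felt: {raw}, expressed: {shown}")
```

The point of the sketch is that an ethical layer sits between appraisal and expression: the robot may ‘feel’ strong anger internally, but what it displays is moderated, which echoes the question in Ojha and Williams’s title: should the robot be angry at all?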
Research does not yet give us a clear idea of which line to follow, or whether we should draw on both. But I think it’s important not only for researchers, but for all of us as humans, to consider how we want to shape our interactions with robots.
Social robots can be a brilliant technological innovation that brings value and improvement to our lives. However, this is one technology that will bring about far more social change than others, because it not only changes how we do things; it also changes how we think, who and what we give value to, and how we value our own humanity.
We are at the point where technology is becoming far more innate to human nature. So it’s really time for us, as individuals, to start considering how we might best treat a robot best friend.
Coeckelbergh, M 2009, ‘Personal Robots, Appearance, and Human Good: A Methodological Reflection on Roboethics’, International Journal of Social Robotics, vol. 1, no. 3, pp. 217-221.
Dautenhahn, K, Woods, S, Kaouri, C, Walters, ML, Koay, KL & Werry, I 2005, ‘What is a robot companion – friend, assistant or butler?’, in 2005 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 1192-1197.
de Graaf, MMA 2016, ‘An Ethical Evaluation of Human–Robot Relationships’, International Journal of Social Robotics, vol. 8, no. 4, pp. 589-98.
Fong, T, Nourbakhsh, I & Dautenhahn, K 2003, ‘A survey of socially interactive robots’, Robotics and Autonomous Systems, vol. 42, no. 3-4, pp. 143-166.
Friedman, B, Kahn, PH & Hagman, J 2003, ‘Hardware companions? What online AIBO discussion forums reveal about the human-robotic relationship’, in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI 2003), Ft. Lauderdale, FL, pp. 273-280.
Hancock, PA, Billings, DR, Schaefer, KE, Chen, JYC, de Visser, EJ & Parasuraman, R 2011, ‘A Meta-Analysis of Factors Affecting Trust in Human-Robot Interaction’, Human Factors, vol. 53, no. 5, pp. 517-527.
Kahn Jr., P, Ishiguro, H, Friedman, B, Kanda, T, Freier, N, Severson, R & Miller, J 2007, ‘What is a Human?: Toward psychological benchmarks in the field of human–robot interaction’, Interaction Studies, vol. 8, no. 3, pp. 363-390.
Lee, KM, Peng, W, Jin, S & Yan, C 2006, ‘Can Robots Manifest Personality?: An Empirical Test of Personality Recognition, Social Responses, and Social Presence in Human–Robot Interaction’, Journal of Communication, vol. 56, no. 4, pp. 754-772.
Levy, D 2009, ‘The Ethical Treatment of Artificially Conscious Robots’, International Journal of Social Robotics, vol. 1, no. 3, pp. 209-216.
Moodley, T 2017, ‘Understanding Social Robots’, Thosha’s AI Blog, weblog post, 17 January, viewed 20 May 2018, <https://thosham.wordpress.com/2017/01/15/understanding-social-robotics/>.
Ojha, S & Williams, MA 2016, ‘Ethically-Guided Emotional Responses for Social Robots: Should I Be Angry?’, in Agah, A, Cabibihan, JJ, Howard, A, Salichs, M & He, H (eds), Social Robotics: ICSR 2016, Lecture Notes in Computer Science, vol. 9979, Springer, Cham.
Ray, C, Mondada, F & Siegwart, R 2008, ‘What do people expect from robots?’, in 2008 IEEE/RSJ International Conference on Intelligent Robots and Systems, Nice, France, pp. 3816-3821.
Riek, LD & Howard, D 2014, ‘A Code of Ethics for the Human-Robot Interaction Profession’, in Proceedings of We Robot 2014, University of Miami Law School, Coral Gables, FL.