Your best friend could be a robot: social robots & their societal impacts

Social Robots

In a digitalised era where artificial intelligence and technology usage are becoming more prevalent, an interesting area of development is our relationship with robots, social robots in particular. In a blogpost, Moodley (2017) defines social robots as “robots for which social interaction plays a key role.” Think, for example, of R2-D2 or C-3PO from the Star Wars franchise, or, for more real-world examples, robots like Sony’s AIBO and PARO the therapeutic robot.


AIBO, Sony’s robot dog (Source: Flickr user ETC-USC, shared under CC BY 2.0)


PARO the therapeutic seal (Source: Flickr user Happysteve, shared under CC BY-SA 2.0)

Fong, Nourbakhsh and Dautenhahn (2003, p. 145) identify some key characteristics of social robots:

  • express and/or perceive emotions
  • communicate with high-level dialogue
  • learn/recognize models of other agents
  • establish/maintain social relationships
  • use natural cues (gazes, gestures)
  • exhibit distinctive personality and character
  • may learn/develop social competencies

Coeckelbergh (2009, p. 217) suggests that with the increasing development of such robots, from pet robots and toy robots to sex robots, there is a high possibility of a near future where having these robots will be as common as having mobile phones or internet access. If such a future is indeed on the horizon, an important question to consider is: what are the societal impacts of interactions with such robots?

Human expectations of a robot companion

The first thing to consider is individuals’ preconceived ideas, if any, of what they expect from social robots. Dautenhahn et al. (2005, p. 1) find that many individuals wanted humanlike communication in a natural and intuitive way; however, humanlike appearance was not a necessity. The research also found that individuals preferred highly predictable behaviour that could be controlled, and would feel disappointed and disengaged if a robot did not comply with that expectation (p. 3).

Ray, Mondada and Siegwart (2008, p. 3818) present findings suggesting that individuals tend to see robots as machines first, rather than as artificial beings or something ‘alive’, and envision these robots as something that can or should improve their own quality of life. In this instance, it can be seen that individuals prefer robots to be assistants as opposed to friends.

Dautenhahn et al. (2005, p. 5) also argue that despite some apprehension around actually having a robot companion, most individuals were receptive to interactions with robots. Ray, Mondada and Siegwart (2008, p. 3819) express similar sentiments, stating that individuals were open to the idea of robots being in their cities and homes in the future.

From here, we can conclude that humans are mostly looking forward to having relationships with robots, but that they already have clear ideas of the boundaries that should exist.

Emotional responses to human-robot interaction (HRI)

However, the thing about human nature is that sometimes, despite perceived boundaries, humans tend to form relationships through emotional connection with non-human things. This can be seen when humans form connections with animals, plants or even objects. It stems from our tendency to attribute human characteristics to these non-human things, also known as anthropomorphism. Designers of artificial agents exploit this by creating social robots that engage in social cues and give off the impression that they have these human characteristics (Lakatos et al., 2014, p. 2).

This video by The Verge highlights how the creation of many social robots tends to focus not so much on the robots looking like a human but more on behaving like one. Friedman, Kahn and Hagman (2003, p. 274) suggest that robotic technologies further blur the boundaries between who or what can possess feelings, establish emotional connections or engage in companionship.

Lee et al. (2006, p. 756) also note that humans tend to naturally respond to anthropomorphic cues that relate to fundamental human characteristics:

  • individuals automatically respond [to these robots] socially
  • are swayed by the fake human characteristics of machines
  • do not process/ignore the factual information that this machine is not human

Lakatos et al. (2014, p. 26) conclude that humans readily interact with these social robots based on their expressed emotional behaviour.

Friedman, Kahn and Hagman’s (2003) research on Sony’s AIBO found that the robot dog psychologically engaged users through their interactions with it, as many participants reported feelings of attachment and saw AIBO in a similar way to a real dog. The study notes that it is “not saying that AIBO owners believe literally that AIBO is alive, but rather that AIBO evokes feelings as if AIBO was alive,” (p. 277). Participants in their study also expressed rage and discomfort at the thought that someone might mistreat AIBO. Riek and Howard (2014, p. 2) also note that users often develop strong psychological and emotional bonds with their therapy robots (i.e. PARO).

This goes to show that despite our having preconceived boundaries of what robots should be, design-facilitated interaction with a robot can change those boundaries. Individuals might think they prefer a robot assistant in a very clinical way; however, when these robots show humanlike characteristics, there is an almost subconscious act of treating them as a friend.

Hancock et al. (2011, p. 523) conclude that the way a robot performs [behaves] is one of the biggest factors facilitating perceived trust in a human-robot interaction.

A look at some of the current social robots we have



Notable points:

  • marketed as a ‘family robot’
  • is seen to have a very humanlike sense of humour
  • talking to Jibo is very conversational and not ‘robotic’
  • shown to have many practical benefits for everyday life
  • Jibo is ‘one of the family’ and not just a piece of technology




Notable points:

  • introduced in a very human, almost childlike manner: ‘Hi, I’m Tapia and this is the story of me and my family’
  • again, Tapia uses a very conversational voice and expresses emotion: ‘get home early today, take some rest’
  • shown to have many practical benefits for everyday life
  • sings along with you, similar to a friend
  • ‘it’s been two weeks since you last called mum, maybe you should stop quarrelling with each other’: a very human way of interacting with a lot of emotional pull



Notable points:

  • even the commentator gushes about how cute it looks
  • responds to things similar to how a real dog would
  • ‘learns and grows with owners’
  • Aibo will remember what makes the owner happy
  • Aibo develops a unique personality depending on its interaction with the owner
  • there are ways to take a break from Aibo and ‘shut it down’

It can be seen that these three social robots fit many of the characteristics outlined in the earlier part of this post, and that marketing strategies capitalise on their very humanlike qualities.

A moral dilemma?

As can be seen from the above videos, social robots have the capacity to be helpful and improve our quality of life.

However, in all those videos the social robots are not just marketed as a tool or a machine to help us. These social robots are branded as ‘one of the family’ and a companion as opposed to another machine. We are meant to be emotionally attached to these robots.

The interactions we have with these robots are far more similar to the interactions we have with other people or animals as opposed to the interactions we have with technological devices like phones and laptops. And as we can see from the information above, designers actively seek to make these interactions as close as possible to human-to-human interaction.

A reason to be concerned though is how these interactions with social robots might impact society. De Graaf (2016, p.591) captures this thought with a great question: “how will we share our world with these new social technologies, and how will a future robot society change who we are, how we act and interact—not only with robots but also with each other?”

Levy (2009, pp. 210-211) argues for the ethical treatment of robots by stating that humans treat each other ethically because we are aware of one another’s consciousness; therefore, if a robot exhibits behaviour (see the characteristics of social robots above) that is a product of consciousness, despite it being artificial, we should treat it ethically.

Levy (2009, p. 214) goes on to say that if we as a society recognise and give moral consideration to beings such as animals, then at the point robots exhibit similar characteristics to animals, we should treat the robot with value; doing otherwise would be demeaning to our humanity. Levy claims that how we treat robots will soon become an example of how we can and should treat other human beings.

However, Coeckelbergh (2009, p. 218) argues that we should instead focus on how humans perceive robots and what they do to us as social and emotional beings, shifting the premise of ethics away from ‘robot rights’ and towards how robots might facilitate human lives now and into the future.

I think that as we move into a future where artefacts like Jibo, Tapia and Aibo are becoming more common and widespread, we will be dealing with a lot of confusion. We can see that as humans our initial conception of robots is that they are obviously machines, and that there is a degree of separation between human and robot. However, because of our anthropomorphising tendencies, we tend to get attached to the very humanlike qualities of a social robot.

This brings about a dissonance between our thoughts and how we behave. For example, if you are frustrated with a social robot because it doesn’t understand what you’re saying, would it be ok for you to curse or scream vulgarities at it?

If we consider Levy’s approach, it would be unethical of us to do that, because it goes against how we would treat any other being we regard with moral consideration. Or even if we did shout, we would apologise for that behaviour. Also, we might consider what impression shouting at the robot might give to our children.

If we follow Coeckelbergh’s logic, it shouldn’t be too much of an issue, because no real, tangible or emotional harm will come to the robot from hearing that. Furthermore, an apology in this instance would do more for how we feel about ourselves than to remedy any hurt or anger the robot might feel.

But if we develop technology that has robots responding to our negative actions in a similar way to humans (i.e. a robot getting angry at us and shutting down, or a robot expressing distress), will that make us feel uncomfortable? Or would that be beneficial for the way we treat robots into the future?

Ojha and Williams (2016) have done research on creating a model for ethical ways in which robots can respond to human behaviour. In that instance, should humans also have a system of ethics for treating robots?

Research still doesn’t have clear ideas of which line we should follow, or whether we should use both. But I think it’s important not only for researchers, but for all of us as humans, to consider how we want to facilitate our interactions with robots.


Social robots can be a brilliant technological innovation that brings value and improvement to our lives. However, this is one particular technology that will bring about far more social change than others, because it not only changes how we do things, it also changes how we think, who and what we give value to, and how we value our own humanity.

We are at the point where technology is becoming far more innate to human nature. So it’s really time for us, as individuals, to start considering how we might treat a robot best friend.

Reference list
Coeckelbergh, M 2009, ‘Personal Robots, Appearance, and Human Good: A Methodological Reflection on Roboethics’, International Journal of Social Robotics, vol. 1, no. 3, pp. 217-221.
Dautenhahn, K, Woods, S, Kaouri, C, Walters, ML, Kheng Lee, K & Werry, I 2005, ‘What is a robot companion – friend, assistant or butler?’, in 2005 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 1192-1197.
de Graaf, MMA 2016, ‘An Ethical Evaluation of Human–Robot Relationships’, International Journal of Social Robotics, vol. 8, no. 4, pp. 589-598.
Fong, T, Nourbakhsh, I & Dautenhahn, K 2003, ‘A survey of socially interactive robots’, Robotics and Autonomous Systems, vol. 42, no. 3-4, pp. 143-166.
Friedman, B, Kahn, PH & Hagman, J 2003, ‘Hardware companions? What online AIBO discussion forums reveal about the human-robotic relationship’, in V Bellotti, T Erickson, G Cockton & P Korhonen (eds), Conference on Human Factors in Computing Systems – Proceedings, CHI 2003, Ft. Lauderdale, FL, United States, pp. 273-280.
Hancock, PA, Billings, DR, Schaefer, KE, Chen, JC, de Visser, EJ & Parasuraman, R 2011, ‘A Meta-Analysis of Factors Affecting Trust in Human-Robot Interaction’, Human Factors, vol. 53, no. 5, pp. 517-527.
Kahn Jr., P, Ishiguro, H, Friedman, B, Kanda, T, Freier, N, Severson, R & Miller, J 2007, ‘What is a Human?: Toward psychological benchmarks in the field of human–robot interaction’, Interaction Studies, vol. 8, no. 3, pp. 363-390.
Lee, KM, Peng, W, Jin, S & Yan, C 2006, ‘Can Robots Manifest Personality?: An Empirical Test of Personality Recognition, Social Responses, and Social Presence in Human–Robot Interaction’, Journal of Communication, vol. 56, no. 4, pp. 754-772.
Moodley, T 2017, ‘Understanding Social Robots’, Thosha’s AI Blog, weblog post, 17 January, viewed 20 May 2018 <>.
Ojha, S & Williams, MA 2016, ‘Ethically-Guided Emotional Responses for Social Robots: Should I Be Angry?’, in A Agah, JJ Cabibihan, A Howard, M Salichs & H He (eds), Social Robotics, ICSR 2016, Lecture Notes in Computer Science, vol. 9979, Springer, Cham.
Ray, C, Mondada, F & Siegwart, R 2008, ‘What do people expect from robots?’, in 2008 IEEE/RSJ International Conference on Intelligent Robots and Systems, Nice, France, pp. 3816-3821.
Riek, L & Howard, D 2014, ‘A Code of Ethics for the Human-Robot Interaction Profession’, in Proceedings of We Robot 2014, University of Miami Law School, Coral Gables, FL.



The Politics & the Personal in Workplaces

Did you know around 70% of employers screen your social media accounts in the hiring process?

Ok, I don’t know how accurate that figure is (because it’s based on one study), but I do know it is definitely becoming the norm for employers to look at your social media accounts when you apply for jobs.

As someone doing a degree in communications and media, I have no way of hiding any public social platforms I have from potential employers. If that is the fate I’ve been resigned to, then what should I be posting on my Twitter account and blog? Are there topics I should stay away from? Issues I shouldn’t discuss? Will voicing certain political opinions actually get me fired or not hired?

On my Twitter and even my blog, I don’t filter the dominant parts of my personality. If you go through the first few of my tweets, you’ll know my stand on many different political issues, it’ll be very clear what kind of values I stand by, and you’ll know that I strongly advocate for movements supporting and uplifting women of colour. These are some of the more political tweets you’ll find on my timeline.

Aside from politics, you also get glimpses of my personal life. My interactions with my close friends, my feelings towards some of the friendships I have and even some tweets about my family.


If employers are going to be looking at all these things: is there a line I should draw?

Let’s start with the politics. I’ve always been wary and careful about what I post, especially with issues surrounding race, because I know I will be applying for jobs in Australia – where my employers will be predominantly white – and I’m unsure how they’ll feel about hiring someone with strong convictions on white privilege.

Just recently, L’Oreal fired Munroe Bergdorf over remarks she made about the Charlottesville protests, with the company claiming in a tweet that she was “at odds with our values”. Here is where I ask myself: am I willing to tone down my advocacy? Should I be a little less intense when I post tweets regarding politics? (All these things I also think about knowing I have peers and lecturers from uni following me online too.)

No matter how many conversations I have with myself on this issue, I always come to the same conclusion: I’m not sure if I should or shouldn’t, but I don’t want to. At this point, I’m happy to say I’ll cop whatever jobs I miss out on; if a company is actually using those tweets as a reason not to hire me, it probably means I dodged a bullet (this is what I’d like to believe, anyway).

When it comes to the personal, I post whatever I don’t mind anyone knowing about me. It could be my struggles with anxiety, funny conversations with family or things I’ve done with friends. I recognise some of these tweets might point towards consuming alcohol, or maybe even sex and dating, but they’re never explicit or crude (most of the time they’re quite funny). Those things are just part of the human experience, and they wouldn’t stop me from doing my job to the best of my capabilities.

The thing about my Twitter account is that it isn’t just those things. It isn’t just the politics and the personal. It’s also my values, what I can bring to the table, my lens of the human experience. It is part of who I am and who I will be as an employee.

If an employer can look at these tweets and, based on an algorithm, decide there’s something there that makes me not worth hiring, that means you wouldn’t have liked me in your company anyway. I know companies like to say all they’re looking for is a ‘professional’ online presence, but that is a very loaded, very subjective word.

I will end this post with the title of Kris’s blogpost on public identities (which I definitely recommend reading) because it made me laugh and quite aptly summarises what I think: I’m ‘authentic’. Fire me.

The Value of Narrative Practices

Something I often ponder: What is so valuable about seeing your story told, either in your own words or through the words of others? As someone who adores writing, I find words comforting – a way of organising the messy, cluttering thoughts in my head (hence, the name of this blog).

This weekend, I read an article by Carmen Ostrander on narrative practices in therapy. This article explores different ways words and narratives can be used in therapy and how they might influence people’s thoughts and approach to things. Ostrander claims that this can “provide engaging departure points for alternative accounts of lived experience with outcomes that are unique to the individual,” (2017 p. 56). As human beings, we are quite accustomed to the idea of telling or listening to stories. Writing in the Atlantic on the psychological comforts of storytelling, Delistraty suggests: “Humans are inclined to see narratives where there are none because it can afford meaning to our lives, a form of existential problem-solving,” (2014). This is similar to the concept Ostrander tries to use in her therapeutic practices – alternative approaches to therapy that engage a client’s creativity. She does so by looking at a few different ways of narrative practice: documentation, invitation, collaborative note-taking, rescued word poems, letter writing, waitlist letters, bibliotherapy, medicinal words and the written word.

The practices that caught my interest in particular are: the written word, collaborative note-taking and rescued word poems. These three practices link well to my proposed question of the value of seeing your narrative told in the words of your own or the words of others.

Ostrander highlights the importance of the written word. Listening and speaking can sometimes involve a tangle of words and thoughts. Writing allows you to neatly tie together certain ideas and the most prominent or important thoughts, phrases and words your mind has latched onto; “writing provides an alternative means of expression, and a different way to hear what is being said,” (Ostrander, 2017 p. 62). This reminded me of a lovely piece by Claire made up of excerpts from her journal. There is something quite entirely wonderful about seeing the narratives you have constructed of yourself at a particular moment in time. Ostrander goes on to explain how these constructions are valuable in understanding yourself and your problems, and become a point of connection in your communities. While I think all of this is true, there is definitely much more to be said about the kind of reflection the written word offers us as individuals. Looking back at your words from the past is almost a form of time travel. It gives you a sharp contrast of who you were then and who you are now.

However, what happens when others take part in writing your narrative?

Ostrander looks at collaborative note-taking as a way “to adopt a decentred approach that fosters a sense of joint ownership over the documents and files,” (2017 p. 57). She actively chooses to show her clients the notes she has taken over their session and allows them to change the words she’s written. This is a brilliant insight into the importance of being careful with the words of others. I’ve talked about this in a previous blogpost, where I question how we can respect the narratives of others, especially when individuals can be quite concerned with self-presentation. Ostrander has a lovely approach to this with her collaborative note-taking. She explains it by saying a joint examination allows someone to have “ownership of what is said about them,” (2017 p. 57). This indicates that even when we let others take charge of our narrative, it is important that we have the most control over it. The way we should undertake writing the narratives of others is to remember that we are simply a vessel for their voices – let them do the guiding.

Rescued word poems allow for some creativity while listening to the narrative of others. Ostrander uses them to “capture points of resonance, evocative images, words that zing and sparkle,” (2017 p. 58). She writes out words, phrases and sentences that her client has said and shows them to the client at the end of the session. The experience she talks about here reminded me of Kris Christou’s article, ‘Words are powerful entities which convey the values of an individual‘: he created a lumen5 video using words his mother said to him. She was surprised that those words actually came from her. However, having your own words repeated back to you in such a way can be quite confronting. A classmate of mine said it was rather weird to see your own words reflected that way. Ostrander acknowledges this as well, saying this method has not always been positively received. However, I do think there is merit to this practice. It allows you to understand how others might perceive your words; how they might engage with the narrative you are trying to present. Sometimes, it can provide a new perspective on your values.

This article was insightful and strengthened my love for narrative writing. I will end this on a quote from Ostrander’s article that encapsulates the value of narratives.

“[Stories] flourish when they are written, spoken, shared and witnessed, extending beyond ourselves, connecting to others, in chorus, in community,” (2017 p.63).


Delistraty, C 2014, ‘The psychological comforts of storytelling’, The Atlantic, 2 November, viewed 21 August 2017 <>.

Ostrander, C 2017, ‘The chasing of tales: Poetic licence with the written word in narrative practice’, International Journal of Narrative Therapy and Community Work, vol. 16, no. 2, pp. 55-64.

Feature Image: Power of Words (2011) is by Antonio Litter shared under CC BY-SA 3.0

The Presentation That Never Ends (& How We Grade It)

“Picture a situation where you had to make a decision and what led you to make it; then think of a value you were representing when you made that decision,” was what Kate told us to do (well, not those exact words; I’m paraphrasing from memory) in our first Advanced Seminar in Media and Communications class. After that, the class left in pairs to relay our own stories to each other. When we got back, there was a little spreadsheet online, with a box next to each of our names ready to be filled in with the value we thought we represented in our story.

What intrigued me as those little boxes started filling up with different values was: why those values? Each individual person knew what this activity was aiming to do. We knew what to look out for in our story. We knew what questions were going to be asked. With that in mind, why then did we choose that specific story?

Obviously, I can’t speak for everyone when I answer this. However, I think there’s something to be said about the way we choose to represent ourselves in different settings. Whether consciously or unconsciously, a lot of our decisions are made based on wanting to present ourselves a certain way, especially when it comes to our own narratives.

During this activity, the story I chose portrayed me as someone reliable or dependable. Upon reflection, the reason that happened is because to me that’s one of the things I like about myself. It’s one of those things I believe is an accurate characterisation of me. More importantly, it’s how I want people to think of me.

Everything we do in the public sphere comes with the undertones of self-presentation: what we wear, what we tweet, the way we interact in conversations, the kind of food we eat. Self-presentation also shifts based on many different factors and can change many times. Maybe it’s just me, but I constantly feel the need to prove myself as a person, especially when some aspect of myself has changed. This could be choosing to speak up in a particular discussion in class, wearing a new hat to show a change of style, or retweeting several articles on Twitter when I’ve had a shift in political views.

In this era we are given countless platforms to curate the way we present ourselves in society. The one thing we can’t do, however, is have complete control over all narratives of ourselves. I was very surprised at how my classmates were unwilling to write a blogpost on the stories of other peers because it could lead to misrepresentation. It was a lovely insight into how thoughtful and considerate people can be.

I don’t think completely avoiding narratives by others is possible though. Just in this blogpost, you get my narrative of what Kate said in class and of the attitude of my classmates. This is my lens of how they presented themselves.

As someone who is studying media & journalism, I want to be able to tell the stories of others. However, I’m hoping to be able to create narratives that respect their self presentation.

The question then is: how comfortable are we letting others write our narratives for our carefully curated self-presentation? And how do we make others comfortable that their narratives are safe in our hands? I can already think of a number of situations where I know my personal biases might factor into how I present narratives.

I’m not sure I have all the answers for that yet, perhaps in another post once I’ve sat through a few more of Kate’s classes.

The thing I’m leaving you with at the end of this blogpost: what was I trying to present about myself with this blogpost, and why? Could you curate a narrative about me that you feel is fair? (I don’t actually want to know the answers; it’s just a thing to ponder.)


The Darkness Behind Fair Skin Commercials

I wasn’t sure if I wanted to add anything to this conversation; a quick Google search will tell you that many, many others have offered long, thought-out articles or research papers on the prevalence of light skin as a reinforced beauty standard in the media. But then I figured there’s no harm in adding an extra voice to this issue – my voice.

Recently, in Malaysia there was an ad that used blackface and depicted dark skin as something disgusting. To be honest, I was repulsed but also not surprised that corporations wanting to sell skin products would use this method. It’s one I’ve seen pushed at me time and time again since I started watching TV.

I can’t tell you how happy and proud I was that so many people were angry and voiced that anger. They shared opinions, tweeted at the company, expressed disapproval of such an ad – it ended in the ad being taken down. What did shock me was the response to the backlash: by the company and by other people.

The company gave your standard PR apology: ‘truly sorry that some elements have offended the general public’. I was honestly hoping for something better than the company trying to do damage control by putting the main focus on those offended instead of on the reason they found it offensive. To be fair, I really should have known better.

Then there were others, who I guess didn’t find the ad offensive. Their justification was ‘oh but it was just following an old legend’ and others saying ‘PC culture has gone out of hand, why are you trying to make this a race issue’. This was the most disturbing part of the reaction to me. That people didn’t understand why this was an issue, why it was a great thing that there was backlash on this video.

So here is – what I hope will be – a simple explanation of the importance of calling out light skin as a defining standard of beauty.

For ages, there has been this consensus that light skin is more beautiful. There is no definitive answer for where and at exactly what point that rhetoric became true. Regardless, companies trying to sell beauty products capitalise on it (like with any other kind of beauty standard). They need you to believe that there is something wrong with you, that your features aren’t good enough, and that they have the product to make you better. Just look at any beauty ad on TV. It might not be as explicit as the one that caused the controversy, but there are definitely elements of fair skin being the end goal of every woman who wants to look beautiful. Take a look at this Vaseline ad:

I can think of many different reasons body lotion would be good for the skin besides making it fair. Even if you look at the model, her tan skin really wasn’t that dark at all. Speaking of models, if you look at the YouTube pages of cosmetic companies – Nivea, Bioessence, Dove, L’Oréal – you’ll notice the lack of dark-skinned models in their videos.

Why is this an issue? Are people just being over sensitive? Trying to be too politically correct? Are they just jealous? (all this I’ve seen people say in comments).

When the media constantly puts out ads that glorify light skin, the internalised message is that lighter skin is always better. This is reflected in the way people treat others and themselves. A dark-skinned person is seen to immediately lose out in life just by virtue of their skin colour.

It makes it normal that people gasp when someone has dark skin. It makes it normal for the prettier person to always be the one with lighter skin. It makes it normal for people to see those with dark skin as being of less value. It makes it normal for people to use skin colour as a justification for discrimination. You normalise the idea that dark skin is a bad thing. Not sunburnt skin, not dry skin, not unhealthy skin: dark skin.

Unfortunately, we still live in a society that places a huge focus on beauty. Not only do you normalise dark skin being a bad thing for others, you also normalise it for those who do have dark skin.

Indian girls (I say this because I am talking about this in a Malaysian context) often try their hardest to have lighter skin. I remember friends in high school complaining about their dark skin, wishing they were fairer. My grandmother loves me unconditionally, but will still give me a thousand different skin-lightening products thinking they might help. My sisters and I would compare to see who was unfortunate enough to end up with the darkest skin. You create a society where people feel they lose out because of something they were born with. You create a society that says all girls should be beautiful, but that you can only attain that beauty by being fair.

There is still a long way to go in terms of commercials capitalising on insecurities, not just with skin colour but with other aspects as well. But I will not apologise for calling these companies out. This isn’t ‘making it a race issue’; it just happens to be one because most dark-skinned people here are Indian and have been discriminated against for it. This isn’t PC culture trying to wreak havoc; this is a real issue that has affected many people, and this time they aren’t just going to stand by and let it slide.

I’ve realised there are a lot of things, a lot of issues we don’t have much control over. But for the ones we do, we are going to do something about it and we will do it unapologetically.

credit for the feature image to sarennya ❤ 

You are what you tweet (or are you?)

Oh yes, the 21st century: where who we are, how we are viewed and a large portion of our reputation are based on our online personas.

The one thing social media gave us is the ability to create our own personas. It’s sort of like creating a brand for ourselves online. Try out this analysis of your tweets to see what brand you have created for yourself.

People gain traction from creating certain personas, especially on Twitter; a lot of ‘famous’ Twitter accounts can be considered micro-celebrities.

For example, someone created a Twitter account called ‘Emo Kylo Ren‘, based on a character from The Force Awakens; the account creates a persona of said character and has 880K followers.


Whether consciously or unconsciously, all of us post what we do for a reason. We want to give off a certain impression when we tweet.


Think of our digital artefacts or the #bcm112 hashtag during a lecture: we tweet certain things because that, to us, is how we want to be represented. And sometimes, how we choose to present ourselves can make us famous.

I’ll leave it to you guys to decide if that’s good or bad or somewhere in between.



Conflict (with) Journalism

In a previous blogpost, I wrote about how we have moved from being audiences to being actual participants in consuming media, thus creating a shift in the journalism paradigm. This shift suggests that citizens can now also be considered journalists, and obviously some professional journalists are not very happy about that claim.

But let’s look at some of the reasons this paradigm shift is becoming more apparent and more favourable with the public:

  • citizen journalism has an open-access platform (much like when I blogged about Apple vs Android)
  • there is user-led content creation
  • there is a significant lack of gatekeepers

Take a listen to this podcast, where I explain those points in better detail.

The general systems in which citizen journalism operates open up a whole new way to access information, and that is something current journalists, instead of combating, should learn to work with.