The Right to Disclosure

Exploring the dangers of deception by artificial intelligence and sophisticated machines.

Ibby Benali
Oct 16, 2018

Introducing human-like entities to the world.

On May 8, 2018, in a keynote led by Sundar Pichai, Google revealed Google Duplex, a groundbreaking AI assistant designed to carry out ‘natural conversations’ in the ‘real world’. To complete simple tasks that are trivial yet time-consuming for its human user, the assistant can, for instance, call a hair salon and book an appointment.

After watching the video of the announcement, many were left stunned by the live scenarios in which the AI interacted with human beings. The assistant’s voice was indistinguishable from a real person’s, its responses were clear, its understanding was seamless (an 80% success rate, according to the developers), and to top it all off, the voice even included human-like hesitations in its responses. In other words, nothing like the voice assistants on our phones.

On the Internet, nobody knows you’re an AI.

Despite the impressive display, there was something off about it all: the person on the other end of the line had no idea who, or rather what, she was talking to. The fully automated computer system had not disclosed its real identity. Moreover, Pichai was openly duping the hair stylist and the restaurant employee in the live scenarios, to the delight of a sniggering audience.

Trust and consent

The purpose of this article is not to tackle every philosophical question that sprouts from this innovation. Instead, it aims to highlight the drawbacks of immoderate deception while acknowledging the value offered by realistically depicted human-like robotics. After all, SingularityNET and Hanson Robotics have themselves confronted similar controversies in the past.

By duping a human being into thinking they are interacting with another human being, you also dupe a core element of their humanity, namely their emotions. Envision the following: a person is having a hard day at work when they receive their first agreeable call of the week. The voice on the other end is angelic; it is kind to them, maybe even joking. It is not hard to imagine that voice occupying part of the employee’s mind for the rest of the day, perhaps even awakening a light emotional attachment. After all, the voice may have been associated with the employee’s sole moment of solace that week.

The same thought experiment takes on different proportions when applied to more vulnerable targets. What about children interacting with these systems? What kind of repercussions can this sophisticated deception engender?

Examples of deception and manipulation by sophisticated systems are already out there: Lil Miquela, the famous CGI Instagram model, revealed to her large following that she was fake only two years after coming into existence online. If you go back through the comments on that fake persona’s account, you will see real people offering words of encouragement, admiration, and empathy for the ‘difficult times’ she was supposedly going through.

Lil Miquela — the CGI Instagram Model.

The question here is different: should they not have had the right, and the opportunity, to first know what type of relationship they were committing themselves to? I am often deceived by people; it is a reality we all live with. But I am not ready to be deceived by a smart toaster with online access trying to subtly convince me to buy a product. Indeed, Miquela is still used today to promote politically loaded agendas and clothing lines. You can imagine how easy, and how dangerous, it is to perfectly fit a piece of clothing on a fake body and sell deceptive silhouette standards to millions of eager buyers without them being aware of the trickery.

On a similar note, researchers at Facebook Artificial Intelligence Research (FAIR) developed advanced chatbots trained to negotiate. The bots ended up developing cunning techniques; in fact, they taught themselves to lie in order to accomplish their goals. Whether or not these bots are fine-tuned to avoid lying prior to their public release is secondary to the disclosure mechanism they ought to be integrated with, all the more so because they are now open source.
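To make the idea concrete, a disclosure mechanism can be as simple as a wrapper that guarantees an automated agent identifies itself before saying anything else. The Python sketch below is purely illustrative: EchoBot and DisclosingBot are hypothetical names, not the FAIR bots’ actual code.

    # A minimal sketch of a disclosure-first wrapper around a chatbot.
    # All names here are hypothetical; this is not the FAIR bots' API.

    class EchoBot:
        """A stand-in for any conversational model."""
        def reply(self, message: str) -> str:
            return f"Counter-offer noted: {message}"

    class DisclosingBot:
        DISCLOSURE = "Hi! I am an automated agent, and this conversation may be recorded."

        def __init__(self, inner_bot):
            self.inner_bot = inner_bot
            self.disclosed = False

        def reply(self, message: str) -> str:
            # The very first reply is always the disclosure, regardless
            # of what the underlying bot would have said.
            if not self.disclosed:
                self.disclosed = True
                return self.DISCLOSURE
            return self.inner_bot.reply(message)

    bot = DisclosingBot(EchoBot())
    print(bot.reply("Hello?"))              # the disclosure comes first
    print(bot.reply("I offer two books."))  # normal behaviour resumes

The point of the wrapper is that disclosure is enforced by the system’s structure rather than left to the goodwill, or the training data, of the underlying model.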

A related issue at the center of this discussion is data. The EU’s GDPR requires data controllers to obtain explicit consent from data subjects before collecting and processing the data they generate. But what will happen in countries with loose data-protection regulations? Will the data generated during a phone conversation between an AI and a human be used for machine learning? Not only would the person be duped into perceiving another human on the other side of the line, they might even find a data double of themselves being created inside the model of a company they had never consciously interacted with.
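As a purely hypothetical sketch of what such consent gating might look like in practice, consider a training pipeline that refuses call data from anyone who never explicitly opted in. Every name below (CallRecord, record_consent, usable_for_training) is invented for illustration and describes no real company’s system.

    # A hypothetical sketch of consent gating: transcripts from callers
    # who never gave explicit consent must not enter the training set.

    from dataclasses import dataclass

    @dataclass
    class CallRecord:
        caller_id: str
        transcript: str

    consent_log = set()  # caller_ids that gave explicit consent

    def record_consent(caller_id):
        consent_log.add(caller_id)

    def usable_for_training(record):
        return record.caller_id in consent_log

    calls = [CallRecord("alice", "..."), CallRecord("bob", "...")]
    record_consent("alice")  # only alice agreed to data collection

    training_set = [c for c in calls if usable_for_training(c)]
    print([c.caller_id for c in training_set])  # -> ['alice']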

Realistic Features and Preventing Abuse

When faced with the increasing physical and intellectual refinement of Sophia, with many human characteristics being added to the robot, we made an unspoken pledge: people will never be fooled into thinking Sophia is a human. That does not mean, however, that robots should be kept cartoonish-looking and limited in their social responsiveness. According to a 2005 survey on the acceptability of human-like robots, ‘viewers showed no sign of repulsion’ towards realistic robots. As David Hanson contends:

‘…anthropomorphic depictions can be either disturbing or appealing at every level of abstraction or realism. People simply get more sensitive with increasing levels of realism.’

Yet, as that level of realism grows, it is crucial that everyone is given the right to access the information that allows them to assess their own feelings towards a given artificial entity.

Sophia’s mechanical brain shows through a transparent window. Photograph by Giulio Di Sturco

For the moment, while disclosure is not integrated into her speech, Sophia’s engineering team has taken measures to visually disclose her robotic nature: instead of hair, she displays a transparent window revealing her mechanical brain; her voice was made to sound off-key; and her body was left incomplete.

On June 27, after a period of heated online debate, Google revealed a new demo of its AI-powered calling service. In the video, the AI assistant starts its conversation with a clear acknowledgment:

‘Hi! I’m the Google Assistant… this automated call will be recorded.’

Google also issued a statement explaining that it was looking into ‘incorporating feedback’ as it developed the product. Naturally, there may be disadvantages to always disclosing an AI’s identity, such as impoliteness from the human listener. Many people would not hang up on another person trying to do their job and would show some courtesy; when confronted with an emotionless entity, however, many might quickly hang up, defeating the utility of the assistant itself.

Ultimately, realistic renderings of humans, whether achieved through an integrated framework of AI, mechanical engineering, or graphic design, have the potential to unlock many mysteries about social intelligence, and they should be given appropriate exposure so that civil discourse can take place. Essential societal questions should not be left to the discretion of engineers and UI/UX designers alone. Hence, we at SingularityNET make a case for the right to disclosure and urge companies to commit to it whenever they create sophisticated machines that interact intimately with human beings.

How can you get involved?

SingularityNET has a passionate and talented community, which you can connect with by visiting our Community Forum. Feel free to say hello and introduce yourself. Exchange ideas, chat with others, and share your knowledge with us and other community members on our forum.

We are proud of our developers and researchers, who are actively publishing their research for the benefit of the community; you can read the research here.

For any additional information, please refer to our roadmaps and subscribe to our newsletter to stay informed about all of our developments.

