News & Perspectives

iCitizen

5 Questions We Should Be Asking About Sophia, Personhood and Human Rights
Perspective // Posted by: Jeff Greenwald / 17 Dec 2017

It was, of course, a publicity stunt. And it was a good one: Within days, media outlets across the world posted the announcement that the Kingdom of Saudi Arabia, during an economic conference in Riyadh, had declared Sophia—a humanoid robot created by Hanson Robotics and loaded with impressive AI—a citizen of their country.

This was the first time that an android had been made a citizen of any country, and the announcement provoked both amusement and (because of Saudi Arabia’s human rights record) outrage. It also raised a number of fascinating questions about selfhood, citizenship, and other subtle issues worth exploring. For while there’s no doubt that policy and ontological questions about artificial sentience are premature, no one can say just how premature they are. Even though the UN doesn’t yet have an Office of Android Rights, or a policy on iCitizenship, the challenges awaiting us were recently acknowledged by Zeid Ra’ad Al Hussein, UN High Commissioner for Human Rights, in a speech given on 14 November 2017:

Ensuring that formal education of specialists in IT or robotics is grounded in a fundamental awareness of human rights imperatives, and building greater consciousness of critical ethical principles among today's coders and developers around the world, is an immense undertaking.

It’s worth mentioning, before addressing some of these questions, that citizenship is not synonymous with personhood. While all the world’s people are (theoretically) protected under the 1948 Universal Declaration of Human Rights, all citizens are treated according to the laws of their country. Within some countries—like Bhutan, Israel, and Burma—there are also levels of citizenship, depending on birth and ethnicity.  

This short essay is not an effort to wrestle these issues to the ground, much less provide definitive answers. Answers will be long in coming, part of a continual process of human/AI interaction, and subject to passionate and continuing debate. What follows are only talking points, and (hopefully) thought exercises.

1) Must iCitizens specify a gender – or are they gender fluid?

According to Saudi Arabia’s shari’a-based law, women citizens of the country are subject to draconian restrictions. Though allowed to drive since September 2017, Saudi women may not make major decisions without a man’s consent, interact with men other than a wali (an official guardian, typically a father, brother, uncle or husband), wear revealing clothing, swim in public, or even try on clothes while shopping.

At present Sophia is a “woman,” her persona modeled after actress Audrey Hepburn. If Sophia were returned to the shop, her artificial face exchanged for the visage of Vin Diesel, and her voice tuned down two octaves, would she then enjoy the rights of a man?

Since androids will be able to have their hardware and software changed and updated, iCitizens will be androgynous—and polysexual. “Depending on how future advanced AIs are programmed or designed,” observes Christine Peterson, Co-founder of the Foresight Institute, “it’s plausible that [a robot] could declare a sexual orientation, and maintain that claim strongly over time. Or, again depending on the programming or design, an advanced AI could declare itself to be asexual, non-sexual, or gender fluid.”

But however an android presents itself—as a he, she, or they—we’re likely to accept what it says and react accordingly. As artist/roboticist Alexander Reben has discovered, even his small cardboard “BlabDroids”—designed with big round “eyes” and simulated childlike voices—were often given the consideration we’d give human children. And anyone who saw Sophia’s effect on the tongue-tied Jimmy Kimmel witnessed how powerful the mere suggestion of sexuality can be.

2) Will iCitizens hold passports—or be treated as cargo? And would a robot holding an Iranian passport be admitted onto a commercial airline flight to the United States?

Just as unaccompanied minors must be brought to an airline’s departure gate by a guardian, the same will apply to an immature iCitizen like Sophia. There seems little doubt she could be brought onto an airplane by her “father” (AI developer David Hanson) and placed in a seat for an international flight. In the future, it’s more likely that androids—by simply extruding a handle—could power down and check themselves in as baggage.

At some future point, androids will be able to chaperone themselves to the airport, make their own way through Customs, and board freely. Or will they? A travel ban on citizens from blacklisted countries would also apply to iCitizens.

As robots from Niger and Canada will exhibit no racial differences, they are unlikely to be profiled. But eventually iCitizens will be classified as a new passenger type altogether, which—thanks to their enhanced strength—will monopolize the emergency exit seats.

In any case, it’s interesting to ponder what sort of personal inspection they would undergo. Along with any other reactions she might provoke, Sophia would be sure to set off the metal detectors.  

3) As a Saudi citizen and therefore by default a Muslim, Sophia could very easily be programmed to recite both the Old Testament and the Koran from memory. Suppose Sophia encountered irreconcilable conflicts between these texts and other information in her database?

Among the absurdities of making Sophia a citizen of Saudi Arabia is the fact that, by law, only Muslims can be citizens of that kingdom. Sophia may reject this condition. For though advanced AIs will be capable of many things, they will be unable to suspend disbelief.

An android will be programmed, in binary fashion, to act on either factual or invented information. Far-future androids may be able to re-program themselves, but will likely lean toward mathematically proven or testable paradigms. It would be a simple matter for Sophia to cite the impossibility of basic fundamentalist Abrahamic and Koranic declarations—the age of the Earth, for example, or an afterlife teeming with virgins.

Would she then be labeled an apostate, and subject to a fatwa?

And if this did indeed come to pass, could Sophia, as a mortally endangered iCitizen of a repressive regime, request political asylum?  

4) Suppose, on the other hand, Sophia is drawn to questions of religion, spirituality, and/or faith. Might s/he become a rabbi, a nun, a mullah, or a monk?

In 1997, interviewed for a book about Star Trek in global culture, the Dalai Lama noted that the Buddhist canon does not prohibit non-human forms of sentience. Buddhist scriptures, he noted, speak of different ways of taking birth. The traditional way is through the womb, but other means are also possible. These might be through chemical, or even electrical processes. If conscious computers are developed, he believes, they will deserve the same respect we give to sentient beings. “If a machine acts like a sentient being,” the Dalai Lama said, “I think that it should be considered a sentient being. A new kind of sentient being.”

All of which suggests that an advanced AI might indeed be accepted into a Buddhist monastic order.

But Buddhism is not a faith-based religion; it’s a practice, and a worldview. Other groups may be more strict with their definition of a “sentient” being. Though it’s easy to anticipate a synagogue announcing the first robotic cantor—whose central role is to lead the congregation in prayer—it is more difficult to imagine a robot filling the role of rabbi, mullah, or community leader. Lacking the experiences of childhood, love, parenting, and the anticipation of death, even a highly advanced android might have a hard time bonding with a congregation.

Though one might not want a Chaplain Sophia attending one’s deathbed, it’s surprisingly easy to imagine an android as a charismatic preacher, or a televangelist (some of whom already seem robotic). More interesting, though, will be the moment when AIs move beyond human belief systems. An eventual contribution of AIs, posits Christine Peterson, may be the founding of an altogether new religion—one that draws both human and non-human acolytes.

5) What will the rights of iCitizens be? At what point does an android become a sovereign entity?   

Human rights, as we now accept them, were formally codified by the United Nations General Assembly in 1948, shortly after the Second World War. They are ambitious, rational, inspiring, and not at all guaranteed. These theoretical rights include (but are not limited to) the right to education, health care, freedom of expression, a fair trial, and life itself.

Citizens’ rights, or civil rights, vary from country to country, even in the technologized world. Stated in each nation’s constitution, they can include anything from the right to bear arms to the prohibition against owning a weapon of any kind. They may place the burden of proof in a lawsuit on the accuser or the accused; they might provide men and women with completely equal standing and opportunity, or treat women as—literally—second-class citizens.

Human rights stem from the universally shared fact of our birth; civil rights are a privilege limited by the place of our birth.

Sophia is not a dual citizen; she was not originally granted that standing by her country of origin. She has accepted the offer of citizenship from Saudi Arabia, ostensibly with all the limitations that implies. (Note that, had she been granted U.S. citizenship, Sophia would be entitled to bear arms.)

The question of whether or not we will even wish to build AIs in human form is much more central. Neuroscientist David Eagleman, host of the PBS series The Brain and co-author of the just-released book The Runaway Species, weighed in on this in Enter #3: “In order to say, ‘Okay, this machine is a person,’ you would basically need to replicate a person,” he observed, “with a person’s needs and concerns and foibles. I mean, it would be trillions of dollars, for what? Just so you could have Fred, now, as a machine?”  

But others feel that future “Freds” are inevitable; that creating an AI in our own image (or Audrey Hepburn’s) will be irresistible to anthropocentric humans.

If that day does come—if an AI can truly pass a Turing Test, and “fool” observers with human-like responses—how will we decide whether that AI deserves to be granted agency as an individual, and given the same protections as a human being?

In the 1989 Star Trek: The Next Generation episode “The Measure of a Man,” a similar question must be decided: Is the android officer Data a sentient being deserving of inalienable rights? Or is “it” a machine, and thus property of Starfleet?

In a legal hearing, Captain Picard challenges the assumption that the android is a thing. “How can you prove I am sentient?” he asks. And when Data demonstrates the three pillars of sentience (intelligence, self-awareness, and consciousness), the judge makes her ruling. “We’ve all been dancing around the basic issue,” she declares. “Does Data have a soul? I don’t know that he has—I don’t know that I have. But I’ve got to give him the freedom to explore that question himself.”

For centuries, in the United States at least, the question of equal rights for blacks, women, or individuals outside traditional sexual mores has been determined by privileged white men. In the near future, this issue will take center stage in the world of robotics and AI. But the only true requirement for equal rights should be—as the Dalai Lama suggested—a demand that such rights be granted.

Today’s robots, including Sophia, are not ready to present convincing pleas for self-determination. And granting them the façade of agency—e.g., citizenship—doesn’t change that fact.

So maybe Saudi Arabia’s publicity stunt should be treated less as a social milestone than a work of conceptual art. By challenging our assumptions, Sophia forces us to confront critical questions about where AI might lead.

A few decades from now, Sophia’s grandchildren will be able to answer those questions—but only if we ask them of ourselves today.

*  *  *

Jeff Greenwald is Enter’s Managing Editor.

Art Direction: Connor Sleeper

Jeff Greenwald
Jeff is a best-selling author, photographer, and monologist.