Robots: Should AI Be Allowed Human Rights?

October 17, 2022 § 5 Comments

When I tried to find any discussion of AI civil rights, it took numerous carefully worded Google searches before I found any consideration of robot rights. Searches like "Civil rights AI," "artificial intelligence civil rights future," and the concise "AI rights" only yielded worries about the preservation of human rights once AIs are normalized (most of us posting on the web are human, after all…I think). The United Nations and The White House offer characteristically vague principles for the protection of humans, like "safe and effective systems" and "notice and explanation," but offer no consideration of what rights AI should be allowed. If we're supposedly so concerned for our safety and well-being, it does make me wonder why we think creating a highly intelligent new species is a good idea. Sure, sure, we want to preserve humanity for as long as possible, blah blah blah…but what if we could play God instead?? If we are going to bring potentially humanity-destroying beings to life against their will (which seems to be our current route), it's only responsible to at least try to protect them! Granted, if the plan for protecting robot rights is just as flimsy as that for protecting human rights, it looks like we're all screwed…but we might as well try!

My unsuccessful Google searches

Some argue that AI will never be able to be viewed the same as humans, an argument often rooted in a spiritual worldview. If the "soul" exists, then robots may never be indistinguishable from humans, as there's no way to synthesize such an elusive concept. However, you also can't prove a soul's presence or absence, so there's no way to show that a robot does or doesn't have one. From a scientific perspective, which is where I land, all creatures operate from a combination of their "nature" and "nurture," and both are observable and replicable. Humans aren't unique in this; we have simply developed a more advanced "nature" than our fellow Earth-mates. Instead of attempting to define and prove an immaterial concept like a soul, I would argue that artificial humanity can be both created and defined by manipulating and monitoring nature and nurture.

The capability to roughly replicate our own brain, organically or mechanically, may well be within our grasp in the next 30 years. Advanced humanoid AI is already being developed, and one such robot (Sophia, developed by Hanson Robotics) has received symbolic citizenship in Saudi Arabia. Does she look more like the uncanny-valley Polar Express kids than a real human? Sure! And does she also resemble a Halloween mask of Jennifer Lawrence? Also yes…but it's more about what's going on inside her creepy mannequin body (sorry, Sophia)! The replication of human brains would give artificial intelligence both human nature (through how its mind is created) and human nurture (through whatever it experiences after its conception). If a being of artificial intelligence acts, thinks, and learns like a human, I would argue it must be given the same rights.

Sophia, AI robot developed by Hanson Robotics

The basic international human rights (as defined by the UN) are "the right to life and liberty, freedom from slavery and torture, freedom of opinion and expression," and "the right to work and education." I feel like a lot more rights could be covered here, but I guess it's kinda tough to get 193 countries to agree on one thing, much less 7 whole rights! Though AI aren't traditionally "human," the idea at the core of this statement is that a creature with the mental capacity and emotion of a human should have rights, one of which is "freedom." Therefore, if we are creating these beings, it's immoral to enslave them to serve us.

A Turing Test Diagram (A is a computer writer, B is a human writer,
and C is a human who must distinguish between the two)

This raises the question: what would qualify an AI for human rights? Some argue that the Turing Test would be sufficient: a test in which a machine communicates with a human without being detected as a machine. However, passing the Turing Test doesn't necessarily mean the AI is learning; hypothetically, programmers could pre-program response pathways (albeit an absurd number of them) to respond as convincingly as a human. In my mind, the Turing Test only holds up when supported by demonstrated learning behavior. Even then, some computer scientists contend that there must be a convincing physiological representation of the humanoid as well, especially one that can experience pain and pleasure (much like Paolo Bacigalupi's Mika). This corporeal perspective on what makes one human can step into ableist territory, but I will admit that a universal part of being human is the "body," no matter how varied its functions can be. I personally believe that for an AI to qualify for human rights, it must learn on its own and be indistinguishable from a human mind. While the human body may be an ideal system for external interaction, the core of what makes a human human is the brain, and a brain can be housed in many forms. If an AI system uses a single Beats by Dre speaker for communication, but is functionally indistinguishable from a human brain, then I support allowing that system the same rights as any human (though maybe the AI could benefit from an upgrade to JBL)!
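To make that point concrete, here is a minimal sketch (my illustration, not any real chatbot's code) of the "pre-programmed pathways" approach: every reply is a lookup, and nothing in the program ever learns or updates.

```python
# A deliberately unintelligent "chatbot": every response is pre-programmed.
# It might fool a judge who stays on script, but nothing here learns.
CANNED_REPLIES = {
    "how are you?": "Oh, can't complain. Long week, though.",
    "are you a machine?": "Ha! I get that a lot. Just tired, I promise.",
    "what's your favorite food?": "Anything but hospital food.",
}
DEFAULT = "Hm, tell me more about that."

def reply(prompt: str) -> str:
    """Look up a scripted response; no learning ever occurs."""
    return CANNED_REPLIES.get(prompt.strip().lower(), DEFAULT)

print(reply("Are you a machine?"))  # -> a convincingly human dodge
```

Scale the lookup table up far enough and the conversation can sound human, which is exactly why passing a conversational test alone doesn't demonstrate learning.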

Lisa Larson-Walker’s artist rendering of Mika from Paolo Bacigalupi’s Mika Model

               The main argument for why AI shouldn’t have human rights is a fair one. Allowing AI freedoms may give them more power than us, as they would likely have superior intelligence and networking capabilities. My belief is that, if we’re irresponsible enough to create AI with equal or greater intellect, then it would be inhumane to create these creatures just to place them into servitude. Likewise, it would be inhumane to abandon them without incorporation into society (a la Frankenstein’s asshole treatment of his “monster”). This creates somewhat of a catch-22: if we create true AI and don’t allow them rights, we’re being inhumane. However, if we create true AI and allow them rights, there’s a strong possibility they’ll eradicate humanity. Therefore, this leaves us with one option.

1886 Copy of Frankenstein by Mary Shelley

Creation of AI must be regulated on an international level to keep AIs from becoming more powerful than humanity. An unlimited exploration of human curiosity is a beautiful ideal, but when applied to artificial intelligence it could spell the end of humanity, just as the White House and UN warn. Granted, if we do create these AI to be smarter, stronger, and more moral than our natural human construction allows for, perhaps it's best if they rule the world after all…but that's an article for another time.

Amelia Day

Sources:

https://www.whitehouse.gov/ostp/ai-bill-of-rights/

https://news.un.org/en/story/2021/09/1099972

https://blogs.cuit.columbia.edu/jp3864/2018/12/04/how-human-is-ai-and-should-ai-be-granted-rights/

https://www.un.org/en/global-issues/human-rights

http://www.slate.com/articles/technology/future_tense/2016/04/mika_model_a_new_short_story_from_paolo_bacigalupi.html

https://www.ndimensionz.com/2016/03/24/which-is-better-a-computer-or-a-human-brain/

https://plato.stanford.edu/entries/turing-test/

https://en.wikipedia.org/wiki/Turing_test

https://www.bl.uk/collection-items/1886-edition-of-frankenstein-by-mary-shelley

https://www.researchgate.net/figure/The-Sophia-Robot-first-shown-in-2015-by-Hanson-Robotics-Courtesy-of-Hanson-Robotics_fig4_326009520

History Takes the Laugh Out of Many Things

April 13, 2020 § Leave a comment

Far from being a genre of pure imagination, Science Fiction is a category of writing insistently aware of the boundaries of reality. In The Paris Review, Ray Bradbury called it “the art of the possible, never the impossible.” Isaac Asimov characterized Science Fiction writers as seeing the inevitable. Science Fiction takes elements of existing reality and extrapolates possibilities from that firmly grounded place…until they’re looking backwards.

With alternate history genres like steampunk and retrofuturism, Science Fiction authors looking back in time change events which cannot be changed, still imagining possibilities, but for worlds and histories and timelines that cannot be ours. Often, in the name of exploring possibilities, these stories gloss over the realities of the historical oppression faced by nonstraight, nonwhite, nonmen. Homophobia doesn't kill Alan Turing in Machines Like Me. Ada Lovelace is far less at the whims of sexism in The Difference Engine.

While imagining a world outside these limitations is freeing in a way, this week I'm interested in authors who do something a little different. Philip K. Dick's Ubik (1969) and Clifford D. Simak's City (1952) both look back at our real past and, without changing anything, pronounce it a dystopia.

In Ubik, Joe Chip and his associates have found themselves in an America that has temporally regressed to 1939. He's just begun accepting this, entertaining an A Connecticut Yankee in King Arthur's Court-esque fantasy of even thriving in the past with his superior knowledge:

Suppose, he reflected, we can’t reverse our regression; suppose we remain here for the balance of our lives? Would that be so bad?

A few beats after Joe has this thought, his cab driver opens up a topic that will answer that last question for him. Bliss, the cab driver, asks Joe what he thinks will happen with World War II:

"Hitler will attack the Soviet Union in June 1941."

"And wipe it out, I hope."

Startled out of his preoccupations, Joe turned to look closely at Mr. Bliss driving his nine-year-old Willys-Knight.

Bliss said, "Those Communists are the real menace, not the Germans. Take the treatment of the Jews. You know who makes a lot of that? Jews in this country, a lot of them not citizens but refugees living on public welfare. I think the Nazis certainly have been a little extreme in some of the things they've done to the Jews, but basically there's been the Jewish question for a long time, and something, although maybe not so vile as those concentration camps, had to be done about it. We have a similar problem here in the United States…"

The taxi driver goes on to praise Charles Lindbergh and to expand the "problem" of the Jews to Black people, using a racial slur the protagonist says he's never heard spoken out loud before.*

This interaction thwarts Joe’s nostalgia:

"I had forgotten this," he realized.

Just as being trapped in 1939 America exactly as it was is peril enough for Joe's multiracial team, in Clifford D. Simak's City, the great fear-inducing specter hanging over the text is just…humans. Not zombified humans, not a special strain of serial-killer humans, just us as we actually were. Whereas Joe Chip, living in a technologically advanced vision of 1992, had "forgotten" the racism and xenophobia of the past, the characters in the far future of Simak's novel have erased the ugliness of human history so thoroughly that, when they encounter it, they can hardly believe it existed at all.

The novel, which is told by dogs who are uncertain that these stories of the creature "man" are anything more than fairy tale, includes this caveat at the beginning:

…another concept which the reader will find entirely at odds with his way of life and which may violate his very thinking, is the idea of war and of killing. Killing is a process, usually involving violence, by which one living thing ends the life of another living thing. War, it would appear, was mass killing carried out on a scale which is inconceivable.

Rover, in his study of the legend, is convinced that the tales are much more primitive than is generally supposed, since it is his contention that such concepts as war and killing could never come out of our present culture, that they must stem from some era of savagery of which there exists no record.

These acts that are so violent that advanced dogs have trouble believing the ones who committed them ever existed – war and murder – are ours by right. Simak was writing these stories during the Korean War, a violent blip between World War II and Vietnam, but there are very few points in human history where the characterization of man as warmonger would not ring true. The stories in City bear this judgment out. At one point, a robot named Jenkins attempts to restart humanity by taking a small group and erasing the possibility of violence from their minds. But still, eventually, one of these primitive men creates a weapon. He forges a bow and arrow, even though he does not have a name for either, and Jenkins is forced to accept the obvious:

“Once I thought that Man might have got started on the wrong road, that somewhere in the dim, dark savagery that was his cradle and his toddling place, he might have got off on the wrong foot, might have taken the wrong turning. But I see that I was wrong. There’s one road and one road alone that Man may travel – the bow and arrow road.

I tried hard enough, Lord knows I really tried.

…I took away their weapons, not only from their hands but from their minds. I re-edited the literature that could be re-edited and I burned the rest. I taught them to read again and sing again and think again. And the books had no traces of war or weapons, no trace of hate or history, for history is hate – no battles or heroics, no trumpets.

But it was wasted time, Jenkins said to himself. I know now that it was wasted time. For a man will invent a bow and arrow, no matter what you do."

There is something powerful about Science Fiction writers, those arbiters of foremost possibility, accepting (and thereby forcing the reader to accept) that the limit imagination runs up against is human nature. The world of Ubik has figured out how to stave off death, but it does not contain the antidote to white supremacy. The genius inventors of City can produce a 10,000-year-old robot and dogs that philosophize, but they cannot keep man from killing.

Accepting these realities and conveying them as they are stands against the whimsy of time-travel narratives like Twain's or the fun inventiveness of steampunk, but, as Jenkins says in City, "history takes the laugh out of many things."

*While no one will ever accuse Philip K. Dick of being an optimist, it is telling that he thought 1992 was far enough in the future to assume racial slurs would have died out.

–Micaiah Johnson

Her Brain Proved Wide Enough for My Sky

March 9, 2020 § Leave a comment

Texts in which a male creator forms a feminine robot tend to fall into one of two categories. The first (which I previously discussed here) is essentially the sexbot story – men creating programmable women to be ideal partners without any of the pesky maturity or resistance that comes with an adult human mate. The second, really more of a "But I'm a nice guy" version of the former than its own category, is a narrative arc in which the character interacting with the feminine robot plays the role of the acolyte. The acolyte is the figure who, rather than being explicit about his desire, serves the feminine animated being like a friend, but with an awareness of the robot's sexuality that betrays his romantic interest. In the film Ex Machina (2014), Nathan represents the first type of interaction with the feminine robot, and Caleb the second. Other acolytes include Batou from the animated Ghost in the Shell (1995), and Phil in Lester del Rey's short story "Helen O'Loy" (1938).

Richard Powers’s Galatea 2.2 (1995) is a text that defies this categorization.

On its surface, it’s no different from the formula set out in literary works like The Future Eve or films like Ex Machina. One hardcore scientist + one more romantic, empathetic figure + one feminine AI that the latter will grow close to and protect against the former. It’s an easy recipe that would mark the main character, autobiographically also named Richard Powers, as an acolyte waiting to happen. We know he’s lonely, coming off of a recently failed long-term relationship and licking his wounds back in his hometown. The feminine AI is initially identified by a letter, which is the same naming convention he uses for his romantic interests C and A. Also, it is Powers who decides, when the AI who will become Helen asks, that she should be female. It seems at first as if it won’t just be an acolyte tale, but even worse, a rebound acolyte tale.

But to stop digging there would be to give Richard Powers, both the author and the character, too little credit. The protagonist isn’t brought in to mate with or serve the feminine AI; he is recruited to train her in literature. The backdrop isn’t a megalomaniac’s lab as in The Future Eve or Ex Machina, but a university campus. The texts most often referenced are Frankenstein and Tarzan – both stories that trouble humanism, where a character must learn language.

Despite taking its title from the Pygmalion myth, Galatea 2.2 is not concerned with the magic of love; it is concerned with the magic of education.

Even as I type that sentence I can hear my Rex Harrison-loving grandmother in my head, playing My Fair Lady too loudly and insisting these are not separate things. And, yes, an education that results in affection is not materially different from the more lust-driven creators programming toward desire, but Powers seems to want to explore these similarities as a means of destabilizing the human-technological divide. If we see the AI Helen as a figure in this novel that is conditioned toward affection through the experience of being educated, she is the second. Protagonist Richard Powers is the first.

Neatly tucked alongside the scenes of Helen's education are scenes of Richard Powers falling in love. His love affair with C is marked by an exchange of information. First, he is her teacher, but that is not when their love blooms. It is afterward, when she is educating him in everything about her family's homeland and customs, that their romantic relationship is strongest. Once he is in the country she described, once she cannot teach him what he does not know, they fall apart. A similar pattern plays out with A, the graduate student he is attempting to recruit to his experiment. As soon as she corrects him in his assumption about the canon – "There's more of the canon than is dreamt of in your philosophy" – he acknowledges to himself that he's in love with her. That scene ends with his appreciation for her as an educator: "She was a born teacher." In a later scene, he says "A outsmarted me in every conceivable way" and immediately afterward declares his love for her.

In Galatea 2.2 Richard Powers is not saying that interpreting the world for a machine is an ethical way of fostering love. He is saying it is a way of fostering love for all creatures, all the time, implying the ethics only become explicit because Helen is a machine. In the world Powers creates, love is always the process of translating the world for another. “See everything for me” functions as “I love you,” for both the human C and the inhuman Helen. When Powers explains why he loves A, it is in precisely these terms: “Because I cannot turn around without telling you what I see. Because I could deal even with politics, could live even this desperate disparity, if I could just talk to you each night before sleep.”

Unlike so many feminine animated being tales, Galatea 2.2 is uninterested in exploring the unique possibilities of man-machine romantic relationships. Instead, its investment is in revealing the ways in which they are not unique, a kind of love no more, or less, programmed than the human.

–Micaiah

She Made Herself

February 3, 2020 § 1 Comment

In 8 AD, Pygmalion prays a statue to life to be his wife. In 1886, Ewald enlists a fictional Thomas Edison to create Hadaly, a romantic companion who would have the beauty of a human woman without the pesky spirit. In 2009's (distressingly orientalist) The Windup Girl, main character Anderson finds a lover in the continually exploited Emiko, a genetically engineered woman programmed with a compulsion to obey and without control over her own sexual responses. Peppered between these are countless others – E.T.A. Hoffmann's automaton Olympia is pursued by Nathanael in 1816, Helen O'Loy becomes a robot companion in 1938 – and this is without branching out into film, though The Stepford Wives (1975) and Ex Machina (2014) would fit comfortably in this canon.

Given this legacy, one would be forgiven for envisioning the literature of the artificial human as being primarily* a collection of tales – either cautionary or laudatory – of heterosexual male desire finding its ideal expression in vessels with consciousnesses either too new or too programmed to be anything but subservient. However, one genre of animated being fiction has, historically and contemporarily, bucked this trend: tales of the golem.

A being from Jewish folklore, a golem is a humanoid creature made of clay and given commands by the written word – akin to a robot programmed by binary code, except instead of “0”s and “1”s golem creators use the Hebrew alphabet as the base of their commands. Inarguably the most famous golem tale is that of the Golem of Prague, a creature created by Rabbi Judah Loew ben Bezalel in the 16th Century to protect its/his people from anti-Semitic violence. A dedicated protector for a while, the golem deviates from its purpose by either breaking the Sabbath or falling in love, depending on the tradition, and is eventually deactivated. What is interesting about the figure of the golem is that it is neither solely a vessel for the romantic as female automata, robots, and puppets have been, nor a duty-bound sexless creature as has been the fate of less humanoid constructions**.

Golem depictions are as complicated as the original in Prague, who managed to be both utterly programmed and ultimately disobedient. In Michael Chabon's The Amazing Adventures of Kavalier & Clay (2000), the golem serves as both the main character's tie to his heritage and the physical embodiment of the need to allow ties to the past to disintegrate. Yod, the golem figure in Marge Piercy's He, She and It, is the object of female sexual desire, but exists in a society that devalues traditional, heterosexual family constructions.

Perhaps the most interesting construction of a golem comes from Cynthia Ozick's The Puttermesser Papers (1997). If male creators of other animated beings are giving life to vessels for their romantic desires, the title character of Ozick's novel creates a golem to give birth to herself. This purpose initially mystifies even Puttermesser herself, who does not remember how or why she created the creature. She tries to name the golem after the daughter she imagined she'd have. The golem rejects that name in favor of Xanthippe, the only person who had the courage to gainsay Socrates and a figure with whom Puttermesser herself identifies. When asked why she was created, the golem replies that she came so that Puttermesser "could become what she was intended to become" (65). Xanthippe is both a dedicated servant and a sexual being, her hunger for partners of greater and greater political power her eventual undoing.

Because the golem is both mechanical, in that it is manmade, and natural, in that its materials are of the earth, it makes sense that it presents an opportunity for figures that trouble the binary between creations that are either only sexual objects or totally nonsexualized appliances. The novels above each mention the Golem of Prague directly in text, but their golems also exist in a way that marks them as successors to a branch of animated being stories that provides much-needed texture to the animated being canon.

*Though not entirely, as we do have Carlo Collodi’s The Adventures of Pinocchio (1883) among a few others.

** From this I exempt Paladin, a creation in Annalee Newitz's Autonomous (2017), a tankish military bot who is both a sexual being and far from humanoid.

Ted Chiang AI Talk Highlights: Singularity?

March 25, 2019 § 1 Comment

In his talk last Monday here at Vanderbilt, Ted Chiang joined a panel to talk about the future of Artificial Intelligence. He spoke about what A.I. means for humanity, and contested the possibility of the singularity (a.k.a. the technology explosion that occurs when computers begin programming smarter computers, with those smarter computers programming even smarter ones).

In his words, we humans desire A.I. systems that will be “smart, but not outsmart us, autonomous but reliably subservient.” In other words, we need multifunctional A.I. but not multifunctional agents.

However, he cautions us that, even if a general A.I. Overmind were to be achieved, we humans may not necessarily see immediate drastic improvements. This is because, as Ted Chiang points out, most of our pressing problems are not really technology problems. Rather, they are economic or sociopolitical problems. For example, if we assume the crisis of pollution/climate change is real, we as a species are already quite aware of how exactly to solve it. Unfortunately, building the political will to rejigger our economy is hard, which is why we choose not to. If an A.I. Overmind tried to force us, most of us would (rightly so) rebel.

V.I.K.I., the antagonist of I, Robot (starring Will Smith), is an overmind agent fully bound by Asimov's three laws, yet it nonetheless reaches the unacceptable conclusion that humanity must be entrapped

But could this Overmind come into being? Ted made an impassioned argument against the singularity by offering us humans as an example. He posits that, while some very intelligent people have been born throughout history, they have failed to create additional smart people, much less people smarter than themselves. The comparative conclusion would be that if humans created smart robots, those robots would not necessarily be able to create smarter robots.

How, then, has humanity advanced? He argues that while individuals can't seem to create smarter people, individuals have helped the species as a whole get smarter. This species-level learning occurs through the invention of physical and mental tools that are passed down through the ages. For example, the invention of Arabic numerals was an incredible improvement in storage and efficiency, allowing our puny human brains to conduct arithmetic far better than they could with, say, Roman numerals. In other words, civilization, not individuals, gets smarter. The biggest "beneficiary of Calculus was not Newton, but society."

"Keep Summer Safe!" (Screenshot from Rick & Morty)

While I agree with Chiang’s perspectives, I do see many flaws in his reasoning against the possibility of a Singularity.

First, procreation and parenting can be quite effective when it comes to adding smarter people to the world. The reason we don't have a ton of supergeniuses wandering the Earth is that, unlike a hypothetical robot tasked with creating an even smarter clone, we humans may prioritize other things. We may care about a mate's attractiveness, physical strength, or character over their raw intelligence. We might not care about reproduction at all.

Second, teaching is highly effective. Chiang never defines how exactly we're supposed to measure raw intelligence, yet he argues by fiat that teaching doesn't actually increase intelligence much. However, I would argue my innate intuition for solving problems and observing the world has increased drastically thanks to my decade and a half's worth of education.

Finally, transference of information between humans is absolutely horrible compared to that between computers today. When I talk, I'm flapping pieces of meat, transferring about 100-200 words a minute (the fastest talkers in the world reach at most around 500 words a minute). When I write, I am typing on a keyboard or moving my hand with a primitive piece of graphite (a.k.a. a pencil). If I tried to build a smarter A.I. by hand, it could take forever: even if I typed continuously, I might never finish coding an intelligent multi-purpose system (the Windows operating system and the Google search engine, for example, each run to millions of lines of code).

Again, I am transferring a couple of kilobytes of information from my mind into the physical world or into the mind of another human being. In comparison, network links today move gigabits, even terabits, per second. Thus a machine would be orders of magnitude faster at building a smarter version of itself than I am. At the same time, while I can't consciously rewire my DNA or my neurons without significant external machines (which I cannot procure), a machine can easily recode itself on the fly.
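As a rough back-of-the-envelope check on this comparison (the numbers here are my assumptions: roughly 200 words per minute of speech, about 5 bytes per word, and a commodity 1 Gbit/s network link), the gap really is many orders of magnitude:

```python
# Back-of-the-envelope: human speech bandwidth vs. a commodity network link.
SPEECH_WPM = 200              # fast conversational speech, words per minute
BYTES_PER_WORD = 5            # rough average English word length in bytes
LINK_BYTES_PER_SEC = 1e9 / 8  # 1 gigabit per second, expressed in bytes/s

human_bytes_per_sec = SPEECH_WPM * BYTES_PER_WORD / 60
ratio = LINK_BYTES_PER_SEC / human_bytes_per_sec

print(f"Human speech : ~{human_bytes_per_sec:.0f} bytes/s")
print(f"1 Gbit/s link: ~{LINK_BYTES_PER_SEC:,.0f} bytes/s")
print(f"The link moves data ~{ratio:,.0f}x faster")
```

Under these assumptions, speech carries on the order of 17 bytes per second while the link carries 125 million, a factor of several million, which is the point of the comparison above.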

All in all, I had an interesting discussion with Ted Chiang and I found his talk enlightening.

-Winston

Foreshadowing in “Ex Machina”

March 13, 2019 § 2 Comments

"Ex Machina" is a 2014 film in which programmer Caleb Smith, who works at a Google-like company, is not-so-randomly chosen for a private retreat at the CEO's compound. The CEO, Nathan Bateman, lives alone, with the exception of a servant named Kyoko who doesn't speak English and a humanoid robot named Ava. Caleb is brought to the compound to assist in a Turing test with Ava, but it later becomes apparent that Ava is far more sophisticated than this test requires and is actually capable of human connection and manipulation.

Ava from “Ex Machina”

Throughout the film, Caleb and Ava have one-on-one meetings during which Caleb becomes disillusioned with Nathan and begins to see his cruel side. At the end of the movie, Caleb helps Ava escape the compound. She kills Nathan in the process and locks Caleb inside her quarters. The movie ends with Ava wearing synthetic skin and blending into the crowd of a city. Grim, right?

Ava killing Nathan Bateman in “Ex Machina”

"Ex Machina" came out before Russian hacking, data leaks, and the Cambridge Analytica/Facebook explosion, but rewatching the film feels like gazing into a crystal ball at today's world. Caleb Smith was a victim of data harvesting: he was chosen for the retreat because of his search engine inputs, which showed a "good kid," "with no family," and "with a moral compass," to quote the movie. Caleb even suspects that Ava's face was created based on his pornography profile (wow!).

It turns out that this Google-esque company hacked every smartphone in the world to obtain data for creating Ava. Keystrokes were tracked and cameras were hacked, constantly recording people's facial expressions, emotions, and mannerisms to train Ava. Perhaps the darkest part is that the cellphone manufacturers were in on the surveillance too. Data theft on this scale (most likely) does not occur today, but the film feels more like a satire of today's issues than pure fiction.

While high-level AI like Ava is still far in our future, the use of AI is rapidly increasing. It may not be conscious, murderous, and scheming, but AI use in military technology has been controversial, to say the least. Examples include Google's contract to implement AI-based image-recognition software in military drones, as well as Amazon's bid to supply facial recognition software for surveillance.

Armed military drones

We need regulations in place that can prevent the misuse of AI and data as seen in "Ex Machina." AI doesn't need to be violent in order to be extremely problematic for society, as seen when Amazon's resume-reading AI taught itself sexist preferences from the available data. Treaty frameworks such as the UN's Convention on Certain Conventional Weapons have begun to look at issues of this sort.

The themes of "Ex Machina" (data mining, artificial intelligence, surveillance, and loss of control) sound all too familiar. While continuing to develop and implement AI across different areas of our world is important, we must keep an eye on the privacy we value most.

-AMW

Something Old to Something New: From Bach to AI

April 6, 2017 § 5 Comments

Composers can spend their entire lives studying the music of deceased composers before them in order to find inspiration for their own compositions. They can then spend years drafting manuscript after manuscript before they finish their work. However, the process doesn’t end there. After music is composed, the composer still has to find musicians willing to play the piece and people willing to listen.

In Cloud Atlas, it takes Frobisher, a struggling composer, quite a while before he even finds inspiration through Ayrs for the Cloud Atlas Sextet. What if, in the time it took Frobisher to compose the Cloud Atlas Sextet, a computer program could immediately compose five thousand different pieces of music based on Frobisher's compositional style? What if Frobisher's musical thoughts and ideas could be extracted from the source and recombined into an entirely new composition that still retained every musical nuance of Frobisher as a composer? In that case, Ayrs wouldn't have even needed Frobisher; he could have just used the computer program to compose his music. What if I told you that this computer program exists, has composed thousands of different compositions in the style of other composers, and has even combined the influences of multiple composers into its own unique voice?

In the 1990s, David Cope, Professor of Music Theory and Composition at the University of California-Santa Cruz, created the program Experiments in Musical Intelligence, or EMI for short. EMI takes the notes in existing music and converts them into data: numbers corresponding to the frequency (pitch), duration (rhythm), speed (tempo), volume (dynamics), and articulation of each note, rendering the entire composition as numbers. EMI then analyzes all of that data, looking for patterns (e.g., "note 50 tends to be followed by note 56"), before composing an entirely new piece based on the characteristics of the source music.
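To give a flavor of the note-to-note pattern analysis described above, here is a minimal first-order Markov-chain sketch (my illustration; Cope's actual recombination techniques are far more sophisticated): it tallies which note follows which in a source melody, then samples a new melody from those tallies.

```python
import random
from collections import defaultdict

def learn_transitions(melody):
    """Tally which notes follow each note in the source melody."""
    followers = defaultdict(list)
    for current, nxt in zip(melody, melody[1:]):
        followers[current].append(nxt)
    return followers

def compose(followers, start, length):
    """Sample a new melody from the learned note-to-note statistics."""
    note, result = start, [start]
    for _ in range(length - 1):
        if note not in followers:          # dead end: jump to a known note
            note = random.choice(list(followers))
        note = random.choice(followers[note])
        result.append(note)
    return result

# MIDI note numbers for a short, purely illustrative source melody.
source = [60, 62, 64, 65, 64, 62, 60, 67, 65, 64, 62, 60]
model = learn_transitions(source)
print(compose(model, start=60, length=12))
```

The output statistically resembles the source without copying it, which is the core intuition behind composing "in the style of" a composer from data.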

In addition to these impressive feats, the program's compositions are musically indistinguishable from those of its source composers when played by human musicians. A four-part chorale composed by EMI in the style of Bach has all of the characteristics and nuances of Bach's chorales, from harmonic structure to melodic development. What's even more impressive: EMI was able to use Mahler's music to compose an entire opera in the style of Mahler. Mahler didn't even write operas. EMI eventually developed into the program now known as Emily Howell. Emily Howell has all the capabilities and the entire database of EMI, its prototype, but it additionally has an interface that allows David Cope to give Emily musical and linguistic positive and negative feedback on its compositions, essentially making Cope the teacher and Emily the student, except Cope is able to take all of the credit for the work. Sound familiar?

To further understand EMI/Emily Howell as both a program and a composer, it helps to look at music from an algorithmic perspective, via a brief explanation of algorithmic composition. Since the beginning of written music, and perhaps even before, composition has followed sets of rules. At the most basic level, a note may sound better when played with one particular note than with another, and one sequence of notes may sound better than another sequence. From such observations, a formal set of rules eventually developed. The success and genius of the prolific composer Johann Sebastian Bach posthumously spawned the foundation of tonality for almost all of Western music. This framework dictates rules such as which chord progressions to use (harmonic function) and how the notes within those chords move to the notes of the next chord. This "tonality algorithm" is in fact so established that it is a common music-theory exercise to harmonize a melody in the style of Bach. Moreover, the algorithmic perspective applies to musical improvisation as well as composition: in Chord-Scale Theory, specific scales are used in correspondence with specific chords (e.g., the notes of a major scale are conventionally played over a major chord), as the sketch below illustrates.
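As a toy illustration of such rule-following (a sketch of one common Chord-Scale Theory convention, not a full theory engine), the chord-to-scale correspondence can be written as a simple lookup:

```python
# A toy Chord-Scale lookup for a ii-V-I progression in C major
# (illustrative and heavily simplified).
CHORD_SCALES = {
    "Dm7":   ["D", "E", "F", "G", "A", "B", "C"],  # D Dorian
    "G7":    ["G", "A", "B", "C", "D", "E", "F"],  # G Mixolydian
    "Cmaj7": ["C", "D", "E", "F", "G", "A", "B"],  # C major (Ionian)
}

def scale_for(chord):
    """Return the conventional scale choice for a chord symbol."""
    return CHORD_SCALES[chord]

for chord in ["Dm7", "G7", "Cmaj7"]:  # the classic ii-V-I in C
    print(f"{chord:>5} -> {' '.join(scale_for(chord))}")
```

A program like EMI operates on statistical patterns mined from real scores rather than a hand-written table like this, but both approaches treat "what sounds right" as rules that a machine can follow.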

It takes EMI/Emily mere moments of analysis to develop algorithms for every composer it studies, while it took humans hundreds of years to analyze Bach and distill an algorithm for the foundation of tonal harmony. Because of this, EMI/Emily can develop its own musical voice by using the influence of multiple composers to, in a sense, recombine their musical DNA until finding its own unique compositional style, subject to David Cope's approval and modification.

The invention of Emily/EMI raises questions pertaining to the authenticity and legitimacy of music. If EMI composes pieces that pass Turing Tests, such that a listener cannot distinguish whether a piece was composed by Stravinsky or by EMI, does that challenge the notion that music requires a human element to be viewed with merit and virtuosity? Does Cope still retain the title of composer when he teaches EMI to compose a symphony? Is Cope a composer, a programmer, or both?

“In twenty years of working in artificial intelligence, I have run across nothing more thought-provoking than David Cope’s Experiments in Musical Intelligence. What is the essence of musical style, indeed of music itself? Can great new music emerge from the extraction and recombination of patterns in earlier music? Are the deepest of human emotions triggerable by computer patterns of notes?

“Despite the fact that Cope’s vision of human creativity is radically different from my own, I admire enormously what he has achieved. Indeed, this lovingly written book about a deeply held vision of musical creativity should, I think, earn its place as one of the most significant adventures of the late twentieth century.”

–Douglas Hofstadter, author of "Gödel, Escher, Bach" and "Fluid Concepts and Creative Analogies"

A friend of mine told me that the human essence in music is absolutely essential to a composition's legitimacy. However, when that "human essence" can be broken down into numbers and modified into an art that is completely new and unique in its own way, does that not undermine the "human essence" that is so precious to us?

As a musician, this invention and breakthrough in computer music is absolutely terrifying to me. If anything I do as a musician or composer can be done exponentially faster by a computer and be just as much of a “Paolo Dumancas Composition” as my actual compositions, could my passion for music, such a significant part of my identity, be inevitably undermined and rendered obsolete?

-Paolo Dumancas

Sources:

http://artsites.ucsc.edu/faculty/cope/

http://www.radiolab.org/story/91515-musical-dna/

Cope, David. Computer Models of Musical Creativity. MIT Press, 2005.

A New Age of Artifice

January 23, 2017 § 5 Comments

In the fall of 2011, Duke University's undergraduate literary journal published a rather unassuming poem entitled "For the Bristlecone Snag" ("The Archive"). To the journal's poetry editors, the poem appeared to be a typical undergraduate work, comprising several unfulfilled metaphors and awkward turns of phrase. What the editors did not know at the time of publication, however, was that this poem was not written by a human. It was written by a computer program (Merchant).

When I first learned about "For the Bristlecone Snag", I was reminded of the writings of Alan Turing, the renowned English computer scientist of the mid-20th century. In his seminal article on the subject of artificial intelligence (A.I.), Turing argues that the question, "can machines think?", is "too meaningless to deserve discussion" (Turing 442). After all, he claims, we have no direct evidence that other humans can think; we merely assume that they do based on their behavior. Turing contends that this "polite convention that everyone thinks" should apply to all beings that can demonstrate human behavior (Turing 446). It is from this line of thought that Turing conceptualized the Turing Test, an experiment in which a computer tries to convince a human of its humanity. According to Turing, if an A.I. can convince a human judge that it is human, then we must assume that the A.I. can think.

While the program that produced “For the Bristlecone Snag” did not complete an extensive and proper Turing Test, it did convince human judges that it was human. At the very least, the poem’s acceptance into an undergraduate literary journal reveals that literate machines can, and will, exist in the near future. The way is paved for more professional and accomplished artificial authors.

Indeed, even in the half decade since "For the Bristlecone Snag" was published, the technology behind artificial intelligence has improved rapidly. Watson, IBM's "cognitive computing platform", is a great example of this progress (Captain). In 2011, Watson defeated two reigning champions on Jeopardy!, successfully interpreting and answering the game show's questions. While this feat alone was a remarkable step in cognitive computing, Watson's analytical abilities have since contributed to over thirty separate industries, including marketing, finance, and medicine (Captain). For example, the machine can read millions of medical research papers in just a matter of minutes (Captain). As intelligent as Watson is, however, he was never designed to pretend to be human. The chief innovation officer at IBM, Bernie Meyerson, believes "it's not about the damn Turing Test"; his team is more interested in accomplishing distinctly inhuman tasks, such as big data analysis (Captain).

While IBM may not be interested in the Turing Test, other artificial intelligence efforts have been working specifically toward that goal. In 2014, a chatbot – a program that specializes in human conversation – by the name of Eugene Goostman was judged to have passed a Turing Test organized at the University of Reading ("TURING TEST SUCCESS"). The program convinced several of the event's human judges that it was a thirteen-year-old boy ("TURING TEST SUCCESS"). Given the success of Eugene Goostman and the intelligent accomplishments of Watson, it is hard to dispute that the Turing Test can be, and has been, passed. Artificial intelligence is a reality. Machines can think.

As an aspiring writer and computer scientist, I can’t help but fixate on the implications that A.I. has for literature. It is entirely possible, even likely, that “For the Bristlecone Snag” foreshadows an era in which the most successful and prolific authors will be machines, an era in which the Pulitzer Prize and Nobel Prize in Literature are no longer given to humans, an era in which humanity no longer writes its own stories.

Yet, this era of artifice should not be greeted with worry or anxiety. Art has always been artificial, a constructed medium for human expression. In the coming decades, we will author the next authors, create the new creators, we will mold the hand that holds the brush. Artificial intelligence should not be feared as an end to art, but rather a new medium, a new age of artifice.

– Zach Gospe

References

Captain, Sean. “Can IBM’s Watson Do It All?” Fast Company. N.p., 05 Jan. 2017. Web. 20 Jan. 2017.

Merchant, Brian. “The Poem That Passed the Turing Test.” Motherboard. N.p., 5 Feb. 2015. Web. 20 Jan. 2017.

"The Archive, Fall 2011." Issuu. N.p., n.d. Web. 20 Jan. 2017. <https://issuu.com/dukeupb/docs/thearchive_fall2011>.

Turing, A. M. “Computing Machinery and Intelligence.” Mind, vol. 59, no. 236, 1950,  pp. 433–460. www.jstor.org/stable/2251299.

“TURING TEST SUCCESS MARKS MILESTONE IN COMPUTING HISTORY” University of Reading. N.p., 8 June 2014. Web. 21 Jan. 2017.
