Communal Principles in Arts, Crafts, & Technology

October 15, 2017 § 1 Comment

In 2012 Google premiered Glass, a wearable eye-glass computer that would become a commercial failure. As the advertisement demo below indicates, it sought to seamlessly merge life and technology.

This ad speculates on how Glass might have shaped the everyday lives of urban dwellers—allowing them to more easily schedule their days through voice-activation software, know the weather forecast by simply looking out the window, navigate the city and its bookstores (despite obvious signage for where the genres are located—lazy bum!), keep track of how close their friends are, and achieve peak hipsterdom by impressing others with their minimal ukulele skills. Upon its arrival, Glass was swiftly met with criticism. From its ugly, cumbersome design to its creepy apps that allow the user to take a photograph by winking or identify strangers just by looking at them, Google Glass became immediately embroiled in debates over utility and privacy. Its $1,500 price tag also didn’t inspire many consumers to line up at the stores. In 2015 Google announced its discontinuation, then revived it in 2017 as a tool to aid factory workers. Regardless of this niche usage today, however, Google Glass’s failed legacy in the public domain expresses a persistent human need for distinctions between everyday life, technology, and labor. These spheres may occasionally overlap, but a person’s desire to break apart this chain of activities was never considered by Google’s tech team.

Such distinctions among the orders of social life were similarly interrogated in the late nineteenth century. In an era of growing distrust in machines and industrial capitalism, the Arts and Crafts movement arose to critique its contemporary moment. Rejecting machine-driven precision and mass-produced commercialism, William Morris and his company of craftsmen emphasized instead natural, rhythmic aesthetics based on the floral and organic shapes of the natural environment. These functional, hand-made objects of exquisite intricacy stood in sharp contrast to the stark styles of industrial urbanism and the grim depictions of realist art. Based on the assumption that workers in factory systems were becoming severed from the earth and alienated from their own human nature, Morris advocated for a unification of art, life, and labor in service to society. In essence, Morris believed that creating and being surrounded by beautiful wallpaper, bedding, chairs, etc. could uplift the spirit of the masses. Furthermore, by beautifying the everyday and quotidian, Morris thought that humans would be able to relearn their relationship with the natural world and nourish the foundations for a more just community. His vision for this future was likewise articulated in his 1890 utopian romance News from Nowhere; or, An Epoch of Rest.

In this text, the narrator William Guest inexplicably wakes up in a communist future where private property is abolished, gender and class equality have seemingly been achieved, and both life and work coexist as extensions of pleasure. Everyone is also really, really pretty. Their beauty is so intoxicating for the narrator that even the persistently “sunburnt face” (179) of his love interest Ellen does not give him pause (or raise concerns about a potentially untreated skin cancer). Morris’s assumption that a person’s daily immersion within attractive gardens and buildings could be edifying thus extends to a transformation of the human appearance itself. But more crucially, the function of beauty and nature in Morris’s novel is to create a more harmonious community.

These art and social theories are put into practice throughout the novel as the narrator freely wanders the landscapes of England alongside his “neighbors” and encounters the incredible freedoms now allowed. Yet although the “art of work-pleasure” (160)—that unification of art, life, and labor—foregrounds the daily activities of Nowhere’s inhabitants, technology remains in a distinct and separate sphere. Men and women may be able to work together now, but machines must be kept elsewhere, doing the tedious labors that humans no longer want to perform. This separation of labor allows Nowhere’s human community a closer connection with the natural environment, but what is peculiar about this development is that the people must relearn how to work in the fields and create crafts from these very industrial machines (199). As such, a return to a communally oriented human nature within the environment depends on the service, and then the subsequent displacement, of technology.

Can today’s technology in our post-industrial age educate us toward similar tasks? It seems difficult to think so, especially considering the ways in which capital, commercialism, and technology have all led to the environmental catastrophes we now face. Perhaps, however, there is still a utopic underpinning in the design of technology itself and its ability to cultivate a community, even if such a community may be based on one’s ties to a consumer product. This utopic promise has been most notably expressed by Apple, leading one critic to even call Steve Jobs the William Morris of our time for his insistence on uniting user-friendly function with pleasurable aesthetic design—not to mention that organic eponymous logo.

In contrast to Google Glass’s affluent hipster dude, Apple often advertises its products for their ability to bring people together from a variety of cultural backgrounds. Consider this year’s iPhone 7 commercials below.

On the one hand, these advertisements demonstrate the interest of capitalism in reaching as many diverse and global locales as possible. On the other hand, though, these commercials may be read for their thematic concerns, which bear similarities to the utopian romance News from Nowhere. What we see in these ads are expressions of joy in labor, beauty, travel, leisure, and love. These expressions are indeed necessitated by a capitalist undertaking to sell products, but the cultural imagination of these ads also suggests the ways in which technology, like nature, may germinate goodwill. Technology may have largely supplanted nature today, yet one thing is clear: the hope in a benevolent human community remains unyielding.

Sources:

Morris, William. News from Nowhere and Other Writings. Penguin Classics, 2004.

—Cameron Clark


From Coal to Code: Feeding the Machines

October 1, 2017 § 2 Comments

What’s the difference between mining coal and writing computer code? According to Michael Bloomberg, the obvious answer is: a lot. In response to concerns over the devastating loss of coal jobs in regions highly dependent on the industry, Bloomberg urged compassion for the displaced workers but also suggested that we need to be realistic about their options. “You’re not going to teach a coal miner to code,” said Bloomberg at a 2014 conference. “Mark Zuckerberg says you teach them to code and everything will be great. I don’t know how to break it to you…but no.” Bloomberg’s comments have been perceived as a sign that he is an out-of-touch elitist who thinks coal miners are intellectually stunted yokels. This is likely true of Bloomberg, but the assumption on which these comments are based is shared by plenty of people who are not members of the U.S. plutocracy—the assumption that coal mining and computer coding are fundamentally different jobs. Coal mining is blue collar, coding white collar; coal mining is industrial, coding postindustrial; coal mining is rough, manly work; coding is for skinny nerds.

Coal Miners

 

The Kentucky company Bitsource has set out to show that the gap between the industrial and the postindustrial is not unbridgeable. The web and app design company was started in 2015 by Rusty Justice and Lynn Parish, both of whom had spent 40 years in the coal industry. In what was admittedly an act of desperation in the face of increasing unemployment, the men started the company with the help of a coder friend, Justin Hall, and selected eleven former miners (out of over 900 applicants) to learn code. Of those eleven, ten still remain, and the company is doing well, despite existing biases (e.g., potential clients assuming that former miners are intellectually stunted yokels). While Bitsource is a small company whose impact is currently minimal, it has created a much-needed spark of interest in retraining programs for unemployed workers in coal country. Other companies plan to get on board with the development of what is rather ingeniously being called Silicon Holler. The biggest barrier to this development is the lack of broadband internet access in the region. Kentucky already has an active program called KYWired that is addressing the problem, but it will likely need subsidies from the federal government to create a broadband infrastructure expansive enough to make Silicon Holler a success. This would require the Trump administration to stop pretending it’s bringing back the coal jobs and start thinking about what the region actually needs.


What’s so different about coal mining and computer coding anyway? Both tasks are, after all, in service to our machine overlords—at least that’s the way the Victorian writer Samuel Butler suggests we might see it. In Butler’s 1872 satirical utopian novel, Erewhon, an adventuring young man in search of fortune comes across a lost civilization that had once reached a high level of technological advancement but had decided centuries earlier to destroy all advanced machinery. The decision had been prompted by the work of a professor, who suggested that machines already possessed a certain form of consciousness and would undoubtedly evolve to become a superior race. The professor argues that while it may not seem that machines are acting of their own will, they are already adept at getting men to serve them. He uses the example of the massive coal industry needed to keep the steam engines going: “Consider […] the colliers and pitmen and coal merchants and coal trains, and the men who drive them, and the ships that carry coals—what an army of servants do the machines thus employ! Are there not probably more men engaged in tending machinery than in tending men? Do not machines eat as it were by mannery?” The professor sees the coal workers as an army of domestic servants, running around frantically to get dinner on the table for their insatiable mechanical masters.

But this is not the only way that humanity serves the machines, according to the professor. In addition to keeping them fed, it also acts as their agent of evolution. Humans, in continually seeking technological improvement, are not only ensuring the machines’ survival, but also the latter’s advancement as a species. While it may seem like humans are doing this for their own benefit, this is merely an illusion: “It would seem that those thrive best who use machinery wherever its use is possible with profit; but this is the art of machines—they serve that they may rule. They bear no malice towards a man for destroying a whole race of them provided he creates a better instead; on the contrary, they reward him liberally for having hastened their development […] but the moment he fails to do his best for the advancement of machinery, by encouraging the good and destroying the bad, he is left behind in the race of competition; and this means he will be made uncomfortable in a variety of ways, and perhaps die.”

In short, our machine overlords have cleverly wedded their own advancement as a species to humanity’s economic survival, thus ensuring that they will never be neglected.

Humans may think that we have actively chosen to move away from the coal industry because we’re concerned about climate change, but it is likely that our mechanical masters have simply developed a more refined palate. They are now turning their noses up at the black lumps they once craved and are demanding the more delicate flavors of wind and sun. One thing is certain: they have developed a ravenous appetite for code. As machines continue to advance as a species, their human menials will undoubtedly be expected to serve up an ever-increasing supply.

Most coding is not about solving intense conundrums or making exciting breakthroughs. It’s mostly about keeping things running. It requires a specialized skill and a lot of patience. In this way, WIRED contributor Clive Thompson has argued, coding generally looks less like the creation of Facebook and more like “skilled work at a Chrysler plant.” The skills for these solidly middle-class jobs, Thompson suggests, might be taught through high school vocational programs and at community colleges.

I say: let’s do what we need to do to keep our machine overlords fed. I don’t want to see what happens if they get hungry.

–utopianfictionblog

Sources:

“Hillbillies who Code: the former miners out to put Kentucky on the tech map” by Cassady Rosenblum. The Guardian. 21 April 2017.

https://www.theguardian.com/us-news/2017/apr/21/tech-industry-coding-kentucky-hillbillies

“Can You Teach a Coal Miner to Code?” by Lauren Smiley. WIRED. 18 November 2015.

https://www.wired.com/2015/11/can-you-teach-a-coal-miner-to-code/

“The Next Big Blue-Collar Job is Coding” by Clive Thompson. WIRED. 8 February 2017.

https://www.wired.com/2017/02/programming-is-the-new-blue-collar-job/

Erewhon; or, Over the Range by Samuel Butler. Penguin, 1985.

 

Metamorphosis and Mirrors

September 17, 2017 § 1 Comment

[Please Note: This text contains minor spoilers for the 2017 television series “Twin Peaks: The Return.”]

The season finale of “Twin Peaks: The Return” earlier this month created a seismic ripple among David Lynch devotees across the Internet. The proliferation of detail-obsessed fan theories, wikis in at least six languages, and thoughtful analytic pieces speaks to the twisted depths of Lynch’s vision in his reboot of the cult 1990s television series. While the show’s terrain is undoubtedly multidimensional, its intricacies depend on a foundational, age-old motif: dual identity.

While doppelgängers have always been important to the “Twin Peaks” universe, Lynch takes the motif a step further in “The Return” with the introduction of Tulpas: manufactured alternate identities. A Tulpa takes on the exact appearance (with shifts in minute physical details) of a character, but is actually a construct unknowingly advancing evil while the “real” person is trapped somewhere—in another body or alternate dimension. This play on identity undergoes a number of interesting permutations with the show’s central character, FBI Special Agent Dale Cooper (Kyle MacLachlan). There is 1) the “real” Dale Cooper, known and adored from the original series, 2) his evil doppelgänger, “Bad Cooper,” 3) a manufactured Tulpa, Dougie Jones, 4) Dougie Jones’ evacuated body, which is reinhabited by the “real” Dale Cooper in a dormant, barely verbal state, and, eventually, 5) the reawakened Dale Cooper who, after entering an alternate dimension, becomes someone named Richard. In this shifting landscape, one can never know who is real and who is a Tulpa: neither the real characters nor the Tulpas themselves know, and oftentimes neither do the viewers.


Bad Cooper, Dale Cooper, and Dougie Jones (Showtime)

Indeed, any fan of Lynch will recognize doubles as a long-standing interest of the surrealist filmmaker; Mulholland Drive and Lost Highway wholly depend on structures of duality and split existence. But this preoccupation with multiple identities seems to have particular resonance in the world of contemporary television. Jill Soloway’s award-winning series “Transparent,” for instance, revolves around the story of Maura Pfefferman (Jeffrey Tambor), a transitioning transgender woman who, while exploring the complex (and increasingly unlikely) process of sex reassignment surgery, must make peace with her hybrid identity.

The FX original series “The Americans” offers a more politically oriented site for thinking about metamorphosis: two Soviet spies passing as Americans (as well as happily married) near the end of the Cold War. Additional examples are not hard to find: Walter White/Heisenberg (“Breaking Bad”), Don Draper/Dick Whitman (“Mad Men”), and Tony Soprano (“The Sopranos”), who struggles to reconcile his public role as brutal Mafioso with his inner sense of morality and humanity.

That the theme of metamorphosis is central to so many recent television shows is significant, I think. TV has emerged as a medium that not only offers easily digestible entertainment, but also produces serious art, in some conversations even rivaling film as today’s cinematic experience of choice. And perhaps it is no surprise that in an increasingly digital and fragmented culture, creatives have taken up questions of refracted identity, code-switching, and constructed worlds through a medium that is itself somewhat paradoxical: consisting of isolated parts while also sustaining a long-form narrative whole, a combination that has produced a telling term that yokes the consumptive and temporal—binge-watching.

Perhaps, though, it is not merely technology that is fueling inquiry into questions of identity and self-definition in today’s cultural mainstream. As we become an increasingly global society, boundaries separating countries and cultures are less and less defined (or else more fervently reinforced). It may be productive to consider an earlier era that blurred geographic borders through an increase in travel: the 19th Century. In 1859, Charles Darwin’s On the Origin of Species spawned a crisis of faith in the Western world, exploding people’s notions of what it means to be human. The rise of expeditions into new territories and confrontations with indigenous cultures exposed a seemingly infinite variety among human beings and the natural order.

Identity crisis may indeed be one impulse behind the contemporary revival of the 19th Century as a site for artistic inquiry. A particularly inspired example of such bridging of periods can be found in A. S. Byatt’s 1992 novella Morpho Eugenia, which presents Victorian-era social critique and romance through a modern, hybrid form. The story’s protagonist, William Adamson, admits to being “doomed to a kind of double consciousness” after returning to England from a voyage in the Amazon (28). Throughout the novella, William is jockeyed between a host of tensions and dualities: settling into domesticated married life vs. pursuing his life’s work in the jungle; writing a book on natural science vs. editing his father-in-law’s book arguing for the existence of a divine creator; lusting for a woman who is physically alluring vs. one who is intellectually stimulating.

In his book on the behavior of ants, William lays out “some more abstract, questioning chapters” according to a series of possible headings: “Instinct or Intelligence,” “Design or Hasard,” “The Individual and the Commonwealth,” “What is an Individual?” (126). These questions share DNA with cultural artifacts as recent as the television shows discussed above and as distant as Homer’s Odyssey (a source text that’s taken up in Morpho Eugenia and, appropriately, adapted to suit the novella’s own ends). After sketching out William’s chapters, the text shifts into the actual pages of his book, where he considers “the utility to men of other living things.” He writes, “one of the uses we make of them is to try to use them as magical mirrors to reflect back to us our own faces with a difference” (127).

In “Twin Peaks: The Return,” Lynch ends one of the final episodes with a brief, unsettling shot of Audrey Horne (Sherilyn Fenn), in an unspecified location, staring aghast at her own mirror-image.


Audrey Horne (Showtime)

Where is she? What year is it? Design or Hasard? What is an individual?

We look to science and religion and art. We look to others and, ultimately, to ourselves. But perhaps the longer and harder we look, the farther away we are from knowing, and the more we demand from the image reflected back to us.

Jennifer Gutman

Source:

Byatt, A. S. “Morpho Eugenia.” Angels & Insects. Vintage International, 1994.

Money Speaks Louder than Human Voices

March 28, 2017 § 3 Comments

“Everything has a price.” This phrase in Margaret Atwood’s Oryx and Crake is not new, but it takes on a new meaning in the context of her novel (139). In today’s world, corporations dominate in every sphere from the economy to religion and politics. While Atwood’s world in which corporations have absolute control is unsettling, her ideas are merely an extrapolation of the present into the future. However, as Atwood shows, commercialism and commodification come at a high price to society and the humans who are part of it.

Early on in the novel we learn that Jimmy (later Snowman) lived on a company compound called OrganInc. The corporation controls everything in Jimmy’s life, including his school and the rules he has to abide by, enforced through the CorpSeCorps. Later on, we learn that Jimmy and Crake attend institutions similar to universities. These “universities,” particularly Crake’s Watson-Crick Institute, aim to generate profits as well, encouraging the very bright students to innovate and develop new technology, carefully securing their facilities, and minimizing interaction with the outside world. In Jimmy’s world, corporations control everything, and their motives clearly dominate.

The corporation-developed compounds seem absurd; however, in reality, they already exist. Massive companies like Amazon and Google have “campuses” that contain everything one needs to live. They include restaurants, gyms, childcare facilities, and even sleeping pods – all designed to keep you inside and focused on doing everything possible for the company. Beyond company campuses, universities today mimic those in Atwood’s story. As Vandy students, we even say that we live in a “Vandy Bubble.” Our lives all exist within the confines of our campus as we strive to learn and make new developments in all fields. We are not far off from the fictitious world that Atwood describes.

Images are renderings of future campuses for Google, Amazon, and Apple (from left to right). 

Why does it matter that corporations and technological research centers have such a wide sphere of influence? In a world where profit governs, everything becomes a commodity. This can easily be seen in Oryx and Crake with the story of Oryx. Not only is Oryx commoditized by the pimps who earn money from her sexual acts and pornography, but she is also commoditized by every viewer who watches the child pornography, including Snowman. In her discussions of her experience, Oryx has clearly been influenced by the corporation mentality surrounding her, as she states:

“They had no more love…but they had money value: they represented a cash profit to others. They must have sensed that – sensed they were worth something.” (126)

Do we only value human beings for the monetary value they provide? I hope not. Atwood shows a disturbing reality if corporate power continues on its current trajectory. The power of corporations to influence politics and culture even today has implications for cloning and other advanced technology. It is unsettling to think of the development of human clones by companies driven by their own bottom line. Morality does not seem to have a place in this kind of world.

If we do consider these clones to be “human,” how do we prevent the corporate developers from treating the clones like commodities and not humans, especially when humans today are already commoditized? In the novel, Snowman compares the children in the pornography to “digital clones,” as they did not feel real to him (90). With this statement, Atwood warns of the commodification of both existing humans and potential human clones in the future. If corporations both govern and profit, we cannot prevent abuse and exploitation.

Atwood is not far off in her portrayal of the commodification of human clones. Human cloning has often been criticized for turning human organs into commodities because of their monetary value in treating cancer and other diseases. President Bush famously rejected all human cloning, stating, “Life is a creation, not a commodity.” He is not alone in being concerned with this idea, as scientists, philosophers, and policy-makers have discussed the implications of human cloning for decades. The President’s Council on Bioethics expressed the following:

“When the ‘products’ are human beings, the ‘market’ could become a profoundly dehumanizing force.” (The President’s Council on Bioethics, 2002)

When corporate greed becomes entangled with the morality of health remedies, the potential commodification of humans and human clones is endless. Although Atwood’s fictitious world seems so distant, the reality is that it is much closer to present day than one would first think. From humans to clones to our independence and our value, Atwood shows that everything has a price, and the costs to society are high.

Sources:

Atwood, Margaret. Oryx and Crake. New York: Anchor, 2004. Print.
Arter, Melanie. “Bush: ‘Life Is A Creation, Not A Commodity’.” CNS News. CFC, 07 July 2008. Web. 28 Mar. 2017. http://www.cnsnews.com/news/article/bush-life-creation-not-commodity.
The President’s Council on Bioethics. “Human Cloning and Human Dignity: An Ethical Inquiry.” Georgetown University, July 2002. Web. 28 Mar. 2017. https://bioethicsarchive.georgetown.edu/pcbe/reports/cloningreport/children.html.
Cambria, Nancy. “Our 21st-century Prophet, Margaret Atwood.” St. Louis Post-Dispatch. STLtoday, 26 Sept. 2015. Web. 28 Mar. 2017. http://www.stltoday.com/entertainment/books-and-literature/reviews/our-st-century-prophet-margaret-atwood/article_242b5f9b-3ac6-51e3-9024-e858d178f6e2.html.

Images source: http://www.geekwire.com/2013/4-tech-titans-building-campus/

Indifferent New Technology, Same Old World

March 22, 2017 § 3 Comments

Aldous Huxley’s Brave New World is, according to Wikipedia, a novel that “anticipates developments in reproductive technology, sleep-learning, psychological manipulation, and classical conditioning that combine profoundly to change society.” The book is often interpreted as a cautionary tale, decrying the dangers of an unfettered embrace of new technologies. Though the technologies in the novel are used to maintain a shockingly stratified social system and to perpetually distract the citizenry through elaborate entertainment, non-sentient technologies are essentially neutral to the way in which they are used. Technologies are tools, and tools do not dictate their own usage. While some tools intrinsically lend themselves to nefarious uses more than others, this is ultimately a question of correlation versus causation – whether the post-Ford society’s practices are due to their technologies or whether their advanced technologies and base instincts merely complement each other.

I’d argue for the latter.

The Caste System. Constantly proposed, constantly rejected. There was something called democracy.

The post-Ford society in the novel is strictly divided into a hierarchy of social castes, including Alphas, Betas, Gammas, Deltas, and Epsilons. Each of these castes, separated by birth, serves a rigid social role and receives varying levels of oxygen and exposure to alcohol during development, either enhancing or limiting their respective intelligences. While horrifying when enacted with such efficiency, caste systems are not new for humanity. Medieval Europe was dominated by the feudal system, with stark divisions between the land-owning vassals and the subservient serfs. Rome’s social structure included patricians, plebeians, freedmen, and slaves. The remnants of India’s caste system of brahmins (priests), kshatriyas (warriors), vaishyas (merchants), shudras (servants), and untouchables still drastically impact the nation’s politics and economics. The common requirement for entry into the upper castes in each of these civilizations? You had to be born into them.

Such stratification is neither an exclusively ancient problem nor a foreign one, and castes and democracy are not necessarily antithetical, as they are presented in the novel. Our own nation’s social and economic structure was, for much of our history, built on the idea that entire groups of people, including Chinese-Americans, Irish-Americans, and especially African-Americans, were meant to serve subservient, less-than roles.

You should see the way a negro ovary responds to pituitary!

Additionally, in Brave New World, social order is dependent on the maintaining of a large, unintelligent working class, devoid of opportunity for advancement from birth. Robbing fetuses of oxygen is a direct way to limit intelligence and social mobility, but does disparate access to nutrition, birth control, and education not achieve a similar result? Do our own environments not condition and shape us for the work we ultimately end up performing?

Again, these are not merely problems consigned to a tech-obsessed world. In our own world, resistance to technology may, to some extent, contribute to the maintenance of such stark divides between classes. As I have previously argued, a partial solution to Fordism may be the replacement of dehumanizing human jobs with machine workers, eliminating the need for humans to serve as cogs in the industrial machine. However, without educational opportunities, the elimination of such positions could leave workers without options. Breaking a cycle of poverty is an uphill climb, one which may take generations to truly surmount.

I’m glad I’m not an Epsilon.

As students at one of the highest-ranked universities in the nation, we stand on the lucky side of societal divides. According to a recent New York Times article, the median household income for Vanderbilt students was $204,500, placing a full half of our student body in at least the top 5.6% of household incomes nationally. This trend is by no means exclusive to Vanderbilt, and similar levels of wealth can be seen at most “top-tier” institutions, which is not exactly surprising. Wealthier families are more likely to be able to live in school districts with higher taxes, invest the time and money needed for their children to pursue extracurricular activities, and free their children from the limiting worries of whether they’ll have enough to eat or a place to sleep. Beyond these factors, many top schools cost in excess of $60,000 a year to attend, presenting a cost barrier for those ineligible for financial aid, a knowledge barrier for many who are unsure of how to take advantage of the financial aid system, or an access barrier for those whose previous opportunities do not align with the sought-after qualities for admission.

Nothing like oxygen-shortage for keeping an embryo below par.

Within the university setting, this social stratification manifests itself yet further. Huxley’s characters are divided into groups of various levels of power and social status, assigned letters of the Greek alphabet. As a student, it’s hard to not draw parallels to Vanderbilt. On campus, many activities considered to be prestigious have high entry costs, including club sports, service trips, Maymesters, and Greek life. In such organizations, students are filtered by what is, ultimately, a few impressions, and further self-filtered by their ability to pay.

Hasn’t it occurred to you that an Epsilon embryo must have an Epsilon environment as well as an Epsilon heredity?

According to a 2017 Vanderbilt Political Review article, “Greek students dominate all three upper-income categories…conversely, non-Greek students held a greater share of all four lower-income levels.”

Source: Vanderbilt Political Review

While efforts have been made to increase access to Vanderbilt social groups with high entrance costs, there is still a significant divide. Is Huxley’s vision of social stratification with different tiers of people assigned different Greek letters totally unfathomable? Data would indicate that even in an already massively stratified educational institution, we self-stratify in a similar manner. And proudly so. We even buy t-shirts to tell others about it.

However, the seemingly insurmountable systematic injustices in Brave New World raise some glaring questions. Why are people content? Why do they not revolt?

Every man, woman, and child compelled to consume so much a year.

In Huxley’s novel, characters are distracted and numbed by entertainment. They play complicated games requiring elaborate equipment, such as obstacle golf, escalator squash, and centrifugal bumblepuppy.

Two words: drone racing.


Source: DroneReview

Games, like all human systems, have become more complex as time goes on. Huxley anticipated that this trend would continue into the future. In our case, we’re competitively racing flying robots through video monitors. Huxley’s vision did not, however, account for the meteoric rise of a paradigm-shifting technology: digital media.

According to the 2015 “How much media?” report by the University of Southern California Marshall School of Business, Americans are estimated to consume a staggering average of 15.5 hours of media per person per day. But though we consume vastly more media than our predecessors, we’re not necessarily better informed. In his book The Big Sort: Why the Clustering of Like-Minded America is Tearing Us Apart, author Bill Bishop argues that access to such a wealth of information enhances our natural tendency towards confirmation bias on a historically unparalleled scale. If we so choose, we can view only sources that support the beliefs we already possess. We can isolate ourselves from opposing viewpoints, disengage from debate, distract ourselves from substance.

But how can we stay away? We have so many media access points: TV, advanced computers, gaming systems, phones able to access a nearly unlimited wealth of entertainment. We have no reason to not be constantly entertained everywhere we are, should we desire it. For example, Nintendo’s new Switch gaming console can be played through a TV, a computer, or through a built-in screen allowing individual or group play on the go.

Nowadays the Controllers won’t approve of any new game unless it can be shown that it requires at least as much apparatus as the most complicated of existing games.

Strict social structures, clear stratifications, distracted consumers — is Huxley’s truly a brave new world?

Maybe, if history is bunk.

 

SOURCES:

https://www.nytimes.com/interactive/projects/college-mobility/vanderbilt-university

http://www.vanderbiltpoliticalreview.com/vanderbilt-greek-lifes-money-problem/

http://money.cnn.com/calculator/pf/income-rank/

https://www.marshall.usc.edu/faculty/centers/ctm/research/how-much-media

 

Author: Austin Channell

 

Subverting Cognition: Surrealist Automatism and Brooks’ Intelligence Theory

March 13, 2017 § 3 Comments

In Flesh and Machines: How Robots Will Change Us, Rodney Brooks presents his unique take on the pathway to create meaningful artificial intelligence. To briefly summarize, he suggests that removing clunky algorithms aimed at simulating cognition, while simultaneously creating a direct link between sensation and action, supports more advanced general intelligence (functional intelligence). For me, Brooks’ theory of intelligence found in “the interaction between perception and action” (Brooks Ch.3 “Planetary Ambassadors”) called to mind the techniques of Surrealist painters like Salvador Dalí, Joan Miró, and André Masson. The Surrealists used automatism — painting without conscious thought — to subvert their own cognition and the rational mind, in order to tap into deeper and more raw thoughts and feelings. I argue that given the parallel intention of Brooks’ approach to AI and Surrealist Automatism, an exploration of the latter can help us gain an understanding of Brooks’ method.

Before diving in, let me quickly clarify what I mean by ‘meaningful artificial intelligence’ or ‘general intelligence’. In Flesh and Machines, Brooks distinguishes between the tasks and processes traditionally tackled by AI researchers (playing chess at a master level, solving calculus problems, etc.) and more practical expressions of intelligence (entering a room, navigating a new environment, avoiding obstacles), and points out that programming a robot to do the latter is a considerable challenge. Thus, I define meaningful or general artificial intelligence as the intelligence that human beings and animals employ in performing ‘basic’ operations, operations that are far more complicated on a cognitive level than they appear to us. Brooks’ strategy was geared towards cracking the code of programming this kind of intelligence, and it was to these same simple actions and motions that the Surrealists sought to reduce their brushstrokes and methods.

Caught in a time of political uncertainty both within the art world and the world at large, the Surrealists reflected on how the conscious mind and higher-level cognition were burdened and beset with ideology from what they saw as a flawed society. They wanted to divorce the art-making process from the constraints of a rational mind indoctrinated by an oppressive society. In order to escape, they adopted a working method called automatism, which allowed them to essentially paint without conscious thought, sourcing lines and forms directly from the subconscious.

Following a method pioneered by André Masson, the Surrealists painted using automatism by first completely clearing their minds. Often, they would even close their eyes or use drugs or natural supplements to achieve a more detached state. Then, they would allow their hand, holding a paintbrush, to flow randomly across the canvas, so that the resulting lines and forms were more a product of chance than of conscious manipulation of the paintbrush. In this way, their style was freed of rational control, and the Surrealists believed that the compositions they created using automatism came directly from their subconscious — the epicenter of interaction between perception and action. In other words, they tried to simplify their cognitive processes as far as physically possible, down to the point where they merely operated on the basis of the interaction between their perception — the way the paintbrush felt across the grain of the canvas — and action — creating a line via the paintbrush on the canvas.


Andre Masson, “Automatic Drawing”, 1924

As you can see, there are strong parallels between the working method of the Surrealists and Brooks’ approach to simulating general intelligence. In explaining the benefits of his theory, Brooks describes his “subsumption architecture” for machines, by which he created a hierarchy of processing levels simulating the process of evolution of traits.

“For Allen [his first physical robot] I targeted three layers. The first was… to make sure that the robot avoided contact with objects… The second layer would give the robot wanderlust, so that it would move about aimlessly. The third layer was to explore purposefully whenever it perceived anything interesting…” Rodney Brooks, Flesh and Machines

Using Brooks’ vocabulary and framework of layers, one can analyze the process of the Surrealists in a similar way. The primary (first, or bottom) layer of automatism was to paint. The fundamental task programmed into the Surrealist’s action was to paint by dipping the paintbrush into paint and applying it to the canvas. The second layer was to paint continuously without really stopping. The Surrealists were concerned that if they paused longer than an instant, the conscious mind would kick back in. In terms of programming, the continuous painting layer doesn’t have to process how to paint, because the first layer achieved that. Then, the third layer was to explore or follow through on a particular line if the sensation of that line or the texture in that region of the canvas captured the artist momentarily. Thus, the automatism employed by some of the Surrealist painters very closely mirrors the “thoughts” of Brooks’ coded AI. In fact, the drawing by Andre Masson above even looks like it could be an aerial view of the paths that Allen may have taken, moving around Brooks’ research lab.
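To make the analogy concrete, here is a minimal sketch in Python of what an arbitration between the three layers Brooks describes for Allen might look like. The sensor readings, thresholds, and motor commands are my own illustrative inventions, and the simple priority loop is a simplification of Brooks’ actual suppression and inhibition wiring between layers, not his code.

```python
import random

def avoid(percept):
    """Layer 1: steer away from anything the (hypothetical) sonar says is too close."""
    if percept["nearest_obstacle_m"] < 0.3:           # illustrative threshold, in metres
        return ("turn", random.choice([-90, 90]))
    return None                                        # nothing to avoid; stay silent

def explore(percept):
    """Layer 3: head toward anything the robot currently finds interesting."""
    if percept["interesting_heading"] is not None:
        return ("turn_toward", percept["interesting_heading"])
    return None

def wander(percept):
    """Layer 2: default wanderlust -- drift forward with occasional random turns."""
    if random.random() < 0.1:
        return ("turn", random.uniform(-45, 45))
    return ("forward", 0.1)

def act(percept):
    """Each behavior couples perception directly to action; there is no central planner
    or world model. The fixed priority (avoid, then explore, then wander) stands in for
    Brooks' suppression/inhibition links between layers."""
    for behavior in (avoid, explore, wander):
        command = behavior(percept)
        if command is not None:
            return command

# Example step: clear path ahead, something interesting off to the left.
print(act({"nearest_obstacle_m": 1.2, "interesting_heading": -30}))
```

The point of the sketch is only that each “layer” is a tiny perception-to-action reflex, much as each automatist gesture answers the feel of the canvas, with no deliberative cognition sitting in between.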

I find the parallels between the techniques of the Surrealists (developed and employed in the early 1900s) and Brooks’ theory of intelligence (developed and employed in the late 1900s) to confirm the validity and ingenuity of Brooks’ theory of machine intelligence. That is to say, if human beings sought to shed the weight and burden of clunky cognitive thought in order to achieve a greater level of functionality in expressing themselves on the canvas, it is incredibly impressive, and entirely valid, for Brooks to suggest the same for his machines; he argued for removing dedicated “cognition boxes” from his machines, thus eliminating time-consuming and complex cognition algorithms from his AI’s ‘thought’ processes.

While Brooks may not have drawn inspiration from the methods of the Surrealists, I find it beautiful that leaders in two remarkably distinct disciplines both arrived at a similar point of seeking a purified relationship between sensation and action to achieve greater expression and intelligence of movement. Though it has often been suggested that links between memories will be the key to creating thinking artificial intelligence, Brooks’ theories have led me for the first time to consider that in the future, progress in AI development will also come from the mutual inspiration between disciplines, especially the humanities in creating more “intelligent” and human-like robots and machines.

❈❈❈

Sources / For more information on Surrealism and Automatism:

https://www.khanacademy.org/humanities/art-1010/art-between-wars/surrealism1/a/surrealism-an-introduction

https://www.moma.org/learn/moma_learning/andre-masson-automatic-drawing

http://www.tate.org.uk/research/publications/tate-papers/18/becoming-machine-surrealist-automatism-and-some-contemporary-instances

Patrizio Murdocca

A New Age of Artifice

January 23, 2017 § 5 Comments

In the fall of 2011, Duke University’s undergraduate literary journal published a rather unassuming poem entitled “For the Bristlecone Snag” (“The Archive”). To the journal’s poetry editors, the poem appeared to be a typical undergraduate work, composed of several unfulfilled metaphors and awkward turns of phrase. What the editors did not know at the time of publication, however, was that this poem was not written by a human. Instead, it was written by a computer program (Merchant).

When I first learned about “For the Bristlecone Snag”, I was reminded of the writings of Alan Turing, a renowned English computer scientist of the mid-20th century. In his seminal article on the subject of artificial intelligence (A.I.), Turing articulates that the question, “can machines think?”, is “too meaningless to deserve discussion” (Turing 442). After all, he claims, we have no direct evidence that other humans can think, and we merely assume that they do based on their behavior. Turing argues that this “polite convention that everyone thinks” should apply to all beings that can demonstrate human behavior (Turing 446). It is from this line of thought that Turing conceptualized the Turing Test, an experiment in which a computer tries to convince a human of its humanity. According to Turing, if an A.I. can convince a human judge that it is human, then we must assume that the A.I. can think.
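To make the shape of Turing’s proposal concrete, here is a toy sketch in Python of the imitation game as he frames it: a judge exchanges questions with two hidden respondents and must guess which one is the machine. The canned-reply bot, console prompts, and function names are hypothetical stand-ins of my own, not any real system discussed in this post.

```python
import random

def machine_respondent(question: str) -> str:
    # A deliberately trivial canned reply -- purely illustrative.
    return "That's an interesting question. What do you think?"

def human_respondent(question: str) -> str:
    # The hidden human types an answer at the console.
    return input(f"(hidden human) {question}\n> ")

def imitation_game(num_questions: int = 3) -> bool:
    """Run one round; return True if the judge mistakes the machine for the human."""
    respondents = {"A": machine_respondent, "B": human_respondent}
    if random.random() < 0.5:                     # hide which label is which
        respondents = {"A": human_respondent, "B": machine_respondent}

    for _ in range(num_questions):
        question = input("Judge, ask your question: ")
        for label, answer in respondents.items():
            print(f"{label}: {answer(question)}")

    verdict = input("Judge, which respondent is the machine (A or B)? ").strip().upper()
    return respondents[verdict] is not machine_respondent   # a wrong guess means the machine passed

if __name__ == "__main__":
    print("Machine passed this round." if imitation_game() else "Judge was not fooled.")
```

Nothing about this toy bot is intelligent, of course; the sketch only shows that the test itself is a judgment about behavior exchanged over text, which is exactly Turing’s point.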

While the program that produced “For the Bristlecone Snag” did not complete an extensive and proper Turing Test, it did convince human judges that it was human. At the very least, the poem’s acceptance into an undergraduate literary journal reveals that literate machines can, and will, exist in the near future. The way is paved for more professional and accomplished artificial authors.

Indeed, even in the half decade since “For the Bristlecone Snag” was published, the technology behind artificial intelligence has improved rapidly. Watson, IBM’s “cognitive computing platform”, is a great example of this progress (Captain). In 2011, Watson defeated two reigning champions on Jeopardy!, successfully interpreting and answering the game show’s questions. While this feat alone was a remarkable step in cognitive computing, Watson’s analytical abilities have since contributed to over thirty separate industries, including marketing, finance, and medicine (Captain). For example, the machine can read and understand millions of medical research papers in just a matter of minutes (Captain). As intelligent as Watson is, however, he was never designed to pretend to be human. The chief innovation officer at IBM, Bernie Meyerson, believes “it’s not about the damn Turing Test”; his team is more interested in accomplishing distinctly inhuman tasks, such as big data analysis (Captain).

While IBM may not be interested in the Turing Test, other artificial intelligence companies have been working specifically towards the goal. In 2014, a program by the name of Eugene Goostman passed the Turing Test using machine learning strategies similar to those that drive Watson (“TURING TEST SUCCESS”). The chatbot, or program that specializes in human conversation, was able to convince several human judges that it was a thirteen-year-old boy (“TURING TEST SUCCESS”). Given the success of Eugene Goostman, and the intelligent accomplishments of Watson, it is indisputable that the Turing Test can be, and has been, passed. Artificial intelligence is a reality. Machines can think.

As an aspiring writer and computer scientist, I can’t help but fixate on the implications that A.I. has for literature. It is entirely possible, even likely, that “For the Bristlecone Snag” foreshadows an era in which the most successful and prolific authors will be machines, an era in which the Pulitzer Prize and Nobel Prize in Literature are no longer given to humans, an era in which humanity no longer writes its own stories.

Yet, this era of artifice should not be greeted with worry or anxiety. Art has always been artificial, a constructed medium for human expression. In the coming decades, we will author the next authors, create the new creators, we will mold the hand that holds the brush. Artificial intelligence should not be feared as an end to art, but rather a new medium, a new age of artifice.

– Zach Gospe

References

Captain, Sean. “Can IBM’s Watson Do It All?” Fast Company. N.p., 05 Jan. 2017. Web. 20 Jan. 2017.

Merchant, Brian. “The Poem That Passed the Turing Test.” Motherboard. N.p., 5 Feb. 2015. Web. 20 Jan. 2017.

“The Archive, Fall 2011.” Issuu. N.p., n.d. Web. 20 Jan. 2017. <https://issuu.com/dukeupb/docs/thearchive_fall2011>.

Turing, A. M. “Computing Machinery and Intelligence.” Mind, vol. 59, no. 236, 1950,  pp. 433–460. www.jstor.org/stable/2251299.

“TURING TEST SUCCESS MARKS MILESTONE IN COMPUTING HISTORY.” University of Reading. N.p., 8 June 2014. Web. 21 Jan. 2017.
