Pacific Rim and Cognitive Hybridity

November 12, 2017

 

Guillermo del Toro’s 2013 sci-fi flick Pacific Rim begins with a series of flashbacks that sketch out the fictional history of the Kaiju War. The Kaiju (the Japanese term for giant monsters like Godzilla) are a species of giant, deadly alien invaders who are somehow finding their way into the Pacific Ocean through an inter-dimensional portal, laying waste to cities from Sydney to Hong Kong to Los Angeles as they prowl the titular Pacific Rim. An elite group of human fighters – including protagonist Raleigh Becket and his brother Yancy – has been trained to battle the Kaiju using huge mechas called Jaegers. A mecha, in general, is a seriously cool, giant humanoid robot controlled by a human pilot (‘90s kids might recognize the Power Rangers’ Megazord as another mecha). Pacific Rim doesn’t disappoint with its own iteration of these powerful machines. Take a look at Yancy and Raleigh suiting up for battle with their mecha, called Gipsy Danger, in the clip below:

This clip also highlights the most unusual feature of the mechas in Pacific Rim. They aren’t just operated by their human pilots; the machines and humans actually fuse into a single fighting unit when they initiate the “neural handshake” and head into battle. Each Jaeger requires two pilots, and each pilot provides one hemisphere of brain function and motor control to the robot body. Raleigh explains the resulting condition like this: “The drift. Jaeger tech. Based on DARPA jet fighter neural systems. Two pilots mind-melding through memory with the body of a giant machine.” Once pilots like Yancy and Raleigh enter the drift-space, the connection between them and the Jaeger allows the two-part human pilot system to sync up seamlessly with the robotic frame.

The Jaeger program has its ups and downs over the course of the twelve-year Kaiju War. By the time Raleigh’s narration brings us back to his present, in 2024, for example, international leaders have decided to phase out the program in favor of building a really big wall around the Pacific Ocean. (This plan is as stupid as it sounds, and just as ineffective as you might expect.) Raleigh’s life without Gipsy Danger is sad and mundane, and the juxtaposition of his post-Jaeger existence with earlier scenes like the one in the clip above emphasizes how much better the program made him. He also alludes to a lackluster pre-Jaeger past, introducing himself and Yancy like this: “Years before, you wouldn’t have picked my brother Yancy and I for heroes – no chance. We were never star athletes, never at the head of the class, but we could hold our own in a fight.” Alone, neither of the Becket brothers is that impressive. Together, and with Gipsy Danger, they are heroes.

This emphasis on mental synthesis, or the idea that Jaeger pilots gain something from their co-cognitive experiences in the drift, suggests an interesting (and surprising) parallel with Robert Louis Stevenson’s classic 1886 novella The Strange Case of Dr. Jekyll and Mr. Hyde. (Possible spoilers here if you’ve somehow avoided contact with the culturally pervasive tale of the ill-fated Dr. Jekyll and his cruel doppelganger.) In his posthumous confession at the novella’s conclusion, Jekyll writes about “the profound duplicity of [his] life” and “the two natures that contended in the field of [his] consciousness” – one hedonistic, and the other moral – until he found the means to chemically separate them (42). The disturbingly violent and antisocial Mr. Hyde is awful on his own, but in Dr. Jekyll, the pleasure-seeking impulses Hyde represents simply contribute to the doctor’s complexity. Hyde is a part of Jekyll, and so Jekyll is aware of Hyde’s thoughts and memories; however, Jekyll’s memories and thoughts do not translate to Hyde, who represents only one component of the doctor’s composite self. Stevenson’s gothic tale builds to a horrific reveal in which Jekyll concludes far too late that he should have embraced his original duality.

Pacific Rim’s drift technology effectively offers the opposite of Jekyll’s separating drug: it augments a single human mind (already incredibly complex, as Stevenson’s novella demonstrates) by doubling it and fusing it to a mecha. The two narratives, Stevenson’s and del Toro’s, have nearly nothing in common in terms of genre, setting, medium, and particular plot points, but each is profoundly interested in the power of duality, complexity, and hybridity.

The trailer for Pacific Rim’s upcoming sequel, Pacific Rim: Uprising, debuted last month and seems to carry an even more robust passion for the Jaegers’ robot-human hybridity than the first film did. In the trailer, below, John Boyega as Jake Pentecost (son of Idris Elba’s Marshal Stacker Pentecost from the original Pacific Rim) encourages a new generation of Jaeger pilots to head into battle. “This is our time. This is our chance to make a difference,” he tells them fervently. In the final sequence of shots in the trailer, several flashy new Jaegers line up and brandish weapons in poses that feel reminiscent of recent superhero films like The Avengers. Yet Jake also calls the Jaegers “the monsters we created,” perhaps signaling a slightly more fragile faith in the Jaeger program than we saw in the first installment.

Raleigh Becket’s reference to the real-life DARPA program as he explains the drift in the first clip embedded above suggests that the kinds of human-machine fusions the Pacific Rim films celebrate may be startlingly close to reality. Are these augmenting possibilities really a way to enhance ourselves? Or are we heading down a path we should seriously rethink? I’ll be really interested to see whether Pacific Rim: Uprising tackles any of these questions in its handling of the “monsters” at the heart of the franchise.

— Katie Mullins

Works Cited:

Pacific Rim. Dir. Guillermo del Toro. Perf. Charlie Hunnam, Idris Elba, and Rinko Kikuchi. Warner Bros. Pictures, 2013.

Stevenson, Robert Louis. The Strange Case of Dr. Jekyll and Mr. Hyde. Ed. Tim Middleton. London: Wordsworth, 1993. Print.


Alias Grace and the Present Past

November 6, 2017

On Friday, November 3, Netflix premiered Alias Grace, a six-part miniseries adaptation of Margaret Atwood’s 1996 novel of the same title. The pairing of the acclaimed novelist and a major streaming service was bound to generate much interest, not least owing to Netflix’s rival Hulu’s hugely successful small-screen iteration of Atwood’s The Handmaid’s Tale earlier this year.

Some parallels between the two Atwood adaptations are clear, inevitable, and, one might say, even encouraged. Both Alias Grace and The Handmaid’s Tale concern the plights and oppression of women under patriarchal structures, ideas that strike a particularly sensitive nerve in the wake of recent revelations of an entrenched culture of sexual abuse. Yet The Handmaid’s Tale and Alias Grace, despite this shared premise, diverge in crucial ways: most importantly, whereas The Handmaid’s Tale depicts a dystopian future ruled by an authoritarian government that strips women of their rights and forces them into sexual slavery, Alias Grace derives its stories of wronged women from the actual historical record. One is speculative, the other historical, and this distinction in genre is rendered visually. The crimson of the Handmaids’ gowns, standing out amid The Handmaid’s Tale’s muted grays, lends that series an almost unreal, parable-like quality, while the subtler and more varied color palette of Alias Grace recalls familiar historical dramas grounded in specific times and places.

Alias Grace seems to adhere to the facts – the specific and even irrefutable details of “who,” “what,” “when,” “where,” and “how” that pertain to the true-crime case of the real-life figure of Grace Marks. Marks, an Irish immigrant to Canada, was convicted alongside a fellow servant, stable-hand James McDermott, of murdering her employer, Thomas Kinnear, and his pregnant housekeeper and mistress, Nancy Montgomery, in 1843. McDermott was hanged; Marks was sentenced to life imprisonment and later released. These bare facts, however, fall woefully short, as the show illustrates, when tasked with explaining the “why” of the case and accounting for the 1840s Canadian public’s horror-mingled fascination with this servant girl turned “celebrated murderess.” That fascination finds its most specific expression in the growing obsession of Grace’s interlocutor, Atwood’s invention Dr. Simon Jordan, whose once-comfortable conviction in the truth-discovering powers of “scientific” methods slowly crumbles. In the end, even the ascertained “facts” mentioned above begin to unravel as viewers start questioning the validity of any single view or narrative. The breakdown in communication and the elusiveness of knowability are underscored by multiple, never-quite-aligning narrative threads: the conversations between Grace and Dr. Jordan, Grace’s perspective on the past as shown in flashbacks, and the fleeting emotions that ghost across Grace’s face – expressions ranging from shy, naïve, and innocent to cold, hard, and resentful, at other times sly and coy, never to be pinned down with any surety or exactness.

In a curious and quite ingenious twist, I would say, as Alias Grace progresses, the “why” element emerges as the most constant and perhaps even the most knowable, if that is the right word. This is not to say that the show succeeds in spelling out in explicit terms the reasons for the deaths of Kinnear and Montgomery. Instead, it shifts focus from the “celebrated murderess” to the society that christens her with this name, diffusing viewers’ attention from the gory and sensationalist details of the murders to the yet grimmer realities of a world that would drive a young, poor, and helpless woman such as Grace Marks to her breaking point after a lifetime of tragedy, exclusion, and exploitation (the show is at pains to remind us at every turn that Grace has faced unrelenting oppression by virtue of her gender, race, class, nationality, religion, and even family circumstances). It is perhaps fitting that so many scenes take place in domestic spaces that unite the show’s concerns with class and gender, and more fitting still that Grace and Dr. Jordan hold their regular interviews in the sewing room, with Grace engaged in quilting – an extension of the housework and domestic drudgery to which she is constantly subjected. (It is, however, also worth mentioning here that Grace’s quilt-making carries subversive undertones, as a seemingly mundane household chore that can be imbued with creative vision and the joining together of disparate parts.) The sewing room, in turn, is part of the home of the Governor of the Kingston Penitentiary, where Grace, on account of her “exemplary” conduct, is allowed to work as a servant during the day while still serving her sentence. Throughout the show, Grace trudges from one domestic space to another, and such spaces, as viewers observe, are further stratified according to class and gender, among other factors.

Set in Canada in the 1840s, Alias Grace offers a glimpse into the pervasiveness of Victorian values outside England – especially as they pertain to women and minority groups in remote settings such as Canada – and into their specific effects on a character like Grace Marks. If the show is descriptive in this depiction, it is also prescriptive, in the sense that it serves as a harrowing reminder of the persistent, lingering presence of the past in the present and of the possible directions in which such presences could lead us in the not-so-distant future. In other words, works such as Alias Grace reinvent our memories of the past to weaken or collapse notions of their safe and secure distance from the present, or even from a dystopian future. To return to matters of genre, the historical and the speculative might perhaps not be so distinct after all.

Pauline Hopkins and the American Dilemma

October 29, 2017

In 2016, during the height of the presidential primaries, the Pew Research Center reported that black and white Americans are (unsurprisingly) split on how much racial progress has been achieved in America eight years after President Barack Obama took office. In response to the statement “Our country has made the changes needed to give blacks equal rights with whites,” 38% of white respondents agreed while only 8% of black respondents agreed. In response to the statement “Our country will not make the changes needed to give blacks equal rights with whites,” 43% of black respondents agreed, while only 11% of white respondents said the same. There was more agreement in the middle—in response to “Our country will eventually make the changes,” framing racial progress as an ongoing process, 42% of blacks and 40% of whites agreed.

Fast-forward more than a year, past the election and inauguration of President Donald Trump. At a September 2017 rally in Huntsville, Alabama, for (now-defeated) Republican Senator Luther Strange, Trump made the following comment:

Wouldn’t you love to see one of these NFL owners, when somebody disrespects our flag, to say, “Get that son of a bitch off the field right now. Out! He’s fired. He’s fired!”

Trump’s now-famous quote sparked very different sets of reactions. To the mostly white audience assembled in Huntsville, the word “our” signified a unique sense of white collectivity and belonging, harkening back to an implicit form of white, Middle Americanness that lay at the heart of Dwight Eisenhower’s and Ronald Reagan’s appeal. The implied bodies referenced in the phrases “somebody” and “son of a bitch” were also very apparent and did not need to be named directly. To many other people, especially African Americans, those same words recalled generations of othering, alienation, and mistreatment. Almost immediately, Trump’s comments ignited a firestorm of controversy in every corner of the public sphere.

The results of the Pew poll and the president’s remarks underscore an enduring open secret in American public life: that this country is really two nations, black and white, that coexist but continue to experience deep schisms. Of course, the relationship between the two races is exceedingly complicated, and America is more than just black and white. America today is part of a vast globalized public square, but you would never know it based on the incendiary rhetoric of our national politics. If it’s not undocumented immigrants or Muslim Americans who pose a threat to the nation, then it’s violent Black Lives Matter militants, welfare mothers, black teenage boys in hoodies, etc. Though the roots of these imagined threats lie primarily in racism, they are also anchored in what the late historian Richard Hofstadter called “The Paranoid Style in American Politics,” or the need for America to form its political subjectivity through the construction of a looming bogeyman. Long after the Communist witch-hunts of the McCarthy era and the exploitation of the black “superpredator” myth of the 1990s, scapegoating and othering remain enduring national dilemmas.

American pop culture and the corporate realm have tried to alleviate this dilemma with mixed results. My parents fondly remember the famous 1971 Coca-Cola commercial in which young people of all colors from all over the world bond through the universality of Coke. Another more recent ad by Cheerios depicted a racially mixed family, generating online comments so racist and vitriolic that the comments section on the ad’s YouTube page had to be shut down.

Black intellectuals have tried various ways of communicating ideas of racial kinship and community through their work. Through his sociological and historical research, W. E. B. Du Bois believed that racial progress could be achieved by appealing to white people’s rational sensibilities. In his view, if whites would only look at the hard, objective data about black life, they could be moved to embrace racial equality as a national ideal. Black writers also sought to use literature for these ends, often employing popular literary tropes such as racial discovery. In her 1892 novel Iola Leroy, Frances Ellen Watkins Harper tells the story of Iola, a fair-skinned girl on a Louisiana plantation who only discovers that she’s black when she’s separated from her family and sold into slavery. The experience, initially a shock to her system, sets in motion a journey of racial consciousness, and she ends the novel marrying a black physician and teaching in a black college to help uplift the race. Pauline Hopkins’ protagonist Reuel Briggs follows a similar trajectory in Of One Blood.

What makes these novels different from modern-day attempts at “uniting” the races is the way “unity” is framed. In Hopkins’ novel, racial unity is established through bloodlines. Descriptions of black and white beauty—such as that of Dianthe Lusk, with her “wavy bands of chestnut hair” that do not fit the “preconceived idea of a Negro” (14)—are often indistinguishable from one another. Blackness and whiteness come together and come apart seamlessly. In modern-day culture, however, the unity of the races is framed more as “community,” which overlaps with but differs from “kinship.” Stakeholders can be part of a community without being related. Terms such as “diversity” and “inclusion” still imply that we (and the “we” can change depending on who is uttering it) are “letting in” people who are different from us. In a sense, these terms imply bridge-building that is conscious of the fact that the “others” coming across the bridge are welcome but are not part of “us.”

As we continue to grapple with how to solve the “us vs. them” dilemma as a nation, Hopkins’ novel brings much value to the conversation, even more than a century after it was originally published.

Works Cited

Hopkins, Pauline Elizabeth. Of One Blood: Or, the Hidden Self. New York: Washington Square Press, 2004. Print.

–Magana Kabugi

 

 

Communal Principles in Arts, Crafts, & Technology

October 15, 2017

In 2012 Google premiered a commercial failure called Glass. The wearable eyeglass computer sought to merge life and technology seamlessly, as indicated in the advertisement demo below.

This ad speculates about how Glass could have shaped the everyday lives of urban dwellers—allowing them to schedule their days more easily through voice-activation software, know the weather forecast by simply looking out the window, navigate the city and its bookstores (despite obvious signage for where the genres are located—lazy bum!), keep track of how close their friends are, and achieve peak hipsterdom by impressing others with their minimal ukulele skills. Upon its arrival, Glass was swiftly met with criticism. From its ugly, cumbersome design to its creepy apps that allow the user to take a photograph by winking or identify strangers just by looking at them, Google Glass immediately became embroiled in debates over utility and privacy. Its $1,500 price tag also didn’t inspire many consumers to line up at the stores. In 2015 Google announced its discontinuation, then revived it in 2017 as a tool to aid factory workers. Regardless of this niche usage today, however, Google Glass’s failed legacy in the public sphere points to a persistent human need for distinctions between everyday life, technology, and labor. These may occasionally overlap, but Google’s tech team failed to consider a person’s desire to break apart this chain of activities.

Such distinctions between the orders of social life were similarly interrogated in the late nineteenth century. In an era of growing distrust of machines and industrial capitalism, the Arts and Crafts movement arose to critique its contemporary moment. Rejecting machine-driven precision and mass-produced commercialism, William Morris and his company of craftsmen emphasized instead rhythmic aesthetics based on the floral and organic shapes of the natural environment. These functional, hand-made objects of exquisite intricacy stood in sharp contrast to the stark styles of industrial urbanism and the grim depictions of realist art. Working from the assumption that workers in factory systems were becoming severed from the earth and alienated from their own human nature, Morris advocated for a unification of art, life, and labor in service to society. In essence, Morris believed that creating and being surrounded by beautiful wallpaper, bedding, chairs, etc. could uplift the spirit of the masses. Furthermore, by beautifying the everyday and quotidian, Morris thought that humans would be able to relearn their relationship with the natural world and nourish the foundations for a more just community. His vision for this future was likewise articulated in his 1890 utopian romance News from Nowhere; or, An Epoch of Rest.

In this text, the narrator William Guest inexplicably wakes up in a communist future where private property has been abolished, gender and class equality have seemingly been achieved, and both life and work coexist as extensions of pleasure. Everyone is also really, really pretty. Their beauty is so intoxicating for the narrator that even the persistently “sunburnt face” (179) of his love interest Ellen does not give him pause (or raise concerns about a potentially untreated skin cancer). Morris’s assumption that a person’s daily immersion in attractive gardens and buildings could be edifying thus extends to a transformation of human appearance itself. But more crucially, the function of beauty and nature in Morris’s novel is to create a more harmonious community.

These art and social theories are put into practice throughout the novel as the narrator freely wanders the landscapes of England alongside his “neighbors” and encounters the incredible freedoms now allowed. Yet although the “art of work-pleasure” (160)—that unification of art, life, and labor—foregrounds the daily activities of Nowhere’s inhabitants, technology remains in a distinct and separate sphere. Men and women may now be able to work together, but machines must be kept elsewhere, doing the tedious labors that humans no longer want to perform. This separation of labor allows Nowhere’s human community a closer connection with the natural environment, but what is peculiar about this development is that the people must relearn how to work in the fields and create crafts apart from these very industrial machines (199). As such, a return to a communally oriented human nature within the environment depends on the service, and then the subsequent displacement, of technology.

Can today’s technology in our post-industrial age educate us toward similar tasks? It seems difficult to think so, especially considering the ways in which capital, commercialism, and technology have all led to the environmental catastrophes we now face. Perhaps, however, there is still a utopic underpinning in the design of technology itself and its ability to cultivate a community, even if such a community may be based on one’s ties to a consumer product. This utopic promise has been most notably expressed by Apple, leading one critic to even call Steve Jobs the William Morris of our time for his insistence on uniting user-friendly function with pleasurable aesthetic design—not to mention that organic eponymous logo.

In contrast to Google Glass’s affluent hipster dude, Apple often advertises its products for their ability to bring people together from a variety of cultural backgrounds. Consider this year’s iPhone 7 commercials below.

On the one hand, these advertisements demonstrate the interest of capitalism in reaching as many diverse and global locales as possible. On the other hand, these commercials may be read for their thematic concerns, which hold similarities to the utopian romance News from Nowhere. What we see in these ads are expressions of joy in labor, beauty, travel, leisure, and love. These expressions are indeed necessitated by a capitalist undertaking to sell products, but the cultural imagination of these ads also suggests the ways in which technology, like nature, may germinate goodwill. Technology may have largely supplanted nature today, yet one thing is clear: the hope for a benevolent human community remains unyielding.

Sources:

Morris, William. News from Nowhere and Other Writings. Penguin Classics, 2004.

—Cameron Clark

Looking Forwards, Going Backwards

October 8, 2017

Once my Australian friend Nick began his new job in corporate, he started talking in jargon. Like, all the time. Midway through coffee a little while ago, his phone rang: ‘Okay, okay. Okay. I’m going to run the numbers in the hope that we can move the needle. Going forward, let’s take this offline.’ Nick was moving through a vague world of ‘actionables,’ ‘synergizing’ with his co-workers, and ‘jockeying for position’ with new clients for ‘outcome-specific goals.’ Asking for some sort of translation of this business-speak was doubly confusing: most of the time, Nick answered questions about business jargon with more jargon. It’s not the inexplicability of the jargon per se; it’s that the closer you look at what a worker like Nick does for eight hours a day, five days a week, the clearer it becomes that the jargon more often than not constitutes the work itself. Organizing work and talking about work is the work.

[Cartoon caption: ‘I’m the Company’s Registered Acronym Promoter but I’ve yet to be given a job title.’]

In 2013, the activist and anthropologist David Graeber defined predicaments like the above as offshoots of the ‘phenomenon of bullshit jobs.’ Writing in Strike Magazine, Graeber maintained that the twentieth-century decline in industrial and manufacturing jobs occurred simultaneously with a massive expansion of the administrative sector. Since the end of the Second World War, we have seen ‘the creation of whole new industries like financial services or telemarketing,’ along with ‘the unprecedented expansion of sectors like corporate law, academic and health administration, human resources, and public relations.’ For Graeber, these jobs doubled back on themselves in an administrative loop: they existed for no reason besides validating themselves as valuable work.

Graeber’s article touched a nerve, and it had as its implicit backdrop other cultural critiques of humdrum office life. Around the turn of the millennium, social satires like Office Space and The Office began to appear on the cultural scene. There is a peculiar cultural fascination with the sheer inexplicability of corporate life. Beyond the tired jargon and acronyms, corporate situations were funny because the daily lives of office workers didn’t make much sense. Or, more accurately, they made sense to the countless souls already working in offices, who saw an image of their lives reflected back at them; for the rest of us, office life was simply perplexing, and therefore ripe for satire. Watching The Office, the viewer is drawn in by how the characters fill out the time of the day: congregating around the water cooler for a few minutes, playing jokes on each other, circulating emails that seem to contain little of value. Against a color palette of monotonous, endless grey, the denizens of The Office could, in any logical universe, seemingly complete their work in far less time than they are legally contracted for. A friend of my wife’s who works for a large accounting firm admitted that she could easily complete her full-time job in one to two days a week. The trick, then, is to figure out ways to pad out the work until the clock runs down. As in The Office, the work of employees is narrowed to consultations, checking emails, and organizing meetings—essentially vague tasks made oddly legible in a world of increasing efficiency and key performance indicators. What is most significant is that the world of offices often only makes sense in reference to itself: emails are about setting up meetings, meetings refer back to past consultations, and consultations lead to more emails.


Is this what the great utopian thinkers of yesteryear dreamed of for the future? I’m not referring to the bleak dystopias of the twentieth century, but to the optimistic utopias of the Victorians. In Looking Backward 2000-1887 (1888), Edward Bellamy constructed a futuristic utopia in which all citizens enter an ‘industrial army’ at twenty-one, performing only useful labor for their working lives before retiring at the age of forty-five. After that point, people are free to pursue art, science, music, painting, or anything else that engages their interests. Against current preoccupations with limiting the working day or the working week, Bellamy limited the working life. In Bellamy’s Boston of the early twenty-first century, labor is centered on the production of material things, the economy is centrally planned, and ‘service-industry’ jobs are almost non-existent. What’s more, as everyone has the same income, social distinctions have evaporated. Obviously, the year 2000 looked a little different from what Bellamy imagined; if anything, the Y2K sensation made us wonder whether our machine overlords could even get us to the new millennium in one piece.

The point here is not to lambast Bellamy for failing to predict that automation would most certainly not set us free, but to recalibrate our diagnoses of modern life with the aid of these sometimes-quaint utopias of the future. While Graeber doesn’t refer to the nineteenth-century utopians, his examination of bullshit jobs nicely pinpoints the inexplicability of corporate work. ‘How can one even begin to speak of dignity in labour [sic] when one secretly feels one’s job should not exist?’ Graeber asks. Such jobs don’t exist in Bellamy’s Boston of the future. In contrast to Looking Backward, ‘work’ in corporate environments is more about managing the expectations of your superiors, understanding the internal dynamics of the office, and figuring out the business lingo that produces this bizarre world. If Bellamy’s protagonist in the year 2000 looked back at the late nineteenth century, then it seems rather apt that we do the same in our current moment. But rather than critiquing the social ills of the Victorians, Looking Backward can give us some indication of what a future might look like sans bullshit jobs.

–ajon8090

Sources

David Graeber. ‘On the Phenomenon of Bullshit Jobs.’ Strike Magazine. August 17, 2013.

Edward Bellamy. Looking Backward 2000-1887. Oxford: Oxford University Press, 2009.

From Coal to Code: Feeding the Machines

October 1, 2017

What’s the difference between mining coal and writing computer code? According to Michael Bloomberg, the obvious answer is: a lot. In response to concerns over the devastating loss of coal jobs in regions highly dependent on the industry, Bloomberg urged compassion for the displaced workers but also suggested that we need to be realistic about their options. “You’re not going to teach a coal miner to code,” said Bloomberg at a 2014 conference. “Mark Zuckerberg says you teach them to code and everything will be great. I don’t know how to break it to you…but no.” Bloomberg’s comments have been perceived as a sign that he is an out-of-touch elitist who thinks coal miners are intellectually stunted yokels. This is likely true of Bloomberg, but the assumption on which his comments rest is shared by plenty of people who are not members of the U.S. plutocracy—the assumption that coal mining and computer coding are fundamentally different jobs. Coal mining is blue collar, coding white collar; coal mining is industrial, coding postindustrial; coal mining is rough, manly work, while coding is for skinny nerds.


 

The Kentucky company Bitsource has set out to show that the gap between the industrial and the postindustrial is not unbridgeable. The web and app design company was started in 2015 by Rusty Justice and Lynn Parish, both of whom had spent 40 years in the coal industry. In what was admittedly an act of desperation in the face of rising unemployment, the men started the company with the help of a coder friend, Justin Hall, and selected eleven former miners (out of over 900 applicants) to learn to code. Of those eleven, ten remain, and the company is doing well, despite existing biases (i.e., potential clients assuming that former miners are intellectually stunted yokels). While Bitsource is a small company whose impact is currently minimal, it has created a much-needed spark of interest in retraining programs for unemployed workers in coal country. Other companies plan to get on board with the development of what is rather ingeniously being called Silicon Holler. The biggest barrier to this development is the lack of broadband internet access in the region. Kentucky already has an active program called KYWired that is addressing the problem, but it will likely need subsidies from the federal government to create a broadband infrastructure expansive enough to make Silicon Holler a success. This would require the Trump administration to stop pretending it’s bringing back the coal jobs and start thinking about what the region actually needs.


What’s so different about coal mining and computer coding anyway? Both tasks are, after all, in service to our machine overlords—at least that’s the way the Victorian writer Samuel Butler suggests we might see it. In Butler’s 1872 satirical utopian novel, Erewhon, an adventuring young man in search of fortune comes across a lost civilization that had once reached a high level of technological advancement but had decided centuries earlier to destroy all advanced machinery. The decision had been prompted by the work of a  professor, who suggested that machines already possessed a certain form of consciousness and would undoubtedly evolve to become a superior race. The professor argues that while it may not seem that machines are acting of their own will, they are already adept at getting men to serve them. He uses the example of the massive coal industry needed to keep the steam engines going: “Consider […] the colliers and pitmen and coal merchants and coal trains, and the men who drive them, and the ships that carry coals—what an army of servants do the machines thus employ! Are there probably not more men engaged in tending machinery than in tending men? Do not machines eat as it were by mannery?” The professor sees the coal workers as an army of domestic servants, running around frantically to get dinner on the table for their insatiable mechanical masters.

But this is not the only way that humanity serves the machines, according to the professor. In addition to keeping them fed, it also acts as their agent of evolution. Humans, in continually seeking technological improvement, are not only ensuring the machines’ survival, but also the latter’s advancement as a species. While it may seem like humans are doing this for their own benefit, this is merely an illusion: “It would seem that those thrive best who use machinery wherever its use is possible with profit; but this is the art of machines—they serve that they may rule. They bear no malice towards a man for destroying a whole race of them provided he creates a better instead; on the contrary, they reward him liberally for having hastened their development […] but the moment he fails to do his best for the advancement of machinery, by encouraging the good and destroying the bad, he is left behind in the race of competition; and this means he will be made uncomfortable in a variety of ways, and perhaps die.”

In short, our machine overlords have cleverly wedded their own advancement as a species to humanity’s economic survival, thus ensuring that they will never be neglected.

Humans may think that we have actively chosen to move away from the coal industry because we’re concerned about climate change, but it is likely that our mechanical masters have simply developed a more refined palate. They are now turning their noses up at the black lumps they once craved and are demanding the more delicate flavors of wind and sun. One thing is certain: they have developed a ravenous appetite for code. As machines continue to advance as a species, their human menials will undoubtedly be expected to serve up an ever-increasing supply.

Most coding is not about solving intense conundrums or making exciting breakthroughs. It’s mostly about keeping things running. It requires a specialized skill and a lot of patience. In this way, WIRED contributor Clive Thompson has argued, coding generally looks less like the creation of Facebook and more like “skilled work at a Chrysler Plant.” These solidly middle-class jobs, Thompson suggests, might be taught through high school vocational programs and at community colleges.

I say: let’s do what we need to do to keep our machine overlords fed. I don’t want to see what happens if they get hungry.

–utopianfictionblog

Sources:

“Hillbillies who Code: the former miners out to put Kentucky on the tech map” by Cassady Rosenblum. The Guardian. 21 April 2017.

https://www.theguardian.com/us-news/2017/apr/21/tech-industry-coding-kentucky-hillbillies

“Can You Teach a Coal Miner to Code?” by Lauren Smiley. WIRED. 18 November 2015.

https://www.wired.com/2015/11/can-you-teach-a-coal-miner-to-code/

“The Next Big Blue-Collar Job is Coding” by Clive Thompson. WIRED. 8 February 2017.

https://www.wired.com/2017/02/programming-is-the-new-blue-collar-job/

Erewhon; or, Over the Range by Samuel Butler. Penguin, 1985.

 

Ghost Ships, Ghost Ships Everywhere…

September 24, 2017

Growing up on an island in the Atlantic, I spent summers reading adventure stories. One such tale was Brian Jacques’ Castaways of the Flying Dutchman. In this young adult fantasy novel, a boy stows away on a ship: the Flying Dutchman. The ship’s crew is a depraved lot, and most fearsome of all is Captain Vanderdecken. One day Vanderdecken curses God for inclement weather, and an angel descends on the ship, scourging all but the faultless boy (and his dog). The ship’s crew is doomed to wander the seas for all eternity, never to make port, while boy and dog conversely must wander the earth, spreading goodness wherever they go.

The myth of the ghost ship and its doomed crew is remixed again and again in maritime tales. The polar vessel in Coleridge’s The Rime of the Ancient Mariner comes to mind, with its glowing seraphs and reanimated corpses. Here, too, exists a lone survivor, who must wander the globe sharing his cautionary tale. We see ships of similar make in Poe’s story “MS. Found in a Bottle”—whose ghost ship gives off a “dull, sullen glare of red light”—and again in his novel, The Narrative of Arthur Gordon Pym of Nantucket. Such ships are spooky to say the least, often emitting a ghastly radiance, and seem to portend disaster for those who sight them.

“Upon the very verge of the precipitous descent, hovered a gigantic ship” – illustration for “MS. Found in a Bottle” by Byam Shaw, c. 1909

Perhaps the best-known contemporary nod to this myth appears in the Pirates of the Caribbean film franchise, where the story of the Flying Dutchman is conflated with that of Davy Jones’ Locker, whose guardian gathers the souls of those who die at sea. Krakens and maelstroms are also par for the course in these films, yet this is in keeping with older, more literary nautical adventures.


While there are many iterations of the Flying Dutchman myth (including a German opera by Wagner!), the unifying element seems to be that the sailors are exempt from the ravages of time and barred from returning to land. In Jules Verne’s Twenty Thousand Leagues Under the Seas, Captain Nemo’s submarine may be considered one such ship. This mysterious craft emits a phosphorescent glow, which readers learn is due to the engine’s mechanisms and not an otherworldly curse. Still, mysteries abound regarding not only where, but when, Nemo is from. Like many a ghost ship captain, Nemo eschews dry land in favor of the ocean depths. You might have noticed, too, that Captain Nemo enjoys playing creepy organ music, as does Captain Davy Jones in Pirates of the Caribbean: Dead Man’s Chest. Nemo’s existence is an isolated, liminal one, and as such, he and his ship may be seen as ghostly; indeed, he promises Dr. Aronnax that “he who enters the Nautilus is destined never to leave again.”

What is it about seafaring that encourages storytellers to include these ships in their narratives? Perhaps ghost ships are a caution against hubris in the face of the ungovernable ocean, or against loving the sea too much.

–Elena Britos