Money Speaks Louder than Human Voices

March 28, 2017 § 3 Comments

“Everything has a price.” This phrase in Margaret Atwood’s Oryx and Crake is not new, but it takes on new meaning in the context of her novel (139). In today’s world, corporations dominate every sphere, from the economy to religion and politics. While Atwood’s world, in which corporations have absolute control, is unsettling, her ideas are merely an extrapolation of the present into the future. As Atwood shows, however, commercialism and commodification come at a high price to society and the humans who are part of it.

Early in the novel, we learn that Jimmy (later Snowman) lives on a company compound run by OrganInc. The corporation controls everything in Jimmy’s life, including his school and the rules he must abide by, enforced through the CorpSeCorps. Later, we learn that Jimmy and Crake attend institutions resembling universities. These “universities,” particularly Crake’s Watson-Crick Institute, also aim to generate profits, encouraging their brightest students to innovate and develop new technology, carefully securing their facilities, and minimizing interaction with the outside world. In Jimmy’s world, corporations control everything, and their motives clearly dominate.

The corporation-developed compounds seem absurd; in reality, however, they already exist. Massive companies like Amazon and Google have “campuses” that contain everything one needs to live. They include restaurants, gyms, childcare facilities, and even sleeping pods – all designed to keep you inside and focused on doing everything possible for the company. Beyond company campuses, universities today mimic those in Atwood’s story. As Vandy students, we even say that we live in a “Vandy Bubble.” Our lives exist within the confines of our campus as we strive to learn and make new developments in all fields. We are not far off from the fictitious world that Atwood describes.

Images are renderings of future campuses for Google, Amazon, and Apple (from left to right). 

Why does it matter that corporations and technological research centers have such a wide sphere of influence? In a world where profit governs, everything becomes a commodity. This can easily be seen in Oryx and Crake through the story of Oryx. Oryx is commodified not only by the pimps who earn money from her sexual acts and pornography, but also by every viewer who watches the child pornography, including Snowman. In discussing her experience, Oryx has clearly been influenced by the corporate mentality surrounding her, as she states:

“They had no more love…but they had money value: they represented a cash profit to others. They must have sensed that – sensed they were worth something.” (126)

Do we only value human beings for the monetary value they provide? I hope not. Atwood shows a disturbing reality if corporate power continues on its current trajectory. The power of corporations to influence politics and culture even today has implications for cloning and other advanced technologies. It is unsettling to imagine human clones developed by companies driven by their own bottom line. Morality does not seem to have a place in this kind of world.

If we do consider these clones “human,” how do we prevent their corporate developers from treating them like commodities rather than humans, especially when humans today are already commodified? In the novel, Snowman compares the children in the pornography to “digital clones,” as they did not feel real to him (90). With this comparison, Atwood warns of the commodification of both existing humans and potential human clones in the future. If corporations both govern and profit, we cannot prevent abuse and exploitation.

Atwood is not far off in her portrayal of the commodification of human clones. Human cloning has often been criticized for turning human organs into commodities, given their monetary value in treating cancer and other diseases. President Bush famously rejected all human cloning, stating, “Life is a creation, not a commodity.” He is not alone in this concern; scientists, philosophers, and policy-makers have discussed the implications of human cloning for decades. The President’s Council on Bioethics expressed the following:

“When the ‘products’ are human beings, the ‘market’ could become a profoundly dehumanizing force.” (The President’s Council on Bioethics, 2002)

When corporate greed becomes entangled with the morality of medical treatment, the possibilities for commodifying humans and human clones are endless. Although Atwood’s fictitious world seems distant, the reality is that it is much closer to the present day than one would first think. From humans to clones to our independence and our value, Atwood shows that everything has a price, and the costs to society are high.

Sources:

Atwood, Margaret. Oryx and Crake. New York: Anchor, 2004. Print.
Arter, Melanie. “Bush: ‘Life Is A Creation, Not A Commodity’.” CNS News. CFC, 07 July 2008. Web. 28 Mar. 2017. http://www.cnsnews.com/news/article/bush-life-creation-not-commodity.
The President’s Council on Bioethics. “Human Cloning and Human Dignity: An Ethical Inquiry.” Georgetown University, July 2002. Web. 28 Mar. 2017. https://bioethicsarchive.georgetown.edu/pcbe/reports/cloningreport/children.html.
Cambria, Nancy. “Our 21st-century Prophet, Margaret Atwood.” St. Louis Post-Dispatch. STLtoday, 26 Sept. 2015. Web. 28 Mar. 2017. http://www.stltoday.com/entertainment/books-and-literature/reviews/our-st-century-prophet-margaret-atwood/article_242b5f9b-3ac6-51e3-9024-e858d178f6e2.html.

Image source: http://www.geekwire.com/2013/4-tech-titans-building-campus/

Indifferent New Technology, Same Old World

March 22, 2017 § 3 Comments

Aldous Huxley’s Brave New World is, according to Wikipedia, a novel that “anticipates developments in reproductive technology, sleep-learning, psychological manipulation, and classical conditioning that combine profoundly to change society.” The book is often interpreted as a cautionary tale, decrying the dangers of an unfettered embrace of new technologies. Though the technologies in the novel are used to maintain a shockingly stratified social system and to perpetually distract the citizenry through elaborate entertainment, non-sentient technologies are essentially neutral with respect to how they are used. Technologies are tools, and tools do not dictate their own usage. While some tools lend themselves to nefarious uses more readily than others, this is ultimately a question of correlation versus causation – whether the post-Ford society’s practices are caused by its technologies, or whether its advanced technologies and base instincts merely complement each other.

I’d argue for the latter.

The Caste System. Constantly proposed, constantly rejected. There was something called democracy.

The post-Ford society in the novel is strictly divided into a hierarchy of social castes: Alphas, Betas, Gammas, Deltas, and Epsilons. Each of these castes, separated from birth, serves a rigid social role and receives varying levels of oxygen and exposure to alcohol during development, either enhancing or limiting its members’ intelligence. While it is horrifying to see one enacted with such efficiency, caste systems are not new for humanity. Medieval Europe was dominated by the feudal system, with stark divisions between land-owning vassals and subservient serfs. Rome’s social structure included patricians, plebeians, freedmen, and slaves. And the remnants of India’s caste system of brahmins (priests), kshatriyas (warriors), vaishyas (merchants), shudras (servants), and untouchables still drastically impact the nation’s politics and economics. The common requirement for entry into the upper castes in each of these civilizations? You had to be born into them.

Such stratification is neither an exclusively ancient nor a foreign problem, and castes and democracy are not necessarily antithetical, as they are presented to be in the novel. Our own nation’s social and economic structure was, for much of our history, built on the idea that entire groups of people, including Chinese-Americans, Irish-Americans, and especially African-Americans, were meant to serve subservient, less-than roles.

You should see the way a negro ovary responds to pituitary!

Additionally, in Brave New World, social order depends on maintaining a large, unintelligent working class, denied any opportunity for advancement from birth. Robbing fetuses of oxygen is a direct way to limit intelligence and social mobility, but does disparate access to nutrition, birth control, and education not achieve a similar result? Do our own environments not condition and shape us for the work we ultimately end up performing?

Again, these are not merely problems consigned to a tech-obsessed world. In our own world, resistance to technology may, to some extent, contribute to maintaining such stark divides between classes. As I have previously argued, a partial solution to Fordism may be the replacement of dehumanizing human jobs with machine workers, eliminating the need for humans to serve as cogs in the industrial machine. However, without educational opportunities, the elimination of such positions could leave workers without options. Breaking a cycle of poverty is an uphill climb, one which may take generations to truly surmount.

I’m glad I’m not an Epsilon.

As students at one of the highest-ranked universities in the nation, we stand on the lucky side of societal divides. According to a recent New York Times article, the median household income for Vanderbilt students was $204,500, placing a full half of our student body in at least the top 5.6% of household incomes nationally. This trend is by no means exclusive to Vanderbilt; similar levels of wealth can be seen at most “top-tier” institutions, which is not exactly surprising. Wealthier families are more likely to be able to live in school districts with higher taxes, to invest the time and money needed for their children to pursue extracurricular activities, and to free their children from the limiting worries of whether they’ll have enough to eat or a place to sleep. Beyond these factors, many top schools cost in excess of $60,000 a year to attend, presenting a cost barrier for those ineligible for financial aid, a knowledge barrier for those unsure of how to take advantage of the financial aid system, and an access barrier for those whose previous opportunities do not align with the qualities sought in admissions.

Nothing like oxygen-shortage for keeping an embryo below par.

Within the university setting, this social stratification manifests itself yet further. Huxley’s characters are divided into groups of varying power and social status, assigned letters of the Greek alphabet. As a student, it’s hard not to draw parallels to Vanderbilt. On campus, many activities considered prestigious have high entry costs, including club sports, service trips, Maymesters, and Greek life. In such organizations, students are filtered by what is, ultimately, a few impressions, and further self-filtered by their ability to pay.

Hasn’t it occurred to you that an Epsilon embryo must have an Epsilon environment as well as an Epsilon heredity?

According to a 2017 Vanderbilt Political Review article, “Greek students dominate all three upper-income categories…conversely, non-Greek students held a greater share of all four lower-income levels.”

Source: Vanderbilt Political Review

While efforts have been made to increase access to Vanderbilt social groups with high entrance costs, there is still a significant divide. Is Huxley’s vision of social stratification, with different tiers of people assigned different Greek letters, totally unfathomable? The data indicate that even within an already massively stratified educational institution, we self-stratify in a similar manner. And proudly so. We even buy t-shirts to tell others about it.

However, the seemingly insurmountable systemic injustices in Brave New World raise some glaring questions. Why are people content? Why do they not revolt?

Every man, woman, and child compelled to consume so much a year.

In Huxley’s novel, characters are distracted and numbed by entertainment. They play complicated games requiring elaborate equipment, such as obstacle golf, escalator squash, and centrifugal bumblepuppy.

Two words: drone racing.


Source: DroneReview

Games, like all human systems, have become more complex as time goes on. Huxley anticipated that this trend would continue into the future. In our case, we’re competitively racing flying robots through video monitors. Huxley’s vision did not, however, account for the meteoric rise of a paradigm-shifting technology: digital media.

According to the 2015 “How Much Media?” report from the University of Southern California’s Marshall School of Business, Americans are estimated to consume a staggering average of 15.5 hours of media per person per day. But though we consume vastly more media than our predecessors, we’re not necessarily better informed. In his book The Big Sort: Why the Clustering of Like-Minded America Is Tearing Us Apart, Bill Bishop argues that access to such a wealth of information amplifies our natural tendency toward confirmation bias on a historically unparalleled scale. If we so choose, we can view only sources that support the beliefs we already hold. We can isolate ourselves from opposing viewpoints, disengage from debate, and distract ourselves from substance.

But how can we stay away? We have so many media access points: TVs, computers, gaming systems, and phones able to access a nearly unlimited wealth of entertainment. We have no reason not to be constantly entertained wherever we are, should we desire it. For example, Nintendo’s new Switch gaming console can be played through a TV or on its built-in screen, allowing individual or group play on the go.

Nowadays the Controllers won’t approve of any new game unless it can be shown that it requires at least as much apparatus as the most complicated of existing games.

Strict social structures, clear stratifications, distracted consumers — is Huxley’s truly a brave new world?

Maybe, if history is bunk.

 

SOURCES:

https://www.nytimes.com/interactive/projects/college-mobility/vanderbilt-university

http://www.vanderbiltpoliticalreview.com/vanderbilt-greek-lifes-money-problem/

http://money.cnn.com/calculator/pf/income-rank/

https://www.marshall.usc.edu/faculty/centers/ctm/research/how-much-media

 

Author: Austin Channell

 

Subverting Cognition: Surrealist Automatism and Brooks’ Intelligence Theory

March 13, 2017 § 3 Comments

In Flesh and Machines: How Robots Will Change Us, Rodney Brooks presents his unique take on the pathway to meaningful artificial intelligence. To briefly summarize, he suggests that removing clunky algorithms aimed at simulating cognition, while simultaneously creating a direct link between sensation and action, supports more advanced general (functional) intelligence. For me, Brooks’ theory of intelligence found in “the interaction between perception and action” (Brooks, Ch. 3, “Planetary Ambassadors”) called to mind the techniques of Surrealist painters like Salvador Dalí, Joan Miró, and André Masson. The Surrealists used automatism — painting without conscious thought — to subvert their own cognition and the rational mind, in order to tap into deeper and more raw thoughts and feelings. I argue that given the parallel intentions of Brooks’ approach to AI and Surrealist automatism, an exploration of the latter can help us understand Brooks’ method.

Before diving in, let me quickly clarify what I mean by ‘meaningful artificial intelligence’ or ‘general intelligence’. In Flesh and Machines, Brooks distinguishes between the tasks and processes traditionally tackled by AI researchers (playing chess at the level of a master, solving calculus problems, etc.) and more practical expressions of intelligence (entering a room, navigating a new environment, avoiding obstacles), and points out that programming a robot to do the latter is a considerable challenge. Thus, I define meaningful or general artificial intelligence as the intelligence that human beings and animals employ in performing ‘basic’ operations, operations that are far more complicated on a cognitive level than they appear to us. Brooks’ strategy was geared toward cracking the code of programming this kind of intelligence, and it was to these simpler actions and motions that the Surrealists sought to reduce their brushstrokes and methods.

Caught in a time of political uncertainty both within the art world and the world at large, the Surrealists reflected on how the conscious mind and higher-level cognition are weighed down by the ideology of what they saw as a flawed society. They wanted to divorce the art-making process from the constraints of a rational mind indoctrinated by an oppressive society. In order to escape, they adopted a working method called automatism, which allowed them to paint essentially without conscious thought, thus sourcing the resulting lines and forms from their subconscious.

Automatism, as pioneered by André Masson, began with the painters completely clearing their minds. Often, they would even close their eyes or use drugs or natural supplements to achieve a more detached state. Then they would allow the hand holding the paintbrush to flow randomly across the canvas, so that the resulting lines and forms were more a product of chance than of conscious manipulation of the brush. In this way, their style was freed of rational control. The Surrealists believed that the compositions they created using automatism came directly from their subconscious — the epicenter of interaction between perception and action. In other words, they tried to simplify their cognitive processes as much as physically possible, down to the point where they operated merely on the interaction between perception (the way the paintbrush felt across the grain of the canvas) and action (creating a line via the paintbrush on the canvas).


André Masson, “Automatic Drawing,” 1924

As you can see, there are strong parallels between the working method of the Surrealists and Brooks’ approach to simulating general intelligence. In explaining the benefits of his theory, Brooks describes his “subsumption architecture” for machines, in which he created a hierarchy of processing layers that mirrors the way evolution stacks new behavioral traits on top of old ones.

“For Allen [his first physical robot] I targeted three layers. The first was… to make sure that the robot avoided contact with objects… The second layer would give the robot wanderlust, so that it would move about aimlessly. The third layer was to explore purposefully whenever it perceived anything interesting…” Rodney Brooks, Flesh and Machines

Using Brooks’ vocabulary and framework of layers, one can analyze the Surrealists’ process in a similar way. The primary (first, or bottom) layer of automatism was simply to paint: dip the brush into paint and apply it to the canvas. The second layer was to paint continuously, without really stopping; the Surrealists worried that if they paused longer than an instant, the conscious mind would kick back in. In programming terms, the continuous-painting layer doesn’t have to process how to paint, because the first layer already handles that. The third and final layer was to follow through on a particular line whenever the sensation of that line, or the texture in that region of the canvas, momentarily captured the artist. Thus, the automatism employed by some of the Surrealist painters very closely mirrors the “thoughts” of Brooks’ coded AI. In fact, Masson’s drawing above even looks like it could be an aerial view of the paths Allen might have taken while moving around Brooks’ research lab.
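To make the analogy concrete, here is a minimal sketch of a subsumption-style control loop in Python, loosely patterned on the three Allen layers quoted above. The sensor fields and action strings are my own illustrative assumptions; Brooks’ actual architecture wired its layers together as networks of augmented finite-state machines, not an if/else chain.

```python
import random

# A minimal, illustrative subsumption-style control loop, loosely
# patterned on Brooks' three layers for Allen. The sensor fields and
# action strings are invented for this sketch.

def avoid(sensors):
    """Layer 1: avoid contact with objects."""
    if sensors["obstacle_near"]:
        return "turn away from obstacle"
    return None  # nothing urgent; defer to the layers above

def wander(sensors):
    """Layer 2: move about aimlessly."""
    return random.choice(["move forward", "turn left", "turn right"])

def explore(sensors):
    """Layer 3: head toward anything interesting."""
    if sensors["interesting_direction"] is not None:
        return "steer " + sensors["interesting_direction"]
    return None

def control_step(sensors):
    # The lowest layer acts as a reflex that always wins; above it,
    # purposeful exploration subsumes aimless wandering whenever it
    # has something to contribute.
    return avoid(sensors) or explore(sensors) or wander(sensors)

print(control_step({"obstacle_near": False, "interesting_direction": "north"}))
# -> "steer north"
```

Read this way, the automatist painter runs the same loop: apply paint, follow an interesting line when one appears, and otherwise let the hand wander.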

I find that the parallels between the techniques of the Surrealists (developed and employed in the early 1900s) and Brooks’ theory of intelligence (developed and employed in the late 1900s) confirm the validity and ingenuity of Brooks’ approach to machine intelligence. That is to say, if human beings sought to shed the weight and burden of clunky cognitive thought in order to achieve a greater level of functionality in expressing themselves on the canvas, it is impressive and credible for Brooks to suggest the same for his machines; he argued for removing dedicated “cognition boxes” from his machines, thus eliminating time-consuming and complex cognition algorithms from his AI’s ‘thought’ processes.

While Brooks may not have drawn inspiration from the methods of the Surrealists, I find it beautiful that leaders in two remarkably distinct disciplines both arrived at a similar point: seeking a purified relationship between sensation and action to achieve greater expression and intelligence of movement. Though it has often been suggested that links between memories will be the key to creating thinking artificial intelligence, Brooks’ theories have led me, for the first time, to consider that future progress in AI development may also come from mutual inspiration between disciplines, with the humanities especially helping to create more “intelligent” and human-like robots and machines.

❈❈❈

Sources / For more information on Surrealism and Automatism:

https://www.khanacademy.org/humanities/art-1010/art-between-wars/surrealism1/a/surrealism-an-introduction

https://www.moma.org/learn/moma_learning/andre-masson-automatic-drawing

http://www.tate.org.uk/research/publications/tate-papers/18/becoming-machine-surrealist-automatism-and-some-contemporary-instances

Patrizio Murdocca

A New Age of Artifice

January 23, 2017 § 5 Comments

In the fall of 2011, Duke University’s undergraduate literary journal published a rather unassuming poem entitled “For the Bristlecone Snag” (“The Archive”). To the journal’s poetry editors, the poem appeared to be a typical undergraduate work, composed of several unfulfilled metaphors and awkward turns of phrase. What the editors did not know at the time of publication, however, was that this poem was not written by a human. Instead, it was written by a computer program (Merchant).

When I first learned about “For the Bristlecone Snag”, I was reminded of the writings of Alan Turing, the renowned English computer scientist of the mid-20th century. In his seminal article on the subject of artificial intelligence (A.I.), Turing contends that the question “can machines think?” is “too meaningless to deserve discussion” (Turing 442). After all, he claims, we have no direct evidence that other humans can think; we merely assume that they do based on their behavior. Turing argues that this “polite convention that everyone thinks” should apply to all beings that can demonstrate human behavior (Turing 446). It is from this line of thought that Turing conceived the Turing Test, an experiment in which a computer tries to convince a human of its humanity. According to Turing, if an A.I. can convince a human judge that it is human, then we must assume that the A.I. can think.
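The structure of the test itself is simple enough to sketch in a few lines of code. The toy Python simulation below is my own illustration, not Turing’s formulation or any official protocol; the respondent functions and the naive judge are invented placeholders.

```python
import random

# A toy sketch of the Turing Test's structure, not a real experiment.
# A judge reads transcripts from two hidden respondents, labeled only
# "A" and "B," and must guess which one is the machine.

def human_reply(prompt: str) -> str:
    return "Hard to say; I'd want to sleep on it."

def machine_reply(prompt: str) -> str:
    return "Hard to say; I'd want to sleep on it."  # imitation is the whole game

def run_trial(judge, questions) -> bool:
    """Return True if the judge fails to identify the machine."""
    respondents = [human_reply, machine_reply]
    random.shuffle(respondents)            # hide who is behind each label
    roles = dict(zip("AB", respondents))
    transcripts = {label: [reply(q) for q in questions]
                   for label, reply in roles.items()}
    guess = judge(transcripts)             # judge names the suspected machine
    actual = next(label for label, reply in roles.items()
                  if reply is machine_reply)
    return guess != actual                 # True: the machine "passed"

# A judge facing indistinguishable answers can only guess at random.
naive_judge = lambda transcripts: random.choice(list(transcripts))
print(run_trial(naive_judge, ["Can machines think?"]))
```

Run over many trials, a machine that fools the judge about half the time is doing exactly what Turing asked of it.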

While the program that produced “For the Bristlecone Snag” did not complete an extensive and proper Turing Test, it did convince human judges that it was human. At the very least, the poem’s acceptance into an undergraduate literary journal reveals that literate machines can, and will, exist in the near future. The way is paved for more professional and accomplished artificial authors.

Indeed, even in the half decade since “For the Bristlecone Snag” was published, the technology behind artificial intelligence has improved rapidly. Watson, IBM’s “cognitive computing platform”, is a great example of this progress (Captain). In 2011, Watson defeated two reigning champions on Jeopardy!, successfully interpreting and answering the game show’s questions. While this feat alone was a remarkable step in cognitive computing, Watson’s analytical abilities have since contributed to over thirty separate industries, including marketing, finance, and medicine (Captain). For example, the machine can read and understand millions of medical research papers in just a matter of minutes (Captain). As intelligent as Watson is, however, he was never designed to pretend to be human. IBM’s chief innovation officer, Bernie Meyerson, believes “it’s not about the damn Turing Test”; his team is more interested in accomplishing distinctly inhuman tasks, such as big data analysis (Captain).

While IBM may not be interested in the Turing Test, other artificial intelligence companies have been working specifically towards the goal. In 2014, a program by the name of Eugene Goostman passed the Turing Test using machine learning strategies similar to those that drive Watson (“TURING TEST SUCCESS”). The chatbot, or program that specializes in human conversation, was able to convince several human judges that it was a thirteen-year-old boy (“TURING TEST SUCCESS”). Given the success of Eugene Goostman, and the intelligent accomplishments of Watson, it is indisputable that the Turing Test can be, and has been, passed. Artificial intelligence is a reality. Machines can think.

As an aspiring writer and computer scientist, I can’t help but fixate on the implications that A.I. has for literature. It is entirely possible, even likely, that “For the Bristlecone Snag” foreshadows an era in which the most successful and prolific authors will be machines, an era in which the Pulitzer Prize and Nobel Prize in Literature are no longer given to humans, an era in which humanity no longer writes its own stories.

Yet this era of artifice should not be greeted with worry or anxiety. Art has always been artificial, a constructed medium for human expression. In the coming decades, we will author the next authors, create the new creators, mold the hand that holds the brush. Artificial intelligence should not be feared as an end to art, but rather welcomed as a new medium, a new age of artifice.

– Zach Gospe

References

Captain, Sean. “Can IBM’s Watson Do It All?” Fast Company. N.p., 05 Jan. 2017. Web. 20 Jan. 2017.

Merchant, Brian. “The Poem That Passed the Turing Test.” Motherboard. N.p., 5 Feb. 2015. Web. 20 Jan. 2017.

“The Archive, Fall 2011.” Issuu. N.p., n.d. Web. 20 Jan. 2017. <https://issuu.com/dukeupb/docs/thearchive_fall2011>.

Turing, A. M. “Computing Machinery and Intelligence.” Mind, vol. 59, no. 236, 1950, pp. 433–460. www.jstor.org/stable/2251299.

“TURING TEST SUCCESS MARKS MILESTONE IN COMPUTING HISTORY.” University of Reading. N.p., 8 June 2014. Web. 21 Jan. 2017.

Craving Attention in the 21st Century: The Significance of Social Media and Science Fiction.

November 5, 2015 § Leave a comment

My newest insta pic only has 66 likes. Do you SEE that creative angle? I was lying on the grass (I was reading in the sun so I was not totally pathetic and desperate) for that angle. Do you SEE that slight color adjustment? Not too much, but just enough for you to truly appreciate that blue sky and green lawn. I had creative hashtags that would draw people in while also using some broader hashtags to appeal to a larger audience (#sky #scenic #nature). I even made sure to delete those hashtags after a few days (I definitely don’t want people to think I am desperate for attention). I might as well have put #desperate if I was being truly honest with myself.

** Username, location and caption have been edited out for my own privacy. **


My dad calls me pretty regularly, approximately 75% of the time for help with his phone, iPad, or really anything that plugs into an outlet. My father is many things: an electrician, a carpenter, a chef, a jack-of-all-trades. Unfortunately, he is anything but technologically literate. Born long enough ago to have been drafted for the Vietnam War (he would probably be mad if I ever posted his true age for the world to see), he frequently recounts what he thought the future would look like and how hard it was growing up in the ancient days (he would probably claim to have walked six miles to school in the snow, like any other stereotypical old person, if he hadn’t grown up in Miami).

His dreams of the future saw technology revolutionizing the world, not supporting the rise of social media platforms such as Instagram and Facebook that feed on the new generation’s narcissism and dependence on social approval for value.

For most of its existence, science fiction was a genre outside mainstream consumption, one that depended largely on pulp magazines to circulate its stories cheaply. Unfortunately, the terms “science fiction” and “sci-fi” often conjure up judgmental images of prepubescent boys on the outskirts of society. In a way, science fiction was for the socially inept or awkward; it was literature for cheap paper and obscure magazines. A positive change in the 21st century has been a wider acceptance of science fiction, largely due to blockbuster hits. Science fiction is becoming exciting, dramatic, and mainstream.

But becoming more popular and widely circulated opens the genre up to more influence from this new, technologically narcissistic generation. I freely admit that I have been guilty of exactly this. When I started writing for this blog, I wrote from a creative part of my heart that simply wanted to entertain anyone willing to give my stories attention. I really didn’t use too many hashtags, just enough to convey what my story was about. I wanted creative titles that would confuse or intrigue the reader.

And then I realized I could see how much traffic my posts were generating.

Every week I checked to see if my stories, articles, and posts had new views and comments. My hashtags started to become more specific and numerous. My titles started to reflect what someone would enter into a search engine. I noticed that the most “successful” post had a title along the lines of “the significance of . . .” and realized that people had a reason to type that into Google. People cared about that answer.

When I think of science fiction, I think of it as a sort of social commentary that incorporates real issues with a focus on technology, space exploration, and the like. That is what I had wanted to do, but not necessarily what I ended up doing. I had let my egoism and my dependence on attention for self-esteem and value grow to the point that I strayed from my creative goals and aspirations. I was a sellout.

This is not to say that all of my entries have fallen for this social media pitfall. I poured my heart and soul into a blog post about the dangers of genetic engineering. I spent hours writing a story about evolution. But, I am human and I make mistakes. I look back on my blogging career and see those lapses in judgment where I sought attention for my articles.

Much of science fiction is about technology and yet I let the technology control me. I was no longer the one reinventing technology or predicting the social implications of existing technology. I was letting technology reinvent my creative process.

So, maybe some of you will have some sort of homework assignment that asks you to discuss the significance of social media. Or, maybe the significance of science fiction as an “emerging” genre. So, you guys hop on the computer and search “the significance of . . .” in a search engine and my article pops up. I hope you all click on the article and read it. Not for views, comments and likes, but maybe to learn from my own experience and realize that technology is a tool that you use, not a tool that uses you.

– S. Jamison

Technological Stagnation

August 31, 2012 § 3 Comments

Science fiction stories are all about technological progress, and ubiquitous throughout science fiction is the assumption that the future will be filled with technology much more advanced than our own. This assumption, though pervasive in science fiction, is made by virtually every human being who muses on future society. Many people go so far as to postulate a future technological singularity, which would entail a rapid outburst of technological growth bringing technology beyond anything we can even comprehend. This assumption of universal technological progress is only logical, because throughout human history technology has always grown, often exponentially. Even in historical periods of relative technological stagnation, technology has continued to progress, if at a much slower pace.

But what if in the future this is not the case? What if we reach a point where technology has grown as much as it ever will, and because of the laws and constraints of the Universe, it can never advance any further? For instance, one of the most popular observations about technology is Moore’s Law, which describes how the number of transistors on a chip doubles every couple of years as components shrink ever smaller. However, if it is true that the Universe is discrete rather than continuous, as some have recently suggested, there is a definite limit to how small a computer chip can get, because there is an absolute limit to how small anything can be. The Universe is filled with these types of limitations and laws, and it makes one wonder whether we will one day be so technologically advanced that these laws begin to limit our progress.
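To see how near such a ceiling might be, consider a rough back-of-envelope calculation in Python. The starting feature size and the atomic diameter below are assumptions chosen for illustration, not precise industry figures.

```python
import math

# Back-of-envelope: how many more halvings can a chip feature undergo
# before it reaches atomic scale? Both figures are illustrative
# assumptions, not precise industry numbers.
feature_nm = 14.0        # assumed current feature size, in nanometers
atom_nm = 0.2            # rough diameter of a silicon atom, in nanometers

halvings = math.log2(feature_nm / atom_nm)
years_per_halving = 2.0  # a Moore's-Law-style cadence

print(f"~{halvings:.1f} halvings left, roughly {halvings * years_per_halving:.0f} years")
# -> ~6.1 halvings left, roughly 12 years
```

However the exact numbers shake out, the point stands: a discrete substrate puts a hard floor under miniaturization.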

Or alternatively, what if human society reaches the point where we no longer desire or need technological progress?  It has seemingly always been the case that human beings have been creating new technology, finding new and better ways to accomplish tasks.  But what if this desire disappeared?  What if some technological disaster rid humanity of its love for new technology, leaving only the fear of technological disaster?  What would human beings do instead and how would society change?

Both of these technological stagnation scenarios would make very intriguing topics for science fiction stories, but even more so, they could reveal a lot about humanity’s relationship with technology and the dangers of technological growth.

-PJ Jedlovec (pjjed)
