The Vietnam War

The Vietnam War took place between November 1, 1955 and April 30, 1975. The war cost an estimated 2 million Vietnamese civilian lives, 1.1 million North Vietnamese and Viet Cong soldiers, 200,000-250,000 South Vietnamese soldiers, 58,220 American service members, and 173 billion dollars. The Vietnam War started out as a proxy war for America but quickly escalated into full-scale American involvement out of fear of communism. It was one of the few wars that America did not win. A memorial is necessary to commemorate the millions of lives lost for a cause that most did not understand, in a seemingly endless war spurred by the American government's own pursuits in the Cold War. The American people need to learn the history of this war to recognize that their government can act selfishly in order to put itself in an advantageous position.

Vietnam had been occupied by the French since 1883. The French colonized Cambodia, Laos, and Vietnam, and later named the region Indochina. However, there had long been opposition to French rule; the Vietnamese people did not like being controlled by a foreign power. In 1930, a man named Ho Chi Minh helped found a communist party to rebel against the oppressors, and in 1941 he established a nationalist resistance movement called the Viet Minh. Ho Chi Minh was not driven primarily by communism, however; he was driven by the desire for independence. During World War II, Japan occupied Vietnam, and in 1945 it declared Vietnam an independent state, separate from Indochina, under Japanese control. Ho Chi Minh then established the Democratic Republic of Vietnam, with a declaration of independence modeled on America's, and declared himself its president. When World War II ended, however, so did the independence of Vietnam: Indochina was returned to the French. In 1946, France declared Vietnam an independent nation, but it remained under French control. This spurred the first of the Indochina Wars, which began with an attack on French forces in Hanoi.

In early 1950, the communist countries China, the Soviet Union, and Yugoslavia officially recognized Ho Chi Minh's Democratic Republic of Vietnam. Later that same year, the democratic countries America and Great Britain officially recognized the French-backed State of Vietnam. In 1954, the Geneva Accords were signed, dividing Vietnam into two sections: North Vietnam, led by communists under Ho Chi Minh, and South Vietnam, led by the U.S.-supported government of Ngo Dinh Diem. In the months that followed, more than a million Vietnamese civilians fled North Vietnam for South Vietnam.

America's involvement in the Vietnam War was caused in great part by the U.S.'s fear of communism. In a communist society, one party claims to represent everyone, there are no free elections, and as a result the same leaders are always in power. All citizens receive what they need from the government, including healthcare, education, and housing, and the state owns all means of production. Communism aims for a state of total equality: its leaders do not support individualism, and no one is richer than his or her neighbor. There is no free market or free enterprise. Everything people earn is given to the state, and the state then redistributes money or supplies based on people's needs. In Karl Marx's words, communism is "from each according to his ability, to each according to his needs." Communist countries believe that capitalism debases human needs, while capitalist countries believe that communist states make human beings slaves to the government.

A factor that contributed to U.S. involvement in Vietnam was the Cold War. The Cold War started after World War II and began to end in 1989, with the fall of the Berlin Wall. After World War II, America and the Soviet Union each had its own political ideology and its own sphere of influence: America's lay in the Western part of the world, and the Soviet Union's in the Eastern part. Each power wanted to stop the other from extending its sphere of influence. The Soviet Union feared the Americans would spread capitalism, and the Americans feared the Soviet Union would spread communism.

Another factor that played a huge part in American involvement in the Vietnam War was the domino theory. The domino theory held that if one country in Southeast Asia became communist, the surrounding countries would soon follow suit, and the Soviet Union would be able to extend communism even as far as Europe.

At first, instead of becoming directly involved in Vietnam, the Americans supported the French financially in the South, while the Soviet Union gave financial aid to Ho Chi Minh and the communist nationalists in the North. In this way the Soviet Union and the Americans used the French and the Vietnamese to wage a proxy war: a war instigated by a major power that does not itself become involved. On the ground, the Viet Minh fought using guerrilla warfare, which made the war very difficult for any conventional army to win. Eventually, French forces left Vietnam, but not before leaving Ngo Dinh Diem in charge, who had promised to establish a democratic republic. However, Diem refused to hold the elections required by the Geneva Accords, making himself a dictator. The Americans found they were giving money to a dictator in South Vietnam.

The Americans misunderstood the North Vietnamese call for freedom and independence; the U.S. simply considered them communists and ignored their actual intent. For the Vietnamese, this was essentially a civil war. For the Americans, Vietnam was a pawn in the Cold War, a means of preventing the Soviet Union from expanding communism.

A key battle in America's proxy war through the French was the Battle of Dien Bien Phu, fought in 1954 at a mountain outpost near the border of Laos. The French Army, finding that it was losing ground, had retreated to this fortified outpost, which could be supplied only by air; even so, the French were confident of their position. The Viet Minh cut off all land routes and supplied their own forces along a network of jungle trails connecting their bases. General Vo Nguyen Giap then surrounded the French base with some 40,000 men and used heavy artillery, taking the French Army completely by surprise. The decisive Viet Minh victory brought an end to the First Indochina War.

The Battle of Ấp Bắc took place on January 2, 1963. The U.S. had located a large concentration of Viet Cong forces near the village of Ấp Bắc, in South Vietnamese territory, and the ARVN (Army of the Republic of Vietnam) was ordered to destroy their base. American helicopters dropped off ARVN soldiers near the village, but the operation was a catastrophe: the South Vietnamese were defeated, and five American helicopters were destroyed. The battle showed that the South Vietnamese army, even with American support, could be beaten, and that the Viet Cong were gaining strength.

The Gulf of Tonkin Incident, also known as the USS Maddox Incident, is an example of how the American government misled its own people and the people of Vietnam. The Americans claimed that the USS Maddox, a destroyer, had been attacked by the North Vietnamese; the claim rested partly on false radar images, later nicknamed the "Tonkin Ghosts." The U.S. used these supposed attacks as a justification to bomb North Vietnam and to spray Agent Orange, a defoliant, to uncover the hiding spots of the Viet Cong.

On both sides, in America and in Vietnam, resistance to the war grew fast. Unlike in World War II, the American people did not understand why they were sending their men to die for a cause they neither understood nor believed in. Furthermore, many Americans resented the hypocrisy of the war: President Kennedy had said he would support every nation that wanted to be independent and set up its own government, but only, it seemed, if the Americans approved of the government that was set up. Americans also opposed the draft, which fell hardest on lower- and middle-class families and targeted men and boys of fighting age. Many Americans thought that the U.S. was using Vietnam as an excuse to fight Russia and gain an advantage in the Cold War. On the other side of the ocean, Buddhists were campaigning for representation in a government that oppressed them. Many Buddhist monks opposed the war, and some set themselves on fire in protest. Following their example, two men in the United States set themselves on fire as well, one in front of the Pentagon, another in front of the White House. As images of the war were released, the American public opposed the war even more. Many pictures showed military misconduct and the massacre of innocent civilians, such as the My Lai Massacre, and Americans heard reports on the radio that soldiers had been ordered to "kill everything that walks, runs, grows, and crawls" in order to completely annihilate the enemy.

Many families lost fathers, brothers, and husbands in the war. Those who survived have never forgotten the horrors they were put through, and many have never fully recovered from the emotional damage. The Vietnam War eroded Americans' patriotism and their faith in their government. It was one of the few wars in history that America has not won, and it deserves a memorial to commemorate its cost and its aftermath.

Works Cited

Bender, David L. The Vietnam War. Greenhaven Press, 1984.

Prados, John. Vietnam: The History of an Unwinnable War, 1945-1975. University Press of Kansas, 2013.

Spector, Ronald H. "Vietnam War." Britannica School, Encyclopaedia Britannica, Inc., school.eb.com/levels/middle/article/Vietnam-War/277599. Accessed 29 Oct. 2017.

 

Schoolwork Aiding Websites: Innocent Aid or Devious Cheating?

As technology slowly seeps into our daily lives, elbowing its way into our minds and schedules, it becomes more and more crucial to establish a clear border between it and us. Drawing a border that clearly demarcates where your hand ends and your phone begins may seem easy at first. As technology worms its way further into our lives, however, the hand and the phone fuse, and humanity leans more and more heavily on the crutch sweetly proffered by our mechanical aides.

This increasing dependence on technology manifests itself in many ways. Hackers are born: people who spend their lives defeating online systems in games and spitefully creating viruses. Many people sink into deep depressions as a result of online social rejection, only to flee the situation to other social media platforms. The hikikomori, from a Japanese term meaning "being confined," are Japanese youth who spend their lives in their rooms, eyes glazed over from screens, their meals delivered under the door. Technology rears its ugly head as well by contributing to a long-brewing firestorm of fake news, exploiting young adults' naive reliance on the Internet for news to pollute their minds with twisted facts. In the 2016 election, many Russian bots, or fake users, were deployed on Facebook and other platforms, where they contributed to the alarmingly rapid spread of misinformation.

The avaricious reach does not stop there, however; it also makes itself known in what I believe to be its most vicious form yet: spell check.

Yes, that little red underline that pops up when you fail to put "I" before "E" (except after "C"), that innocent little reminder of your various grammatical errors, the one that has saved your life on countless school assignments. Yes, that unassuming little helper will be more disastrous to humanity than the influx of bots and fake news, and it will be so in accordance with the single most important law regarding electronics and all forward motion, the Golden Rule: short-term convenience always leads to long-term inability.

Picture it like this: if I have to jump over a large crate to get to school each day, I feel greatly inconvenienced, as the jump might make me late. So one weekend, I hire a team of workers to lift the large, heavy crate out of the way each morning, putting an end to that tiring daily leap. Once the crate is gone, I enjoy an uneventful trip to school each day, free of stress or physical exertion. Over time, since I stopped my daily crate-jump, my legs slowly lose the ability, as I get no crate-jumping exercise elsewhere. Then, as I enjoy my walk to school one Monday morning, I notice that, by some freak accident, every member of my special crate-removal crew is sick. I look around and see no other way to get to school in time, no way around the crate. If I attempt to jump over the crate, I will fail, even though not long ago the jump had been easy: an inconvenience, but in hindsight a small one. So I am late to school, and I am late every single day that my crew is absent from its station.

While navigating a complicated and rapidly evolving world, it is important to remember the actual reason for our rise. What is the actual force that has propelled us past the denizens of the animal kingdom? It is certainly not our brains, as we are arguably dumber than not only dolphins but elephants and certain whales as well. It is not our strength, say the bears, oxen, tigers, and gorillas; nor even our strength-to-weight ratio, crow the dung beetle and the leafcutter ant. If not brain or brawn, what could it be that separates us from the multitudes of beasts? The answer is simple. It is one of the very basic skills of humanity and part of the reason we survive today: our ability to write, springing from our opposable thumbs. Opposable thumbs, however, are not nearly as interesting an article topic, so writing it is.

An early human's schedule and a dolphin's may well have been very similar. Awaken. Search for food. Potentially meet a predator. Die at the hands (or fins) of the predator, or not. Eat food. Sleep. Repeat the process until death, whenever that might come. The only reason humans dominate the earth is their fast-paced development, beginning with writing, which enabled mankind to pass down discoveries. Isaac Newton said it best: "If I have seen further it is by standing on (the) shoulders of giants." That was how humans broke out of the cycle they shared with dolphins: by building on the knowledge already gained by their ancestors. If a dolphin found a place with a particularly copious amount of food, it had no way to record that, so it would eat up and leave. A human would paint it on the walls of his cave, and the place would feed generations. This baseline skill of humanity has been the reason we have progressed even through hardship, and its absolute necessity should not be forgotten.

Because of this, and the fact that humans know it as well, there have been almost no attempts to actively inhibit our writing. Unfortunately for us, however, we have somehow found a way to do so ourselves, and under the guise of complete innocence, which is even worse. The more aid humans receive online, the less they write by themselves, and the less they are able to write without the constant help of websites and spell check, which will almost certainly prove debilitating in the long term. Already, people rely too heavily on such websites, and too many students now lean on sites like Grammarly for their essays. There is a reason that we are not producing the same caliber of writers as we used to, a reason why the quality of the average book has deteriorated from complicated and nuanced to lightweight, a reason why nearly all the books worth referencing are from at least twenty years ago, if not much more. Who would have guessed that believing we need external aid for humanity's most basic skill would end badly?

Another reason to be alarmed by Grammarly and its entourage is the surprising tolerance that teachers show toward them, when the reality is that they help students too much. I personally find it astounding that their use has not been banned by the DOE, especially since Grammarly and Co. do little to dispel any sort of criticism. They are, in fact, entirely open about the fact that they are supplying students with help on essays the students are meant to be writing by themselves. "If I want to get A's on (my final exams), they better be free of typos," an actor playing a student states in a Grammarly ad, and then continues with a sly smile, "Grammarly is my secret weapon." One might think that this is just a little business tactic and that Grammarly does not do much to help your writing beyond catching the occasional mistake. Nope. The actor boldly plows onward: "It's more than just a simple spelling or grammar checker, Grammarly catches ten times as many errors as Microsoft Word. (Grammarly) helps me with word choice, punctuation, and sentence structure." Oy vey. And then the video closes with two absolutely awful phrases that sound straight out of an episode of Black Mirror: "Better writing. Better results."

Better writing. Better results. We will improve your writing and make sure you get better grades. All for free. And all of this is allowed by nearly every school, which is absolutely appalling. How is one supposed to learn to write if, whenever there is bad writing, it is automatically fixed? And this ignores the simple fact that teachers are being badly misled: if a student needs extra help with writing, the teacher will never know, and neither will the high schools or colleges that judge the student by those grades. This is a shameless injection of corruption and laziness into society, and soon enough the long-term effects will come into play. In 1997, world champion Garry Kasparov lost to the IBM supercomputer Deep Blue in a chess match. Now a supercomputer can be calculating how to beat you from your very first move, and human skill at chess is useless in comparison.

Better writing indeed. We shall see about the second count.

 

CHANGE & CONTINUITY IN APOCALYPTIC THOUGHT

Since the beginning of recorded history, humankind has maintained a strong fascination with its own demise. From its eschatological roots to the nuclear age and beyond, apocalyptic thought has permeated mass culture. However, the themes of apocalyptic thought, and therefore of its representation in culture, have shifted, even as certain consistencies have survived. Tracing the change and continuity in apocalyptic thought may help us understand the change and continuity in our own mindsets.

Definitions may vary, but most would agree that the term "apocalypse" refers to the end of an era or even of the world. In ancient times, apocalyptic thought tended to focus on the day on which an era ended, commonly described in ancient texts as the "day of wrath." Usually used in a religious context, the "day of wrath," in many cultures also the "day of Judgement," embodies the gestalt of ancient apocalyptic thought, at least in terms of Christian eschatology. It framed the apocalypse with a focus on oneself: apocalyptic thought centered on self-reflection, and the apocalypse was viewed as the epic, ultimate decision of one's fate. Even outside of Christian eschatology, most of these ideas still applied: most ancient apocalyptic thought centered on the day on which the apocalypse occurred and focused on the individual. Cultural manifestations of these ideas appear frequently across ancient cultures, and religious texts are the most direct example. In Jewish eschatology, the coming of the Messiah is described in the Torah as an apocalyptic event. And in the biblical tale of Noah's Ark, the Torah focuses not on the events that caused or followed the flood but on the very day that God flooded the Earth; it also emphasizes Noah's significance in a way that carries the theme of introspection into the tale. A representation of later origin, the thirteenth-century (or earlier) Latin hymn "Dies Irae," which literally translates to "Day of Wrath," presents the dawn of the apocalypse in a self-reflective light: "Worthless are my prayers and sighing, / Yet, good Lord, in grace complying, / Rescue me from fires undying" (Verse 14, Irons 1849). The hymn also focuses on the day of destruction itself: "Ah! that day of tears and moaning, / From the dust of earth returning / Man for judgement must prepare him, / Spare, O God, in mercy spare him" (Verse 18, Irons 1849). This individualistic, instantaneous approach contrasts sharply with that of the present day. Modern society tends to focus not on the downfall of oneself but on the downfall of humanity. Furthermore, the moment of this downfall is often difficult to distinguish from the sequence of events that surrounds it, blurring the line between the pre-apocalyptic and the post-apocalyptic. When analyzing ancient representations of the apocalyptic, one can almost always point to the exact moment within the narrative when one era gave way to another. In the case of Noah's Ark, that instant was the moment the Earth was flooded. In the case of the story of Adam and Eve, their paradise was consumed by a flawed existence the instant Adam followed Eve's lead and took a bite of the forbidden fruit. Biblical and other religious narratives such as these are among the greatest influences on human history, yet current portrayals of the apocalyptic do not follow their lead.

Evidence of our primitive origins has faded in the thousands of years since biblical times. Although still built for survival, we have long since become preoccupied with civilization and societal endeavors; this preoccupation is perhaps the only thing that separates human from animal. In ancient times, societies maintained their survivalist foundations despite impressive levels of advancement. Fear of death lay at the core of every individual's motivations, and thus the heart of one's existence was the fear and prevention of one's own demise. History has consistently demonstrated this; the Early Middle Ages (5th-10th centuries A.D.) are a perfect example. Host to severe population decline and increased migration, this era was not a time of great empires but of small, largely powerless kingdoms whose societies were stagnant. In fact, many historians refer to this period as the "Dark Ages," drawing on its severe lack of literary and cultural development (Berglund) to express the primitive state in which humans then existed. As made evident by the era's drastic increase in migration, people of the Early Middle Ages were not rooted in their societies. Rather, they were rooted in their own mortality, and they were more affected by the deaths of the individuals around them than by the deaths of the societies around them, for the small, unstable kingdoms of the age collapsed so frequently. In this sense, the death of an individual was perceived as more apocalyptic than an utter societal collapse. While this atavistic core remains relevant in modern times, its symptoms are concealed by the astronomical degree of progress achieved since biblical times. Derived from the inadvertent devotion of essentially the entirety of humanity, this progress has led to the complex, interconnected, and precarious global society of today. The weight of this devotion is what buries one's atavistic foundations, as the core of every individual's motivations shifts from fear of one's own mortality to fear of societal mortality. This is the center of the evolution of apocalyptic thought: in our minds, so much has been devoted to society that to see it crumble is more terrifying than to see ourselves crumble.

If our biggest fear is not the death of oneself but the death of civilization, then apocalyptic thought will manifest itself accordingly, and indeed it has. Imagination of the apocalyptic on its most culturally significant platforms almost always consists of the deterioration of a society or of humankind. The nature of such imaginings, however, demands the illustration not of an instant but of a process. Modern cultural representations of the apocalyptic present themselves as such, and consequently the moment of transition between pre-apocalyptic and post-apocalyptic often blurs. This trend is further reinforced by the previously unimaginable crises of the past century, which have left a remarkable impact on humanity's perception of itself and of its society. Our culture naturally turns to history for influence, and historical events are often portrayed apocalyptically (Berger, XIII). From the Great War to the Holocaust to the current threat posed by climate change, the available influences all share the same foundation, in which an era or society deteriorates not instantaneously but through a process; ergo, the aforementioned trend in modern imagination of the apocalyptic can be seen not only as a product of the evolution of human fear but also as an imitation of the models available to us.

However, the influence of these models on the way we think about the apocalypse also reveals a continuity in apocalyptic thought between biblical times and now. Nearly every culturally significant portrayal of the apocalyptic shares a common element: we are to blame. From the crucifixion of Jesus Christ to the Nuclear Age, our history reflects time and time again that we are the cause of our own suffering; and from the expulsion of Adam and Eve from Eden, the very earliest apocalyptic narrative of Western culture (Lisboa 230), to the iconic 1983 movie The Day After, our culture demonstrates time and time again our recognition of this role we play.

It is important to recognize the relationship between change and continuity in this case. Imagination of the apocalyptic has shifted from an individual to a societal scale and has evolved to present not an instant of deterioration but a process of deterioration, consequently blurring the distinction between pre- and post-apocalyptic. Yet it has maintained a constant narrative of human causation. From this relationship, one may gain much insight into how divergence from our primitive origins, and life in a civil society, have influenced our mindsets as a whole. The near-total absence of apocalyptic thought, at least beyond an individual scale, that does not incorporate human flaw as a cause indicates our indifference to imagining the apocalypse outside the context of human flaw. Apocalyptic thought, therefore, is and always will be relevant and prevalent, because it satisfies our need to address the unnaturalness of the sheer amount of power we hold and the instability that accompanies it. In our primitive states, it would never have occurred to us to worry about or imagine a demise larger than our own individual one. That we have developed the natural tendency to imagine the apocalyptic in order to come to terms with our own power demonstrates the degree to which we have diverged from our primitive origins. Humankind has conquered genetics and its survivalist orientation in favor of an existence of societal orientation. Atavistic fears have been overshadowed by civil fears. And the prevalence of apocalyptic thought attests to human awareness of the unnaturalness of our current state of being. Hence, since and even prior to biblical times, apocalyptic thought has served as a manifestation of our awareness of our own unnaturalness; this has remained and will remain consistent. Furthermore, as we diverge further from our primitive origins, we are bound to turn to apocalyptic thought more frequently, as our own potential becomes less natural and more precarious.

The role of apocalyptic thought in the story of human evolution reveals more than is perhaps first let on. Representation of the apocalyptic may serve as a framework in which to study the larger impact of civil and societal existence on our thinking. Change and continuity in apocalyptic thought stand as proof of the astronomical extent to which we have strayed from our primitive origins, and as proof of our disquiet with our own power.

 

Works Cited

Benedict, et al. Eschatology, Death, and Eternal Life. Catholic University of America Press, 2007.

Berger, James. After the End: Representations of Post-Apocalypse. University of Minnesota Press, 1999.

Berglund, Bjorn E. "Human Impact and Climate Changes: Synchronous Events and a Causal Link?" Department of Quaternary Geology, Lund University.

Bibby, Geoffrey. Four Thousand Years Ago: A World Panorama of Life in the Second Millennium B.C. Greenwood Press, 1983.

Collins, Adela Yarbro. Cosmology and Eschatology in Jewish and Christian Apocalypticism. Brill, 1996.

Collins, John J. “Apocalyptic Eschatology as the Transcendence of Death.” The Catholic Biblical Quarterly, vol. 36, no. 1, Jan. 1974, pp. 21–43.

Gathercole, S. J. The Critical and Dogmatic Agenda of Albert Schweitzer's The Quest of the Historical Jesus. Tyndale Bulletin, 2000.

Hanson, Paul D. The Dawn of Apocalyptic: The Historical and Sociological Roots of Jewish Apocalyptic Eschatology. Fortress Press, 1989.

Hindley, Geoffrey. Medieval Sieges & Siegecraft. Skyhorse Publishing, 2014.

Lee, Alexander. The Ugly Renaissance. Random House US, 2015.

Lisboa, Maria Manuel. The End of the World: Apocalypse and Its Aftermath in Western Culture. Open Book Publishers, 2011.

McLuhan, Marshall, and Sut Jhally. “Advertising at the Edge of the Apocalypse.” Mediaed.org, Media Education Foundation, 2017, www.mediaed.org/transcripts/Advertising-at-the-Edge-of-the-Apocalypse-Transcript.pdf.

Rand, Edward Kennan. Founders of the Middle Ages / – Unabridged and Unaltered Republication. Dover, 1957.

Wikisource contributors. “Dies Irae (Irons, 1912).” Wikisource. Wikisource, 15 Jan. 2016. Web. 9 Dec. 2017.

The Holy Bible (King James). Lds.org, www.lds.org/scriptures/ot?lang=eng.

Meyer, Nicholas, director. The Day After. ABC Motion Pictures, 1983.

 

Romeo and Juliet Revisited

Sigmund Freud once theorized that all instincts can be categorized as life instincts (Eros) or death instincts (Thanatos). Life instincts, most commonly referred to as sexual instincts, drive humans to survive, feel pleasure, and reproduce. Death instincts create a thrill-seeking energy that is expressed as self-destructive behavior; when that energy is directed towards others, it becomes aggression and violence. William Shakespeare's "Romeo and Juliet" describes the tragic love story of two star-crossed lovers whose passion leads to both of their suicides. Their love is driven by the life instincts of libido when they fall in love at first sight. Their death instincts drive them to become self-destructive and violent when Romeo slays Tybalt and when they both commit suicide. According to the Encyclopedia of Death and Dying, Freud's psychoanalysis describes how "humans function and feel at their best when these two drives are in harmony. Sexual love, for example, may include both tenderness and thrill-seeking." Throughout the play, neither Romeo nor Juliet finds that perfect balance between Eros and Thanatos. Both their romance and their deaths are driven by broken instincts that resulted from the environment of hatred and violence in which they were raised.

Freud concluded that people will always have an unconscious yearning for death; however, life instincts alleviate this desire. Not everyone agrees with Freud’s theories, but if one accepts the idea that everyone is subconsciously led by their death instincts, one would agree that Romeo and Juliet both express a desire to die, while their unbalanced instincts fail to temper these feelings, which results in both of their suicides. After Tybalt has been slain by Romeo, Capulet tells Paris that he will wait no longer to marry off Juliet, for the wedding will take place on Thursday. Juliet tells Lady Capulet, “O sweet my mother, cast me not away. Delay this marriage for a month, a week, / Or, if you do not, make the bridal bed / In that dim monument where Tybalt lies” (3.5.210). Juliet is threatening her mother by telling her that she would rather die than marry Paris. She declares that if the wedding is not delayed, her bridal bed will be her deathbed next to Tybalt’s in the Capulet burial vault. In other words, death will take her maidenhead. In this case, Juliet’s desire to die is not tempered by her life instincts. According to Freud’s philosophy, the desire to die is supposed to be balanced out by life instincts before the thought becomes a conscious one. With Juliet, however, the instincts are not in harmony, and they cause her to become self-destructive. In the same fashion, Romeo also expresses a certain eagerness to die, in particular when he learns that Juliet is dead, unaware that she has only faked her death. Romeo exclaims, “Well, Juliet, I will lie with thee tonight. Let’s see for means. O mischief, thou art swift / To enter in the thoughts of desperate men” (5.1.37). Here, Romeo is stating that he will kill himself and lie dead next to his beloved very shortly. His eagerness to be with Juliet drives his desire to die. Mischief enters his vulnerable mind in the form of this thought and plants the idea of death.
It is important to realize that what Romeo refers to as mischief is, in fact, his death instinct. His instincts have led him to want to die, and he is enraged by it. His and Juliet’s broken instincts have led their vulnerable minds to consciously settle on the idea of death.

Additionally, Romeo and Juliet’s romance is driven by their sexual instincts when they fall in love at first sight. According to Sigmund Freud, the libido is part of the id and is the driving force of all behavior. According to the article “Life and Death Instincts,” “The id, he believed, was a reservoir of unconscious, primal energy. The id seeks pleasure and demands the immediate satisfaction of its desires. It is controlled by what Freud termed the pleasure principle. Essentially, the id directs all of the body’s actions and processes to achieve the greatest amount of pleasure possible. Because the id is almost entirely unconscious, people are not even aware of many of these urges.” The only thing that can control these urges is the ego, the part of a person’s personality that must tone down the libidinal energy. It must negotiate between the libidinal energy and the superego, the part of a person’s personality that incorporates lessons and morals taught by parental and authority figures. When Romeo and Juliet first fall in love and discover that their families are rivals, their superegos do not take control of their ids’ impulses; therefore, they choose pleasure over thinking rationally about the consequences of their actions. For example, in the balcony scene, Juliet says, “O Romeo, Romeo, wherefore art thou Romeo? Deny thy father and refuse thy name, / Or, if thou wilt not, be but sworn my love, / And I’ll no longer be a Capulet” (2.2.35).

Instead of thinking about how their families will react to their love, Juliet says she’d give up being a Capulet for Romeo. In fact, she has lost all common sense and is overtaken by her libidinal energy. Later in the scene, Romeo asks, “O, wilt thou leave me so unsatisfied?” (2.2.132). As can be seen, Romeo, as well as Juliet, is simply looking to satisfy his sudden desire, driven by his life instincts. Based on Freud’s pleasure principle, their wishful impulses needed to be satisfied, regardless of the consequences. Ultimately, if their superegos had balanced out their libidinal energy, the play would not have resulted in their deaths.

Death and sex are commonly associated as one concept in “Romeo and Juliet.” The article “Sex and Death” states that, “Juliet links sex and death by punning on the word “die” when, daydreaming about her impending wedding night with Romeo, she imagines Romeo being transformed into a bunch of “little stars” lighting up the night sky: ‘Give me my Romeo, and when I shall die / Take him and cut him out in little stars, / And he will make the face of heaven so fine’ (3.2.23-25).” Many take this quote quite literally and imagine that Juliet is talking about her physical death, when she is really using “die” as slang, common at the time, for sexual climax. Therefore, on her wedding night, Juliet wasn’t thinking about cutting Romeo up into stars when she physically dies, but rather when her libidinal urges are satisfied. Normally sex leads to the creation of life; however, with Romeo and Juliet, that is not the case.

Another possible explanation for Romeo and Juliet’s doomed love is their age and stage of development. In the play, Juliet is only thirteen, and Romeo is not much older. “Life and Death Instincts” asserts that, “according to Freud, children develop through a series of psychosexual stages. At each stage, the libido is focused on a specific area. When handled successfully, the child moves to the next stage of development and eventually grows into a healthy, successful adult.” Romeo and Juliet were teenagers and had not yet fully developed into healthy adults. Consequently, their actions were those of careless adolescents, not those of mature people. It is plausible that their behavioral immaturity was caused by their families’ feud. Maybe they were traumatized by something when they were younger, or perhaps growing up in a setting full of hatred and fighting affected their superegos. In addition, their superegos were not fully developed and could not function to control the id’s impulses of sex and aggression. “Id, Ego and Superego” explains that “The ego engages in secondary process thinking, which is rational, realistic, and orientated towards problem solving. If a plan of action does not work, then it is thought through again until a solution is found. This is known as reality testing, and enables the person to control their impulses and demonstrate self-control, via mastery of the ego.” Clearly, Romeo and Juliet had not mastered their egos, for they did not have self-control and did not think realistically when they tried to problem-solve. In contrast, an example of a character who had, in fact, mastered his ego is Friar Lawrence, who only agrees to marry Romeo and Juliet because he thinks it might help to ease the ongoing feud between the Montagues and the Capulets. When that plan falls through, he devises an elaborate scheme: Juliet will fake her death, and Romeo will run away with her once she has been placed in the Capulet burial vault.

Would Romeo and Juliet have come up with this plan on their own? Did Romeo even stop and think when he was given the news that Juliet had died? Even when things had gotten completely out of hand, with Romeo’s banishment, Capulet forcing Juliet to marry Paris, and the deaths of Mercutio and Tybalt, Friar Lawrence stays calm and tries to problem-solve. There is clearly a contrast between characters with functioning instincts and Romeo and Juliet. According to Freud, the id, the ego, and the superego develop in stages. Romeo and Juliet’s were not fully mature, which led them to irrational and irresponsible decision making.

Moreover, Freud observed that after experiencing trauma, people exhibit self-destructive behavior and become more violent and aggressive. Thus, after trauma, death instincts take over a person’s behavior. After Romeo witnesses Tybalt murder Mercutio, he suddenly changes from resisting the urge to fight to flying into a rage. Before being traumatized by watching his best friend die, Romeo says, “I do protest I never injured thee / But love thee better than thou canst devise / Till thou shalt know the reason of my love” (3.1.70). In a word, Romeo simply doesn’t want to fight. In contrast, after Mercutio has died, he yells, “Alive in triumph, and Mercutio slain! Away to heaven, respective lenity, / And [fire-eyed] fury be my conduct now.- Now, Tybalt, take the ‘villain’ back again / That late thou gavest me, for Mercutio’s soul / Is but a little way above our heads, / Staying for thine to keep him company. Either thou or I, or both, must go with him” (3.1.130). Romeo’s sudden shift from trying not to fight to declaring that either he or Tybalt, or both, must die and join Mercutio in heaven shows how a traumatizing event can bring out a person’s death instincts. Romeo’s increased aggression and desperation cause him to slay Tybalt and eventually to kill himself. All in all, Romeo’s words were true: he, Tybalt, Paris, and even Juliet eventually joined Mercutio in heaven.

Now one may ask, why do Romeo and Juliet have broken instincts? There are many possibilities. Their age and stage of development could be one factor: their personalities weren’t fully developed or mature, meaning that neither their superegos nor their ids were fully formed, which would cause the desynchronization of their instincts. However, the most probable cause of their defective instincts is the environment in which they were raised. Throughout their whole lives, they were taught to abhor the other family. For generations, the Montagues and the Capulets had been fighting, which could have been deeply upsetting to a young child. Going back to the idea of trauma and its effect on personality, Romeo and Juliet were probably traumatized as children by all of the violence surrounding them. If they had experienced a shocking event at a young age, their personalities would have been affected, and if their personalities were not developing normally, that might explain their damaged instincts. PsyArt Journal states that, “Repressed childhood traumata tend to elude repression and induce disguised reenactments of the original trauma later in life. Understanding puzzling aspects of a character’s behavior as a reenactment of childhood trauma would help explain his or her paradoxical actions and the unconscious processes underlying his or her words, thoughts, and feelings.” Romeo and Juliet both behave in puzzling and irrational ways. If they had experienced a childhood trauma, that would explain their damaged inner drives. The article “Romeo’s Childhood Trauma — ‘What Fray was Here?’” explains that “if one listens clinically to Romeo’s words, one hears indications of… a traumatic experience in childhood as would drive him toward his tragic fate. I believe it is a reenactment of childhood trauma that prevents Romeo from ‘putting Juliet on his horse and making for Mantua’ (Mahood 57) and thus avoiding the catastrophe entirely.” If Romeo was not reenacting a traumatizing childhood experience, he might have avoided his tragic ending. Therefore, the most reasonable cause of at least Romeo’s damaged drives is a childhood trauma.

In conclusion, Romeo and Juliet are perfect examples of instincts expressed unhealthily. Their deaths were caused by being either too drunk on love to think rationally or too desperate to think of any option but death. However, if they had thought about the consequences of their actions before the balcony scene and their marriage, the play would not have been called the tragedy of Romeo and Juliet. “Id, Ego and Superego” clarifies that “The id engages in primary process thinking, which is primitive, illogical, irrational, and fantasy oriented. This form of process thinking has no comprehension of objective reality, and is selfish and wishful in nature.” Romeo and Juliet were driven by their ids into being “fantasy oriented.” Love at first sight is a fantasy; getting married despite their families’ feud is irrational; and when they commit suicide, they are only responding to the tension and unpleasure caused by the denial of the id’s impulses. Shakespeare and Freud come from two completely different time periods, and obviously Shakespeare could not have known Freud’s theories while writing his plays. However, they both intertwined the contrasting ideas of sex and death. Freud believed that our life instincts need to be balanced against our death instincts; Shakespeare often uses sex and death as one common theme throughout many of his plays. If both Freud and Shakespeare arrived at the same conclusion, wouldn’t it be valid to compare their ideas? All in all, there are many debates and contradictions surrounding both Shakespeare’s works and Freud’s theories, but the one thing everyone can agree on is that they both try to examine the most abstract and mysterious thing there is to understand: humans.

 

Works Cited

Cherry, Kendra. “What Are Life and Death Instincts?” Verywell, n.d. Web. 28 Apr. 2016.

Freud, Sigmund. Beyond the Pleasure Principle. 1922. Bartleby.com, n.d. Web. 2 May 2016.

Kastenbaum, Robert. “Death Instinct.” Encyclopedia of Death and Dying. Advameg, n.d. Web. 2 May 2016.

Krims, Marvin. “Romeo’s Childhood Trauma? — ‘What Fray Was Here?’” PsyArt: An Online Journal for the Psychological Study of the Arts, n.d. Web. 2 May 2016.

McLeod, S. A. “Id, Ego and Superego.” N.p., n.d. Web. 2 May 2016.

Shakespeare, William, and Jill L. Levenson. Romeo and Juliet. Oxford University Press, 2008.

Shmoop Editorial Team. “Sex and Death in Romeo and Juliet.” Shmoop.com. Shmoop University, Inc., 11 Nov. 2008. Web. 2 May 2016.

 

The Dangers of Stereotyping by the Media

Two years ago, I sat in social studies class on a rainy Friday morning counting the hours until I could go home. As I typed out a text to an equally bored friend across the room, my male teacher, responding to an inquiry about his weekend plans, made a casual remark about his husband. Admittedly, I felt surprised. Not because I harbored any prejudices towards the LGBT community, but because he didn’t fit the image of a gay person that the media had painted in my mind. Years of watching television shows and reading magazines had instilled in me a misguided representation of gays and lesbians. I imagined a gay man to be as theatrical and melodramatic as Modern Family’s Cameron Tucker or as feminine and neurotic as Will & Grace’s Jack McFarland. My teacher, easygoing with a passion for history rather than Beyoncé’s latest album, did not meet any of these expectations. The results of misrepresenting a group of people in the media have a much greater reach than rousing me from a boredom-induced near-coma on a dreary day. Young women often starve themselves to fit the stereotype of the perfect woman broadcast all across television and film. People of color and homosexuals face discrimination due to the broad and largely unfavorable preconceptions created by the media. The media stigmatizes the mentally ill, causing a lack of adequate medical care and leading to deadly consequences. While the writers of television programs likely believe that stereotypical portrayals serve as comical running gags or punchlines, such portrayals of groups of people in the media can have adverse and calamitous consequences in the real world.

Gender stereotypes occur across all forms of media. For instance, television and the advertisement industry constantly portray the thin woman as the “perfect” woman. This fixation on an ideal body type relates to the growing incidence of eating disorders and body issues among young women. According to the National Centre for Eating Disorders, fifty percent of girls between the ages of eleven and fifteen read fashion magazines and ninety-five percent watch television. This exposure to a thin ideal corresponds to a time in their lives when self-esteem and body image are at their most tenuous due to the onset of puberty and the increasing tendency for social comparison. A desire to mold oneself to the stereotypical skinny, “perfect” woman seen on television can lead to the development of eating disorders and rigorous dieting. This can possibly account for the drastic rise in eating disorders from 1.5% of women in 1988 to 9.3% in 2017 (Currin). Underrepresentation presents another concern about the portrayal of women on television. One study found that men outnumber women three to one on primetime television and that in newscasts, women make up only about 16% of reporters (Wood). According to this researcher, “the constant populace distortion of men and women tempts us to believe that there really are more men than women and, further, that men are the cultural standard.” This portrayal by the media can foster the belief that women do not make up a large and active component of the population. Such ideas may cause a reluctance to acknowledge and reward women for their contributions to society, resulting in negative consequences for the already existing gender wage gap and the likelihood of women holding positions of power such as the presidency or a seat in Congress.

A high prevalence of racial stereotypes exists in television and film. For instance, Asian actors and actresses often find themselves playing the roles of nerds and intellectual masterminds. Unfortunately, such stereotyping makes it difficult for them to secure work outside of this limited arena, resulting in most roles, even those originally intended for an Asian actor, going to white performers instead. This minimizes the importance of people of color in society and produces a lack of cultural understanding. In addition, casting Asian-Americans in primarily academic roles on television “plays on the existing stereotype about Asians being intellectually and technologically superior to Westerners,” resulting in antagonism and discrimination being directed their way (Nittle). Furthermore, fostering the perception of Asians as the “model minority” in television and film further drives a wedge between Asians and their counterparts of other races.

Misconceptions about and a lack of representation of gay people on television can have unfavorable implications for lessening discrimination against the LGBT community and for the development of individuals within it. According to one study, “the lack of portrayals of homosexuality on television influence the beliefs among viewers that homosexuality is abnormal or extremely rare” (Fischer). As humans tend to react more adversely to the unfamiliar and to deviations from the social norm, this can heighten negative reactions toward the LGBT community. In addition, the absence of depictions of gay people, particularly positive ones, in media can lead to a lack of role models for homosexual teens or those questioning their sexuality, creating greater feelings of isolation.

Stereotyping of the mentally ill also occurs in the media. For instance, television often links madness or creative genius to a mental disorder, romanticizing the struggle of afflicted individuals. A running gag on the television series Bones, for example, featured the protagonist’s socially inept demeanor. Although her awkward gaffes, characteristic of someone with Asperger’s, continued throughout the duration of the show, the showrunners used them as a punchline and never addressed the isolating difficulties of living with the disorder. Additionally, portrayals of mental disorders on television often carry an underlying criminal element. For example, “popular psychological thrillers like Hannibal, Mr. Robot, and Dexter, all perpetuate the stereotype that people with mental illnesses are fearsome criminals, if not outright violent ones” (Bastién). This can inspire the belief that the mentally ill will not respond to treatment or assistance, causing them to be denied professional help that could aid in coping with their affliction. For many of the mentally ill individuals involved in the country’s violent tragedies, their diagnoses did not come to light until too late. For example, Adam Lanza, the man who shot and killed twenty-six people at Sandy Hook Elementary School in 2012, never received a diagnosis or treatment for psychiatric disorders such as anxiety and obsessive-compulsive disorder (Cowan).

Misguided stereotypes run like a red thread through all forms of the media. Television portrays the most beautiful women as the thin ones, and female underrepresentation in the media minimizes and devalues women’s role in society. Racial stereotypes, particularly those pertaining to Asian Americans, limit the work available to people of color in show business and foster divides. Preconceptions about gay people and a lack of visibility on television heighten enmity toward the LGBT community and rob homosexual teens of adequate role models. Inaccurate portrayals of mental illness can have detrimental consequences in reality, as showrunners and television writers often overlook the difficulties associated with these ailments or attach a criminal undertone to the disorders. Although the depiction of these stereotypes may boost network ratings or make for wildly entertaining storylines, they have proven devastating in the real world.

 

Works Cited

Bastién, Angelica. “What TV Gets Wrong About Mental Illness.” Vulture. N.p., 8 Sept. 2016. Web. 8 Oct. 2017.

Cowan, Alison Leigh. “Adam Lanza’s Mental Problems ‘Completely Untreated’ Before Newtown Shootings, Report Says.” The New York Times. The New York Times, 21 Nov. 2014. Web. 8 Oct. 2017.

Currin, L. “Time Trends in Eating Disorder Incidence.” The British Journal of Psychiatry 186.2 (2005): 132-35. JSTOR. Web. 8 Oct. 2017.

Fischer, D. “Gay, Lesbian, and Bisexual Content on Television: A Quantitative Analysis Across Two Seasons.” J Homosex 52.3 (2007): 167-188. JSTOR. Web. 8. Oct. 2017.

Wood, Julia T. Gendered Lives: Communication, Gender, and Culture. Stamford, CT: Cengage Learning, 2015. Print.

 

The​ ​Dilemma​ ​of​ ​a​ ​Debater’s​ ​Moral Integrity

What would you do to win? How far would you go to get what you want? These are questions I often ask myself, mostly because of the sport of debate, which I have participated in at school for a year now. The main reason debate makes me think about how far I would go to win is my specific forte, congressional debate. Congressional debate is simple: you get a bill or resolution to respond to in pro or con. The problem is, you have an advantage if you go first, because the judges hear your opinion first, and this means you’ll find yourself putting away your own opinions and ideas in order to win. If you want an advantage in congressional debate, you will have to put aside your personal viewpoints.

In congressional debate, if the author of a bill or resolution is present, they speak first, in pro of said bill or resolution. If the author is not present, a representative speaks on its behalf and is required to argue pro. There is then a limited questioning period, and from there on, an alternation of pro and con speeches and questioning. Moral tension arises when you choose the advantage of arguing first while having to argue pro, because you may have to sacrifice your own views, whether you believe in the pro or the con of a matter.

Congressional debate is less about the topics discussed and more about the form in which you debate them. In congressional debate, you get the date of an upcoming debate and receive an official list of topics at varying times. Then, you have time to prepare and have the option to submit a bill or resolution. A bill states laws to be put in place. A resolution is a bill written in response to another bill or to an event that has happened; as the word suggests, you are resolving the problem. Though that is how typical congressional debate works, humanity’s use of congressional debate dates back as far as time itself, even in its most primitive state. And I don’t just mean two cavemen arguing over a piece of meat. Looking at the roots of the term, starting with “congressional”: according to Dictionary.com, “congressional means of or relating to Congress.” In Congress, people argue over bills and resolutions, just like in congressional debate. Now, according to Dictionary.com, the standard definition of debate is “a formal discussion on a particular topic in a public meeting or legislative assembly, in which opposing arguments are put forward.” Debate can be contextualized as either a sport or a humanistic inquiry, and it is the contextualization that makes all the difference.

The point is, the deeper you get pulled into debate as a sport, the less what you’re saying matters and the more winning matters. Soon, winning becomes all you care about, as you are drawn into the highly addictive sport of debate. In contrast, contextualizing debate as a form of human inquiry makes it about the search for justice. When debating as a sport, however, it doesn’t matter how you debate, what you debate, or why you debate. In the sport of debate, only one thing matters: winning.

I was at my first congressional debate tournament. I’d had two weeks to prepare a speech that was either pro or con on the impeachment of Donald Trump. My personal viewpoint is that by all means, he should be impeached, because of the many outrageous claims he’s made and the countless acts of torment and bullying he’s committed via social media. The debate started out fast; you barely had enough time to prepare before they read out the topic.

From there, they asked the dreaded question: “Is the author of this topic present, or would a representative like to speak pro on behalf of the topic?” The room turned quiet, and eyes darted around nervously.

“Come on, guys, we need to continue…” Sure, it’s a simple enough side to debate. You know that if you really needed to debate it, you could. So why is it so hard to agree to debate the topic? For one, just because you can do something doesn’t mean you should. There are many variables at play, like the risks, why you want to do it, or even how you want to do it. And we all feel entitled to our own views and opinions.

In the 21st century, nothing seems more important than your opinion. Think about it: this year’s election has been based almost entirely on dominating public opinion, because ranting on social media can have a surprisingly strong effect on popularity. When you have services literally built for stating your opinions, you’ll start to think that everyone, and I mean anyone and everyone, cares about your opinion. The media feeds us the idea that our opinion matters, when it is only the manipulation of our opinion that really matters. We soon figure out that, day to day, our opinion does not matter in reality. Yet because of our ceaseless egos, despite the triviality of our opinions, we hold them very dear. And in the end, should we push our opinions away just to win? We shouldn’t, because our opinions are our integrity.

Before debate, I was, to put it lightly, very argumentative. And when I first discovered debate, I was excited. Finally, a sport I could win by arguing! It was the end of the year, but there was still time to participate in one debate: novice congressional. At the time, I had no idea what that was, and I wouldn’t have found out without the help of my debate teacher, Jim Shapiro. So with one week of preparation and a poorly written speech, I went to my first debate. And I rocked it. Question after question, the battleground became clearer and clearer to me. All you had to do was state your claim, interrogate your opponent, and act like you knew what you were doing, and you’d pretty much won the debate. Plus, it didn’t hurt that everyone else was new to debate too. So, I was plowing through questions when the judge stated the final topic: should Donald J. Trump be impeached?

We are back at the pivotal moment, the crossroads between my moral integrity and my ego’s hunger for winning, the crossroads between sport and humanistic inquiry. Now, before I continue, I want to make something very clear: I’m a liberal. I go to a liberal school in a liberal neighborhood in a liberal city. So the last thing I was expecting was that question. But before you knew it, two people had chosen pro. That meant that in order to go first, I would have to put away all my pride, all my honor, and all my opinions in order to win. And I won. I’m not going to go into full depth about how I won, but let’s just say it involved a lot of bias and fake information, like blatantly ignoring some of the atrocities he’s said and inventing false sources for good things he had done, as I couldn’t think of any myself. I actually hoped, prayed even, that I wouldn’t win, that corrupt politics would not prevail once again. And even though I won, I lost the true debate. I lost my opinion, one of the only things that makes me, me.

It’s almost funny. Humanity is built on the premise that victory is good. But at what cost? How far are you willing to go to “win”? What even is winning? It’s a social construct we created to segregate, a construct we use to distinguish who’s better and who’s lesser. This status currency has almost no meaning other than pride, so why do we chase it? Why play the game of cat and mouse with your life, with almost everything to lose? The answer is that, even with all of our opinions, we only matter if other people mandate it. Our opinion only matters if it can be manipulated by greater power structures; but there, on that debate podium, my individual opinion was the only moral integrity I had. Our individual opinions are the only morals we have, and in the pursuit of the relativity of opinion, I debated against Trump’s impeachment. In a society where status, currency, and popularity are based on our own agency, we crave power. We crave being loved. We crave appreciation. We crave someone holding us and telling us that we are okay. And most importantly, we crave winning. It’s only human. So when people ask me why I would help this horrible man spread his opinion, I say I’m only human. Because at the end of the day, that’s what we do. We segregate, label, and divide people into groups so we can judge them. It’s terrible, brutal, and unfair. But it’s what we do. We put away our moral integrity to win and to be recognized. The question now is how we contextualize ourselves.

 

You Could be Next

“We’re going to die here. We’re going to die,” Carmen Algeria thought as she dodged gunshots raining down on her and watched people drop left and right. “About five feet to the left of me there was a man with a bullet wound to his chin. My jeans were covered in someone’s blood, my T-shirt was covered in someone’s blood, my sister’s whole leg was covered in blood.” In the face of this crisis, citizens unified, and after the initial shock, they began to move. Civilians who were not injured ran to their cars to transport people to the hospital while others directed people to safety. Men grabbed Algeria and her sister and lifted them into trucks. “Bodies were literally being tossed on top of us,” Algeria said. Blood covered every inch of the emergency room. The bodies of victims littered the floor, riddled with gunshot wounds. “All I could describe it as was a war zone,” said John Kline, an officer with the Los Angeles Police Department (Carcamo et al.).

This scene depicts one of the hundreds of stories from people at an outdoor concert in Las Vegas on October 1, the site of the deadliest mass shooting in US history. Thousands of Americans witnessed and survived incidents like Carmen Algeria’s. In the past 275 days, 273 mass shootings have occurred. Since Las Vegas alone, six more mass shootings (four or more people killed or injured) and 240 smaller shootings have terrorized the United States (“Mass”). But according to our president and leading politicians, no solution exists. The loss of life is apparently the price to pay for the right to bear arms. The US, they insist, cannot politicize this event; instead, Americans should come together and mourn, sending only thoughts and prayers. Whatever politicians’ intentions, these tactics disrespect the victims of shootings by preventing change from happening. When 521 mass shootings have occurred in the past 477 days (“477 Days”), the only time to talk about gun safety is now.

Mourning the victims of mass shootings and politicizing the event must occur simultaneously, and America is capable of doing both. Many politicians send “thoughts and prayers” and urge Americans to mourn. They discourage people from talking about gun politics, which supposedly polarizes the country. President Trump advised, “we’ll be talking about gun laws as time goes by. Today we mourn.” By this logic, the topic of gun rights will finally come up in political discourse the day an American is not fatally shot. Unfortunately, if America continues its current gun policies, this day will never come: at least one mass shooting happens daily in the United States, and ninety-two Americans die from gun violence every day (Kristof). Thus, the lack of conversation about gun violence will continue to inhibit progress toward safety in public settings.

In other spheres of life, precautionary measures maintain safety, and gun laws should mimic this model. Fire alarms, smoke detectors, and fire drills combat the threat of deadly infernos. Airbags, seatbelts, and highway guardrails combat deadly auto accidents. Even the comparatively minuscule dangers of ladders, which kill 300 people a year, merit seven pages of regulations in the Occupational Safety and Health Administration guidebook (Kristof). However, not only has the government prohibited research on gun safety, but the administration has also deemed the mere discussion un-American. Over and over again this country faces mass shootings, and each time politicians send their condolences. But nothing changes, and the cycle continues: mass shootings horrify Americans, and outraged citizens demand sane policy. Eventually a bigger story blows up somewhere else in the world, the news stops discussing gun laws, and sane policy still has not materialized. Then it happens again; only this time, more people die. As long as no discussion of mass shootings occurs, Americans can expect continued death at the hands of guns.

Common sense gun laws should naturally pervade bipartisan policy. Both sides of the political spectrum agree: 79% of Republicans and 88% of Democrats want background checks for gun shows and private sales (Fingerhut), and 80% of Democrats and Republicans want mandatory background checks, a five-day waiting period for gun purchases, and mandatory registration of handguns (Smith 156). Yet these policies languish in Congress despite their bipartisan support. Why? Because the NRA, the largest lobbyist group in the country, makes sure these regulations never pass. During the 2016 election cycle, the NRA gave 5.9 million dollars to the Republican Party (“Gun”). The same candidates who received money from the NRA also voted to allow people on the no-fly list and people with mental illness to purchase guns. However, 89% of Democrats and Republicans believe the mentally ill should be prevented from purchasing guns, and 82% of Republicans and Democrats believe gun purchases should be barred for people on the no-fly list (Oliphant). Politicians repeatedly put their campaign needs over the lives of citizens. The NRA and the politicians it supports essentially value power and money over life. By advising people to mourn instead of discussing gun laws, these NRA-backed congressmen commit the very action they protest against: politicizing mass shootings. By sending thoughts and prayers without action, policymakers fulfill the desires of the NRA. By prioritizing the NRA, politicians make the gun debate a polarizing issue. If politicians put aside their greed and corrupt tactics, they would listen and reform policy in accordance with the people’s needs.

America needs more gun regulations. The mass shootings the US faces every single day prove the necessity of gun restrictions. However, some fear that reforms will lead to a ban on all guns. Gun safety proponents counter that this “extremist agenda” is a straw man. In reality, there is no desire to take away citizens’ right to buy guns. Instead, proponents simply want a more difficult and thorough screening process: more background checks, a mandatory five-day waiting period, and limits on assault and semi-automatic weapons. People who are mentally stable and do not appear on the no-fly list can still have their handguns and rifles for hunting and protection. But no reason exists for common citizens to own automatic weapons, whose sole purpose is to kill many people quickly and efficiently. And again, both sides of the political spectrum agree: 77% of Republicans and 90% of Democrats want background checks for private sales and gun shows, and 54% of Republicans and 80% of Democrats want to ban assault-style weapons (Oliphant).

These types of reforms have reaped benefits in Australia, Britain, and Canada. When faced with mass shootings, these modern countries crafted laws that virtually eliminated the threat of guns to public safety. In Australia, for example, a gunman shot and killed thirty-five people in Port Arthur. The public responded with outrage and insistence on change, and the government responded with a ban on almost all automatic and semiautomatic rifles as well as shotguns, implemented through a gun-buyback program. John Howard, the Prime Minister, said, “we won the battle to change gun laws because there was majority support across Australia for banning certain weapons” (Bilefsky et al.). Both Australia and America have majority support for tighter gun laws; the only difference is the NRA.

In order to combat the NRA and corrupt politicians, we must speak out. We cannot allow politicians to put their own needs in front of ours any longer. We simply cannot continue to go on this way. If we do, America will continue to suffer through shooting after shooting, death after death. We can mourn and send prayers, but if we want the shootings to stop, we also must act. Now.

 

Works Cited

Bilefsky, Dan, et al. “How Australia, Britain and Canada Have Responded to Gun Violence.” The New York Times, The New York Times, 2 Oct. 2015, www.nytimes.com/2015/10/03/world/americas/australia-britain-canada-us-gun-legislation.html.

Carcamo, Cindy, et al. “Survivors from California Recount Their Terrifying Escape from Danger in Las Vegas.” Los Angeles Times, 4 Oct. 2017, www.latimes.com/local/lanow/la-me-california-survivors-las-vegas-20171004-story.html.

Fingerhut, Hannah. “5 Facts about Guns in the United States.” Pew Research Center, 5 Jan. 2016, www.pewresearch.org/fact-tank/2016/01/05/5-facts-about-guns-in-the-united-states/.

“Gun Rights: Money to Congress.” OpenSecrets.org, The Center for Responsive Politics, 2016, www.opensecrets.org/industries/summary.php?cycle=2016&ind=Q13.

Kristof, Nicholas. “Preventing Mass Shootings Like the Vegas Strip Attack.” The New York Times, The New York Times, 2 Oct. 2017, www.nytimes.com/2017/10/02/opinion/mass-shootingvegas.html.

“Mass Shootings.” Gun Violence Archive, 2017, www.gunviolencearchive.org/reports/mass-shooting.

Oliphant, Baxter. “Bipartisan Support for Some Gun Proposals, Stark Partisan Divisions on Many Others.” Pew Research Center, 23 June 2017, www.pewresearch.org/fact-tank/2017/06/23/bipartisan-support-for-some-gun-proposals-stark-partisan-divisions-on-many-others/.

Smith, Tom W. “Public Opinion about Gun Policies.” The Future of Children, vol. 12, no. 2, Children, Youth, and Gun Violence, 1 July 2002, pp. 154–163. JSTOR, www.jstor.org/stable/1602745.

“477 Days. 521 Mass Shootings. Zero Action From Congress.” The New York Times, Editorial Board, 2 Oct. 2017, www.nytimes.com/interactive/2017/10/02/opinion/editorials/mass-shootings-congress.html.

 

The Benefit of Female Education on the World

Thirty seconds. That is all the time it takes for thirteen underage girls to be sold into a marriage, turned into a breeder of sons and unwanted daughters, and imprisoned in a lifetime of anguish and abuse. This is the fate that awaits many women in third-world countries. Many of these women have never set foot in a school, never savored a good book or written a letter, and were never given a chance to escape an endless and vicious cycle. However, there is one glaringly present solution that will stop this cycle: educating women. Though deemed unnecessary in many developing countries, educating a girl has countless profound effects on the future of her country and the world at large. According to the New York Times article by Nicholas Kristof, “What’s So Scary About Smart Girls?”, educating women can double a country’s labor force, save the lives of thousands of children who would have been born to uneducated and impoverished mothers, and create a more stable political environment. These are the reasons the United Nations and various other organizations have strived to fund and improve education in developing countries, as detailed in the article “Education and the Developing World.” As shown in the documentary directed by Richard Robbins, Girl Rising, many girls in these countries are victims of sex trafficking, sexual assault, and arranged marriages. Fear of sexual assault, a belief that girls are only useful for marriage and bearing children, and the high cost are reasons that parents keep their daughters home from school. Despite these adversities, the benefits of educating girls greatly outweigh the negatives. Due to its potential for enhancing global economies and communities and for providing girls in underdeveloped countries with a shield against injustice, the education of women is an essential task that must be collectively undertaken around the world.

The education of women has a large capacity for boosting a country’s economy and improving its political environment. Research has shown a 10% increase in wages per year of schooling completed, which eventually leads to widespread economic growth. It has also been demonstrated that educating females alone would produce a 40% decrease in malnutrition (“Education and the Developing World”). Educated women can enter the working world, doubling the formal labor force and thereby raising the standard of living. Educating girls thus has a large effect on their communities and countries. Political stability also improves with the education of girls. Many of the girls who are oppressed in today’s world live in war-torn countries that are unfortunately still shrouded in backwards beliefs. Perhaps, if these countries educated more girls, they would experience peace, as educating girls supports civil society, democracy, and political stability.

The political situation of a country is also affected by the rate of unemployment, as more people out of work results in political upheaval. In fact, studies have shown a 4% increase in the chance of civil war for every 1% increase in the unemployed population aged 15-24 (Kristof 2). Educated women can help reduce the bulge in the youth population by having smaller families, creating stability. A study performed in Nigeria found that for each additional year of primary school, a girl has 0.26 fewer children (Kristof 2). Female education also improves health conditions in a community, as educated women are more likely to make informed choices that benefit their children. According to Girl Rising, putting every child in school could prevent 700,000 cases of HIV each year. Children with educated mothers are also likely to live longer, because women who have gone to school are more likely to seek prenatal care and 50% more likely to immunize their children (“Education and the Developing World”). All research points to one obvious conclusion: an educated mother means a healthier child.

Worldwide, 66 million girls are currently out of school, with devastating effects on their lives. In many developing countries, girls are subject to sex trafficking and sexual assault and are often forced into arranged marriages at very young ages. Even more shocking, girls in modern, prosperous countries experience similar circumstances. According to Kristof, 100,000 girls under the age of eighteen are trafficked into commercial sex in the United States every year. The abuses women experience also include sexual assault: 150 million girls are victims of sexual violence a year, 50% of them under the age of fifteen (Robbins). The fear of girls being sexually assaulted is one reason that some parents choose not to send their daughters to school.

Around the globe, 33 million fewer girls are in school than boys. Everything a family has goes into educating and priming a boy for life, as shown in Girl Rising when the profits from a girl’s marriage are used to buy a car for her brother. Many countries around the world do not offer public schooling, and parents are reluctant to use their limited funds to pay for a girl’s books. Another obstacle to education is that many girls enter marriage very early in their lives. Every year, fourteen million girls under the age of eighteen are married. Many of these girls die soon after from childbirth, the number one cause of death for girls between the ages of fifteen and nineteen (Robbins). These horrifying circumstances are often brought about by an archaic view of the status of women. People believe that girls are only meant to marry, bear sons, and work in the household. They are dangerously unaware of the potential of a woman. Fortunately, an inexhaustible desire to learn and change the world is still present in oppressed women. Amira, a woman featured in the documentary Girl Rising who was married and had a son by age twelve, shares this message of hope: “I will find a way to endure, to prevail. The future of man lies in me… look me in the eye. I am change” (Robbins). Educating girls will help them escape these injustices. Girls with eight years of education are four times less likely to be married as children and are twice as likely to send their own children to school (Robbins). Women who are given the gift of an education also often feel an obligation to pay it forward. Suma, a Nepalese girl who was liberated from slavery, now works to make sure no young woman endures the hardships she did. Angeline Mugwendere, a Zimbabwean girl whose education was paid for, is now the director of an organization that helps impoverished girls in Africa go to school (Kristof 3).

All countries should join the effort to educate girls worldwide. Education has had incredible effects on the countries where such efforts have taken place. After Bangladesh gained its independence, there was a renewed emphasis on education for both genders; now, there are more girls in high school than boys. Many of these girls grew up to form the foundation of the Nobel Peace Prize-winning Grameen Bank and other important Bengali institutions (Kristof 3). South Korea, which once had an average annual income of $890, has also shown advancements due to education. Following an effort to spend more money on education, South Korea now boasts an improved labor force, near-100% public school enrollment, and an average annual income of $17,000 (“Education and the Developing World”).

Some believe that educating girls would be a waste of valuable defense funds. However, educating girls has countless benefits that cannot be overshadowed by even the most successful military campaign. Educating women is a necessary endeavor, and one that most modern nations have the capital to promote. France, whose economy is one-tenth the size of the United States’, donated $600 million more to education in poor countries. The Netherlands, with an even smaller economy, was also a leader in improving education (“Education and the Developing World”). The United States should follow the lead of these countries and become a forerunner in the fight for widespread female education.

Educating girls can irreversibly alter the economic landscape of an entire nation. The education of girls boosts the labor force and stimulates the economy, increasing a nation’s productivity and wealth. Additionally, educated women have smaller families, which raises the standard of living and enables better child care. Having an education also provides women in desperate situations, like arranged marriage, with a means of escape. All humans have a fire within them, a desire to learn and live to their fullest potential. This fire has been suppressed in girls, but with an education, they can find a way to light the spark once more.

 

Works Cited

“Education and the Developing World.” 2012. Print.

Girl Rising. Dir. Richard Robbins. The Documentary Group & Vulcan Productions, 2014. Film.

Kristof, Nicholas. “What’s So Scary About Smart Girls?” The New York Times, 10 May 2014. Print.

 

The Murder of Mary Phagan

In 1913, in Atlanta, Georgia, Leo Frank, the Jewish superintendent of the National Pencil Company, was tried and convicted for the murder of Mary Phagan, a 13-year-old girl who worked in his factory. Local newspapers documented the court proceedings in great detail, framing Frank as a corrupt factory owner and a pervert. The Atlanta public followed the case very closely and believed these descriptions of Frank, despite the fact that many of them were made up or exaggerated. Atlantans were so convinced of Frank’s guilt that, when Governor John M. Slaton commuted Frank’s sentence from the death penalty to life in prison, an outraged mob swarmed Frank’s cell, took him away, and hanged him facing Mary Phagan’s house. During a time when lynching was very prevalent in the South, this lynching was unusual: it was one of the only lynchings of a white man. In one sense, the lynching was a manifestation of anti-Semitism, which had been growing in Atlanta as the city’s Jewish population rapidly increased over the preceding decades. The lynching was also the result of class tensions in Atlanta, as the city industrialized and the working class felt mistreated by wealthy, powerful factory owners like Leo Frank. Decades later, as new evidence and testimonies revealed that Frank was innocent and that the guilty person was most likely the African American janitor, Jim Conley, it became clear that Frank’s conviction was also closely related to tensions between the Jewish and African American communities in Atlanta. Overall, Leo Frank’s trial and lynching exposed the profound divisions in Atlanta’s society in the early twentieth century: between the wealthy and the poor, Jews and anti-Semitic Gentiles, and Jews and African Americans.

 

The Leo Frank Case

In the early morning hours of April 27, 1913, Mary Phagan’s body was found in the factory’s basement. The previous morning, Confederate Memorial Day, Mary Phagan had gone into the pencil factory where she worked to collect her pay of $1.20. However, she never came home. Newt Lee, the factory’s night watchman, found her body, brutally bruised and bloody. He immediately contacted the call officer, W. F. Anderson, exclaiming that “a white woman has been killed up here!” When the detectives arrived at the scene, they initially thought that she was a black woman because she was covered in soot from head to toe: “her features — even her eye sockets and nostrils — were caked with soot, and her mouth was choked with cinders.” The only clues the detectives found were two murder notes next to the body. The first note read, “He said he wood love me land down play like the night witch did it but that long tall black negro did boy his slef,” and the second note read, “Mam that negro hire down here did this i went to make eater and he push me down that hole a long tall negro black that hoo it wase long sleam tall negro i wright while play with me.” The detectives assumed that the notes were written by the murderer to direct suspicion towards someone else, or possibly by Mary as a way to help them identify her killer. Basing their initial judgment on the notes, officers arrested Newt Lee, since he fit the “tall black negro” description in the first note and had found Mary’s body.

On behalf of the Atlanta Police Department, Detective Black stepped in to solve the crime. From the beginning, he was opposed to the idea of convicting a black man, as he did not think such a conviction would satisfy the public. He famously said, “The murder of Mary Phagan must be paid for with blood. And a Negro’s blood would not suffice.” Detectives later confirmed that Newt had not been in the factory when Mary was murdered, so he was released as a suspect. They quickly shifted their focus to Leo Frank, who had appeared nervous when first accompanied by detectives to the scene of the crime. Frank was arrested and brought to court where, instead of acting nervous as before, he appeared calm and confident. Over the course of the trial, his calm was shaken as witnesses testified that he had made sexual comments and advances towards Mary Phagan and other young girls in the factory. Moreover, there were questions about his alibi, and his lawyers struggled to prove that he had not been at the pencil factory during the murder. The evidence mounted, and public suspicion grew as the press printed shocking stories framing Frank as a perverse, evil factory owner. On May 23, 1913, the grand jury indicted Leo Frank for Mary Phagan’s murder.

The most significant testimony against Frank, widely believed to have convinced the jury of his guilt, was that of Jim Conley, a black man who worked as a janitor in the factory. Conley was a criminal himself, having already served two sentences on the chain gang and one term for attempted armed robbery. The police questioned Conley about the murder after finding him rinsing a stain from his shirt, which he claimed was just rust. They did not arrest him because he claimed he had not been near the factory on the day of Mary Phagan’s murder, saying he had been drunk all day. He also told them he could not read or write, so they assumed he could not have written the notes found next to her body. When he was later called in for another affidavit, he told a different story, claiming that he had seen Frank murder Mary Phagan and that Frank had forced him to help move the body. Rather than growing suspicious of Conley’s changing story, detectives helped him correct his facts, and the press praised Conley for coming forward.

After the jury convicted Frank, his attorneys tried to overturn the decision, gathering evidence to build a case against Conley. They learned that Conley had admitted the murder to multiple people and had even threatened to kill those he told if they told anyone else. Leo’s attorneys also collected medical evidence establishing that Mary was actually murdered much later than Hugh Dorsey, Frank’s prosecutor, had claimed; most importantly, the later time of death meant Leo was not in the factory when she was killed. They wanted to appeal the case to the Supreme Court, but the Court refused to review it, over the dissents of Justices Oliver Wendell Holmes and Charles Evans Hughes. The dissenters argued that the trial had been influenced by newspapers and general public sentiment, which meant that it had been unfair. As they wrote in their dissent, “Mob law does not become due process of law by securing the assent of a terrorized jury.” Governor John M. Slaton reviewed the entire case and decided to commute Frank’s sentence to life in prison. Georgia’s public was outraged when it heard this news. Riots erupted, leading Governor Slaton to declare martial law.

An angry mob raided the prison and captured Frank. They took him to Marietta and hanged him facing Mary Phagan’s house. He dangled there helplessly for hours, “head snapped back, chin resting in the noose’s bottom coil.” Almost the whole city came to witness the disturbing event. Most Atlantans viewed it not as tragic or upsetting but as an act of justice. One woman said, “I couldn’t bear to look at another human being, hanging like that… but this — this is different. It is all right. It is — the justice of God.” Some Atlantans, however, recognized the lynching as an injustice. An article published in The Atlanta Constitution ten days after the lynching described the event as a setback for rights and freedom for all people, declaring, “We may regret and deplore, but the stain is there. In it the name and the identity of Leo Frank are but an atom. The great question others will ask is, ‘What surety can Georgia offer of the enforcement of constitutional rights and the protection of the laws?’”

Atlanta and national newspapers played a crucial role in the trial and lynching, printing sensationalist headlines and inflaming public outrage. After Mary’s murder, Monday’s issue of The Georgian devoted five pages to the story. The paper had recently been acquired by newspaper tycoon William Randolph Hearst, who saw Mary’s murder as an opportunity to grow his paper’s readership through dramatic, shocking coverage. The Georgian’s main competitor, The Atlanta Constitution, followed its lead, covering the case in a dramatized way to capture readers. As the case unfolded in court, the two newspapers competed with each other, each trying to write more shocking, eye-catching headlines than the other. These two newspapers were largely responsible for framing Frank as a pervert in the eyes of the public: a few days after the murder, The Georgian ran a story portraying the National Pencil Company as a seedy business unfit for women to work in, under the headline “NUDE DANCERS’ PICTURES ON WALLS.” The article also emphasized that the factory was located near a street with many prostitutes. George Epps, a 15-year-old who testified, said that Mary Phagan had been afraid of Frank, that Frank would “try to flirt with her” and “winked at her,” and that she had sometimes had him [Epps] walk her home from the factory. Because of that, the next morning the Constitution’s headline read, “FRANK TRIED TO FLIRT WITH MURDERED GIRL SAYS HER BOY CHUM.”

The sensationalist headlines also made the factory emblematic of the problems of industrialization and factory work, portraying Frank as a greedy Jew and a boss with no qualms about child labor. Many poor, white, working-class Atlantans bought into the newspapers’ portrayal of Frank, viewing him as the ultimate villain of industrialization; these sentiments were a crucial driving force behind his lynching. However, a minority of privileged German Jews saw these articles as stirring up public outrage against one of their own, outrage not necessarily proportional to the evidence against him.

Seventy years after Frank’s trial, new evidence and a review of the old evidence showed that Frank was indeed innocent. Alonzo Mann, who had been a 14-year-old worker at the factory at the time of Mary Phagan’s murder, gave an interview in which he confessed that he had seen Conley carrying Mary Phagan’s body.

“Many times I wanted to get it out of my heart,” Mr. Mann told interviewers. “I’m glad I’ve told it all. I’ve been living with it for a long time. I feel a certain amount of freedom now. I just hope it does some good.” Mann submitted to a lie detector test and a psychological stress evaluation and passed both. The New York Times conducted a two-month investigation into Mann’s claims and reported that his confession was accurate. To explain why he had not come forward sooner, he told interviewers that Conley had threatened, “If you ever mention this, I’ll kill you,” which intimidated him into silence. Frank’s conviction and lynching should be reexamined in light of this evidence, and both must be understood as the result of the anti-Semitism and social tensions that were so prevalent in Atlanta at the time.

 

Anti-Semitism

In the early twentieth century, anti-Semitism was spreading throughout America and growing especially in the South. Powerful figures abroad, such as Georg von Schönerer and Karl Lueger in Austria, were outspoken and active in their efforts to villainize Jews. In America, the prominent industrialist Henry Ford was particularly famous for his strongly anti-Semitic beliefs, which he was able to spread widely because he owned his own newspaper, The Dearborn Independent. “Ford wanted to assert that there was a Jewish conspiracy to control the world. He blamed Jewish financiers for fomenting World War I so that they could profit from supplying both sides. He also accused Jewish automobile dealers of conspiring to undermine Ford Company sales policies. Ford wanted to make his bizarre beliefs public in the pages of the Dearborn Independent.” Ford was not alone in his strongly held anti-Semitic views, and the kind of sentiments he expressed were pervasive throughout America, especially in the South.

At the time of Frank’s trial and conviction, Jewish immigration and involvement in Atlanta made the Jews a significant presence in the city. Six hundred Jews were living in Atlanta in 1880, which was a large number compared to the twenty-six that were living there in 1850. Several synagogues were built during this period of time due to this influx of Jews. During Reconstruction, many Atlantan Jews became prominent and involved in the city’s economy because their ties to Northern Jews allowed them to build their businesses back up more quickly than other whites whose businesses had been devastated by the Civil War. From 1881 on, Atlanta also began to receive some Jews from Eastern Europe and the Ottoman Empire.

As the Jewish presence in Atlanta grew, so did social tension. The Atlanta race riot took place from September 22nd to 24th, 1906. During the riot, white mobs killed African Americans, damaged their property, and wounded many others. The riot was seen as a manifestation of poor whites’ frustration over job competition with blacks. The strikes against the Elsas family’s Fulton Bag and Cotton Company also highlighted the growing social tension of the times. These strikes were the result of wage disputes, the hiring of black women, and the problem of child labor. These strikes, as well as the race riot, show that this period in Atlanta’s history was defined by social unrest and frustration with the power dynamics in society. On top of racial tension, Jewish prominence in the social hierarchy also disturbed many Atlantans, especially poorer gentiles, who thought of themselves as racially superior and did not like feeling inferior to Jews in any way.

During this turbulent time, many Southerners developed a phobia of foreigners. While Northern Jews were making an effort to include new Russian Jewish immigrants in their communities, Southerners had strong feelings about the types of immigrants joining their America, and they set up immigration bureaus to attract what they considered the “Best Type” of immigrant: immigrants of European heritage. For immigrants of other backgrounds, living in the South could be difficult and even dangerous. Nineteen Italians in Louisiana, for example, were lynched out of fear that they associated with black people and the belief that they belonged to an inferior race. Jews, too, were widely considered an inferior race, and so Jewish immigrants were not among the “Best Type” in the eyes of most Southerners. It was said that “Southern attitudes toward [Jews] had been an amalgam of affection, tolerance, curiosity, suspicion, and rejection.” During periods of stress in society at large, Southerners would lash out at Jews who acted differently from them. As scholar Leonard Dinnerstein wrote, “Jews were considered ‘rebels against God’s purpose,’ and many a Southern Christian mother lulled her children to sleep with fables of Jewish vices.” Religious teaching played a large role in turning Southern Christians against Jews, with many ministers preaching, “The Savior was murdered by Jews.” One Baltimore minister said that, “of all the dirty creatures who have befouled this earth, the Jew is the slimiest.”

The widespread reaction to Leo Frank’s trial, and the public’s overwhelming belief in his guilt, is a testament to the intense anti-Semitism underlying Atlantan society at the time. Leo Frank was very involved in the Jewish community in Atlanta; he was president of the Atlanta chapter of B’nai B’rith, a Jewish community service organization. His religion was an important part of his identity, and many Atlantans disliked him because of it. The Macon Daily Telegraph noted the effect that Frank’s trial and lynching had on Atlanta’s Jewish community: “… the long case and its bitterness has hurt the city greatly in that it has opened a seemingly impassable chasm between the people of the Jewish race and the Gentiles. It has broken friendships of years, has divided the races, brought about bitterness deeply regretted by all factions. The friends who rallied to the defense of Leo Frank feel that racial prejudice has much to do with the verdict. They are convinced that Frank was not prosecuted but persecuted. They refuse to believe he had a fair trial…” Leo Frank was widely compared to Alfred Dreyfus, a Jewish officer in France who was wrongfully convicted of espionage largely because of anti-Semitic sentiment. A New York Times headline read, “FRANK LYNCHING DUE TO SUSPICION AND PREJUDICE.”

Jews in Atlanta and across America believed Frank was a scapegoat for the city’s and the South’s anti-Semitic feelings. As a prominent member of the Jewish community, Frank represented a social group that was threatening and unsettling to gentile Atlantans. As scholar Jeffrey Melnick wrote, “There is little doubt that Frank’s status as a capitalist roused great enmity during the trial and after, and that the specific conceptions that circulated were inseparable from the negative connotations surrounding his Jewishness.” Jewish newspapers at the time tried to combat the information disseminated by the larger gentile publications, arguing that Frank was innocent and was being targeted only because he was a Jew: “He was sacrificed because he was a Jew, and a Northern Jew, at that. But, thank God, his sufferings are all over at last. If he had lived, his life would have been a torture to him, and they might have killed him in a worse way. Race hatred and political ambition have been satisfied.” Jewish publications, most significantly The Jewish Exponent, were outspoken in blaming Jim Conley for the murder:

The suspicion that was directed against him by the perjured testimony of a self-confessed negro accessory to the killing of Mary Phagan, who was left off with the ludicrous punishment of one year’s imprisonment, was fanned to a flame by the demagogism of a Solicitor General anxious for only political advancement and by the anti-semitic prejudice of a mob instigated by yellow journalists and mendacious Ishmaelites of the Tom Watson type. Frank was victimized because he was a Jew.

Jews throughout America believed that Frank was a martyr, suffering the consequences of a crime he did not commit simply because he was Jewish. As The Jewish Exponent printed three days after Frank’s lynching, “Frank underwent a martyrdom as horrible as any man has suffered. He has borne himself throughout this ordeal as a brave man and as a loyal Jew should.”

Despite recognizing Frank as a scapegoat for anti-Semitism, the broader Jewish community was slow to mobilize around his case while it was at trial. Frank’s powerful friends sought help from the American Jewish Committee, an organization set up by wealthy Jews to help other Jews who were being denied important civil rights because of the age’s anti-Semitism. In Frank’s case, the Committee’s president, Louis Marshall, decided that “whatever is done must be done as a matter of justice, and any action that is taken should emanate from non-Jewish sources.” Marshall recognized the important role the media was playing in Frank’s case, and so he wanted to influence the Southern press to shape opinions in favor of Jews and to establish “a wholesome public opinion which will free this unfortunate young man from the terrible judgment which rests against him.” The Committee agreed that Frank’s case was an American Dreyfus affair, but it was divided on what to do. While Marshall and other committee members gave what support they could individually, the Committee did not act quickly enough and never gave Frank any official help.

 

Class Tensions

Jews at the time were viewed as economically prosperous and thus became a scapegoat for the problems caused by industrialization in the South. As factories were built across the South, rich factory owners grew richer while poor whites worked for very low wages. Many families sent their children to work in factories during the day to supplement their income, which led to widespread public frustration over child labor. Dissatisfied workers in the South saw blaming the Jews as a way to relieve tension and frustration that had built up over many years. Georgia had long had a small but very “prosperous, tight-knit community” of Jews before the twentieth century. However, as the Jewish population in Atlanta grew rapidly through the 1890s, tensions between Jews and gentiles began to grow. Gentiles began to blame Jews in part for “the chaotic conditions in the city,” including prostitution and gambling, and the media printed sensational, outrageous stories to stir up anti-Semitic public sentiment. Gentiles became jealous of the money Jews were making as factory owners and fearful of the idea of rich Jewish men pursuing gentile women. Burton J. Hendrick wrote “The Great Jewish Invasion” and several other articles in McClure’s Magazine arguing that the Jews were too ambitious and were taking over every important aspect of city life.

As they followed the murder trial, Atlanta newspapers framed Frank in the context of the city’s working-class frustrations with industrialization. The case took place at a time when labor unrest was higher than ever before: workers believed they were not being paid fairly, and conditions in the factories springing up across the city were terrible. White workers were especially frustrated, as they felt their jobs threatened by black workers. The aforementioned strikes against the Elsas family’s Fulton Bag and Cotton Company, a few decades before Frank’s trial, had grown out of exactly these grievances over wages, working conditions, and competition for jobs from black women. During the Leo Frank case, the National Pencil Factory was portrayed as an immoral place to work, unfit for women, and Frank was framed as an evil, perverse boss who did not care at all for the well-being of his employees. Because Frank was a Jew, Atlantans were already primed to see him as greedy and evil, so newspapers had little difficulty portraying him as a stereotypically cruel, greedy boss. Frank came to represent all the problems of industrialization that were disadvantaging so many Atlantans, which is why they felt so vehemently convinced of his guilt and so certain that he deserved to die.

Through their coverage of the case, the press especially portrayed Leo Frank as the emblem of what many people considered the most terrible aspect of industrialization: child labor. At the time, Georgia had some of the weakest child labor regulations in the country, allowing ten-year-olds to work eleven-hour days in mills and factories. Frank’s trial came at a time when provocative stories about child labor in factories were already being published in newspapers. Many Georgians were desperate to get rid of child labor: “‘Thy Kingdom Come’ means the coming of the day when child labor will be done away with, when every little tot shall have its quota of sunlight and happiness.” The fact that Mary Phagan had been only thirteen when she was murdered allowed the newspapers to frame the case as a perfect example of the evil children could experience in their factory jobs. Frank, as the accused murderer, was portrayed as the stereotypical factory owner who exploited children. The implication that Frank might have raped Mary Phagan before murdering her only deepened the public’s sense that Frank represented the way industrialization corrupted children. Indeed, as the trial progressed, its main focus became the suspicion that he had raped her. The testimonies against him introduced this suspicion, with many of Mary’s friends saying that Frank made her uncomfortable and that he always “wanted to talk to her.” The fact that Frank was considered ugly and unattractive made it easier for the Atlantan public to imagine him as a pervert.

In the end, Frank came to represent all the things wrong with Atlantan society at the time. Jeffrey Paul Melnick put it best when he said that Frank was:

identified as a ‘capitalist,’ doubly a capitalist, since to the lumpen Socialist mind of the American Populist, capitalist equals Jew, and the two together add up to demi-devil. And in certain regards, the record seems to bear them out, for Frank did hire child labor, did work it disgracefully long hours at pitifully low wages; and if he did not (as popular fancy imagined) exploit his girls sexually, he spied in on their privacy with utter contempt for their dignity. Like most factory managers of the time, he was — metaphorically at least — screwing little girls like Mary Phagan.

 

Black-Jewish Relations

The crucial testimony that convicted Frank was delivered by Jim Conley, the janitor at the National Pencil Factory. After suspiciously changing his story multiple times, Conley testified in court that he had helped Frank move the body after the crime, admitting his own involvement in order to blame Frank. He also claimed that he had helped Frank write the murder notes found beside Mary Phagan’s body, saying that he could not have written them himself because he did not know how to write. Playing into African American stereotypes, he convinced the detectives that he, as an uneducated, drunk African American, was incapable of the complex thinking that would be necessary to murder someone and frame someone else for it. Sixty-nine years later, when Alonzo Mann came forward and revealed that he had seen Conley carrying Mary Phagan’s body, it became clear that Conley had been capable of exactly this deceit and had effectively carried it out. Regardless of how aware he was of what he was doing, Conley had played into crucial tensions in Atlantan society at the time in order to shift the blame onto Frank.

The living and working conditions for African Americans in Atlanta at the time were brutal. Jim Crow laws had established restraints on all public spaces, so black people lived lives largely segregated from white America. A few decades after they had been granted legal freedom, African Americans were still denied many basic American freedoms in practice. They wanted to move up in society, but whites continued to find ways to shut them out of public places and disenfranchise them. African Americans were deeply frustrated with this state of affairs, and they could not communicate with most Southern whites, who felt threatened by the idea of African Americans rising through the social hierarchy and changing the power dynamics. Rather than seeing blacks as disadvantaged, white people viewed them as lazy urbanites and blamed nearly all of the city’s problems on the bad character of its black population. The Atlanta race riot of September 22nd to 24th, 1906, in which white mobs killed African Americans, damaged their property, and wounded many others, was the manifestation of pent-up frustration over the job competition poor whites felt with blacks, as well as other crucial tensions between the races. This racial conflict was the backdrop against which the Frank case unfolded, and it is part of the larger narrative about race in Atlanta at the time.

The Leo Frank case took on an important symbolic meaning in America and got at the heart of a tension between African Americans, represented by Jim Conley, and Jews, represented by Leo Frank. The anti-Semitism pervasive in the South had spread from white gentiles to the African American community, who were distrustful and resentful of Jews’ economic success, which they saw as helping to keep them in their lower social status. Because Jews were economically successful, many saw themselves as above African Americans. Leo Frank’s case was not just the first major case in which a black man’s testimony was important in convicting a white man; it was also the first major case that pitted Jews and African Americans against each other and gave African Americans the upper hand. This tension was most obvious when officials wanted to arrange a meeting between Frank and Conley to see what would happen when “the negro [would] be quizzed in the presence of the man whom he accuses… his every action and look as he sees Frank’s eyes upon him will be followed closely by detectives and by the solicitor himself, and a crisis in the case may develop from the meeting.” However, the meeting never happened, because Frank decided he did not want to meet face-to-face with Conley. This decision sent the signal that he thought of himself as racially and socially superior, which infuriated the people of Atlanta. Rather than seeing Frank as one of them because he was white, Atlantan gentiles saw him as an other because he was Jewish, and his insistence on his superiority called even more attention to his Jewishness.

Ultimately, the case was crucial to the narrative about the power hierarchy in the industrial South: Atlantans were predisposed to suspect evil and deceit from Jews, while expecting African Americans to be stupid and lazy. Jim Conley behaved in the ways whites expected him to, playing into the narrative of the dumb factory worker to ensure people would conclude he was incapable of committing a crime and covering it up. Conley gave the appearance of fitting into the social order that Jim Crow laws had established, projecting the image of the kind of black person Southerners were used to and therefore did not see as threatening. In contrast, Frank was seen as very threatening: he represented the stereotype of rich Jews building businesses, becoming influential, and upending the social order. The American Israelite captured the truth hidden underneath these racial tensions when it printed a piece that read:

The Dorseys, the Browns and the Watsons have succeeded in bringing about the murder of an innocent man because he was a jew, in order to protect themselves against the truth that must have come out at some time of their guilty knowledge, and to render powerless the vicious and criminal negro, the real murderer of Mary Phagan, whom they have been shielding.

The fact that Conley was not convicted in the case or villainized by the Atlanta public is also due to the respective positions of blacks and Jews in society. One important reason Conley received little scrutiny as a suspect is that he was not an authority figure, and the case occurred at a time when people were suspicious of authority figures. Another significant reason is that, while there were many opportunities to kill a black man in Southern society at the time, there were few socially acceptable reasons to lynch a Jew. As anti-Semitism and antagonism grew in the South, people were eager to convict a Jew precisely because the chance was so rare. Agreeing with Detective Black’s statement that “a Negro’s blood would not suffice,” Detective Watson said, “Hell, we can lynch a nigger anytime in Georgia, but when do we get the chance to hang a Yankee Jew?” In the end, the fact that Jews were perceived as superior to African Americans in Atlantan society worked against Leo Frank. He represented a hated social group that Atlantans did not usually have an opportunity to commit violence against, and so lynching him held a special allure.

 

Aftermath

The lynching and false conviction of Leo Frank had a profound impact on American society. First and foremost, it was a warning to Jews in Atlanta, who were now divided from the rest of the city by the “chasm” that the intense anti-Semitism surrounding the case had created. Frank’s lynching was a sign to Jews across the country that anti-Semitism was a powerful force in America, one that threatened their lives and freedom. In the wake of Frank’s trial, many Jews came together to start the Anti-Defamation League, an organization that worked to fight anti-Semitism and preserve the reputations of Jews. Unfortunately, the Anti-Defamation League would be necessary in the years to come: Leo Frank’s experience was a precursor to many other horrible manifestations of anti-Semitism in the twentieth century.

As Jews became a more isolated community within Atlanta and across the country, white gentiles came together to preserve their spot in the social hierarchy. Within Atlanta, many found that Frank’s trial and lynching had confirmed the importance of preserving white gentile dominance in the South: “A short time after the lynching of Leo Frank, 33 members of the group that called itself the Knights of Mary Phagan gathered on a mountaintop near Atlanta and formed the new Ku Klux Klan of Georgia.” For most Atlantans, lynching Frank seemed like “the justice of God,” the right way to preserve their place in the hierarchy of their society. Both Jews and African Americans would continue to be marginalized, threatened, hurt, and killed in Southern society because of their race. African Americans, in particular, would continue to fight against the stereotypes of blacks as lazy, criminal drunks, the kinds of stereotypes Conley had played into during his testimony and his attempts to frame Frank.

The Frank case also contributed to ongoing discussions of the problems of industrialization. It helped expose the ways factory owners mistreated their workers, as newspaper articles about Frank focused largely on his cruelty as a boss and his inappropriate comments. It also added to the debate over child labor, which had already been underway but now had a new, disturbing example to add to the list of reasons that child labor should be abolished or at least regulated. It would take more years, more newspaper articles, and more public outcry for the problems in factories to be addressed, but the anger at industrialization that Frank’s case revealed was the beginning of the force that moved those reforms forward.

Ultimately, Leo Frank’s trial and lynching got at the heart of several key themes in Southern society at the time: anti-Semitism, racial hierarchy, and labor tensions. The case exposed huge problems facing society, but rather than helping people understand and resolve these issues, it divided social groups further and increased the tensions between them. Only with some distance could historians look back, understand the case fully in its context, and use it as a window into the dynamics that have had a lasting impact on American society. Perhaps the most important lesson of Leo Frank’s experience is the importance of reexamining history to understand the trends that have shaped our society into what it is today, and the truths that might still need to be uncovered.

 

Bibliography

Alphin, Elaine Marie. An Unspeakable Crime: The Prosecution and Persecution of Leo Frank. Carolrhoda Books, 2014. Print.

“Anti-Semitism in the United States: Henry Ford Invents a Jewish Conspiracy.” Jewish Virtual Library. N.p., n.d. Web. 29 May 2017. <http://www.jewishvirtuallibrary.org/henry-ford-invents-a-jewish-conspiracy>.

Dinnerstein, Leonard. The Leo Frank Case. Athens: University of Georgia Press, 2008. Print.

“FRANK LYNCHING DUE TO SUSPICION AND PREJUDICE.” New York Times (1857-1922): 4. Aug 20 1915. ProQuest. Web. 9 May 2017.

Fulton Bag and Cotton Mills Digital Collection. N.p., n.d. Web. 29 May 2017. <http://www.library.gatech.edu/fulton_bag/index.html>.

“GEORGIA’S DISGRACE COMPLETE.” The American Israelite (1874-2000): 4. Aug 19 1915. ProQuest. Web. 9 May 2017.

“GEORGIA’S SHAME!” The Atlanta Constitution (1881-1945): 6. Aug 18 1915. ProQuest. Web. 9 May 2017.

“Girl Murdered in Pencil Factory.” History.com. A&E Television Networks, n.d. Web. 30 May 2017. <http://www.history.com/this-day-in-history/girl-murdered-in-pencil-factory>.

Jacobs, Peter. “The Lynching of a Jewish Man in Georgia 100 Years Ago Changed America Forever.” Business Insider. Business Insider, 18 Aug. 2015. Web. 29 May 2017. <http://www.businessinsider.com/leo-frank-lynching-in-georgia-100-years-ago-changed-america-forever-2015-8>.

“Jewish Community of Atlanta.” New Georgia Encyclopedia. N.p., n.d. Web. 29 May 2017. <http://www.georgiaencyclopedia.org/articles/history-archaeology/jewish-community-atlanta>.

“Leo M. Frank Lynched–Georgia’s Lasting Disgrace.” The Jewish Exponent (1887-1990): 9. Aug 20 1915. ProQuest. Web. 9 May 2017.

Melnick, Jeffrey Paul. Black-Jewish Relations on Trial: Leo Frank and Jim Conley in the New South. Jackson: University Press of Mississippi, 2000. Print.

“The Murder of Leo M. Frank.” The Jewish Exponent (1887-1990): 4. Aug 20 1915. ProQuest. Web. 9 May 2017.

“NEGRO CONLEY MAY FACE FRANK TODAY.” The Atlanta Constitution (1881-1945): 5. Jun 13 1913. ProQuest. Web. 9 May 2017.

Oney, Steve. And the Dead Shall Rise: The Murder of Mary Phagan and the Lynching of Leo Frank. New York: Vintage, 2004. Print.

Rawls, Wendell, Jr. “AFTER 69 YEARS OF SILENCE, LYNCHING VICTIM IS CLEARED.” The New York Times. Mar 8 1982. ProQuest. Web. 2 Feb 2017.

“Witness Swears He Saw Frank Forcing Unwelcome Attentions upon the Little Phagan Girl.” The Atlanta Constitution (1881-1945): 2. Aug 20 1913. ProQuest. Web. 9 May 2017.

 

Work of Tomorrow

Toll scanners replace tollbooth operators, ATMs and payment-sharing apps replace bank tellers, drones replace pilots and delivery workers, and robots replace factory workers on manufacturing assembly lines. Labor unions decry the imminent threat that automation poses to the global job market, and some economists predict that 47% of American workers hold jobs at high risk of automation in the next twenty years. The question then becomes exactly who, and which lines of work, face the inauspicious effects of automation, since the United States and other developed nations have overcome several previous waves of industrialization and technological advancement without devastating impact on human employment. McKinsey & Co., a private management consulting firm, estimates that an “automation bomb” in the United States could cost manual laborers nearly $2 trillion in lost annual wages. Some analysts predict that the next phase of automation will adversely affect blue-collar manual labor and white-collar information and service work roughly equally. Yet a contrasting perspective held by other analysts suggests that automation may instead spur further job growth in new and innovative fields.

No simple policy decision or law will eliminate or even curtail automation, because automation is rooted in the logic of capitalism, which maximizes profit through supply and demand. Employers seek more profit by increasing revenue and reducing expenses, including labor costs. McKinsey & Co. defines the ideal employee as one who is highly productive in his craft (thus eliminating the need for many less productive workers) and requires less pay. As technology advances, the preference for business owners seems clear: use robot workers and produce a larger profit margin. Although capitalism was founded on the premise of improved social mobility for all individuals, automation is paradoxical in this respect: it likely widens wage gaps, as company executives grow wealthier from larger profit margins while middle-class workers lose their jobs or see their wages reduced.

A common misconception about automation is that only blue-collar laborers will be affected. While blue-collar workers are directly impacted by the loss of jobs to automation, white-collar professionals also face competition from superior technology. One of the most promising technological developments of the 21st century is artificial intelligence. Artificial intelligence (AI) has granted cognitive capabilities to machines previously thought able to perform only repetitive and mundane tasks. Researchers have now programmed “smart” machines and robots to work on complex legal tasks, investigate fraud for insurance companies, and make algorithmic business decisions by assessing current market conditions, among other high-level tasks. Entry-level employees without sophisticated skills can look meager in comparison to computers. As businesses seek a competitive edge over their rivals, artificial intelligence provides that sophistication.

However, automation cannot fully eliminate all the jobs that exist in society, and in many cases employees and job positions evolve, with workers improving their skill sets to match forecasted changes in the labor market. As robots assume more menial and repetitive tasks in the manual labor market, a new line of workers will arise to supervise and tend these machines. Many of the workplace changes brought by automation will revise job titles and expand the fields of engineering and technology associated with automating manual labor. The brunt of automation’s impact on the job economy will fall on the current generation of workers, as the shift from manual labor to technological tasks occurs. Unfortunately, economists predict significant layoffs, particularly in manual labor, and there are limited opportunities for professionals working today to receive the retraining needed to accommodate these colossal shifts in how companies operate. But the next generation of workers is being well prepared for the ever-growing ranks of technicians and engineers. From the STEAM education movement to the rise of computer science classes in primary schools, society recognizes the need to adapt to the changing work demands of our time.

Automation has become an increasingly serious threat in global society, affecting not just a single person or nation but the job economy as we know it. Machines may hinder social mobility for members of all classes unless change occurs soon and assurances are created to protect workers’ jobs from expanded automation, especially on foreign soil. Despite the possibility of new industries accompanying automation, the lives and financial well-being of the current generation are at risk.

 

Works Cited

“Automation and Anxiety,” 6/25/16, The Economist.

Ignatius, David. “The Brave New World of Robots and Lost Jobs,” 8/11/16, The Washington Post.

 

A Mindful Macbeth: How “Hand” is Used in Macbeth to Represent a Relationship Between Mind and Body

We usually think of our hands as fairly physical things, almost distant things; we don’t regularly consider what they are doing or how we control them. Not so for Macbeth. In William Shakespeare’s classic Macbeth, the power-hungry Macbeth murders many for the Scottish throne, which witches tell him he will gain. Because Macbeth is set in the 11th century, all of these murders are physical, all of them done by hand. The fire driving the murders, though, Macbeth’s desire for power, lies solely in his head. Throughout the play, the word “hand” often symbolizes the connections and separations between Macbeth’s body and Macbeth’s mind.

In Act 1 of Macbeth, Shakespeare uses the word “hand” to symbolize a separation between mind and body, specifically within Macbeth. In Act 1, Scene 4, Macbeth is speaking about murdering King Duncan. He says, “Stars, hide your fires; / Let not light see my black and deep desires: / The eye wink at the hand; yet let that be, / Which the eye fears, when it is done, to see” (1.4.57-60). Here, “hand” is used both literally and metaphorically: it refers literally to the murder that Macbeth’s hand will help commit, but the action of Macbeth’s hand stabbing Duncan also represents the whole idea of Duncan’s murder, both the desire and the act. The distinction between “hand” and “eye” is also interesting. Shakespeare is noting the difference between the more physical aspects of the body, in this case Macbeth’s hand, and the more mental ones: what Macbeth’s eye sees. Macbeth is afraid of seeing himself, of realizing that he is about to murder a friend. As readers, we can assume that the separation between mind and body, between eye and hand, that Macbeth exhibits originates in this fear of himself. Later in Act 1, Lady Macbeth is speaking to Macbeth, who has just said that Duncan is coming that day and leaving the next. She says,

O, never

Shall sun that morrow see!

Your face, my thane, is as a book where men

May read strange matters. To beguile the time,

Look like the time; bear welcome in your eye,

Your hand, your tongue: look like the innocent flower,

But be the serpent under’t. (1.5.71-77)

In this quote, Lady Macbeth describes very literally that Macbeth’s hands (“hands” here standing for his whole physical body) need to seem innocent. In the previous use of “hand,” Shakespeare distinguishes “hand” from “eye,” but here both represent what Duncan is supposed to see. Still, as in the previous example, Shakespeare notes a separation between mind and body: Macbeth’s body must be welcoming, but his mind must be deadly. In both instances, Shakespeare makes a point of noting the contradiction between what Macbeth’s body shows and what his mind intends.

Other times in Macbeth, Shakespeare uses “hand” to demonstrate a connection between Macbeth’s mind and body. In Act 2, Macbeth says (in a soliloquy), “Is this a dagger which I see before me, / The handle toward my hand? Come, let me clutch thee. / I have thee not, and yet I see thee still” (2.1.44-46). The near-repetition of “hand” is interesting here: “handle” is not actually a form of “hand,” but it sounds repetitive when read aloud. Shakespeare may have chosen to emphasize this echo to show the connection Macbeth feels between his hand and the dagger. Macbeth is wondering whether the dagger is a sign that he should murder Duncan with his own dagger and his own hands. The image also suggests a leading somewhere: the handle of the dagger is leading Macbeth. It is almost as if Macbeth has no control here; the tone is passive, and he has no choice but to be led by his mind’s creations. His body is acting under his mind’s tricks, a separation between action and desire similar to the previous example, but more importantly a connection between his entire mind and body, a connection so strong that Macbeth’s body functions only under his mind’s “tricks”: his mind and body are inseparable.

Connection and separation are opposites, and Shakespeare often treats them that way, but he also sometimes uses the two within the same lines or moment in Macbeth. In Act 4, Scene 1, Macbeth has just found out from Lennox that Malcolm fled the country. He is panicked and has just seen the Weïrd Sisters’ prophecies, so he is also confused and doesn’t know what to think. He says,

Time, thou anticipatest my dread exploits:
The flighty purpose never is o’ertook
Unless the deed go with it; from this moment
The very firstlings of my heart shall be
The firstlings of my hand. (4.1.164-168)

Here, Macbeth himself (as opposed to Shakespeare) draws the connection between mind and body; he notices that often the mind has an idea but the body does not execute it. He also seems to be pointing out, though, that from this moment his mind will drive his body, almost as if his mind is in control of his body. In this example, Macbeth notices a connection between his mind and body, but also that what his mind wants is separate from what his body does. Shakespeare is illustrating a broader relationship between Macbeth’s mind and body.

In Macbeth, Shakespeare often uses the word “hand” to symbolize the relationship between Macbeth’s mind and body. Sometimes he uses it to show the connections, sometimes to show the separations, and sometimes both. As we see in waddling toddlers or talking babies, right from the start our society establishes a relationship between the mind and the body — some babies develop physically first, some mentally; people often are either “smart” or “athletic.” We categorize people into mind and body — our society treats them as separate. As Shakespeare teaches us, though, our minds and bodies are separate sometimes, in sync other times, and sometimes both. So the next time you hear someone talking about meditation or breathing exercises or the new popular adult coloring books — or the next time you are using any of these yourself — remember to recognize both the separations and connections between your mind and body. Take it from Macbeth.

 

Good Night, Bad Night: The Black Night in Macbeth

The night is alive and, like a human, it can be allied with. It has a peaceful side and a dark one. In Shakespeare’s tragedies, the night takes on the darker role, but in his comedies, such as The Merchant of Venice and A Midsummer Night’s Dream, it takes on the lighter, more peaceful one. When the night is dark, nature becomes creepier, and the night becomes more evil. In Macbeth, Lady Macbeth and Macbeth, under the influence of three supernatural sisters, feel the need to fulfill their dirty desire for power. They will do anything to achieve it, including murder, starting a war, and allying with the dark side of the night. They use the dark and evil side of the night to help them gain power by inflicting harm and confusion on the rest of the Scottish kingdom. There is a sharp contrast between how the night is presented in Shakespeare’s comedies and in his tragedies.

In Shakespeare’s comedies, the night is often illustrated as a peaceful and quiet time. It is when everything and everyone rests. For example, in The Merchant of Venice, Act 5, Scene 1, Lorenzo says, “How sweet the moonlight sleeps upon this bank! / Here will we sit, and let the sounds of music / creep in our ears: soft stillness and the night / become the touches of sweet harmony” (5.1.52-55). Just reading this quote may calm the reader because the language is soft and soothing. It has anything but a negative connotation. Shakespeare also uses the night in a positive way in Act 1, Scene 1 of A Midsummer Night’s Dream. Lysander says, “Tomorrow night when Phoebe doth behold / her silver visage in the watery glass, / decking with liquid pearl the bladed grass / (a time that lovers’ flights doth still conceal)… ” (1.1.209-212). To clarify, Lysander is saying, “Tomorrow night, when the moon shines on the water and creates beads of pearly light on the grass (the time when lovers are most concealed and can run away).” Lovers may choose the night out of all times because it is the most peaceful and quiet, so they won’t be disturbed. The moon shines on the grass and the water in a beautiful, unusual way during the night, and it is also the time when everyone rests. In these quotes, the night takes on the role of good: peaceful, quiet, and beautiful.

Shakespeare, equally skilled at creating unsettling and violent moods, changes his definition of the night in his tragedies. In Macbeth, the night morphs into a dark, evil, strange, and creepy phenomenon. An example of this is in Act 2, Scene 4, where an Old Man talks about how terrible and strange the night of Duncan’s murder was. The Old Man says, “… Hours dreadful and things strange; but this sore night / hath trifled former knowings” (2.4.3-5). In modern English, this means the night has been so frightening that what people used to think terrible hardly seems so anymore. From this, it may be drawn that the Old Man is using the night to represent the murder, as if the night itself were the terrible, scary thing that gave the Scottish people a new understanding of what is truly horrific. In Act 2, Scene 3, Lennox, too, senses that the night is evil. Here, he is exclaiming how odd the night of Duncan’s killing was. Lennox says:

The night has been unruly: where we lay,
our chimneys were blown down; and, as they say,
lamentings heard i’ the air; strange screams of death,
and prophesying with accents terrible
of dire combustion and confused events
new hatch’d to the woeful time: the obscure bird
clamour’d the livelong night: some say, the earth
was feverous and did shake. (2.3.28-36)

In other words, this night has been chaotic. The wind blew down people’s chimneys as they slept. Some say they heard cries of grief in the air, strange screams of death, and voices predicting terrible things in the woeful time to come. Here, Lennox is describing how unusual the night was. Shakespeare is using it to represent the shock and horror as people find out about Duncan’s death. Like the Old Man, Lennox senses that the night has grown more evil even before he learns of Duncan’s death; it is almost as if he knew something had happened before he found out about it. These lines demonstrate how the night is showing its evil side over its good side. They may also suggest to the reader that Macbeth and Lady Macbeth are allying with that dark side.

As Macbeth progresses, Macbeth and Lady Macbeth learn to trust the night and to seek refuge in its evil side more and more often, with negative consequences for the people of the Scottish kingdom. This is demonstrated in Act 3, Scene 2, when Macbeth feels that he must kill Banquo if he wants to stay king, basing this on the knowledge he gained from the witches. Macbeth says to Lady Macbeth:

Be innocent of the knowledge, dearest chuck,
till thou applaud the deed. Come, seeling night,
scarf up the tender eye of pitiful day;
and with thy bloody and invisible hand
cancel and tear to pieces that great bond
which keeps me pale. (3.2.47-52)

This may be interpreted as, “It is better that I don’t tell you until after it is done, when you can applaud me for what I did. (He stops speaking to Lady Macbeth.) Come, night, allow my killers to be stealthy and conceal this deed of mine. Allow your invisible hands to end Banquo’s life, which brings me fear.” Macbeth wants the night to come; he is using it to cover up his killing of Banquo and to keep his murderers unseen while they do it. Macbeth is starting to use the night as an ally to cause confusion and destruction. This is different from how he used to use it, when he would rest and allow other people to rest during the peaceful, quiet time. Now Macbeth uses the night as a murder weapon, and this affects the rest of the Scottish kingdom in that the people can no longer rest either. An example of this is in Act 2, Scene 3, the famous Porter scene. The Porter talks about Macbeth’s castle and how it has transformed in a negative way. The reader knows of Duncan’s death in this scene, but the Porter does not. He says, “Here’s a knocking indeed! If a man were porter of / hell-gate, he should have old turning the key” (2.3.1-2). Essentially, the Porter is comparing Macbeth’s castle to hell and his own job to that of the keeper of hell’s gates. After Macbeth and Lady Macbeth use the safety of the night to kill Duncan, the doorman of their castle thinks that the castle is no longer what it used to be. When Macbeth and his wife rely on the night’s aid in murder, people sense that the night has become evil. This is also illustrated in the previous Old Man and Lennox quotes: all three describe the night as chaotic, disturbing, and hell-like. In addition, I feel that Macbeth and Lady Macbeth become more and more evil as they continue to use the dark side of the night in their dirty business.
Not only are they intentionally inflicting harm on specific people for their own gain, but they are also harming the rest of the Scottish kingdom, disrupting everyone else’s daily lives and sleep. They are creating fear among the Scottish people, which is one of the classic marks of an evil character.

In conclusion, the night is often portrayed as a peaceful and quiet time in Shakespeare’s comedies, but in Macbeth it consistently plays a darker, more evil role. Macbeth and Lady Macbeth use the night in their pursuit of power, and it costs them their original, empathetic personalities: they become full of hatred and darkness. This may remind a reader of the classic devil’s bargain, in which a character teams up with the devil and then falls under his influence. In Macbeth, the night represents the devil, and as the play progresses, Macbeth and Lady Macbeth feel the need to use the night to turn dark thoughts into actions. Perhaps when people use the night for things besides peaceful activities like sleep and renewal, they become dark and evil, as if under the influence of the devil.

 

Energy, Empowerment, & Entrepreneurship: Female Figures in American Literature

“Thou ill-formed offspring of my feeble brain,” begins Puritan poet Anne Bradstreet in “The Author to Her Book” (1678), adding “Who after birth did’st by my side remain / Till snatcht from thence by friends, less wise than true / Who thee abroad exposed to public view” (Bradstreet 1-4). Here, the Puritan author demonstrates that there are other roles in society women can fulfill, though women do not necessarily take advantage of those roles for fear of the consequences. Both the narrator and Bradstreet herself struggled with traditional male images symbolizing poetic creation. Many critics, most notably literary critic Patricia Cadwell, now praise Bradstreet as “the founder of American literature” and for her role in exposing the evils of patriarchal tradition (Cadwell 138). In truth, various works of American literature emphasize the female figure’s thirst for equality amid the continuation of restrictive, outmoded ideologies pertaining to gender rights. Through these figures’ journeys, readers are inspired to continue forwarding the empowerment of women. The early poet exposes the realistic struggles of women through her exposure of an evil patriarchal tradition that remained essentially unchanged 200 years later. Her emphasis on the necessity of supporting the fearless, undermined female figures who bravely, as later author Nathaniel Hawthorne states, “strike their roots into unaccustomed earth” (Hawthorne 13), encourages readers to seek new ideologies, following in the footsteps of those before them.

To explain further, in writing The Scarlet Letter (1850), Romanticist Nathaniel Hawthorne brings light to the truth about female oppression while simultaneously using the infamous Puritan adulterer, Hester Prynne, as a model of a woman who dares to push social boundaries. By writing about an extreme event 200 years before his time, Hawthorne emphasizes how little the standards had changed for women in America. In stating that “Women derive a pleasure, incomprehensible to the other sex, from the delicate toil of the needle” (Hawthorne 6), the novelist underscores that women do have a clear, domestic role. Nevertheless, the Romantic novelist does not believe that such a role is the only one women can fulfill. He later demonstrates Hester’s inner strength as she stands alone against a group of male magistrates: “Never! […] I will not speak!” (50), she declares, refusing to name the father of her illegitimate child. Here, Hawthorne brings light to the perception of women in Puritan society, and Hester’s character is made to signify a change in society: the move from blind faith in tradition into a new era of mutual understanding (Baym). Similarly, American playwright Arthur Miller’s The Crucible (1953) emphasizes the corrupted image of women in Puritanical America through their involvement in the Salem Witch Trials. In writing “‘She is telling lies about me! She is a cold, sniveling woman, and you bend to her! Let her turn you like a — ’ ‘Do you look for whippin’?’” (Miller 22), the author demonstrates, through the figures Abigail Williams and John Proctor, how women who fought back against the lies of society were continuously shunned and dubbed “wicked” (20).
Through their perilous journeys in Puritanical America, both Hester Prynne and Abigail Williams serve as satirical symbols of the stagnant status of women in American society, which Miller and Hawthorne demonstrate through the women’s supposedly preposterous actions, actions that further blind society to the need for a solution.

Furthermore, as demonstrated in Puritan author Mary Rowlandson’s Narrative of the Captivity and Restoration of Mrs. Mary Rowlandson (1682), these preconceived notions about female figures and womanhood manipulate the vulnerable minds of society. When taken captive by a tribe of Native Americans, the eponymous author continuously doubts her newfound survival strengths, as when she writes, “I thought my heart and legs, all would have broken, and failed me” (Rowlandson 3). Here, the author reveals her perspective on her own mentality, which is, essentially, degrading under the strict Puritanical standard for women; but it is her later realization of her own power that empowers her to break with stereotypical tradition. As a result, present-day critics such as Rebecca Blevins Faery refer to the narrative as a “proto-epic in scope in the founding of national identity (and literature)” (Faery 259) for its removal of Puritanic notions about the behavior of women. It is through the eponymous author’s fear of disobeying the identity her society has painted onto her that she discovers an alternative reality for herself: a hunger for the wild, empowering, feminine animal within. Additionally, American author Ralph Waldo Emerson supports the idea of a personal identity in his essay “Self-Reliance” (1841). In Mary Rowlandson’s case, Puritanical notions are what “scare [her] from self-trust” (Emerson 44), but it is to her “feminine rage” that “the indignation of the people is added” (Emerson 43). Nevertheless, Emerson’s writings introduce readers to the unfortunate reality of past American society; regardless of his efforts, women like Mary Rowlandson continuously perceive themselves to be incapable of self-sufficiency.
As the eponymous author engulfs herself further in a world of preconceived notions, she strengthens the impenetrable sphere of stereotypes that has surrounded the world and American literature thus far.

Notwithstanding the dubbed “fearsome” ideology surrounding the entrepreneurship of female figures, Romantic poet Emily Dickinson bursts into the sphere of American literature with cleverly hidden pointers to the reality of independent women in American society. As she writes in her poem “I’m ‘wife’ — I’ve finished that”: “I’m “wife” — I’ve finished that — / That other state — […] / It’s safer so — ” (Dickinson, “I’m wife” 1-4). Here, the poet reveals, through a young girl’s contradictory feelings, the reality of marriage and its prevention of female self-identity, labelling women as the possessions of their husbands. Additionally, Dickinson implies, with this innovative ideology, that a woman who is not married is capable of more, without a husband to interfere. As literary critic Mary Loeffelholz reflects in her article “Dickinson and the Boundaries of Feminist Theory,” the poet’s primary role is in breaking the boundaries of female stereotypes through the figures in her poems: “Over and over in these poems and prose passages, borders and boundaries exist to be breached” (Loeffelholz 111). Likewise, in continuation of this revolutionary trend, Dickinson presents a similar message in her poem “We outgrow love like other things,” writing “[w]e outgrow love like other things / And put it in the drawer” (Dickinson, “We outgrow” 1-2). Here, Dickinson describes how people can outgrow love like an antique fashion, mirroring how, in society, women are taught that their looks matter chiefly for the pleasing of men. Women were rarely independent and were discouraged from exercising reason; but Dickinson demonstrates here that such fashions continuously outgrow one another, removing the need for male judgement of women’s image. In short, Emily Dickinson truly was a feminist writer ahead of her time, through her painting of the female figure’s identity and her exposé of societal falsehoods.
Truly, Dickinson is a literary incarnation of the fearless Joan of Arc: she raises her sword high in the air and rips apart the gilded fabrics of American literature.

As coined by American author Mark Twain, the Gilded Age was a revolutionary period in American literature that brought light to the “underbelly,” or false perfections, of American society. Similarly, Realist author Kate Chopin highlights in her short story “The Story of an Hour” (1894) the gilded truths within female figures, specifically those held in the restrictive chains of marriage. “‘Free, free, free,’” begins the story’s protagonist, Mrs. Louise Mallard, who has just received word of her husband’s death, “‘free! Body and soul free’” (Chopin 757). Here, the author highlights the protagonist’s hidden emotions within her marriage: Louise’s initial reaction comes from her chains being removed from an accustomed Earth, not from a shattered heart. Additionally, this story brings light to the risk women writers faced in being absolutely objective: moral ambiguity was a risk, and the only acceptable way to depict such immoral scenarios was, as literary critic Karin Garlepp Burns writes, to “undermine the exaggerated objective mode” (Burns, “The Paradox” 30). On the other hand, in Mark Twain’s Adventures of Huckleberry Finn (1884), illustrator Edward Winsor Kemble’s image “Indignation” demonstrates the inner anger of female figures; the title itself is ironic, as indignation is defined as anger or annoyance provoked by perceived unfair treatment (in this case, the unfair treatment of women). In a moment of anger in which her eyes were “ablazing higher and higher” (Twain 199), Kemble depicts Mary Jane Wilks with rage and disgust on her face (Figure 1), contradicting the stereotypical image of women in American society at the time as depicted in Charles Dana Gibson’s plethora of “Gibson Girl” images, specifically “The Hero… Discovered in the Act of Carrying on Two Conversations at a Time” (1903).
For example, Kemble displays Mary Jane as an uptight, rigid woman, whereas Gibson paints women in very low-cut, loose dresses, highlighting how they are treated merely as objects meant to appeal to men (Figure 2). Regardless of Kate Chopin’s and Edward Kemble’s attempts to instill the image of independent, proud women, the glamour and ostentatious lifestyle of the Lost Generation discards these ideologies; even Mark Twain, who was somewhat tolerant of empowering female figures, claiming his daughter “was all [his] riches” (Burns, “Mark Twain”), belonged to this “gilded” world. As depicted in the 2000 Penguin Modern Classics cover of F. Scott Fitzgerald’s The Great Gatsby (1925), women are viewed merely as objects of lust and pleasure (Figure 3): the “beautiful little fool[s]” with painted, gold faces (Fitzgerald 17).

Furthermore, as demonstrated in American author F. Scott Fitzgerald’s The Great Gatsby (1925), the resurgence of empowering female figures is diminished by the temptations and scandals of elitism and the lavish lifestyle of the wealthy. In ironic connection with the writers of this era, coined “The Lost Generation” by author and mentor Gertrude Stein, the robust, astute minds of these women are lost within dreams of satisfaction and fulfillment of “the American Dream” (as coined by James Truslow Adams). In writing “[s]he wanted her life shaped now, immediately [… ] of love, of money, of unquestionable practicality” (Fitzgerald 151), the author emphasizes, through the female protagonist Daisy Buchanan, the image that women are fundamentally incapable of making up their minds without an intelligent man by their side. This overarching claim entraps women in cultural and gendered constructions of being a rich wife and “‘nice’ girl” (149). As aforementioned, upon speaking of her daughter’s future, Daisy remarks, “‘I hope she’ll be a fool — that’s the best thing a girl can be in this world, a beautiful little fool’” (17). Daisy is not a fool herself, so the remark is somewhat sardonic; yet while Daisy refers to the social values of her era, she does not seem to challenge them. The older generation values subservience and docility in females, and the younger generation values thoughtless giddiness and pleasure-seeking. In writing “[s]he is a victim of a complex network” (Fryer 165), literary critic Sarah Beebe Fryer unveils Daisy’s true intentions, suggesting that readers should continue to support her decisions even though they often run against the empowering morals of female figures. Regardless, Daisy Buchanan stands as a counterexample to female empowerment: presented with the opportunity to act on her knowledge, she instead wallows in her silence.
In conforming to the social standard of American femininity in the 1920s, Daisy is, essentially, held back by the leash of pearls around her neck, preventing her from continuing the parade of fearless female figures as literature has so far presented.

Regardless of her degradation of the societal power of women, F. Scott Fitzgerald introduces the idea of the “unattainable girl”: a female figure who is out of reach of the controlling, wanting power of another figure. In American playwright Lorraine Hansberry’s A Raisin in the Sun (1959), twenty-two-year-old Beneatha Younger is an incarnation of the “unattainable girl” through her difficulties with her conservative mother and her anti-marriage attitude: “‘I’m not worried about who I’m going to marry yet — if I ever get married’” (Hansberry 50). Here, the author brings light to Beneatha’s hidden strength, shown through her defense of her own morals “by forgoing blasphemous outbursts,” as American author Mary Ellen Snodgrass writes in her article “A Raisin in the Sun” (Snodgrass). Not only is Beneatha uninterested in getting married and being cared for by a man, but she is also convinced that she alone can choose the direction and outcome of her life. Similarly, Mary Anne, an American soldier’s girlfriend in Tim O’Brien’s “Sweetheart of the Song Tra Bong,” from The Things They Carried: A Work of Fiction (1990), echoes Hansberry’s emphasis on the gilded strengths of women through her exercise of total agency over her life “with different forms of expressions” (53) and her addiction to the wild nature of Vietnam. An unnoticed counterexample to stereotypes about American women’s participation in war, Mary Anne, who enters as a soldier’s girlfriend but leaves as a soldier herself, “ma[kes] you think about those girls back home, how pure and innocent they all are, how they’ll never understand any of this” (O’Brien 108). Here, O’Brien emphasizes how the women who go to war do not fulfill their typical gender roles but rather take on characteristics generally associated with men, because the intense circumstances of war demand those qualities of its soldiers: “she quickly fell into the habits of the bush” (94).
As American literature dictates, those who do not follow the status quo of their role as women unravel American society and its accepted standard of gender and identity. Neither Beneatha nor Mary Anne dons her skirt in place of camouflage; through their energetic attitudes, they paint their faces red, preparing for a never-ending, fearsome fight to change the outlook on female figures.

In The Color of Water: A Black Man’s Tribute to His White Mother (1995), American author James McBride demonstrates the empowering, determined work ethic of female figures throughout their fragmented lives and haunted pasts. His mother, Ruth McBride, is perceived by her children as an empowering, spirited matriarch; however, a layer of Ruth’s personality retains the sorrows and regrets of her childhood. When she states, “‘We had no family life. That store was our life’” (McBride 41), the author brings light to the unloving, patriarchal manner in which Fishel Shilsky ruled his household. In turn, Ruth runs her own family with love, along with a similarly tight rein; she disciplines her children to answer directly to her, demonstrating her assertive, controlling power regardless of her haunted past. Additionally, McBride emphasizes his mother’s unseen strength through the difficulties she faced as a single mother of twelve who strove to give her children the best education possible. Through her hard work, Ruth is able to send her children to some of the finest colleges in the country, which is, as Frances Winddance Twine, Professor of Sociology at the University of California, states in praise, “an amazing accomplishment for even the most privileged of white women” (Twine 152). Moreover, the critic agrees with McBride’s revelation of the hidden strength of women, stating that “we should not assume that there are no more like [Ruth], in America’s past and in its future” (154). In short, Ruth McBride forges her own strange life, but she triumphs as the matriarch of an outstanding family, creating a self-sufficient world for them. While the book’s title refers to the color of God, it is truly a reference to the myriad of colors within the book, which satirically emphasizes how people cannot be defined by their color, whether black or white, or pink or blue.

In the case of fearless female figures, American literature has dubbed them, thus far, “feeble” (Bradstreet 1), “delicate” (Hawthorne 77), “careless” (Fitzgerald 179), and “coy and flirtatious” (O’Brien 95). All of these labels are degrading and reflect the opinions of men who are far from the supportive, romantic equals women desire to coexist with. On the other hand, women are also regarded as “liberated” (Hansberry 63), “sivilize[d]” (Twain 283), possessed of “nonchalance” (McBride 8), and “free” (Chopin). Notwithstanding the development of history or the years in which these pieces were created, the trend of male figures shaping what the female figure represents is continuous. However, in the case of female figures like Beneatha Younger, the element of love and infatuation with another figure comes into play, raising the question of what role male figures truly play in a female figure’s story. Are women denied the labels “fearless” or “empowering” simply because they have found a man to live with for the rest of their days? Is marriage a binding contract to an unequal communion between man and woman? To writers such as Emily Dickinson, marriage is “safer” than the “pain” of being single in society (Dickinson, “I’m wife” 4, 10); but to female figures such as F. Scott Fitzgerald’s Daisy Buchanan, regardless of her careless persona, love is “mak[ing] a fool of” yourself while looking into “well-loved eyes” (Fitzgerald 96, 131). Ultimately, as seen in present-day American society, it is unclear whether feminism and the role of the empowering female figure point to revolutionary women who never marry, or to those who find love and remain strong regardless.
American literature rewrites this psychomachic struggle over and over again, never revealing the answer and thereby furthering the inequality between genders; nevertheless, it encourages readers to shatter preconceived societal notions and break the gilded sphere of stereotypes.

Works Cited

Baym, Nina. “Revisiting Hawthorne’s Feminism.” Hawthorne and the Real: Bicentennial Essays, edited by Millicent Bell, Columbus, Ohio State Univ. Press, 2005, pp. 107-24. Google Books, books.google.com/books?hl=en&lr=&id=24HXF1jsga4C&oi=fnd&pg=PA107&dq=Revisiting+Hawthorne’s+Feminism&ots=2fKELQ6eDa&sig=gPA0ETgGdki-YQQREcABDfsxHTk#v=onepage&q&f=false.

Bradstreet, Anne. “The Author to Her Book” (1678). The Heath Anthology of American Literature, 4th ed., vol. 1. Edited by Paul Lauter. Houghton Mifflin, 2002, p. 390.

Burns, Karin Garlepp. “The Paradox of Objectivity in the Realist Fiction of Edith Wharton and Kate Chopin.” Journal of Narrative Theory, PDF ed., vol. 29, no. 1, Winter 1999, pp. 27-61.

Burns, Ken, producer. “Ken Burns’ Mark Twain: Part 2.” SAFARI Montage. PBS, 2001. Accessed 23 Feb. 2017.

Cadwell, Patricia. “Why Our First Poet Was a Woman: Bradstreet and the Birth of an American Poetic Voice.” Literature Criticism from 1400 to 1800, vol. 30, 1996, pp. 136-44. Gale Literary Sources, go.galegroup.com/ps/i.do?p=GLS&sw=w&u=gree48311&v=2.1&id=RQBNBM478344547&it=r&asid=9b9e9d1a9d71032d7dae20afbc16941c. Accessed 23 Feb. 2017.

Chopin, Kate. “The Story of an Hour” (1894). Kate Chopin: Complete Novels and Stories, by Chopin, edited by Sandra M. Gilbert, 2nd ed., Library of America, 2008, pp. 756-58.

Dickinson, Emily. “I’m ‘wife’ – I’ve finished that.” The Complete Poems of Emily Dickinson, by Dickinson, edited by Thomas Herbert Johnson, Little Brown Company, 1960, p. 94.

———. “We outgrow love, like other things.” Wikisource, 1 Mar. 2013, en.wikisource.org/wiki/We_outgrow_love,_like_other_things. Accessed 2 Apr. 2017.

Emerson, Ralph Waldo. “Self-Reliance” (1841). American Literature: Essential Short Works. Convent of the Sacred Heart School (Greenwich, CT), 2010, pp. 39-44.

Faery, Rebecca Blevins. “Mary Rowlandson Maps New Worlds: Reading Rowlandson.” Literature Criticism from 1400 to 1800, vol. 66, 2001, pp. 256-67, go.galegroup.com/ps/i.do?p=GLS&sw=w&u=gree48311&v=2.1&id=MIEAFK694681793&it=r&asid=0437e7e25ef889188ea4f896a2c9c081. Accessed 5 Apr. 2017.

Fitzgerald, F. Scott. The Great Gatsby (1925). Scribner, 2004.

Fryer, Sarah Beebe. “Beneath the Mask: The Plight of Daisy Buchanan.” Critical Essays on F. Scott Fitzgerald’s The Great Gatsby, edited by Scott Donaldson, Boston, Hall, 1984, pp. 153-65.

Gibson, Charles Dana. The Hero…Discovered in the Act of Carrying on Two Conversations at a Time. JPEG file, 1903.

Hansberry, Lorraine, and Robert Nemiroff. A Raisin in the Sun (1959). Vintage Books, 1994.

Hawthorne, Nathaniel. The Scarlet Letter: And Other Writings. Edited by Leland S. Person. W.W. Norton, 2005.

Kemble, Edward Winsor. “Indignation.” Adventures of Huckleberry Finn: An Authoritative Text, Contexts and Sources, Criticism, by Mark Twain and Thomas Cooley, 3rd ed., New York, W.W. Norton, 1999, p. 199.

Loeffelholz, Mary. “Dickinson and the Boundaries of Feminist Theory.” The Emily Dickinson Journal, vol. 1, no. 2, Fall 1992, pp. 121-22, muse.jhu.edu/article/245241. Accessed 8 Apr. 2017.

McBride, James. The Color of Water: A Black Man’s Tribute to His White Mother (1995). Riverhead Books, 1996.

Miller, Arthur. The Crucible: A Play in Four Acts (1952/53). Penguin Books, 2003.

O’Brien, Tim. “Sweetheart of the Song Tra Bong.” The Things They Carried: A Work of Fiction (1990), Mariner Books/Houghton Mifflin Harcourt, 2009, pp. 85-110.

Rowlandson, Mary. “Narrative of the Captivity and Restoration of Mrs. Mary Rowlandson” (1682). Project Gutenberg, www.gutenberg.org/files/851/851-h/851-h.htm#link2H_4_0002. Accessed 13 Feb. 2017.

Snodgrass, Mary Ellen. “A Raisin in the Sun.” Encyclopedia of Feminist Literature, 2006, fofweb.infobase.com/activelink2.asp?ItemID=WE54&WID=11130&SID=5&iPin=EFL621&SingleRecord=True. Accessed 3 Apr. 2017.

Stubbs, John C. “Hawthorne’s The Scarlet Letter: The Theory of the Romance and the Use of the New England Situation.” PMLA, digital ed., vol. 83, no. 5, Oct. 1968, pp. 1439-47.

Twine, France Winddance. “The White Mother.” Transition, no. 73, 1997, pp. 144-54, www.jstor.org/stable/2935450. Accessed 2 Apr. 2017.

Unknown, illustrator. The Great Gatsby. Penguin Modern Classics, 2000.

 

Appendix

“Indignation”

Figure 1: Image of Mary Jane Wilks in Mark Twain’s Adventures of Huckleberry Finn illustrated by Edward Winsor Kemble.

“The Hero…Discovered in the Act of Carrying on Two Conversations at a Time”

Figure 2: Image of a man simultaneously carrying two conversations with two “Gibson” girls in Charles Dana Gibson’s Eighty Drawings: Including “The Weaker Sex: The Story of a Susceptible Bachelor”.

“Cocktails and Conversations”

Figure 3: Cover of the 2000, Penguin Modern Classics edition of F. Scott Fitzgerald’s The Great Gatsby, for which the illustrator is unknown.

 

Whitewashing: Bringing Color to the Screen

Earlier this year, movie audiences saw Scarlett Johansson, a Caucasian actress, play Motoko Kusanagi, a Japanese girl-turned-cyborg, in the film Ghost in the Shell. In the past several years, they have also seen Emma Stone as Allison Ng, a character of Chinese and Hawaiian descent, in Aloha; Jake Gyllenhaal as the title character in Prince of Persia: The Sands of Time; Joel Edgerton and Christian Bale as Ramesses II and Moses, respectively, in Exodus: Gods and Kings; and Rooney Mara as Tiger Lily in Pan — all white actors in roles meant for people of color.

This practice of casting white actors as non-white characters, known as whitewashing, has become all too common in Hollywood. Whitewashing, however, is not a new phenomenon; it has endured for centuries. In the 19th and early 20th centuries, minstrel shows, which featured white performers in blackface, inaccurately and derisively portrayed black people. More recently, roles such as The King in The King and I and Mr. Yunioshi in Breakfast at Tiffany’s — considered iconic 20th-century movie characters — were cast with white men in yellowface. While instances of whitewashing today are slightly less egregious, they still result in less representation for minorities, reinforce ugly stereotypes, and detract from an artistic work’s authenticity.

Despite the backlash against whitewashing, directors and filmmakers continually defend questionable casting choices with seemingly pragmatic excuses. They rationalize that blockbuster films need an A-list star as headliner, and unfortunately, the majority of A-listers are white. This reasoning does make sense, especially as larger movie studios are typically risk-averse and usually greenlight movies on the condition that big names are attached. At the same time, however, many films with whitewashed casts and “big-name actors,” including Ghost in the Shell, Aloha, Prince of Persia, Exodus, and Pan, have bombed at the box office. While these movies did poorly in part because of the protests and boycotts that accompany casting controversies, they are also simply not believable, genuine works of art, and despite the popularity of lowbrow fare these days, audiences do respond to works of good quality. With the growing popularity of sites like Metacritic and Rotten Tomatoes, audiences are now too sophisticated to blindly follow a “big-name” actor into an ill-suited role and suspend their disbelief. Network TV shows like Black-ish and Fresh Off the Boat have caught on and struck a chord with people, expanding the demographic of viewers while addressing important issues of race and subverting stereotypes.

Another excuse that filmmakers use, the one I see as the most desperate, is the “best person for the job” pretense. Naturally, the people behind a project want it to reach its full potential. However, the proposition that only one person can be right for a role in a field as subjective as art is dubious. In actuality, people have their own biases and are drawn to certain kinds of personalities, usually those most similar to their own. Because the ones in power are predominantly white, their visions of the pivotal characters tend to mirror their own experiences. These feelings are natural, and in some cases, the creatives in charge have to go with their gut because objective measures are unsatisfactory or impossible to obtain. That being said, in my experience as a Broadway performer, I saw a number of actors perform the same role, both from behind the scenes and from the audience’s perspective. Different performers elicited different responses from the crowd: laughs and applause in varying places, perhaps more on one line and less after another. The theatergoers who had seen all of the actors performing the same part tended to be divided over who they felt was best. Critiquing art is not a quantitative matter. Presenting one’s own artistic interpretation as fact is simply wrong, and the notion that artists should be pitted against one another in competitive fashion is antithetical to the whole meaning of art.

The primary roadblock to greater representation for minorities is the idea in media that white is the default race. Too often, the everyman is equated with the white male, meaning non-white romantic leads and action stars are few and far between. These portrayals only serve to perpetuate stereotypes and worsen biases; earlier this year, Steve Harvey made a joke entirely centered on the notion that Asian males could never be seen as attractive. Film and TV have reinforced certain racist attitudes: all black people are “thugs,” all people of Arab descent are “terrorists,” all Asians are “nerdy IT guys.” Entire races have been reduced to particular stock parts.

The best way to combat these types of ideas is to depict people of color as three-dimensional characters and cast them as a wide variety of roles. Placing people of color and their stories in the foreground — as the true focus of the narrative — opens up all kinds of possibilities. The film The Big Sick, for instance, stars a Pakistani man and a Caucasian woman as the central couple. At a larger studio, Kumail Nanjiani, the movie’s writer and lead actor, would likely never have been given the go ahead; executives would have contended that he was not “believable” as a romantic lead, despite the fact that the script was based on his real-life marriage. Fortunately, Nanjiani was able to star in his own movie, breaking an enduring stereotype in the process. This casting, and others like it, will hopefully lead mainstream viewpoints in a more progressive direction.

I have personally had to deal with derogatory preconceptions in my own life. As a male ballet dancer, I have been the target of a good many demeaning remarks. Thankfully, these comments have never escalated into violence or anything severe; most of the time, they simply come from a place of ignorance and a lack of exposure to the art form. Recently, I performed at a children’s hospital in New York for elementary school-age children. I expected to receive some mildly offensive reactions, but to my surprise, the kids appeared to admire my dancing, the athleticism of my jumps and pirouettes. I now realize that they had not yet been corrupted by society’s judgment of the male ballet dancer. Children are very impressionable and are especially influenced by the media they consume. As media becomes increasingly prevalent in our culture, sending the right message to future generations is critical. When movies and television shows reflect the diversity of the real world, they send the message that anything is possible. Kids of color should not feel as though they are constrained by their race.

While newer generations are more aware of ingrained and insidious racist stereotypes, progress toward inclusivity remains very gradual. In late August, the actor Ed Skrein stepped down from the movie Hellboy after learning that his character in the source material was Japanese-American. In doing so, he risked a great deal; he gave up a sizable role in a potential blockbuster and may have fractured valuable relationships with Lionsgate, a leading entertainment company, and Hellboy’s producers. However, if he had stayed on the project, he would have faced criticism — similar to that leveled at Johansson, Stone, Gyllenhaal, Bale, Edgerton, and Mara — for co-opting a role created as Japanese. What Skrein did was honorable, and very few actors would have been willing to withdraw from such a hyped project. Although his decision was a step forward, it did not bring about any systemic change. In a business as difficult and fickle as film, putting the onus on the actors to turn down valuable roles is unfair. The responsibility should fall on those in charge.

Achieving greater representation for minorities in Hollywood requires a multipronged approach. Network television and especially film have the most barriers to entry; countless executives have to approve every creative decision throughout the entire process. Hollywood is very much a hierarchy, and the key decision makers, the ones who say what is produced and what is not, are almost all white males. More diversity is needed at the top of the pyramid. One example is film producer Charles King, who within the last few years launched a new media company called Macro. The works that Macro helps develop and fund are stories told from the unique perspectives of people of color. Although the backing of higher-ups is absolutely crucial, it is also important that people of color themselves have more opportunities to produce their own content. Critics and audiences alike can discern whether a piece is authentic. The “Thanksgiving” episode of Master of None, Indian American actor Aziz Ansari’s comedian-auteur show, follows the journey of Denise (played by Lena Waithe), a black lesbian, as she grows up, becomes aware of her sexuality, and comes out to her family. Because Waithe (along with Ansari) wrote the episode and drew from her real-life experiences, the story received universal critical acclaim, even garnering an Emmy for comedy series writing and making Waithe the first black woman to win in that category. Shonda Rhimes, a prolific television producer and showrunner, has her own highly rated night of programming on a major network that includes two shows with black female leads. This kind of content has demonstrated the popularity of more diverse characters and viewpoints.

In other forms of media and the arts, however, people of color are a commanding force. In the music industry, black artists in particular dominate the charts and win a plethora of awards. This year’s Grammy nomination leaders are Jay-Z, Kendrick Lamar, Bruno Mars, Childish Gambino, Khalid, No I.D., and SZA — all people of color. What accounts for this disparity between music and film is that black musicians and singers were given a voice much earlier. When Berry Gordy Jr. founded Motown in 1959, he gave black artists an opportunity to have their music produced and distributed. Motown paved the way for other record labels that would support black artists. Once these artists reached a certain level of fame, not only did their success snowball, but they also were able to have greater control of the music they made.

Additionally, music in general has fewer barriers to entry than film or TV. A singer-songwriter can upload original music online with no more than an internet connection and a camera. A self-produced movie, on the other hand, will likely look noticeably amateurish. On other easily accessible platforms, YouTube being the prime example, people of color are well represented. YouTubers Ryan Higa, GloZell, KSI, Germán Garmendia, Evan Fong, and Mariand Castrejon Castañeda all have millions of subscribers and views. Their channels run the gamut from comedy to music to gaming to beauty. All of these personalities expanded their subscriber bases organically by putting up content that was authentic to them. They did not have to deal with rooms of executives and focus groups to determine their appeal.

What media bigwigs need to realize is that whitewashing is not a sustainable business model. Our culture, especially the younger generations, is becoming more enlightened and has higher expectations for media reflecting society at large. Not only do people expect more, but they are also willing to publicly call out whitewashing; social media has mobilized an activist army. Bringing in a diversity of voices and perspectives has resulted in both critical and commercial success. But without the production of innovative content and the support of decision makers, effecting a change will be difficult.

 

Bibliography

Baker, Calvin. “A Former Superagent Bets Big on a More Diverse Hollywood.” The New York Times, 4 Oct. 2017, nytimes.com/2017/10/04/magazine/charles-king-superagent-diverse-hollywood.html. Accessed 5 Oct. 2017.

Bernardi, Daniel, and Michael Green. Race in American Film: Voices and Visions that Shaped a Nation. ABC-CLIO, 2017.

Couch, Aaron, and Borys Kit. “Ed Skrein Exits ‘Hellboy’ Reboot After Whitewashing Outcry.” The Hollywood Reporter, 28 Aug. 2017, hollywoodreporter.com/heat-vision/ed-skrein-exits-hellboy-reboot-whitewashing-outcry-1033431. Accessed 2 Sept. 2017.

Cruz, Gilbert. “Motown.” TIME, 12 Jan. 2009, content.time.com/time/arts/article/0,8599,1870975,00.html. Accessed 29 Nov. 2017.

Gross, Terry. “How A Medically Induced Coma Led To Love, Marriage And ‘The Big Sick.’” NPR, 12 Jul. 2017, npr.org/2017/07/12/536822055/how-a-medically-induced-coma-led-to-love-marriage-and-the-big-sick. Accessed 26 Aug. 2017.

Hibberd, James. “Shonda Rhimes dramas deliver ratings record.” Entertainment Weekly, 21 Nov. 2014, ew.com/article/2014/11/21/shonda-rhimes-ratings/. Accessed 29 Nov. 2017.

Littleton, Cynthia. “Lena Waithe Makes Emmy History as First Black Woman to Win for Comedy Writing.” Variety, 17 Sept. 2017, variety.com/2017/tv/news/lena-waithe-wins-emmy-black-woman-comedy-writing-1202562040/. Accessed 19 Sept. 2017.

Lynch, Joe. “Grammys 2018: See the Complete List of Nominees.” Billboard, 28 Nov. 2017, billboard.com/articles/news/grammys/8047027/grammys-2018-complete-nominees-list. Accessed 29 Nov. 2017.

McAlone, Nathan. “Most popular YouTube stars in 2017.” Business Insider, 7 Mar. 2017, businessinsider.com/most-popular-youtuber-stars-salaries-2017/. Accessed 2 Sept. 2017.

NPR Staff. “Diversity Sells — But Hollywood Remains Overwhelmingly White, Male.” NPR, 28 Feb. 2015, npr.org/sections/codeswitch/2015/02/28/389259335/diversity-sells-but-hollywood-remains-overwhelmingly-white-male. Accessed 2 Sept. 2017.

Sun, Rebecca. “The Disturbing History Behind Steve Harvey’s ‘Asian Men’ Jokes.” The Hollywood Reporter, 13 Jan. 2017, hollywoodreporter.com/news/disturbing-history-behind-steve-harveys-asian-men-jokes-963735. Accessed 2 Dec. 2017.

Toll, Robert C. Blacking Up: The Minstrel Show in Nineteenth-Century America. Oxford University Press, 1974.

Yang, Jeff. “Whitewashing Hollywood Movies Isn’t Just Offensive—It’s Also Bad Business.” Quartz, 18 Apr. 2017, qz.com/960600/whitewashing-ghost-in-the-shell-and-other-hollywood-movies-isnt-just-offensive-its-also-bad-business/. Accessed 12 Nov. 2017.

 

Frankenstein, Not Gloria Steinem

Mary Shelley, author of Frankenstein, was the daughter of Mary Wollstonecraft, an early feminist, and William Godwin, a progressive and an anarchist who raised her with values which advocated social justice and reform. One might thus expect Shelley’s writing to be alive with strong female personalities and feminist ideas. In Frankenstein, however, both the presence of women and their depth of character are limited. Throughout the novel, women play a decidedly secondary role, even to the extent that its very premise is about bypassing the most important biological function of the female.

All of the main characters in Frankenstein are male, and all female characters occupy surprisingly passive roles; even Elizabeth Lavenza, one of the people dearest to Victor Frankenstein, is not spared this treatment. Fostered by a poor Italian family as a toddler, Elizabeth is adopted and introduced as a “pretty present” for Frankenstein, who “interpret[s] [these] words literally and look[s] upon Elizabeth as [his] — [his] to protect, love, and cherish… till death she [is] to be [his] only” (37). Elizabeth comes close to accepting this relationship herself, growing up to care more about Frankenstein’s well-being and happiness than her own; she writes to him, “But it is your happiness I desire as well as my own when I declare to you that our marriage would render me eternally miserable unless it were the dictate of your own free choice… if you obey me in this one request, remain satisfied that nothing on earth will have the power to interrupt my tranquillity” (192). She is willing to sacrifice marrying the person she loves if it will make him in any way unhappy. Although selfless, Elizabeth’s prioritization of Frankenstein over herself is extreme, as is Frankenstein’s own self-absorption. Upon returning from England, haunted by the death of Clerval and the monster’s threat, he finds that Elizabeth is “thinner, and [has] lost much of that heavenly vivacity that had before charmed” (194). However, he expresses no concern, maintaining that her “compassion [makes] her a more fit companion for one blasted and miserable as [he is]” (194). This lack of consideration for his soon-to-be-wife, and indeed his satisfaction that she has also suffered, is telling of their relationship, one between a dominant man and a submissive woman. Before their marriage, Frankenstein decides that he will finally tell Elizabeth about the monster, but only once they are husband and wife:

I have one secret, Elizabeth, a dreadful one; when revealed to you, it will chill your frame with horror, and then, far from being surprised at my misery, you will only wonder that I survive what I have endured. I will confide this tale of misery and terror to you the day after our marriage shall take place, for, my sweet cousin, there must be perfect confidence between us. But until then, I conjure you, do not mention or allude to it. This I most earnestly entreat, and I know you will comply. (193-194)

Not only does he demand that she marry him without knowing this monstrous secret, one which may alter her impression of him, but he even orders her not to mention the subject until their union is finalized, with no doubt that she will obey him. That he assumes her blind devotion and that she fulfills this assumption are indicative of her passive role. Frankenstein also tells her exactly how she will react once she learns the truth, namely with compassion for him rather than reflection upon her own danger. (Based on her prior behavior, this reaction seems plausible.) Furthermore, when the creature tells Frankenstein that he “shall be with [him] on [his] wedding-night” (173), Elizabeth is so subordinate in Frankenstein’s mind that he never considers the possibility that she might be important enough to be the target of the threat. After they marry, still deluded by this egotism, he orders her to return to her room. She obeys without question, even though this is her wedding night, a time that husband and wife typically spend together. Even as Frankenstein’s wife, Elizabeth fails to stand up to him or for herself, and she thus does not evolve over the course of the novel.

Like Elizabeth, Justine Moritz is a poor little girl, “saved” by the Frankensteins. Mistreated by her mother, Justine is brought into the household by Caroline Frankenstein, where she finds a better quality of life than the average servant, as Elizabeth proudly states. In this way, her fate has been determined by others, similarly to Elizabeth’s. This is also reminiscent of Caroline’s introduction to the Frankensteins; after her father’s death, Alphonse Frankenstein “[comes] like a protecting spirit to the poor girl, who commit[s] herself to his care” (34). This manifestation of passivity equates to a lack of control in one’s own life. Later, when Justine is accused of murdering William, she once again leaves it up to others to decide her fate: “I commit my cause to the justice of my judges, yet I see no room for hope. I beg permission to have a few witnesses examined concerning my character, and if their testimony shall not overweigh my supposed guilt, I must be condemned” (85). She places her life in the hands of the friends who testify on her behalf and the judges who will vote, offering only a weak defense of her innocence. Once she is found guilty, the pastor “threaten[s] and menace[s] [her], until [she] almost [begins] to think that [she is] the monster that he [says she is]” (88). She is swayed by the pastor to do the unthinkable, to confess to a sin of which she is not guilty. On account of her passivity, Justine is influenced to commit the shameful sin of lying.

Beyond the individual characters, Frankenstein is at its core a story about neglecting women and not allowing them to fulfill their role in creating life. By producing the creature without the use of the female body, Frankenstein defies the natural order of the world and consequently becomes “insensible to the charms of nature… Winter, spring, and summer [pass] away during [his] labours; but [he does] not watch the blossom or the expanding leaves… so deeply [is he] engrossed in [his] occupation” (56-57). He disregards women, thus disregarding the natural way of creating life, and essentially disregarding nature, an act as sinful as it gets for Romantics. If not already clear, this is made abundantly so when the creature is first born: “His jaws opened, and he muttered some inarticulate sounds, while a grin wrinkled his cheeks. He might have spoken, but [Frankenstein] did not hear; one hand was stretched out, seemingly to detain [him], but [he] escaped and rushed downstairs” (59). This behavior — smiling and reaching out (non-maliciously) — mirrors the way a baby acts towards his or her mother, the first person to receive and care for him or her. Frankenstein is unable to fill this role himself, and his mistake is fatal. The creature begins life without a family and must navigate adolescence on his own. He describes his first days of life to Frankenstein: “A strange multiplicity of sensations seized me, and I saw, felt, heard, and smelt at the same time; and it was indeed a long time before I learned to distinguish between the operations of my various senses” (105). Again, this behavior illustrates the early days of a newborn’s life. The creature continues to grow from this state of infancy but at a rapidly accelerated pace. He soon learns about sleep, hunger, and thirst, as well as the danger of fire, after he places his hand inside the flame for warmth. 
He must learn all of this through trial and error, while human babies have parents, specifically mothers, to help them through the process. He even learns about love not from a mother but from the De Laceys, by observing Agatha’s father smiling at her “with such kindness and affection that [he] [feels] sensations of a peculiar and overpowering nature… a mixture of pain and pleasure, such as [he] [has] never before experienced” (111). These are feelings that one typically first experiences with a mother, but the creature is never exposed to them prior to this moment. With no family of his own, the creature calls the De Laceys his “protectors” and considers them to be “superior beings, who would be the arbiters of [his] future destiny” (117), much in the same way that children idolize their parents and expect them to shape their future. His desperate search for a family proves how beneficial it is for life to begin in the presence of parents. The creature hears “how all the life and cares of the mother [are] wrapped up in the precious charge” and realizes that “no mother [has] blessed [him] with smiles and caresses,” leaving him to wonder “what [is he]?” (123-124). Without a mother or other relation, he has no idea who, or what, he is; mothers are thus integral to one’s identity. In time, the creature discovers Frankenstein’s identity and cries to him, “you were my father, my creator; and to whom could I apply with more fitness than to him who had given me life?” (141). He feels utterly rejected and alone, and blames his creator. But this is only the explicit abandonment; the more significant abandonment is the creature’s lack of a mother, or Frankenstein’s decision to give the monster life but withhold a mother from him. It is ultimately this sense of abandonment and the consequent rage that lead the monster to evil and cause him to seek revenge on humanity through murder. Frankenstein is thus a novel about the dangers of men bypassing women.

Even the creature is a male character and is thus susceptible to this chauvinism. In demanding that Frankenstein create a female monster like him, he proves himself willing to subject another to his fate. He says, “I demand a creature of another sex, but as hideous as myself… we shall be monsters, cut off from all the world; but on that account we shall be more attached to one another. Our lives will not be happy, but they will be harmless, and free from the misery I now feel… neither you nor any other human being shall ever see us again” (148). He has predetermined her fate: they will move to South America, live off of nuts and fruit, sleep on dried leaves, and both will be content but never happy. Much like Frankenstein with regard to Elizabeth, the creature does not stop to think that a female monster might not agree to live out his fantasy, let alone tolerate being around him. Frankenstein, however, does consider this possibility, and worries that she may be “ten thousand times more malignant than her mate and delight, for its own sake, in murder and wretchedness” (170). What finally drives him to refuse the creature’s request is the fear that they would want children and that “a race of devils would be propagated upon the earth who might make the very existence of the species of man a condition precarious and full of terror” (170-171). Frankenstein destroys the female creature with his own hands, “trembling with passion” (171). This image of him ripping a woman apart speaks volumes. He is terrified to create a female monster capable of birthing children, and it is thus the reproductive power of women that scares him and that serves as the basis of the novel.

At first glance, the lack of women, specifically strong, complex women, in Frankenstein is obvious. However, upon further examination of the book’s plot and message, it is revealed that the main storyline of the novel can be distilled into men bypassing women and attempting to take the female reproductive responsibility into their own hands. The ultimate results of this betrayal of nature — the deaths of William, Justine, Alphonse, Clerval, Elizabeth, Frankenstein, and the monster — are catastrophic. Perhaps it is in this subtle way that Mary Wollstonecraft and William Godwin’s influences are present in Shelley’s masterpiece.

 

Works Cited

Shelley, Mary. Frankenstein. Penguin Classics, 2005.

 

Why The United States Constitution Established a Just Government

As the 1790s neared in the newly formed United States, it became evident that the Articles of Confederation, the very document that established an independent nation, had to be rewritten. From new Enlightenment ideas reverberating throughout Europe to perceived inequitable treatment erupting into chaotic outbursts of unchecked outrage and fury such as Shays’ Rebellion, the young nation was ready for change. Thus, the document that would govern the lives of future generations for more than two centuries was crafted: the United States Constitution. The document embarked on, and succeeded in, the seemingly insurmountable task of cultivating a government potent enough to govern, yet not so strong as to resemble the monarchy the colonies had just escaped. It transformed a weak confederacy of states plagued with instability and chaos into a centralized government while simultaneously incorporating a system of checks and balances. It established a Bill of Rights to allay any fears of mimicking the very government that had quashed independence and limited freedom. While the document had shortcomings that contradicted the very ideals upon which the “supreme law of the land” was founded, such as failing to protect citizens in times of war, upholding slavery for nearly another eighty years, and limiting the rights of women, it left room to amend these faults and evolve alongside society as philosophies and technologies advanced. The United States Constitution is inherently just because of its ability to acknowledge its faults and grievances and change accordingly; this adaptability comes from the amendment process of Article V, an organized legislative representation selected by the people of the United States, and the presence of the Bill of Rights.

The true justice of the United States Constitution came from its ability to adapt to changing philosophies. Article V of the original document states that the document could be “amended” if “two thirds of both houses deem[ed] it necessary.” Thus, the government was granted the ability to adapt, both technologically and ideologically, with passing time. While ideological change is often theorized as happening gradually over a long span of time, there have been instances where the Constitution made necessary changes more rapidly. This capacity to adapt to changing values both rapidly and gradually is a pertinent characteristic of its justice. For example, the Eighteenth Amendment was swiftly ratified in 1919 as a result of the prohibition movement, banning the manufacture, sale, and transportation of alcohol. While in theory restricting alcohol would encourage men to spend more time with their families and lower the crime rate, Prohibition had the opposite effect, driving alcohol underground and leading officers to take bribes. Because its detriments proved to outweigh its benefits, leaders were able to use the amendment process to pass the Twenty-first Amendment, repealing Prohibition and allowing the law to revert to a more suitable philosophy. Gradual changes in ideals have also been accommodated through the amendment process. The slowly evolving issues of slavery and women’s rights were important considerations neglected in the original text of the United States Constitution. However, the amendment process has proven its capacity to correct course: the Thirteenth, Fourteenth, and Fifteenth Amendments abolished slavery and granted more rights to African Americans. Later, the Nineteenth Amendment gave women the right to vote. 
While these changes certainly did not make up for the hardship inflicted, and it would be nearly another hundred years before legal segregation ended, the justness of the Constitution provided the structure that enabled these changes to take place once society was ready.

While the amendment process played a critical role in determining whether the government was in fact able to remain just, other features, such as the design of the legislative branch, also sustained its justness. The ability of citizens to elect representatives to this branch contributes immensely to the justness of the United States government as a whole. Although Alexander Hamilton insisted that "a large [sum] of people is not necessary for thorough representation," the true obstacle to democracy was not the size of the representative body but the inequity among different groups of people at the time. Even as the Anti-Federalists claimed everyone should have thorough representation, any individual who was not white or male had no voice, and nobody advocated for the possibility of them gaining one. Even if this was the cultural reality of the era, the Constitution contained everything it needed to correct these grievances, and eventually it would do so when society was ready.

The legislative branch was not the only point of contention between the Federalists and the Anti-Federalists. One of the most crucial safeguards of a just government, and perhaps what finally settled the Federalist/Anti-Federalist debate, was the adoption of the Bill of Rights; the Anti-Federalists refused to support the Constitution without these guarantees. The Bill of Rights secured essential liberties in what would become known as the first ten amendments, aimed at preventing the cultivation of a new monarchy. These rights directly countered abuses experienced under the British monarchy, citing protections against the "quartering" of soldiers and against unreasonable "search and seizure," which requires a warrant supported by probable cause before private property may be searched. The Bill of Rights would become essential in limiting the power of the federal government, and because of this structure, the government would remain just.

While several flaws can be identified through close examination of the United States Constitution, it is imperative to take into account the time period and circumstances under which it was written. Critics of the Constitution point to specific moments in the country's history when the government failed to uphold constitutional rights, especially in times of conflict or war. While the Bill of Rights guaranteed American citizens the "freedom of speech, religion, and press," historians who question the justice of the Constitution note that these rights have been repeatedly challenged throughout the nation's history. In 1798, President John Adams signed the Sedition Act, limiting freedom of speech and of the press as the United States prepared for the Quasi-War with France. In recent years, suppression and discrimination, brought on by fears over national security, have violated freedom of religion. However, while this prejudicial repression should not be condoned, it arguably proved the only way to avert higher casualties and greater violence. For example, had President Abraham Lincoln been more sensitive toward constitutional liberties and not suspended habeas corpus, the Civil War could have ended with more fatalities, as well as the demise of the Union. That outcome would have meant slavery taking even longer to dissolve, for different values would have been imposed separately rather than blended. The prospect of slavery not being abolished is inarguably far worse than a short suspension of civil liberties.

Despite its shortcomings, the United States Constitution succeeded in taking an unstable, loose confederation of states and creating a centralized government, one not so strong as to limit liberty, while simultaneously balancing state and federal control. Although major contradictions to justice were prominent at the time of its ratification, and civil liberties have not always been upheld during times of conflict, the Constitution's ability to change itself, even today, enables the United States government to remain just. Only time will tell whether American leaders and the people will continue to use the flexibility of the Constitution to ultimately serve and protect all people.


Dinah’s Voice Must Always Be Heard: A Speech Examining Vayishlach (Genesis 34) Through a Feminist Lens

Hi! Thank you all for coming today; it means a lot to me and my family. So, a bunch of things happen in this portion, but today I will be focusing mainly on Dinah’s story, which, by the way, is a total misnomer, because she has no voice in this story. A quick recap for all of you who have zoned out for the last thirty minutes. Dinah’s story goes like this: once upon a time, Dinah, the only named daughter of Ya’akov and Leah, went walking in search of other girls in the land of Chamor. Shechem, Chamor’s son, “vayikach Dina,” or “takes” her. What happens after she has been taken is debated: some say it is rape, while others say it is a “humbling” of Dinah. Shechem then begs his father to arrange for Dinah to become his wife. Chamor approaches Ya’akov with a proposition: if you let us take your daughter as a wife for Shechem, we will give you anything you want. He also proposes that their families, or “tribes,” should intermarry; to do this, Chamor will give Ya’akov all the Chamorite daughters in return for all of the Israelite women. Sounds fair, right? Ya’akov willingly passes off the decision-making to his sons Shimon and Levi. The now very angry sons agree to Chamor’s request on one condition: all the men in Chamor’s tribe must be circumcised. Chamor agrees to this unusual request. Now, the Torah is careful to note that the brothers make their deal with guile. This comes up again when the brothers decide to avenge Dinah, or rather to avenge their family’s name. They launch a surprise attack on Chamor’s tribe, killing all the Chamorite men while they are still in pain from being circumcised in adulthood. Shimon and Levi also steal all of their belongings and women. Ya’akov, terrified that his sons’ actions will cause others to retaliate against him and his family, decides to move far away. The story ends with Dinah’s brothers answering, “Should they have treated our sister like a prostitute?” I guess they have no regrets.

Okay! One thing that stood out to me was the verb vayikach, or “and he took.” This is the same verb used when someone takes an object, or when a man arranges for a woman to become his wife. In biblical society, taking a woman as one would take an object was simply normal. That said, taking a woman without her father’s consent, as in Dinah’s case, was culturally unacceptable. Shechem didn’t initially ask Ya’akov’s permission to “take” Dinah. So Shimon and Levi felt the need to go after Chamor’s family not out of love for their sister, but because Shechem had committed a property crime against Ya’akov and his family. During the negotiation with Chamor, Dinah has absolutely no say in what happens to her. Instead, her brothers avenge the infringement of their ownership of their sister by stealing still more women from Chamor and killing all of the men in his tribe. This stealing does not necessarily imply rape, but it does imply that women can be traded and given away, treated as objects rather than as thinking people. All of this is reported with limited criticism of Shimon and Levi’s actions. That’s a problem. While most of us would agree that some kind of consequence is needed for Shechem’s actions, Dinah’s brothers’ actions are not morally superior and demonstrate no greater respect for women, or for their sister. Before I talk about how this reflects on our society today, I would like to make it clear that I am not saying that direct sexual violence and the broader objectification of women are of the same magnitude. They’re not. But a society that normalizes the objectification of women is one that is less likely to condemn sexual violence, or even to recognize it as such.

This set of hypocritical attitudes about the justification of misogynistic behavior is prevalent in today’s world. Just as the taking of Dinah and the stealing of the Shechemite women are both instances of the objectification of women, we see that many men think it’s fine to exploit women in small or large ways, yet find it unacceptable when their peers do the same. Over a year ago, footage was released of Donald Trump talking to Billy Bush about being able to sexually assault women without consequence because he’s a wealthy media star. This gave many people another reason to despise Mr. Trump. While some of his fellow politicians condemned the disgraceful way Trump acts and talks, they did not frame it as societal backwardness; instead, they proudly stated, “I would be offended if these comments, behaviors, or attitudes were aimed at MY daughter, MY wife, MY mother, MY women.” These statements, while appearing honorable and a step in the right direction, still perpetuate the idea that women can only be considered in relation to men, as objects of men, important because of their connection to a man.

Vay’aneha, the verb that follows the first use of vayikach, has several meanings, and that is where the debate over what happened to Dinah stems from. A common translation of this word is that Dinah was violated. But what does “violate” mean? One understanding is that Dinah was sexually violated, or raped. The second, more conservative, approach is that Dinah was violated because Shechem did not ask Ya’akov’s permission before taking her. Both interpretations reflect poorly on how their society treated women. The fact is that we have no clue what happened, because Dinah has no voice in the story! She has no voice in being taken, in the negotiation with Chamor, or in her brothers’ decision to attack Shechem. She is merely the object under dispute. No matter how you approach the story, one part or another is unsettling or disturbing. If you think Dinah was raped, that’s disturbing; if you think the brothers’ actions were uncalled for, that’s disturbing; and if you think patriarchal attitudes are engraved in the text, then the fact that this is a religion and community that many people rely on for answers, and that we look up to these patriarchal ideas, is disturbing. The feeling I am left with is that something is really, really wrong and needs to change!

For millennia, the voices of women and others who have been sexually assaulted have been suppressed. As I say these words, many women are speaking up about their experiences with sexual violence and the effects a patriarchal society has had on them. Right now, we are standing in the midst of a cultural and socio-sexual hurricane. It has taken thousands of years for Dinah’s voice to finally be heard. From this discussion, I don’t want you to take away only the fact that our culture is woven through with stale, ancient ideas about women. As a society, we are working on changing. Even though certain laws and leaders seem to be trying to roll things back, the attention and conversations that come out of today’s movements are the first step toward a change in societal thinking. It is our responsibility to learn from the wrongdoings described in the Torah, and from the wrongs perpetuated by the way the story is told, and to make sure that Dinah’s voice will always be heard.