Trojan Tub Entertainment

Monday, December 27, 2010

Is The Theater Dead--Or Is It Killing Us?

It’s good news that Christopher Tierney, the stunt actor who on December 20 fell 30 feet while playing Spider-Man on Broadway, is recovering well from his injuries (a hairline skull fracture, four broken ribs, a bruised lung, internal bleeding, and cracks in three lumbar vertebrae). But it’s worrisome that his fall is only one of four injuries that Spider-Man: Turn Off the Dark has inflicted upon its performers, including a concussion and a broken wrist. They might think of renaming the show Spider-Man: Dial 9-1-1.

But more worrisome than the physical dangers involved with the show—because it is in the deepest sense the cause of these dangers—is Broadway’s increasing reliance upon spectacle. In his Poetics, written in the late 4th century B.C., Aristotle cautioned playwrights to consider spectacle as the least important ingredient of a well-made tragedy (and presumably any piece of theater). The production of spectacular effects, he wrote, “depends more upon the art of the stage machinist than of the poet.” What should be of chief concern to the “poet,” said Aristotle, was the crafting of plot, a sequence of actions and their consequences that of itself, wholly apart from spectacle, would produce the desired emotional impact on an audience. Spectacle, in short, is at best a supplement to plot.

Broadway producers and their book writers would do well to spend an afternoon with the Poetics. At present, they are looking to economically successful Hollywood films as their main source of safe inspiration. Thus in the past decade or so Broadway musicals have included adaptations of Hollywood films such as Mary Poppins, Shrek, The Lion King, Legally Blonde, Hairspray, Dirty Rotten Scoundrels, and now Spider-Man: Turn Off the Dark, whose conception by Julie Taymor, score by U2’s Bono and Edge, 41-member cast, 18 orchestra members, and 27 aerial stunts, including a climactic fight over the heads of the audience, cost about a million dollars a week to stage (meaning, according to one report, that the show will virtually have to sell out for years just to break even).

This emphasis upon adaptation evinces a dire lack of imagination on Broadway. Compared to a movie screen, on which there are no strictures of time and place, a theater stage is a confining arena. And yet this confinement has always been an important part of the magic of the theatrical imagination. Shakespeare’s As You Like It and Thornton Wilder’s Our Town were staged with minimal props and scenery, and yet the Forest of Arden and Grover’s Corners have, and will always have, more of a hold upon our imaginations than the stuntman-filled skies over Spider-Man’s metropolis. G.K. Chesterton once said something to the effect that it is the frame as much as the composition that makes a painting what it is. Would that Broadway better appreciated its frame, rather than trying to break through it with spectacular live renditions of computerized special effects.

A re-appreciation of the confines of the theatrical stage would force producers and writers back not only upon plot, but also upon what for Aristotle are other important ingredients of a well-crafted tragedy: “character,” “thought,” and “language.” After finishing the Poetics, folks on Broadway might bone up on the possibilities of these elements by turning to two standard-bearers in the art of musical theater, namely, the team of Gilbert and Sullivan, and the book writer and sometime lyricist P.G. Wodehouse. In refreshing their imaginations at these pools, today’s musical theater impresarios and artists would be reminded of the marvelous way in which structure, character and language grip an audience far more than aerial flights and impressive mobile scenery. This is because while spectacle may momentarily titillate our sense of excitement, it is plot, character, thought and language that make more satisfying and lasting impressions upon the mind.

Alas, as so many of Broadway’s Hollywood adaptations have been successful, it is unlikely that popular musical theater will soon experience an Aristotelian renaissance. But such a renaissance would surely be good for our souls, and judging from the debacles of Spider-Man: Turn Off the Dark, it seems it wouldn’t be so bad for the ribs, lungs and skulls of Broadway performers, either.

Wednesday, December 22, 2010

Getting Out of the Catholic Ghetto

I’ve been thinking about a recent post by Kevin O’Brien on the blog, End of the Modern World, which O’Brien writes along with Joe Trabbic and my friends Steven A. Long and Joseph Pearce. O’Brien’s post, “What I Learned from Show Business,” is a thoughtful rumination on the challenges facing Catholic artists in today’s entertainment market. Here’s his conclusion:

the upshot of all of this is that Catholic artists must begin to recognize that the market of regular people will indeed pay for good content, but that such content must be developed and marketed to them, keeping in mind that it may be art, but it’s also a business (without keeping this in mind, Catholic artists are bound to get taken advantage of, as all talent tends to be taken advantage of). To fall back either on empty formulas with bad content (as some producers do) or to get lazy and rely on the contrived market that will accept bad content without complaint (as many who produce for the Catholic Ghetto do) is wrong.

So if empty formulas and bad content for contrived markets are wrong approaches, what’s the right approach? What for Catholic artists and their audiences should count as good content?

First of all, it’s worth noting the truth in the view taken by those who inhabit what O’Brien calls the Catholic Ghetto. These folks, I take it, want art to witness to, confirm, and encourage their Catholic beliefs. More than that, they want art to be powerful enough to convert unbelievers—to be apostolic in its impact. Now, there is nothing inherently wrong with these desires. A work of art made by a Catholic should, in some way—and there is certainly more than one way—witness to, confirm, and encourage Catholic belief. Otherwise, how could it be said to be Catholic? Flannery O’Connor, always so wise on these matters, writes this about the Catholic novelist in her essay, “Novelist and Believer” (from her collection, Mystery and Manners), a point that is relevant to all the arts:

Great fiction involves the whole range of human judgment; it is not simply an imitation of feeling. The good novelist not only finds a symbol for feeling, he finds a symbol and a way of lodging it which tells the intelligent reader whether this feeling is adequate or inadequate, whether it is moral or immoral, whether it is good or evil. And his theology, even in its most remote reaches, will have a direct bearing on this.

Part of O’Connor’s point here is that every novelist, every artist, constructs his work from an estimate of what he believes life is all about. So if the theology of the Catholic artist does not influence the way in which he sees reality, he is not being true to himself as an artist.

But another part of O’Connor’s point is that great fiction demands something more than belief. O’Connor speaks of the good novelist’s need to find “a symbol for feeling,” a symbol that will bear the weight of the truth that the novelist wants to communicate. The finding of such a symbol is a matter of craft, not of faith. It is a matter, as O’Connor argues in her essay, of penetrating reality and finding there the concrete materials by which the truth can be communicated in a way that delights the senses, the emotions, the memory, and the imagination. “Ever since there have been such things as novels,” writes O’Connor,

the world has been flooded with bad fiction for which the religious impulse has been responsible. The sorry religious novel comes about when the writer supposes that because of his belief, he is somehow dispensed from the obligation to penetrate concrete reality. He will think that the eyes of the Church or of the Bible or of his particular theology have already done the seeing for him, and that his business is to rearrange this essential vision into satisfying patterns, getting himself as little dirty in the process as possible.

So the artist has an obligation. His job is not to make his beliefs look pretty or feel soothing, but to get his hands dirty with the way things really are. The Catholic artist has a distinct advantage in this, as his faith helps him see the full breadth and depth of reality. Yet this does not dispense him from looking at that reality with his own eyes. 

But the Catholic audience also has its duties. It has an obligation to learn how to appreciate craft as craft, and not just for the beliefs that are represented in it. In the greatest Catholic art, craft and belief are not separated, but they are distinct. If they were not distinct, we would not be able to rate Dante higher than the smarmiest Christian greeting card. 

What about wanting Catholic art to change lives? It’s sure nice when it does. Wasn’t there a murderer in Texas who turned himself in after seeing Mel Gibson’s The Passion of the Christ? But we must understand that while a work of art may dramatically change lives, it is not an essential feature even of a Catholic work of art that it do so. Essential to great art is the beautiful rendering of a vision of how things are. Whether that vision changes someone’s life depends on the disposition of that person, the circumstances he is in, and above all, God’s grace. And it's good to keep in mind that God can use just about anything as a vehicle for grace—even bad art. 

Monday, December 20, 2010

On Zombies and Metaphysics

Think Mad Men is the hippest thing on television? Your brains must be exposed. Zombies are where the action is. 

In a recent piece in the New York Times, Chuck Klosterman observed that roughly 5.3 million people watched the first episode of The Walking Dead on AMC, “a stunning 83 percent more than the 2.9 million who watched the Season 4 premiere of Mad Men.” Zombies are crawling out of their graves and staggering in armies down the broad avenues of our popular culture. Not even poor Jane Austen has escaped their grasp. Indeed, when it comes to being the living end of popular culture, zombies are making vampires look, well, dead.

But why this renewed fascination with zombies? Klosterman offers this intriguing possibility: “Zombies are just so easy to kill.”

“If there’s one thing we all understand about zombie killing,” Klosterman writes, “it’s that the act is uncomplicated: you blast one in the brain from point-blank range….” And then you do it again…and again…because zombies roam in packs. There’s always more of ‘em. And they keep coming…and coming….so you must keep shooting, and shooting…

“Every zombie war is a war of attrition.” And isn’t that, as Klosterman argues, what so much of modern experience feels like? “[Z]ombie killing is philosophically similar to reading and deleting 400 work e-mails on a Monday morning or filling out paperwork that only generates more paperwork, or following Twitter gossip out of obligation, or performing tedious tasks in which the only true risk is being consumed by the avalanche.”

For Klosterman, it’s because modern life makes us feel like we are being hunted, that we are in constant danger of being consumed, that the archetype of the zombie has returned with such vehemence. Moreover, if Klosterman is right, it’s the mindless, moaning life-suck that is our unreflective use of electronic media that, more than anything else, gives our days that Night of the Living Dead feel. Apropos of zombies and electronic media, Klosterman quotes a disturbing, but perhaps prophetic, passage from an essay by the writer Alice Gregory: “It’s hard not to think ‘death drive’ every time I go on the Internet….Opening Safari is an actively destructive decision. I am asking that consciousness be taken away from me.”

The novelist Walker Percy, in his essay “Diagnosing the Modern Malaise” (from his collection, Signposts in a Strange Land) argues that human beings in our time suffer from an “ontological impoverishment.” This is a fifty-cent phrase that simply means that for many in our culture, there is little if anything outside the self that counts as really real. The self—its wishes and needs, desires and ambitions—is hard substance. Everything else is alien. Or to put the point more ominously, everything other than the self is something that might threaten the self.

Percy goes on to say that, for many, this ontological impoverishment is experienced, however unconsciously, as an impoverishment, as a condition of alienation and anxiety that cries out to be relieved. Many seek such relief in promiscuous sex (see Mad Men). But others seek it through violence.

The kind of pornographic violence that many video games and zombie shows indulge offers those who submit to it a peculiar brand of ecstasy. Our word “ecstasy” comes from the Greek ek-stasis, literally “to stand outside of oneself.” There is sexual ecstasy and there is spiritual ecstasy. But violence also, in its twisted fashion, promises to transport the self beyond its fretful condition. The chaos and barbarism, the death-in-lifeness, of post-modern culture—so evident, as Alice Gregory sees, on the Internet—threatens to consume the self. Yet the self can feel alive again, feel all the power of being a self again, if it will take out a gun and blow the zombies away. That frisson of ecstasy as the blood and the brains splatter implies a sinister metaphysic: the self’s reality can be affirmed only as long as it keeps shooting.

Because more zombies are coming. You can be sure of that.

The Patron Saint of Bloggers

It's G.K. Chesterton, according to Francis Phillips in her marvelous little post, “Why GKC?” Read Phillips for yourself.

Saturday, December 11, 2010

For Narnia!

Last night, on its opening night, I went with my family to see The Chronicles of Narnia: The Voyage of the Dawn Treader (dir. Michael Apted). I am very grateful for this franchise. How many movies can an entire family, from adults to grade-schoolers, go see and all enjoy, without any fear of inappropriate content, on the one hand, or of utter boredom, on the other? Sure, the Narnia films aren’t the only films that try to hit all four of the main demographic quadrants (males, females, adults, kids), but most family-oriented films these days (the usual Disney and DreamWorks fare) are driven by a brand of snarky, crass humor that, for this correspondent anyway, is wearing a little thin. So bravo for the Narnia franchise, for its tales of swashbuckling adventure, its unabashed portrayals of self-sacrifice, its humor, its protection of innocence, and, not least, its admirable commitment to at least some of the Christian allegory found in C.S. Lewis’s children’s books.

That being said, I don’t believe Dawn Treader worked as a movie as well as it might have done. About halfway through I realized that I really wasn’t that engaged with the story. I think the reason is that the central plotline did not connect with the emotional lives of the characters in a significant way. When Edmund, Lucy and Eustace Scrubb are first taken into Narnia and onto King Caspian’s ship, the Dawn Treader, Edmund asks Caspian, “So why are we back here? What’s the adventure?” The very fact that Edmund doesn’t know what the adventure is gives the tale an arbitrary feel, as though any adventure will do. That feeling of arbitrariness abides even as Caspian and the gang get their adventure underway. The challenges faced by our heroes present themselves too much like elements in an obstacle course—external hurdles without deep internal significance. True, Lucy is shown struggling with her envy of her older sister Susan’s beauty; Edmund is shown struggling with his desire for wealth, power and glory; and Eustace is shown struggling with being, as he himself later puts it, “a sod.” But Lucy’s and Edmund’s struggles, at least, are minor episodes, difficulties encountered and dispatched without much trouble, and without a strong connection to the central plot. Eustace’s accidental metamorphosis into a dragon is more intimately connected to the plot, and, more importantly, it marks the beginning of a profound transformation of his character.

The screenwriters—Christopher Markus, Stephen McFeely, and Michael Petroni—certainly had challenges of their own in adapting Lewis’s rather episodic novel, to the point where they felt compelled to invent the unifying goal of the plot: the recovery of the seven swords of the long-lost Narnian lords, necessary for the destruction of Dark Island. This goal certainly gives our heroes an end to achieve and an evil to fight, but again, all on the external level. The net effect is exciting, but emotionally not very satisfying.

Great credit is due to the filmmakers, however, especially for the portrayal of the noble mouse Reepicheep (voiced by Simon Pegg, taking over from Eddie Izzard), and to Will Poulter, for his superb turn as Eustace Scrubb. And kudos, too, to the filmmakers, for including the lines, straight out of the book, where Aslan explains to Edmund and Lucy, as they depart Narnia for the last time, that they will see him again in their “own” world: “But there I have another name. You must learn to know me by that name. This was the very reason why you were brought to Narnia, that by knowing me here for a little, you may know me better there.”

Dawn Treader was co-financed and released by 20th Century Fox, which took over the Narnia franchise from Disney when the latter was disappointed with the performance of the second installment in the series, Prince Caspian. According to the LA Times, the first installment in the series, The Lion, The Witch, and the Wardrobe, grossed $65.6 million on its opening weekend; Prince Caspian, $55.5 million. Expectations for Dawn Treader’s opening weekend are even lower, but here’s hoping that the film will have legs, and do even better than Caspian over the longer haul. For this franchise deserves all the chances it can get, including the chance to find better solutions to story problems.

Wednesday, December 8, 2010

Sucking the Life from Our Children, Part 1

I've been meaning to post the notes of my talk, "Sucking the Life from Our Children: Hollywood and the Romance of the Living Dead," delivered a few weeks ago (November 20) at the annual Fall conference of the Notre Dame Center for Ethics and Culture. The conference theme was Younger Than Sin: Retrieving Simplicity Through the Virtues of Humility, Wonder & Joy. Thanks again to David Solomon and his crew for the invitation and for putting on such a splendid conference. 

As the complete set of notes would make for a rather lengthy blog post, I am going to break them out into several posts (parts 2 and 3 are below). These are just notes, so thanks in advance for your patience in reading them. Comments always welcome. Streaming video of the talk will also soon be available on the NDCEC's website. 


1.    I’m very happy both to be a part of this conference’s concentration on works of children’s literature…the greatest works of children’s literature help cultivate humility, wonder & joy and have enjoyed an enormous and positive cultural influence
2.    But today, I want to consider works of children’s literature—more specifically, what the publishing business terms works of middle grade and young adult children’s literature—as well as a raft of movies and television shows, principally aimed at young people, that are wielding a massive and negative (chilling) cultural influence: works that feature vampires (+ zombies). “Hollywood” stands in for all of popular culture.
3.    The ubiquity of the vampire in today’s popular culture…those of you who think you know how widespread this phenomenon is…pop into an airport bookstore on your way home…stroll past the middle grade and young adult sections at Barnes & Noble…scan the posters at the local Cineplex…see your TV listings: vampires are everywhere. 
4.    A quick look at 
5.    Why this emphasis upon the vampire?
a.    in one sense, vampires are not new—as we’ll see, their influence on popular culture goes back at least as far as the early 19th c.;
b.    at bottom, vampires are images of Satan, he who is neither living nor dead, the devourer of human flesh (a perversion of the Blessed Sacrament), he who holds out a bogus version of “eternal life.” 
c.     The vampire is the opposite of what Georges Bernanos said about the Blessed Virgin: the vampire is not so much “younger than sin,” but preternaturally youthful because of one of the most horrible of all sins: cannibalism.
d.    op-ed by Guillermo del Toro (director of Pan’s Labyrinth) in July 29, 2009 New York Times: literary and cinematic vampires reflect our need for the supernatural—for immortality; they also reflect a strange combination of sex and death—in a twisted parody of a sexual encounter, the vampire offers a grotesque version of “eternal life.” As del Toro says, Eros and Thanatos fuse in an archetypal embrace. 
e.    yet still, in the last 5 years or so the resurgence of the vampire in popular culture reflects a profound cultural shift.
6.    I want to show how recent vampires reflect this cultural shift by considering three vampire stories:
a.    John William Polidori’s short story, “The Vampyre” (1819)
b.    Stephenie Meyer’s Twilight series (dir. Catherine Hardwicke)
c.     The 2010 film, Let Me In (dir. Matt Reeves), based on the 2008 Swedish film, Let the Right One In, based on the novel of the same name by John Ajvide Lindqvist
7.    Thesis: As we compare the “classic” vampire story, initiated by Polidori, to our contemporary vampire stories, we see a dramatic change in how the metaphor of the vampire is deployed. More particularly, the vampire for us has ceased to be an image of unmitigated evil. Rather, the vampire has become, on the one hand, an image of the Romantic ideal of authenticity or self-actualization; and on the other hand, an image of the sheer unreality of evil (or conversely, of the unreality of innocence).
8.    As we consider these stories, we will see in particular an attack upon the very notion of innocence. The innocent young (on the threshold of adulthood), who originally were the victims of vampires, in our time have become the vampires, reflecting the notion that there is no such thing as innocence. The whole concept of innocence is “devoured.” 

Sucking the Life from Our Children, Part 2

Lord Byron on the Prowl
John William Polidori’s “The Vampyre” (1819) 

1.    The Romantic Period and Gothic romance (Romantic period: 1780-1830); “along with the Frankenstein monster, the vampire is one of the major mythic figures bequeathed to us by the English Romantics” James B. Twitchell, The Living Dead: A Study of the Vampire in Romantic Literature (Durham, N.C.: Duke University Press, 1981), ix.
2.    Why the Romantic taste for the Gothic?
a.    A reaction against Enlightenment rationality…a search for mystery + anxiety about modernity, repressive fears; a glimpse of the “otherness of cosmic indifference.”
b.    An important aspect of the anxiety expressed by the Gothic is anxiety about the innocent, the virtuous who are vulnerable to (sexual) predators…
3.    Thus we find: the critique of Gothic romances in Jane Austen's Northanger Abbey (published posthumously in 1817): indulgence in the Gothic corrected by virtue: a young woman at the threshold of adulthood encounters the horrible, even a kind of vampire-like figure in General Tilney. But rescued by the virtues of his son, Henry.
4.    Interestingly, the first published vampire story, Polidori’s “The Vampyre,” is also a cautionary tale (it sets the tone for later vampire tales, including Bram Stoker’s Dracula, 1897).
5.    Set the stage: Villa Diodati, Geneva, 1816. Lord Byron, Percy Bysshe Shelley, Mary Shelley, Claire Clairmont, and Polidori. Summarize the story.
6.    Polidori’s vampire, Lord Strongmore, is a critique of Byron himself, as well as of the Byronic hero. The Byronic hero: “proud, moody, cynical, with defiance on his brow, and misery in his heart, a scorner of his kind, implacable in revenge, yet capable of deep and strong affection” Thomas Babington Macaulay, quoted in The Oxford Illustrated History of English Literature, p. 297. Think also of Milton’s Satan, the dark and discontented heroes of Gothic novels (Heathcliff).
7.    Lord Strongmore, like Byron, is a lord; although with a “dead grey eye,” has a mysterious power over other people…is said to have the “serpent’s art”; a pretense of virtue among the virtuous, he gives money to profligates; a destroyer of the needy around the gambling table; a spoiler of feminine (especially young feminine) virtue. He is MacIntyre’s “aesthete,” who preys upon other people, taken to an absurd, macabre degree. 
8.   Lord Strongmore: “It had been discovered, that his contempt for the adultress had not originated in hatred of her character; but that he had required, to enhance his gratification, that his victim, the partner of his guilt, should be hurled from the pinnacle of unsullied virtue, down to the lowest abyss of infamy and degradation: in fine, that all those females whom he had sought, apparently on account of their virtue, had, since his departure, thrown even the mask aside, and had not scrupled to expose the whole deformity of their vices to the public view.”
9.    For Polidori, the vampire is a demon who preys upon the virtues of innocent young women and therefore needs to be destroyed. 

Living in Twilight
The Twilight Series by Stephenie Meyer

1.    “Twilight, in these southern climates, is almost unknown; immediately the sun sets, night begins….” Polidori, “The Vampyre” (40). Polidori gives us a world of clear lights and darks; in the Twilight series we move into a world where all is “twilight”… a place where good and evil have to be reconceived
2.    sketch the basic contours of the story; show scene of Edward Cullen and Bella Swan in the woods when she finally discovers he's a vampire
3.    Strange reversals: in Edward Cullen, we have something of a return of the Byronic hero, as a vampire—but this time not as villain, but as hero. And in Bella we have the innocent who wants to, and eventually becomes, a vampire.
4.    The “vampire” in these books is not intrinsically evil—there are good vampires and evil vampires. Wanting to feast on human blood is simply a natural necessity. So what makes for a good vampire?
5.    It is to act against prohibitions. Edward’s vampirism, plus the Volturi, prohibits romance with a human; Bella’s humanity prohibits romance with a vampire. And yet…they fall in love.
6.    Image of the book's front cover: the apple. The Mormon understanding of the Fall. “Twilight is a romantic retelling of the story of Man’s Fall presented in the engaging and exciting wrappers of a romance and an international thriller” (John Granger, "Mormon Vampires in the Garden of Eden," Touchstone Magazine, November/December 2009). True, Genesis says “You will surely die.” But this death is salvation. Ending with the quotation from Meyer herself (“choice”).
7.    Choice. It is because they choose to disregard everything—devouring humans + human life—for the romantic relationship as vampires. Eros and Thanatos fused in archetypal embrace.
8.    And thus a kind of “innocence” is regained. The virtues of his “true self” are honesty, integrity, benevolence, generosity, sensitivity. The virtues of authenticity (Charles Taylor). Or the choice to be a humanitarian (the character of Carlisle). Vampires with liberal-romantic sensibilities. But the image of the apple (if not Meyer herself) attempts to persuade us that “innocence” is achieved by acting against all prohibition. It attempts to persuade us that there is truth in the serpent’s words: “You certainly will not die! No, God knows well that the moment you eat of it your eyes will be opened and you will be like gods” (Genesis 3:4-5).

Sucking the Life from Our Children, Part 3

Our Children, Our Vampires
Let Me In (2010, Dir. Matt Reeves), based on the Swedish film Let the Right One In (2008, Dir. Tomas Alfredson), based on the novel of the same name by John Ajvide Lindqvist

The page numbers below refer to the copy of the script of Let Me In (written by Matt Reeves) available here.  

1.   Now, because of what for many is the alluring appearance of Edward and Bella’s romance, the true horror of this serpent’s lie is cloaked. The horror is not cloaked in Let Me In.
2.   show trailer: (2:16)
3.   Sketch the basic story—Owen is a lonely boy, parents divorcing, preyed upon by bullies; Abby is a lonely girl who is a vampire. Like Twilight, it’s a story of “doomed lovers,” though in this case it’s a couple of twelve year-olds. (The film makes references to Romeo and Juliet.) And, like Twilight, we have a kind of vampire hero—or heroine.
4.   But there’s a big difference between the two stories. One critic calls Let Me In the anti-Twilight.
5.   Owen sees Abby attack someone. The movie asks whether evil really exists (show script: “Do you think there is such a thing as evil?” pp. 72-74) and its answer seems to be, “no.” The use of Reagan’s “Evil Empire” speech (p. 6), the mother’s Christianity—these reflect traditional categories of good and evil that are ineffectual in this world. The parents in both Twilight and Let Me In are not just absent; they are helpless children themselves.
6.   Owen confronts Abby. Are you a vampire? Abby: “I need…blood. To live. Yes.” (p. 75) The little girl’s desire to prey upon humans is a natural necessity, neither morally good nor evil.
7.   Owen identifies with Abby. (pp. 81-84). “Who are you? I’m just like you.” The film tells us that we are all child-predators. Abby is Owen and Owen is Abby.
8.   But unlike Twilight, there is nothing to be done about Abby’s vampire nature. She is who she is and there’s no changing it. She has an animal’s brutal appetite and it needs to be satisfied. She and Owen do manage a kind of friendship, but it’s a deranged kind: Owen will simply become one more in (presumably) a long line of caretakers. By the end of the story, he is simply enabling her vampirism.   
9.   Abby “rescues” Owen from the bullies at school. But unlike Edward Cullen’s heroism, Abby’s does not cut against the grain of her vampire nature. True, she restrains herself from attacking Owen, but in the end it’s her brutal killing of the boys who bully Owen at school that “saves the day.” The only heroism that exists is revenge in the midst of a vengeful world.
10. So, in Let Me In, we come full circle back to Polidori’s idea of the vampire as predator, but now the predator is a female child (who used to be the victim), and what’s more, this child-predator is the (tragic) heroine. And this is what we are left with: we are all vampires now. There is no innocence left untainted by our desire for blood.


Polidori’s predator-vampire has degenerated in our culture into (a) a romantic hero who overcomes his predator nature in order to achieve authenticity; or (b) an image of the basic condition of all of us: fallen, devoured, wholly incapable of “innocence.”
To make children & adolescents vampires is at least to call into question, if not to deal a death blow to, the very notion of innocence.
This degeneration tracks the cultural degeneration of the last two hundred years of modernity. We have lost our faith in innocence. Without a Christian sense of humanity’s supernatural destiny, or even a Christian sense of the natural order, we are left with ourselves. We either cling to some vestige of the life of virtue, or we simply abandon all pretense of the good life.

Wednesday, December 1, 2010

Epic Mickey

The devolution of the once-great Walt Disney Company continues with yesterday's release of a new video game featuring Mickey Mouse. Entitled Epic Mickey, it was designed by the prominent game designer Warren Spector for the Nintendo Wii. With little sense of irony, the game features Mickey trapped in a dark world called "Wasteland," the dustbin of old or no-longer-useful Disney characters, products, and ideas. In Wasteland Mickey finds a nemesis in none other than the character who might have been the iconic cartoon figure that Mickey Mouse became—namely, Oswald the Lucky Rabbit, whose rights Walt Disney lost to his distributor in 1928.

But most interesting, and also most disturbing, about Epic Mickey is what Warren Spector had to say about the game, and gaming in general, in a recent interview with Guy Raz for NPR’s All Tech Considered.

What games do that other media can’t, according to Spector, is allow the player to make decisions that influence the action. In Epic Mickey, this dynamic of choice goes so far as to allow the player to decide what kind of hero Mickey is going to be: as Spector puts it, either a “sweet, friendly guy,” or the kind of hero who says, “I’ve got a world to save, I just want to do the most efficient thing possible.” At the end of the game, as Mickey’s personality and the player’s personality fuse, “every player is going to create his own unique experience, tell their own story, and define for themselves what makes Mickey a hero.” The reality, of course, is not quite so Promethean, as all the available choices—including (no surprise) the ability to turn Mickey into a “naughty” Mickey—are programmed into the game. But fostering even a virtual sense that a player has the power to manipulate Mickey’s character is itself alarming.  

The video game thus becomes an exercise in creative self-definition, transforming the very concept of heroism that we found, once upon a very different time, in Disney's own Snow White.

As far as cultural shifts go, that’s epic. 

Tuesday, November 30, 2010

On Saturday Morning I Went to See Thomas Jefferson

As portrayed by an historical interpreter, that is, in Colonial Williamsburg. Colonial Williamsburg, for those who don't know it, is the premier "living history" district in the United States. It is a full-scale reconstruction of what was, for most of the 18th century, the capital city of colonial Virginia. Some of the buildings, such as the Governor's Palace (that's the British governor) and the Virginia House of Burgesses (the colonial legislature), are reconstructed entirely from plans and drawings, while other buildings, such as the Magazine (where the ammunition was kept—and then snatched by Governor Dunmore when the colonists became too uppity), are restored versions of the originals.

Some have complained that Colonial Williamsburg is too clean-swept a version of 18th-century colonial American life (see Colonial Williamsburg’s Wikipedia page for an overview of these gripes). It is, no doubt, more picturesque than its original, and is most certainly designed with the tourist (and his credit card) in mind. But overall I found its commitment to historical accuracy extraordinary. I was most impressed by the way in which Colonial Williamsburg encourages reflection on the deepest political questions of American public life.

The interpreter who gave us the tour of the House of Burgesses, for example—where the scene is set in June of 1776—did a remarkable job at expressing what was at stake in Patrick Henry’s argument for an entitlement to (not mere tolerance of) free religious exercise—an argument still resonating in our own time in the debate over the proposed mosque at Ground Zero. Our interpreter was also excellent at revealing the anxieties felt by colonists at the prospect of going to war with Great Britain, the mother country.

The Thomas Jefferson interpreter, for his part, after a half-hour speech delineating the Jeffersonian understanding of political liberty, fielded questions (in character) from the audience. To the question on many people’s minds—why did you, Jefferson, keep your slaves while in principle you opposed the institution of slavery?—the interpreter offered a strikingly plausible argument, focusing on the gradual restriction of slavery in the colonies followed by the gradual elimination of the entire institution. It was the wrong argument, but it was as compelling an answer to the charge of hypocrisy in Jefferson as one will find, and also one which has some resonance in contemporary debates about how best to eliminate the practice of abortion.

So if Colonial Williamsburg has elements of a theme park—what of it? It is a place of family entertainment, but an entertainment that invites the spectator to think through the foundations of the American experiment—something we often forget to do when we’re absorbed by distractions such as Bristol Palin’s performance on Dancing With the Stars.

Colonial Williamsburg does not caricature our Founding Fathers, or portray them in hagiographic terms. Through the method of historical interpretation, it seeks to portray the emerging American experience in all its philosophical complexity. And for that alone Colonial Williamsburg is a triumph, and well worth the money I spent on imitation 18th-century wine goblets.      


Tuesday, November 23, 2010

The Hollowing Out of Godric's Hollow

Harry Potter and the Deathly Hallows, Part 1, has been receiving very good, and well-deserved, reviews, and raked in 330 million dollars at the box office on its opening weekend. It's an extremely entertaining film, and in almost every respect it comes close to matching the value of the book. Inevitably, the film has to do some things differently from the book, not least having to end at the mid-point, but its artistic choices rarely leave one unsatisfied. There is a lot of story to cover, even in just half of J.K. Rowling's massive, 759-page seventh volume in her epic tale, and the film covers it well, albeit at a breakneck pace.

But David Yates, the director, and Steve Kloves, the screenwriter, did make a few artistic decisions that left me unsatisfied… (SPOILER WARNING: if you haven’t seen the film and do not want to hear specifics about it, now is the time to stop reading, hopefully to return after you’ve seen it.)

I didn't think it right, first of all, that the film chose not to portray the Dursleys' departure, especially Dudley's and Harry's final farewell, where the two boys make a certain peace with one another. The Dursleys play such an important role in both the books and the films that they deserved better here at the end. Also, I liked the scene (not in the book) where Harry and Hermione, after Ron runs out on them, relieve the tedium and frustration of their search for the horcruxes by dancing playfully to a song on the radio—it's just the sort of thing that two lonely and bored teenagers, even ones with feelings for other people, would do in such a situation. But I didn't think the Riddle-Harry and Riddle-Hermione, in their embrace in the scene where Ron destroys the horcrux-locket with the sword of Godric Gryffindor, needed to be—even in highly stylized fashion—nude. This goes farther than the book (see Chapter 19, "The Silver Doe"), and takes the film beyond the level of appropriateness, in this parent's opinion, for children, say, under fourteen. But then again, I think the book itself is not appropriate for children under fourteen.

But perhaps most noteworthy of all, the film hollows out the beautiful Christian imagery that J.K. Rowling builds into one of the most moving chapters of the book, Chapter 16, "Godric's Hollow," where Harry and Hermione (before Ron returns) travel to Harry's birthplace of Godric's Hollow to search for the sword of Godric Gryffindor. The film does have Harry and Hermione pass a Christian church, where we hear the congregation singing a Christmas carol, and Hermione does observe that it's Christmas Eve. But then the film ignores the words on the gravestone of Kendra and Ariana Dumbledore: "Where your treasure is, there will your heart be also," a direct translation of Matthew 6:21. And it also ignores the words on the gravestone of Harry's parents: "The last enemy that shall be destroyed is death," from 1 Corinthians 15:26. Most of all, however, the film leaves out Rowling's image from pp. 324-325 of the book: "Behind the church, row upon row of snowy tombstones protruded from a blanket of pale blue that was flecked with dazzling red, gold, and green wherever the reflections from the stained glass hit the snow." As my friend John O'Callaghan, professor of philosophy at Notre Dame, has pointed out, this image clearly attempts to persuade us of the way in which the light of Christian revelation (in red, gold, and green), flowing out from the church in the midst of her liturgy, illuminates and vivifies the human flesh that, to paraphrase St. Peter in his First Letter, withers like the grass beneath the snow and the gravestones. It would have been lovely if this beautiful, intensely cinematic, and profoundly Christian image from Rowling's book had made it into the film.

These are my criticisms. Yet there is much in the film that I do very much like, not least David Yates's decision to shoot many scenes in a cinéma vérité style, which gives the film a tenser, grittier, more realistic texture that jibes very well with the darker themes of the story. An interesting "anatomy of a scene" by David Yates—of the scene where Harry, Ron, and Hermione fight the Death Eaters in the café on the Tottenham Court Road (in the film, Shaftesbury Avenue)—is currently available on the New York Times website. In this analysis Yates discusses his decision to employ cinéma vérité.

I'd love to hear what you liked, or didn't like, about the film. Meanwhile, have a wonderful Thanksgiving and I'll be back with you next week.

Monday, November 15, 2010

Living in Twilight

This past summer, while visiting Milwaukee, my wife and I had a chance to see a touring production of the musical Wicked. Wicked tells the back-story of the Wicked Witch of the West, a character from L. Frank Baum's The Wizard of Oz and the 1939 movie starring Judy Garland. The musical, however, has a lot of fun deconstructing the character we know from these two earlier sources. In Wicked, the Wicked Witch of the West is not evil, but misunderstood. Indeed, she even turns out to be the heroine who helps save Oz from the truly wicked designs of the not-so-benign Wizard.

Wicked’s Wicked Witch is just one example of a troubling phenomenon that has arisen in popular entertainment over the past several years: that of a traditionally “wicked” character playing the role of the hero or heroine.

One of the most popular instances of the phenomenon is Fox's series House, in which a misanthrope also happens to be a genius clinical diagnostician. But Greg House's issues are nothing compared to those of the title character of the eponymous Showtime series Dexter. For in Dexter we have a serial killer—yes, a serial killer—serving as the hero of a police procedural (Dexter's one scruple is that he kills only other killers).

Then there is the recent tsunami of middle grade and young adult novels, as well as movies and television shows, in which vampires serve as heroes and heroines—whether in Stephenie Meyer’s Twilight series, Heather Brewer’s Chronicles of Vladimir Tod, or in the HBO series, True Blood.

Notice that the "heroes" and "heroines" on this list are not merely characters with flaws. No, they are witches, vampires, misanthropes, and serial killers—characters that have traditionally been associated with unmitigated evil, but who are now more associated with good than with evil. What these characters disturbingly represent is the thought that there is no such thing as good and evil—there is only a space between, a world of neither dark nor light but of "twilight" (as Stephenie Meyer would have it). In such a twilit world, even a vampire who wants to suck the life out of you, even a witch who torments a kid from Kansas, can be the instrument of salvation.

What does it say about the state of our popular culture, when not even the vampire can be named as evil, and when good is always a compromise with—not an overcoming of—that which is most despicable in human behavior?

Friday, November 12, 2010

Reality Bites Back

A compelling post today from Danielle Bean, a regular writer for the National Catholic Register, on the corrosive cultural influence of reality TV. She cites a book by Jennifer L. Pozner, the executive director of Women in Media & News. The book is called Reality Bites Back: The Troubling Truth about Guilty Pleasure TV. Here's a bit from the blurb on Pozner's website:

On The Bachelor, twenty-five interchangeable hotties compete for the chance to marry a hunky lunkhead they don’t know from Adam. Weepy waifs line up to be objectified for a living (or simply for a moment) on America’s Next Top Model. Wealthy ladies who lunch backstab while obsessing over brand-name clothes, cars and jewels on The Real Housewives Of…everywhere. Branded “ugly ducklings,” appearance-obsessed sad sacks risk their health to be surgically altered on The Swan and Dr. 90210. Starved women get naked for Oreos and men gloat about “dumb-ass girl alliances” on Survivor. Women of color are ostracized as deceitful divas on The Apprentice, lazy or “difficult” on Wife Swap and Bridezillas, and “ghetto” train wrecks on VH1’s Flavor of Love and I Love New York. And through it all, slurs like “bitch,” “beaver,” and “whore” are tossed around as if they’re any other nouns.

And it’s all happening in the name of “reality.”

A few weeks ago the poet Dana Gioia, who served as director of the National Endowment for the Arts during the George W. Bush administration, gave a talk at Baylor University, where I teach. In his remarks he aptly said about reality TV that it gives us “the pleasure of smug superiority over our inferiors.”

What else does reality TV’s take on “reality” tell us about where we are as a culture? As Pozner asks on her website, what is reality TV saying about our understanding of men and women, race and class, love and sex, beauty and violence, advertising and consumption?

I especially liked one comment on Danielle Bean’s post on the Register website. This person quoted Father James Keller, the founder of The Christophers, who said: “There is some value, of course…in turning off vulgar, boring, or subversive radio and TV programs….But the cure does not lie there, for it is like objecting to bad food without providing anything better….New and better writers can be found. They will come from among you…the vast group of Americans who constitute the backbone of our nation and of our Christian civilization.”

It is easy enough to curse the awful food. But who are the writers out there who will provide us with better fare? Who will lead us, by engaging entertainment culture with consummate craftsmanship, from “reality” to reality? Are you one of them?

Let’s hear from you.

Wednesday, November 10, 2010

Younger Than Sin

Readers of this blog may be interested in a conference taking place next week at the University of Notre Dame: Younger Than Sin: Retrieving Simplicity Through the Virtues of Humility, Wonder & Joy. This is the annual Fall flagship conference of the Notre Dame Center for Ethics and Culture, where I was honored to serve as one of David Solomon's associate directors from 2003 to 2009. It all begins Thursday evening, November 18, and culminates with a banquet on Saturday evening, November 20.

Here is a description from the Center’s website of what the conference is all about:

“In his 2009 Christmas homily, Pope Benedict XVI suggested that it is the "simple souls" who receive most readily the truth and have therefore the shortest journey to make: "[T]he shepherds, the simple souls, were the first to come to Jesus in the manger and to encounter the Redeemer of the world" because they "lived nearby," whereas the wise men, who represent "those with social standing and fame," "arrived later" and "needed guidance and direction." Those who are not "lowly souls who live very close" to the truth but are instead captivated "amid worldly affairs and occupations that totally absorb us," we "are a great distance from the manger" and must undertake an "arduous" journey. We propose to explore simplicity of soul and its attendant virtues of humility, wonder, and joy—the fullness of which Georges Bernanos identified in the state of the Blessed Virgin who, attending at the manger, is described as "younger than sin"—free, with a virtuous simplicity of soul, for her joyful assent to and embrace of the Truth and the Good that has set her free. We, not so "young" as she, must undertake the journey to simplicity by humility, which enables honesty concerning oneself and one’s dependence on others; wonder, which as Aristotle wrote, first leads one to seek the freedom of the truth; and joy, the delight of the soul that is able to apprehend the true and the good and draw them to itself.”

“Such reflections are timely and extend to all disciplines as they can illuminate a culture that, in many ways, has become fragmented in its old age. For the simplicity that manifests and develops itself by humility, wonder, and joy is far from simple-mindedness or naïveté; it is a mature and concentrated and clear-sighted pursuit of the highest truth and the highest goodness by one who is not conquered by the addictive and constantly-changing self-distraction allowed by the iPod, the Blackberry, and the pursuit of acquisitive self-satisfaction. It is a cultivated disposition able to enjoy the simple life, the simple pleasures, and the truth, goodness, and beauty that they disclose.”

The conference is interdisciplinary, so beyond talks and presentations on theology and philosophy, it will also feature many talks on works of literature and film.

I will be giving a talk that Saturday afternoon, entitled "Sucking the Life from Our Children: Hollywood and the Romance of the Living Dead." In the last four to five years there has been a prodigious increase in middle grade and young adult books, as well as movies and television shows, featuring vampires—and even more recently, zombies. Stephenie Meyer's Twilight series of young adult novels (and the movies made from them) is a major case in point. What is the cultural significance of this phenomenon? Vampires, and Gothic tales generally, have been a part of our literary culture since the early 19th century, when John William Polidori wrote "The Vampyre." Is the recent spate of vampire stories simply a reprise of the same fixation, or does it have a different emphasis and inspiration? I'll post some notes from my talk next week. But as I prepare it, I would love to hear your takes on any aspect of this phenomenon.

Those who can’t attend the conference shouldn’t despair. Many of the talks will be available to read on the Center’s website, and the streaming video of the invited talks, including my own, will also be available on the Center’s website not long after the conference.

I’ll close with some questions about which the conference hopes to generate discussion. I look forward to discussing them with you:

“In what ways does our culture offer opportunities for this simplicity? Or does it not? What are the necessary conditions for persons trying to achieve this ideal, or for families trying to fashion a culture wherein this ideal is possible or for societies trying to determine and pursue the common good?”