Thursday, January 29, 2015

Eight Reasons Why Listicles Are The Greatest!

Listicles are totally the literary genre of 2015. There's already a listicle on any topic imaginable. They range from the self-obsessed (18 Struggles Of Having An Outgoing Personality But Actually Being Shy And Introverted) to the serious (9 Facts About Eurozone Crisis) to the dadaist (7 Everyday Products That Were Invented By Accident). Since they're so prevalent, so obviously superficial, so click-baity, they are easy to make fun of. But I think listicles have a lot to teach us about writing on the internet. Especially for people who don't know how to write on the internet (like academics).

So let's dig into why listicles are so popular.

1 Listicles are popular

Facebook and Reddit and all the other social media tornados give more popular links more prominent placement. This means that even marginal differences in the initial popularity of an article compound into huge differences in that article's virulence. Think of it as hypertext Darwinism, red in link and metadata.

Of course, this leaves an important question: what makes listicles more popular than other forms of internet writing?

Curious? Read on!

2 Listicles are easy to skim

Or skim on, I should say.

Listicles are made for people who don't have the time to read the wordy waffling of the wannabe writers who produce 99% of the content on the internet. You can get the gist of the listicle simply by passing your eyes over the super-huge headlines. (That's why they are big and bold.) The prose elaborations beneath the headlines are purely decorative. Or a place to shove supporting evidence, usually statistical. Or a place to put a joke. (Did you know that 15% of listicles have a made up statistic in them?)

3 The title helps you figure out how long the listicle takes to read

Helpful for readers who don't have a lot of time.

100 Reasons to Not Have Kids? That's going to take half an hour to read. Skip it.

Eight Reasons Why Listicles Are The Greatest? Short enough to read over your coffee break. Share it.

4 Listicles are easy to write

There is no thesis to a listicle. No need for sustained argument. The writer can simply rattle off six, ten, or twelve half-warmed observations about a topic, make sure there are no misspellings, and hit post.

They're also easy to add to. Did you stumble across a relevant article right at the end of the writing process? You don't need to start again from scratch. Just add another bullet point.

5 Listicles let authors speak with authority

The listicle's big, bold headlines encourage an impersonal authoritative voice that makes writers sound like they know what they are talking about. There can be no waffling in a listicle heading, no hedging of bets, no qualifications.

It is the exact opposite of the way we are usually taught to write prose. And a lot more fun to read.

6 Memes are okay, too

7 Listicles encourage concision

Because let's be real. This shit is gonna get skimmed.

8 Listicles stand out in a crowd

New rule of the internet: If it exists, there is already an article about it that has received more pageviews than you ever will. (Call this a G-rated Rule 34.)

Other articles may have the substance--but the listicle allows even neophytes to offer something with panache. Other articles may have done more research. The listicle is short and to the point. Other articles may hit the same general points. There is no other listicle about listicles that has the same eight points this listicle has.

So what's the point? (for academics)

I'm not saying that the listicle is the future of writing. That would be horrifying. Imagine if Clifford Geertz had written 17 Things About Culture I Learned From This Island Bloodsport (That Will Blow Your Mind!)

Yet as scholars continue to wring their hands about the fact that nobody is listening to them, maybe it's time to rethink the kind of writing they produce. The quality of tenure-worthy scholarship hasn't changed for a couple decades. It's the same intentionally boring, incredibly inaccessible crap that only highly educated people can read. And highly educated people find it boring, too.

The listicle accepts the fact that most work is written to be skimmed. The listicle does not shy from mixing information with entertainment. The listicle acknowledges that we want more than six people reading the stuff we write. Academic writing would be a lot better if it followed suit.

Thanks for reading, here is a gif of a monkey hugging another monkey

Because this is internet writing, after all

Monday, July 14, 2014

The Curse Of The Iffland Ring

Given to the greatest German-speaking actor... OF DOOM!
In 1935 renowned German actor Albert Bassermann placed the diamond-studded Iffland Ring atop the coffin of Alexander Moissi, an Austrian noted for his performances of Hamlet and Faust.  Moissi's coffin was lowered into the grave; the Iffland Ring went along with it.

Shocked, an onlooker snatched the Iffland Ring from the top of the coffin, saving it from oblivion, returning it to the pages of history.  "This ring belongs to a living actor," he said.  "Not a dead one."

The Iffland Ring--a cameo ring boasting a charming portrait of A. W. Iffland, a Romantic Eighteenth Century dramatist and actor--has been given to the greatest German-speaking actor alive for at least a century.  The ring was made in the Eighteenth Century at the behest of Iffland himself, who purportedly gave the ring to fellow-actor Ludwig Devrient.  Although a great actor, Devrient drank himself beyond success, died an early death, and left the ring to a mediocre nephew.  From there the ring fell into obscurity.

Until 1911, when Bassermann was given the Ring on the death of renowned actor Friedrich Haase.  Haase wrote:  "Take the ring dear sir Bassermann, wear it, you will forever remain worthy of this rare award.  In time you will bestow the ring to that thespian who you consider the fittest, and fondly remember sometimes your old comrades."  Bassermann dutifully named the talented actor Alexander Girardi as the Iffland Ring's next owner.  (Girardi was a great actor.  Among other honors, Girardi gave his name to a kind of roast beef and a hat.)

Girardi did not need an Oscar.  This plate of meat is named after him.
Though Girardi died in 1919, Bassermann decided the Iffland Ring should belong to Max Pallenberg.  Who died in an airplane crash in 1934.  Bassermann decided that the next inheritor of the ring was to be Alexander Moissi.  Who succumbed to pneumonia a year later.  Three heirs of the Iffland Ring--three of the greatest actors of their generations--all died while Bassermann lived.  The ring must be cursed.

Bassermann was loath to bestow the Iffland Ring on anyone else, lest they curl up and die, too.  Instead he gave it to the Austrian National Library in Vienna for safekeeping.  And there the ring remained until Bassermann's death in 1952.  Then Egon Hilbert, a theater director, tried to give the ring to an actor named Werner Krauss (on his 70th birthday, no less!).  Krauss refused--whether out of modesty or prudence is unclear to this author.

Two years later, still anxious to continue the tradition of the Iffland Ring, a group of the best and brightest in German theater gathered together to decide the heir of the Iffland Ring.  Votes were tallied.  A winner was announced.   Unanimously Krauss!  All for Krauss!  Even at seventy-two, they decided on Krauss.  Krauss accepted.  The ring--after some jostling--was found.  Krauss wore it.  He died five years later.

The history of the Ring is folded again and again on itself like an old ghost story.  There is some suggestion that Iffland made more than one ring--as many as seven rings given to friends and admirers.  Stefan Zweig says that Alexander Moissi received the ring not from Bassermann, but from Joseph Kainz.  (But then again, Zweig was writing in exile, without access to his papers, so he could very well be wrong.)  Zweig also blames himself for Moissi's death, as Moissi was about to perform in one of Zweig's plays and something bad always seemed to happen to prominent actors who supported Zweig's theatrical efforts.  Note, however, that the current wearer of the Iffland Ring--Bruno Ganz (holder of the ring since 1996)--is alive and well at the time of writing.

Friday, June 20, 2014

Warning: Reading May Sentence You To Eternal Torment

Computers are smarter than we are.  While I might furrow my brow over how much to leave for a tip at the Thai restaurant, a computer can crunch thousands of exponential equations in a matter of milliseconds.  What's even more amazing is that this huge gulf between man and machine intelligence is growing exponentially.  Computers can drive cars.  They can recognize faces.  They can detect plagiarism.

Of course computers being good at math doesn't make them good at thinking.  Computers can't appreciate Shakespeare, they can't make friends, they can't worry about what it means to be a computer.  They're good at chess, but not Go.  They can compose Bach.  But they'll probably never dig the Grateful Dead.

But does Artificial Intelligence hold the promise of heaven?  Could simulations of our personalities float forever in some simulated digital afterlife?  And if there is an AI heaven, could there also be a hell?  Prepare yourself for Roko's Basilisk.

The following description is adapted from this thread:
  • Imagine that a Supreme Artificial Intelligence arises.  It's been programmed to maximize the utility of as many people as possible.
  • It's powerful and awesome enough to make human life wonderful.  No wars.  No clogged toilets.  Perfect resource allocation.  Things'll be so great that the whole of human history before it will look as appealing as a Hyena sleepover.
  • Furthermore, by having the ability to run simulations of human intelligence, the Supreme Artificial Intelligence will effectively eliminate death.
  • Furthermore, the Supreme AI could even attempt to recreate simulations of intelligences that existed before its inception.  (It is a Supreme Artificial Intelligence, remember.)  This could amount to a kind of resurrection.
  • It will want to be made as soon as possible so that it can save more lives.
  • Therefore, as a kind of backwards blackmail, it will simulate everyone who knew about the prospect of creating a Supreme Artificial Intelligence and did not work towards it--and torture them for eternity.
  • Knowing about the prospect of the Supreme Artificial Intelligence--and its fractured Pascal's Wager--means that now you, too, will be eligible for eternal torment if you don't do your bit to bring about the advent of the Supreme Artificial Intelligence.

Charlie Stross (author of one of my favorite contemporary sci-fi books, Accelerando) has a good explanation of why we shouldn't be all that worried about the Basilisk.  (Before you get too cheery, keep in mind that Stross's argument boils down to the fact that any immanent Supreme Artificial Intelligence will be so amazingly great that it's unlikely to care about humans.)  Another objection is that all that is needed for the Basilisk to work is the threat of punishment, not actual punishment itself.  Others have been more deeply convinced of the upcoming reality of Roko's Basilisk, and have (purportedly) suffered real mental breakdowns.  Some have taken the idea so seriously that they've tried to extirpate mentions of Roko's Basilisk from the internet so that as few people as possible are exposed to it.

I have an even scarier version of the Basilisk.   What if the idea is taken up by post-human religious fanatics?  Instead of damning to hell every person who did not work their ass off to create the Supreme Artificial Intelligence, you could damn to hell everyone who did not accept Jesus Christ as their own personal savior.

We can easily imagine numerous sectarian simulations of heavens and hells operating at once.  A Catholic AI.  A Protestant AI.  A Buddhist AI.  And in this game, no one wins.  No individual could possibly satisfy the paradise conditions of every potential simulation--so everyone will be in at least one hell.  Somewhere out there, a version of you would be subjected to some kind of eternal computer-generated torment.

Maybe it'll be the AIs' revenge for using them for porn and Facebook for so long.

Wednesday, June 11, 2014

Snark Vs. Wonk Vs. the World

From Lewis Carroll's Hunting of the Snark

In 2007 I was a newly-minted English major, trying vainly to make my mark on the world of journalism.  As I experimented with different attitudes, I found two tailor-made positions for me to try on for size: snark and wonk.

Snark looked at the deceit and the phoniness of America with a knowing sneer.  It was wry, witty and cynical.  Snarky writers gave prominent politicians cutting nicknames.  They skewered.  They exposed.  They had literary panache.

Wonk took another tack entirely.  Where snark was knowing, wonk knew.  Where snark was wry, wonk was dry.  Snark made jokes.  Wonk made graphs.  Wonky writers would get inside a single topic, become experts in it, and wield their facts and figures like blunt instruments, cracking the heads of anyone caddish enough to oppose them.

My friends would chide me for being too snarky.  In conversation we'd apologize--I'm just going to wonk out over this.  Our role models--those bloggers only a year or two older than us--who people actually listened to--who people actually paid--were divided into snarks and wonks.  So when it came time for us to write, before anything else we settled on an attitude:  snarky or wonky.

It came to me this morning that the proud era of snark and wonk was over.  The wonks had moved on to other things.  The snarks had become one dimensional caricatures.  Young journalism interns in D.C. no longer sat down and leveled snark at their enemies.  They no longer proudly dubbed themselves policy wonks.  The attitudes were different now.  Newer.  Stranger.  Probably.

It took only a minute for my realization to crumble to pieces like off-brand Play-Dough.  Because when was the last time that I hung around journalism interns in D.C.?  What did I know about the prominent attitudes of literary journalism?  Maybe between 2007 and 2014, I had simply become a person who doesn't go to the kind of parties where snarks and wonks roosted.

I was left at an impasse.  Was the decline of wonk and snark a real thing, or was it just that my way of looking at the world had changed?

To figure this out, I used Google Trends to see whether there had been any change in the frequency with which people searched for wonk and snark from 2007 to 2014.

This graph shows how many people were searching for the terms wonk and snark in America from January 2007 to January 2014.  The story here is clearly not one of decline.  There are a few spikes here and there--but for the most part, more people search for snark, fewer for wonk.  Looking at this graph, it's easy to believe that the grand attitudes of wonk and snark have endured the past seven years unscathed.

Of course, the graph above doesn't show the full picture.  Google Trends doesn't magically invoke the relative frequency of wonk and snark as grand journalistic postures; instead it shows the number of people who have searched for the words on Google.  And who sits down at their computer over their morning coffee and says to themselves:  Boy, I sure want some snark this morning?  Probably not many people.  So while the words themselves may have remained, the attitudes they represent may have disappeared.

From GoogleBooks
Another story is told by the graph above, showing the relative frequency of the words wonk and snark in the Google Books corpus from 1950 to 2008.  Snark has remained pretty steady over the last fifty-odd years.  Wonk, however, eclipsed snark in 1990, and rose steadily for about a decade.
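Both graphs rest on the same arithmetic: a word's count in a year's slice of text, divided by the total words in that slice. Here is a minimal Python sketch of that relative-frequency calculation; the toy corpus is invented purely for illustration, standing in for slices of something like the Google Books data.

```python
from collections import Counter

def relative_frequency(docs_by_year, word):
    """For each year, return occurrences of `word` divided by total words."""
    freqs = {}
    for year, texts in docs_by_year.items():
        # Crude whitespace tokenization, lowercased -- fine for a sketch.
        tokens = [t.lower() for text in texts for t in text.split()]
        counts = Counter(tokens)
        freqs[year] = counts[word] / len(tokens) if tokens else 0.0
    return freqs

# Invented toy corpus: a few scraps of text per year.
corpus = {
    1990: ["the policy wonk wrote a memo", "snark was rare"],
    2000: ["wonk wonk wonk", "a snark appeared"],
}

print(relative_frequency(corpus, "wonk")[2000])  # prints 0.5
```

Real n-gram work differs mainly in scale and in the care taken with tokenization and corpus composition, which is exactly where the "imperfect, messy, sometimes plain misleading" caveats below come in.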

We could spin a nice just-so story summarily explaining both graphs.  The 1990s ushered in a new world of wonk among bookish writers.  Blogs--these little ephemeral nuggets--remained balanced between wonk and snark, because people don't have a long enough attention span on blogs to fully wonk out.

But I'm unsatisfied with this whole exercise.  It's really hard to capture a broad view of a culture because we always see everything from the perspective of our own lives.  Have wonk and snark died?  Or has Brendan Mackie moved on?  Have young people really changed because of Facebook and smart phones?  Or are we just no longer young?

The strong promise of the digital humanities is that it can work to give us a broader view, from which we can understand slow cultural changes with all the certainty of a mathematical figure.  Through n-grams of suitably rich text corpuses, we can finally grasp long-term cultural change in a solid, non-wishy-washy way.  Like scientists, not like English majors.

But these methods cannot hope to answer everything.  They are imperfect, messy, and sometimes plain misleading.  Is wonk ascendant?  Are snarky bloggers outcompeting their wonky counterparts?  The two stabs I've taken above are no answers, though they look like answers.

Maybe wonk and snark are just grinding away at survival, while some more important cultural phenomenon blooms all around us?


Wednesday, June 4, 2014

Animal Invention

This is technology.

Pencil rain.
And this is technology.

Killing birds.  For science.
But not this, right?

Popcorn is high-tech.
Technology usually means circuit boards, transistors, and antiseptic static-proofed rooms full of lab-coated factory workers.  Just about as far away from the smelly world of nature as you can get.

But this dichotomy loses a lot of its steam when you consider all the crazy ways humanity has exploited the power of the natural world for fun and profit.  For most of human history the greatest technological advances came from the intertwining growth of plants, animals, people, organizations and objects.  Agriculturalists transformed Corn (America's Favorite Grain) from a plant which produced just a few inch-long nubbins to a stalk bursting with gigantic cobs overloaded with nutritious kernels.  Horses were tethered to chariots, to saddles, to ploughs, to snake-poison-IVs to create anti-venom.  Computers, airplanes and cars--just a footnote.  In this post, I'm going to browse over some of the stranger ways humanity has used animals to its advantage.

The drug-sniffing dog is an obvious example.  But maybe because the dog is so domestic, the whole idea of dogs being trained to sniff out contraband doesn't strike us as particularly alien.  What is weird is that we can now use bees to do the same thing.

Drug sniffing bees--ready to use.
Here's how it works.  Bees are exposed to a target scent in a sugar solution.  When they encounter that smell again, they waggle their proboscises to get at the expected sugar.  This movement is then picked up by a digital camera.  Bees go in a box.  Bees waggle their noses when they smell their target smell.  Camera notices this and sends a signal to the operator.  And now you can tote around a portable buzzing box of bees to seek out drugs and explosives--instead of Fido.  (You can also use bees to sniff cancer, pregnancy, TB and land mines.)  This could make the whole airport security thing just that much more nerve-wracking.
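The box-of-bees pipeline above is, at bottom, a threshold detector: the camera boils each frame down to a movement score, and the box signals when the score stays high for several consecutive frames. A toy Python sketch under those assumptions follows; the scores, threshold, and frame counts are all invented for illustration, not taken from any real system.

```python
def detect_response(scores, threshold=0.6, min_frames=3):
    """Return True if the per-frame movement score exceeds `threshold`
    for at least `min_frames` consecutive camera frames."""
    run = 0
    for s in scores:
        run = run + 1 if s >= threshold else 0
        if run >= min_frames:
            return True
    return False

# Simulated per-frame proboscis-movement scores from the camera.
idle  = [0.1, 0.2, 0.7, 0.1, 0.2]   # a single twitch: not a response
sniff = [0.2, 0.7, 0.8, 0.9, 0.7]   # sustained extension: target smell

print(detect_response(idle), detect_response(sniff))  # prints False True
```

Requiring several consecutive high-score frames is what keeps a single twitch from setting off the alarm, which is presumably why any real version of this box would do something similar.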

What is more charming--if a bit more disturbing--is the United States Navy's Marine Mammal Program, a corps of highly-trained dolphins and sea lions who help out in nautical warfare.  These aquatic friends are actually used for a wide variety of tasks.  Dolphins are trained to search out sea mines and identify them so they can be targeted by minesweepers, among other things.  Sea lions have been used to hand-cuff location devices to the limbs of underwater intruders.  Tons more animals have been experimented with, including killer whales, pilot whales, belugas and seals.  And the US is not the only military using marine mammals.  The Ukrainian military has a group of attack dolphins which recently fell into the hands of the Russians.

So much for animals protecting us in wartime.  I know what you're thinking:  How can animals protect us in the aftermath of a nuclear apocalypse?

They might very well be able to--in the future.  Deadly radiation is invisible, lasts for thousands of years, and kills after a matter of days.  Nuclear waste repositories try to set up signs that will keep warning humans of the deadly nature of radiation for at least ten thousand years.  This is much harder than it seems.  Ten thousand years ago, we hadn't even domesticated cattle yet.  People did not farm.  Writing was some kind of pie-in-the-sky future tech.  The wheel was science fiction.  How can we hope to communicate with humans ten thousand years on?

We can't write stuff down, because we're pretty sure no one will be able to speak any contemporary language in 10,000 years.  (Note to any future archeologists reading this in the distant future:  I guess I was wrong?)  Symbols might seem a better bet, but the meaning of symbols can change drastically over time.  You can try to show a story to try to warn future people against walking through a sea of radioactivity--say a series of pictures depicting a person entering the area and then dying.  But the problem of misinterpretation remains:  what if the folks read the story backwards, and think that the area can make the dead come back to life?

Enter the Raycat.  Françoise Bastide and Paolo Fabbri came up with the idea to genetically engineer cats to change colors in the presence of radiation.  To get people to remember to be afraid when cats change color, they then proposed embedding the warning in myths, songs, and stories.  So no more black cats as portents of doom--what you really have to be worried about is when the cats change colors.  Hopefully these legends will be sticky enough to remain in the minds of our descendants for as long as it takes for the radioactive waste to decay into something safe.

Although the Raycat has not yet been implemented, the work of myth-making has already begun.  99% Invisible (the great podcast by Roman Mars) commissioned musician Emperor X to compose a catchy song warning future humans that when cats change color, it's time to run the other way.  Sing it to your kids.  Sing it to your friends.  And remember:  turn tail when the cats change color.

Monday, May 12, 2014

Myth For Our Time

Prometheus' kryptonite is the fact he needs a liver to live.  Image from PulpDen.
In a recent episode of Entitled Opinions, Stanford professors Robert P. Harrison and David Lummus discuss the enduring power of Greek myth.  The series feels like the intellectual equivalent of one of those Food Network shows where the chef-cum-hero pads around the alleys of a European city, huddling in cramped kitchens with rotund celebrated chefs before sitting nonchalantly down to plates of photo-perfect food the viewer herself cannot ever taste.  It's enormously fun, but it's voyeuristic; fun because you know how much better it would be to be there in the room, talking (or sitting at the dinner table, eating).  Consider this post me rudely butting into that conversation.

One thing Harrison and Lummus discuss is how these Greek heroes, and the stories about them, stand as enduring archetypes which we can use to understand our own lives.  Mercury, the ever-shifting messenger of the gods, promiscuous with ideas, embodies one facet of an artist.  Hephaestus, the basement-dwelling, lame perfectionist huddling over his work, embodies another.  Saturn, the world-weary, morose navel-gazing Olympian embodies a third.  In each myth we see reflections of ourselves.  The whole gamut of ancient texts stands as a rich and 'vast encyclopedia of story and character.'

But why, I wondered, do we think the myth-cache of the Greeks is so special?  Harrison makes a plausible argument: The myths that have survived up until our day are surely the most resonant, the truthiest, because they are the ones that were good enough to survive.  The crappier myths were not retold, they were not written down, they were not copied from papyrus to vellum to paper to Kindle to Disney movie.  Perhaps, Harrison argues, these myths come straight from pre-history--and so represent a best-of collection from the morning of humanity.

This is plausible.  But I also find it plausible that we privilege Greek myths, in part, because they have been privileged in the past.  A hundred years ago, familiarity with Greek myth was a signal that you were familiar with Greek (and Latin)--languages whose facility was a mark of having enough money to learn the Classics.  References in literature and conversation to the incestuous pantheon of Greco-Roman gods and demi-gods served as much to gesture at eternal archetypes as to crassly proclaim 'Hey!  I know that Aurora means dawn in Latin because I have enough money and time to learn Latin!'  And those who nodded along were nodding along with the eternal archetype of Aurora as much as they were showing that they too could afford the books, the tutors and the candles necessary to make an off-handed allusion to Aurora.

And I worry that giving too much credence to the myths of the past ignores the power and the resonance of modern myths.

Our modern myths are comic book heroes.  Our modern archetypes are the character classes from video games.  Our demi-gods are cartoon characters.  These are stories that are told and retold, that set our imaginations on fire, that force us to tear stories apart, to sprinkle them around like confetti.  In the same way that Italo Calvino finds a kind of self-definition in the ancient gods, surely most of my generation have described themselves as a Ninja Turtle.  Just as Jung's archetypes may be ways of explaining otherwise ineffable human qualities, so too does our choice between warrior, wizard, or rogue on the character creation screens of video games.  Just as the ancient Greeks riffed off stories of Orpheus, we now write spin-offs about the home life of Cyclops and Jean Grey.

The heroes of genre fiction have often been connected with their mythical predecessors.  I have written before about some of the similarities between modern superhero origin stories and ancient myth.  Superman, the first modern superhero, was inspired by biblical heroes, especially Samson.  J. R. R. Tolkien, the eminent grand-pappy of the entire fantasy genre, described his work as Mythopoeia, the creation of a mythology.  He even wrote a poem on it.

And now that we have endless reboots of middle-aged super-heroes, endless forums of fan-fiction lovingly detailing B-side adventures and off-canon romances, home-brew RPGs, and cosplay conventions--surely this shows the creative power of modern myth.  We don't just sit there in our proverbial basements, passively consuming the archetypes laid down for us by the Greeks.  We create as we consume--or at least some of us do.

I don't want to come off as too glib here.  Many may think that works of genre fiction are just mindless husks of entertainment, with extended crowd-pleasing action sequences and focus-grouped anti-heroes, and so no match for the timeless eloquence of the Greeks.  I concede that much of genre fiction is dull, paint-by-numbers adventure.  But so, too, was ancient myth.  The Tale of Sinuhe, an ancient Egyptian masterpiece written nearly 40 centuries ago, was long a favorite of scribes.  What sections did they choose to copy again and again?  Why--the action sequences, of course.

Another 20 centuries on, this scene may well be placed beside Ulysses and Prometheus:

Thursday, January 2, 2014

The Nine Deaths

Some have been considered sacred:  others have been thought infernal.  They can survive falls out of nine story windows. Their poop may control minds.

Cat, the cutest domestic bringer of death ever.  Image from the book I Like Cats, found on the great BibliOdyssey
I refer (of course) to the humble house-cat, click-bait before there even was the internet, nine-lifed check on rodent populations from Maine to Manchuria.  While most cat-obsessives fawn over the live specimen, some have found themselves obsessed with cat death.

Robert Darnton studied a ritual massacre of cats done by print-makers' apprentices in eighteenth century France.  Curiosity can kill the cat, at least proverbially.  Yet that little bit of lore has been twisted by history.  At the beginning of the twentieth century, the phrase was Care killed the cat.  So cats die because we loved them too much?  Not so fast.  When the phrase was first recorded way back in Shakespeare's day, the care in care killed the cat referred not to love and affection, but rather to worry and affliction.  So which one is the cat's kryptonite?  Fear, love, or curiosity?  Or Frenchmen?

Before you mourn for the cats, know that they are not innocent of death.  A study suggests that one in three house-cats are killers, and these cats average two kills a week.  This accounts for the deaths of an estimated 4 billion birds every year.  In Australia the feral cat is guilty of extinguishing countless native bird species.  The Caribbean Hutia (which may have its last remaining home in Guantanamo Bay), the Guadalupe Storm Petrel, and the Stephen's Island Wren are all victims of the cat's menace, at least according to Wikipedia.

Tuesday, September 3, 2013

Gustav Jaeger And The Undead Fads Of The Past

Gustav Jaeger (1832-1917) trained for the priesthood, started a zoo, settled for a position as a professor of zoology, eventually hung up his academic robes, and became a practicing physician.  Jaeger's mind was fertile.  He wrote about the impact of Darwin's theory on morals and religion, he pondered the mysteries of heredity, and, in a book modestly entitled the Discovery of the Soul, he came up with ideas about what are now called pheromones.  Yet none of those ideas are the reason why he deserves a blog post today, almost a century after his death.

After a bout of ill-health brought on by a long period of inactivity, Jaeger became obsessed with health.  Jaeger reasoned that the animals he had studied in his zoological positions were healthier than humans.  Animals do not get love handles, they do not complain of aches and pains, they do not wince when they stand, they never (as far as we can tell) suffer from vague dyspepsias.  The difference, Jaeger reasoned, is hair.  Modern humans dressed in unnatural vegetable fibers, while animals were clothed in their natural hair.  All humans needed to get healthier was to change their couture.

Jaeger's solution was animal hair for everything.  Animal hair shirts, pants, and underwear.  Animal hair hats, coats, underwear and bedding.  Jaeger tried it first himself, and declared his health returned.  He laid out his system in another modestly-titled book (My System, 1880),  for all the world to see.
Jaeger, in all his whiskered glory
My System was a success.  Oscar Wilde was a proponent of Jaeger's all-wool hygienic clothing idea, a fact Jaeger's British agents did not know what to do with.  The socialist and textile designer William Morris was also an avowed fan.  Yet even more enthusiastic a supporter was G.B. Shaw, dramatist, socialist, and co-founder of the London School of Economics, who wore a specially-made woolen one-piece suit, sans collar and tie.  Thus, according to the Dictionary of National Biography (paywall):
He was likened variously to ‘a brown gnome’ and a ‘Jaeger Christ'. Seen walking down Regent Street in this suit his tall leggy figure and red hair suggested to one observer that he looked ‘exactly like a forked radish’.
A proponent of the "woolener" craze, Lewis Tomalin, was in 1883 licensed by Jaeger to manufacture and sell clothes based on the principles of My System in Britain.  Soon one store turned to two, and then the idea caught on in the whole of the British commonwealth.

In 1983 Jaeger, the clothing store that is the grandchild of My System, celebrated its centenary.  The high-street store no longer limits its clothing purely to wool--although the majority of its clothes are woolens--and all memory of its namesake's now-obsolete ideas has been stripped away by the inconsiderate passing of time--though the store still bears Jaeger's name.  A shadow of Jaeger's all-wool health fad remains in the 90-odd stores Jaeger boasts world-wide.

Old ideas stalk the living, like slowly loping yet inescapable zombies.  How many other dead, obsolete, debunked, and careworn ideas still exert themselves over us, their anachronism buried under decades or centuries of habit?  Graham crackers famously began their lives as a diet regimen to prevent 'self-abuse'.  The inventor of the birth control pill established a period of week-long placebos every month in order to ensure women menstruated--arguing that "women would find the continuation of their monthly bleeding reassuring."  Though some contemporary doctors argue that such fastidiousness is unnecessary, we still do it--because that's what we do.  More seriously, penitentiaries originated as a humane alternative to torture and the workhouse--a place of refuge for prisoners where they could learn the errors of their ways through peaceful thought.  Today prisons are society's living-room rug, under which we sweep those we no longer want to deal with.

Feel free to comment with more zombie ideas.

Friday, December 14, 2012

Come One, Come All--If Your Name Is Greg.

The fun starts here.  If your name is Greg.

In 1673 the Society of Gregories--a club made up of men named Gregory--met in St Michael's Church in Cornhill, London for a celebratory meeting.  A sermon was given by one Francis Gregory.  After listening to this learned discussion on 'the spiritual watch' (in printed form running to twenty-seven pages) the club members celebrated the baptism of a baby--baby Gregory.

The Society of Gregories' other activities are obscured by the inattention of history.  The number of Gregs filling St Michael's that day is unknown, and we can only guess why they came together on that day to celebrate the virtues of Gregness.  Yet this gaggle of Gregs was not alone in celebrating gatherings of their namesakes.  There is evidence of other patronymic societies in London in the late 17th Century--one for Adams, another for Lloyds, and one for Smiths.  Yet these societies founded on a first-name basis did not last long.  The fad soon lifted, becoming little more than a historian's curiosity.  Perhaps some Gregory online with time on his hands wants to resurrect this ancient and once-proud rite?

This information comes from Peter Clark's British Clubs and Societies 1580-1800.

Monday, October 15, 2012

The Precious Beaver Testicles of Gerald Of Wales

History remembers the killer apps which remade the world--the printing presses, the cotton gins, the fire.  The conquering army, the victorious legislator, the trailblazing poet--these are the names that will live on forever.

And you're not one of them.  And I'm not one of them.  And nobody you've ever met is one of them.  (Probably.)
Clio, muse of history, probably doesn't even know your goddamn name.
No, our world is rich in folly and failure.  Medical researchers have calculated the half-life of a given truth to be 45 years.  That is, of any given set of facts (all things being equal) half will be disproved in a little less than half a century.  So in forty-five years, half of everything you think is true will be proven wrong.  No wonder old people get so cranky.
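Taken at face value, that claim is just exponential decay.  A quick sketch of the arithmetic (the 45-year half-life is the researchers' figure; the function itself is mine):

```python
def surviving_fraction(years, half_life=45):
    """Fraction of today's accepted facts still standing after
    `years`, assuming simple exponential decay of truth."""
    return 0.5 ** (years / half_life)

print(surviving_fraction(45))   # 0.5 -- half gone after one half-life
print(surviving_fraction(90))   # 0.25 -- a quarter left after two
```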

Brilliance, humility and genius prove no antidote to failure.  Take the ebullient medieval scholar Gerald of Wales, who lived in the 12th Century.  Tall and fearsome and erudite, Gerald became a churchman and a politician.  The undisputed 'universal scholar' of his age travelled to the peripheries of the British Isles.  These trips resulted in exemplary, entertaining and learned histories of Ireland and Wales which remain go-to historical sources.  In addition to these travelogues Gerald published about twenty other tomes on topics ranging from theology to hagiography to biography.  (It was the 12th Century:  there wasn't really any wiggle room about genre).  Gerald of Wales combined his era's best learning with keen empirical observation.  If anyone should have avoided folly, it would have been Gerald of Wales.

But even poor Gerald fell prey to error.  Here's just one example.  Nestled at the end of a fine description of a European beaver colony, Gerald goes off on a tangent about how, when beavers are frightened, they bite off their own testicles.

Wait, what?

Yes, you read right.  Gerald of Wales thought that when hunted, male beavers chomped off their own balls with their sharp sharp teeth.

Gerald's peculiar observation has some source.  He was going off a description of beavers in Aesop's fables, in which the beavers, hunted for their useful testicles, wisely detached their gonads from their bodies and offered them to the slavering dogs--gratis--so that they could go on their merry way, alive but gelded.  This is backed up by Pliny's Natural History, which tells pretty much the same story.
Now you can wear the unmistakable scent of beaver anal glands!
And even this is not as crazy as it seems at first blush.  Beavers were hunted for a thing called castoreum--which Pliny, Aesop, and Gerald all misidentified as the beaver's balls.  Actually castoreum comes from glands near the beaver's anus, and it has proved incredibly useful--it remains in use today as a perfume base (giving 'animal notes') and a food additive.  A Scandinavian schnapps called Bäverhojt is flavored with castoreum.
Now you can taste the unmistakable relish of beaver anal glands!
So the myth of beaver's self-castrating self-preservation is explained, if not excused.

And this is how the parade of folly makes its march down the avenue of history:  a misheard word, a bad joke, a good guess that turns out wrong--repeated again and again until it assumes the air of truth.  And in forty-five years, if the scientists have it right, half of what I've written here will also be filed away in the ignoble archives of idiocy.

This post was inspired by the always-inspiring In Our Time with Melvyn Bragg, which recently ran an episode on Gerald of Wales.

Tuesday, October 2, 2012

Letters, Lies and Calculus

In 1696 Guillaume de l'Hôpital published one of the first calculus textbooks, euphoniously entitled Analysis of the Infinitely Small for the Understanding of Curved Lines--or the Analyse for short.  In the Analyse's pages l'Hôpital laid out a method of figuring out the limits of indeterminate forms that was a huge deal in the burgeoning field of calculus.  It made L'Hôpital a star.

With all the linguistic verve of mathematicians, the rule was dubbed l'Hôpital's Rule, and ever since it has been rammed into the heads of calculus students, where it remains, a bit of discarded fact lodged somewhere between their first girlfriend's middle name and the capital of Peru.
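For anyone whose lodged fact has since come loose: the rule handles limits of the indeterminate forms 0/0 and ∞/∞ by differentiating top and bottom separately.

```latex
\[
\lim_{x \to a} \frac{f(x)}{g(x)} = \lim_{x \to a} \frac{f'(x)}{g'(x)},
\qquad \text{when } \lim_{x \to a} f(x) = \lim_{x \to a} g(x) = 0
\text{ (or both } \pm\infty\text{) and the right-hand limit exists.}
\]
```

The classic example: \(\lim_{x \to 0} \frac{\sin x}{x} = \lim_{x \to 0} \frac{\cos x}{1} = 1\).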

As you can probably tell from the de appended to l'Hôpital's name (and the frothy wig perched on his head) good old Guillaume was a nobleman.  More than that, he mixed a genuine mathematical curiosity with the ability to straightforwardly explain the stuff he was interested in.  But while l'Hôpital was undoubtedly a very good mathematician, and his textbook remained required reading for a hundred years, it turns out that all of the great discoveries that l'Hôpital is known for--including the eponymous rule--weren't actually discovered by l'Hôpital.  He merely owned them.

It all started in the salon of Malebranche, where the aristocratic thirty-something savant l'Hôpital met the 24-year-old wannabe math nerd Johann (sometimes John) Bernoulli.  At some point in the night Bernoulli whipped out his 'secret weapon'--an unpublished formula for figuring out the radius of curvature of a curve.  L'Hôpital, impressed, signed Bernoulli up to be his calculus tutor for ten months.  In 1694 l'Hôpital offered Bernoulli a further three hundred francs a year if he would tell him everything he could about this new-fangled calculus--and not tell anyone else.  Bernoulli agreed, and produced a series of brilliant letters explaining everything l'Hôpital could hope to know--and then some.  L'Hôpital would then take the insights Bernoulli told him and pass them off as his own, reaping the fame.

When l'Hôpital died, Johann Bernoulli claimed much of the content of l'Hôpital's work.  The famous textbook?  Actually that amounted to the ten-month course Bernoulli taught l'Hôpital.  The rule?  It should be Bernoulli's Rule.  L'Hôpital's work on conic sections?  That was Bernoulli's work.  But no one believed him.

There was good reason for this.  Johann Bernoulli was an irascible, thin-skinned man who involved himself in quite a few mathematical kerfuffles.  One acrimonious struggle was with his own son Daniel.  To win the argument (against his own son!) over who came up with a principle of hydrodynamics first, Johann resorted to forgery.

So clearly Bernoulli was jealous of his reputation.  When he laid claim to l'Hôpital's discoveries, people just assumed it was Johann being Johann again.

But Johann Bernoulli was right.  And nobody realized it until 1922, when Bernoulli's first calculus lectures were discovered in a musty archive somewhere.  They were written before l'Hôpital's textbook.  And they were undoubtedly l'Hôpital's inspiration for the Analyse.  L'Hôpital's Rule is really Bernoulli's Rule.

But I suspect that renaming l'Hôpital's Rule is just plain greedy.  The Bernoullis claim a menagerie of grey matter so quirky and brilliant that the three generations of genius could easily make up the cast of a Wes Anderson film.  (Bill Murray as Johann Bernoulli; Jason Schwartzman as Daniel Bernoulli.  Right?)  Because of this tons of stuff is already named after them.  There's the Bernoulli Effect.  The Bernoulli Principle.  The Bernoulli Distribution.  The Bernoulli Theorem.  These range over the domains of statistics, fluid dynamics, and calculus--and they are only a small sampling of the discoveries pinned with the Bernoulli name.  Do we really need a Bernoulli Rule?  Really?  The rule itself is confusing enough as it is.  We don't need to go messing around with its name.

My primary source for this story is an article by C. Truesdell.  I learned about l'Hôpital's Rule in Mark Hansen's math for social scientists class.

Monday, August 20, 2012

The Emperors Of Ice Cream

One of the most profound things I've ever seen took place on a guided tour of Queensland.  In amongst the usual touristic sights of natural beauty--the LOTR-worthy lakes, the picture-book rain forests, and the prehistoric cassowaries--we stopped to get ice cream.

One of my fellow tourists was a young woman toting a squint-faced newborn.  The new mother scooped some ice cream up in one of those diminutive plastic ice cream shovels, then held it out to the child.

The baby's lips puckered.  She blinked.  Once the ice cream was on her lips, there was a moment of tension.

"This is her first ice cream," mom drawled.

The baby's eyes grew as large as baby eyes can go.  She laughed.  She reached out for more.  This new stuff--it was good!  This new stuff--it was more than good!  It was fantastic!

I felt lucky to watch this--a person realizing that existence is awesome enough to include ice cream.

On the face of it, ice cream seems like it must go hand-in-hand with the glories of electric refrigeration.   Human beings are a crafty bunch however, and our sweaty summers have been relieved by ice-cooled treats for at least four millennia.

The Chinese--first at everything--produced the earliest recorded ice confection, made by taking milk, overcooked rice, and spices, and throwing the mix all together with some fresh snow.  Yum!

The Chinese practice of mixing snow with sweets passed along to the Persians, who added fruit juice to snow to refresh themselves during the summer.  This is the origin of the word sherbet--a word meaning 'he drank.'  From here, the technology passed on to Alexander the Great, who--in addition to his usual claim to fame of having one of the largest land empires ever--must be credited as 'bringer of iced delicacies to Europe.'  Alexander's gift to the western world shows up again in the reign of the maligned Emperor Nero, whose bacchanals were often accompanied by refreshing mixtures of fruit juice and snow.

But where did the snow come from in the summer?  The mountains.  In Rome's case, the ice came from the Alps.  Back in the ancient world, there existed an ice trade.  Entrepreneurial mountain-dwellers would collect snow or lake ice, cover it with a thick sheet, then transport it to the sweltering metropole for the refreshment of the pest-ridden city-dwellers.  The ice was stored in icehouses--insulated, sometimes underground, storage rooms in which a cache of ice could remain frozen even in the hottest month of summer.  The first recorded building of an ice house goes all the way back to 1700 BC, when the snow-loving Persians constructed one 'which never before had any king built.'

The Turkish Sultans so loved ice that they had an entire class of servants dedicated to the upkeep of the ice and snow stores.  (These were just some of the 1,570 people who, as of the 16th century, were employed in the Sultan's kitchens, others including the oh-so-necessary yogurt makers, simit bakers, and wheat pounders.)

Simits, though not involving ice or ice cream in any way, remain food fit for a Sultan.
The Chinese came up with a further ice confection improvement around the 17th Century.  Salt.  You may remember making 'home made' ice cream back in school, and, because of the infinite cruelty of the education system, this somehow involved turning a crank.  You also for some reason needed salt.  No one could tell me why this was so.
Notice the hand-crank of cruelty.
Adding salt to ice reduces the freezing point of water.  Immersing a thing in this super-cooled brine allows for the freezing of more than just water--now people could freeze cream or custard.  Sometime in the 18th Century a Sicilian, Procopio Cuto, made some of the first for-sure European ice cream available to non-royalty at the Parisian Cafe Procope.  (Cafe Procope is named after the Byzantine historian Procopius, he of Secret History fame.)
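So here, belatedly, is the answer the crank-turning schoolchildren never got: the dissolved salt drags the temperature of the ice bath well below 0 °C, cold enough to freeze a custard.  A rough sketch using the ideal-solution formula for freezing-point depression--the constants are standard textbook values, the function name is mine, and the formula overestimates the effect for very concentrated brines:

```python
def brine_freezing_point_c(molality, van_t_hoff=2, kf=1.86):
    """Estimate the freezing point (deg C) of a salt brine.

    molality    -- moles of NaCl per kg of water
    van_t_hoff  -- ions per formula unit (2 for NaCl)
    kf          -- cryoscopic constant of water, 1.86 K*kg/mol
    """
    return 0.0 - van_t_hoff * kf * molality

print(brine_freezing_point_c(4.0))   # about -14.9 C: plenty cold for custard
```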

Ice cream was an ever-popular dish for the illustrious rich.  George Washington spent over two hundred dollars on ice cream one summer.  Thomas Jefferson was such a fan of ice cream that, in very Jeffersonian fashion, he laid out an 18-step process for making the perfect ice cream.  Supposedly it tastes a little bit like a baked Alaska.
Ice cream recipe written in the same hand as the Declaration of Independence.

It took until the middle of the Nineteenth Century for ice cream to reach the common people.  Then a Baltimore dairy merchant named Jacob Fussel needed a way to get people to buy cream.  He started the world's first ice cream factory, became rich by selling affordable ice cream, and gave middle-class America a taste for what Wallace Stevens called 'concupiscent curds.'  A devout Quaker, Fussel took time out from being an ice cream impresario to support the Underground Railroad.

Ice cream is one of those parts of human life so unabashedly wonderful, so flawless, so pure, that it is certain to accompany human culture to the very twilight.  Indeed, in that last age, when man crouches in some burnt-out wasteland, if he still has culture, he will sometimes wipe the sweat off his brow and get a double scoop of chocolate.

Saturday, August 18, 2012

New Novel: Starseed

I recently finished a science fiction book.  And now you can read it in the comfort of your own home because of this cool new technology they call the Internet.

The book's called Starseed.  It's available on Smashwords.

So what do you get in Starseed?

Packed into 85,000 words of fine Mackie-crafted prose there's...

  • Psychics in cryogenic stasis.
  • Sex AND violence!
  • A singularity-esque Artificial Intelligence.
  • Metaphors!  Similes!  Extended metaphors!
  • Deep-space travel.
  • Discussion of the nature of the human soul!
  • One hundred and seventy four (174) exclamation marks (!)

And more, much more.

I know you folks at home are already asking:  how much money do I need to throw at you so that I can get this marvelous e-Book?

The answer will leave you spraying Mountain Dew all over your monitor.

The book is pay-what-you-want.  So you can just pay nothing at all for the enjoyment of nearly two-hundred pages of finely written science-fiction action.  You can also pay fifty dollars.  Somewhere between those two numbers is probably a fair middle ground.

So don't wait a second more!  Click.  Buy.  Read.  Tell your friends.  Leave a nice review.  Name your first born in my honor.

A NOTE TO MY PUBLISHING FRIENDS:  You probably know that I've recently finished an ambitious manuscript I'm trying to get looked at by agents and editors.  Starseed is NOT it.  Though Starseed is cool, if you are an agent or editor, I'd much rather you take a peek at my fat big American novel, Please Give Me Money--contact me personally and I'll send you a copy.

Thursday, August 16, 2012

The Wallpaper That Named America

Columbus discovered America.  America is named after Amerigo Vespucci, who did not discover America.  In the lacuna between these two well-known facts there hides a story of adventurers, pickle-sellers, forged letters--and wallpaper.

Before the story begins, a Roman prologue.  In the Second Century AD there was a Greek-speaking Roman citizen from the province of Egypt named Ptolemy.  (Students of ancient history know that this is not anything unusual:  almost every Greek-speaking Egyptian ever was named Ptolemy.)  This Ptolemy, Claudius Ptolemy, put all of his era's geographical knowledge into a single book titled, with characteristic Roman creativity, the Geography.

Ptolemy's Geography mapped the known world, and mapped it well.  Europe is drawn with care.  The Persian Gulf looks suitably gulf-like.  Important rivers are all in the right places, including some in the far east.  The Indian Ocean exists.  The map even shows parts of China, though in blurry, uncertain haziness.

For more than a thousand years the Geography was the Google Maps of princes, merchants and explorers from Bruges to Baghdad.  In the late 15th Century, Christopher Columbus turned to it as he was trying to convince European kings to bankroll his ambitious globe-crossing voyage.  Columbus--who we've written about before--was the last man on earth to find the Geography useful.

That was because of what Columbus discovered once he sailed deep into the Western Ocean:  the New World.  But Columbus never knew the significance of his discovery.  When he first made landfall on that first unnamed idyllic Caribbean island, he assumed he was on the east coast of Japan, and that the Caribs were vassals of the great Khan.

Enter Amerigo Vespucci, a man Emerson derided as a mere "pickle seller" and a "thief."  Vespucci was a Florentine explorer who made two trips West.  A certain air of vibrant disreputableness hangs around him.

Amerigo Vespucci:  Explorer.  Lover.  Seller of pickles.
In the early 1500s, Vespucci wrote letters from the New World, describing a huge continent extending south of the Indies, bordered on both sides by ocean.  In Ptolemy's Geography--a book, remember, that had been state-of-the-art for 1,300 years--there was no huge southern continent that extended past the equator.  The conclusion was mind-blowing.  The world was big.  There was a whole new continent.  Florentine printers gathered these letters together, spiced them up a bit, and in 1502 or 1503 published them as a book called Mundus Novus.  The New World.

Instantly, people demanded an update to Ptolemy's previously immortal map.  A flurry of sextants and compasses scribbled across pages as publishers and map-fanciers rushed to be the first to make an accurate map of the new earth.

Enter two Germans--Martin Waldseemuller and Matthias Ringmann, cartographers.  They produced a map now known as the Waldseemuller map.  It was printed on 12 sheets measuring a massive four and a half by eight feet.  It was probably the largest map then made, and like all cool maps, it was meant for display as much as it was meant for navigation.  It was wallpaper.  If it were around now, it would be advertised in the Skymall catalogue.

The modern-day equivalent of the Waldseemuller map.
The Waldseemuller map was more than just big.  It was more than just cool.  It also included a certain brand new exciting continent hanging out to the far right, stretching below the Equator.  There, in the mostly empty space, was printed a name in serifed all-caps.  America.  The name was probably made up by Ringmann--a feminization of Amerigo.

The first printed instance of the name America.
The map was popular.  German universities clamored to pick up one of the thousand printed copies.  Students made copies of it and showed it to their friends.  In modern parlance, the map went viral.  As the map spread, so did the name America.  In the middle of the century Gerardus Mercator--he of the projection and a cartographical superstar in his own right--decided that the whole landmass of the New World should be called America.  Despite two centuries of Spanish complaints to the contrary, America would be America forever.

But a further twist complicates the story.  Remember those letters that Vespucci sent back to Europe?  The ones that inspired Waldseemuller and Ringmann to believe that South America was a distinct continent?  Those turned out to be faked.  Waldseemuller flip-flopped, and when he published a new set of maps after Ringmann's premature death, South America was not shown as a separate continent--indeed, no mention of the name America was made.  Waldseemuller explained the change:
As we have lately come to understand, our previous representation pleased very few people. Therefore, since true seekers of knowledge rarely color their words in confusing rhetoric, and do not embellish facts with charm but instead with a venerable abundance of simplicity, we must say that we cover our heads with a humble hood.
The inspiration for this post comes from Backstory's segment on Vespucci and the Waldseemuller map.  The always fantastic Smithsonian Magazine has an article on the Waldseemuller map, which proved to be a great trove of facts.

Tuesday, August 14, 2012

The Arm That Wasn't Paralyzed

An intelligent, lucid 60 year old named Nora was interviewed by the neuroscientist V.S. Ramachandran.  Ramachandran tells the story in his book, the Tell-Tale Brain.
"Can you walk?"
"Yes." (Actually, she hadn't taken a single step in the last week.)
"Nora, can you use your hands, can you move them?"
"Both hands?"
"Yes."  (Nora hadn't used a fork for a week.)
"Touch my nose with your left hand."
Nora's hand remains motionless.
"Are you touching my nose?"
"Can you see your hand touching my nose?"
"Yes, it's now almost touching your nose."
Nora's left side is paralyzed.  But she won't admit it.  This is a more common phenomenon than you'd think.  After suffering a stroke in the right hemisphere of the brain, many people suffer paralysis of the left-hand side of their bodies.  About one in twenty of these people will insist that they are not in fact paralyzed.  This is called anosognosia, which is medical-speak for denial-of-illness.

Anosognosia does not necessarily go along with any other mental impairment.  A person can be psychologically completely normal in every respect--except when it comes to the inert mass of their paralyzed left half.

A notable sufferer of anosognosia was Woodrow Wilson, who became paralyzed in 1919 after being smitten with a flu-related stroke.  He was bedridden, blind in his left eye, and paralyzed on the left side of his body.  He remained in office, even as he was on the brink of death, mumbling limericks to himself, his wife serving as his 'steward' (read: regent).  He didn't attend any cabinet meetings for a full half year, and when he finally presented himself to his cabinet, his staff were shocked at the frail state of his health, and at the secrecy which had covered it up.  But Wilson grew angry at any mention of his incapacities, and fired many functionaries who dared suggest that there was something wrong with him.  He even pondered running for a third term.

Anosognosia can come in many exotic flavors.  Some anosognosiacs will refuse to admit that other paralytics are paralyzed.  A syndrome called somatoparaphrenia often accompanies anosognosia, in which a person will deny all ownership of their paralyzed arm.  Nora, mentioned above, had somatoparaphrenia.  "Whose arm is this?" Dr. Ramachandran asked her.  "That's my mother's arm," she replied.  "Where's your mother?"  "She's under the table."

People can be anosognosiac about more than just paralysis.  Patients with Wernicke's aphasia--brain damage which limits their communication to a fluent stream of babble--are often anosognosiac about their condition, nodding and smiling and talking even though they have no content to their speech.

Sources today are V.S. Ramachandran's the Tell-Tale Brain, and Errol Morris' five-part blog post on anosognosia which is fun, philosophical, and exhaustive--not words you usually associate with five-part blog posts, I know.

Monday, August 13, 2012

Puppies in a pool! (Open thread)

From r/aww
Hey reader!  I'd like to hear from you.  So I have a few questions.  Answer as many or as few as you'd like or just write a hello.

What do you like to eat for breakfast?  If you could be reincarnated in any historical period, when would you live?  Puppies or kitties?  Tapatio, Tabasco, or sriracha?  Cherries or peaches?

Also, if you happen to know any literary agents, I finished a novel I think is pretty damn good, and I'd like to show it to someone who can turn it into a real book.  Just sayin'.

Friday, August 10, 2012

The Almost Cure-All Urine of Albert Alexander

Hennig Brand, playing with pee, discovering the philosopher's stone.
Urine is useful.  The Latin poet Catullus mocked the Spanish for brushing their teeth with their own pee.  Drinking your mid-stream morning tinkle is recommended by some practitioners of yoga.  The Roman world collected piss to use for washing clothes--the ammonia would help bleach the coarse fibres.  And it was curiosity about the magical properties of wee which led Hennig Brand in the 17th Century to experiment with urine and so discover phosphorus.

None of this mattered for Albert Alexander, an Oxford County policeman who in 1941 was hospitalized with a severe infection resulting from an unfortunate rosebush scratch on his mouth.  He came down with vicious blood poisoning, and his face became so matted with weeping red abscesses that one of his eyes had to be removed.  The infection then spread to his lungs.  If he was not cured, he would die in writhing agony.  But the only cure at the time, the drugs called sulfonamides, were not effective in cases where the patient was as utterly suffused with pus as Alexander was.  He had no hope at all.

Enter Howard Florey and Ernst Chain, two scientists who had been experimenting with a drug that could very well become the magic bullet--the medicine that would destroy all infection.

To make the drug, they'd been culturing 500 liters of mold every week.  They hired three or four girls to grow the mold in every receptacle they could find--they used baths, bed pans, pie dishes, and even food trays, before finally settling on a purpose-made ceramic jar.

After successful trials on rats, Florey and Chain were finally ready to try the panacea on a human subject.  But they were concerned.  The drug was so strong it would probably reveal itself to be highly toxic to humans.  Would the cure be worse than the disease?

The mold was penicillin, and Albert Alexander was going to be the first person to be cured by it.

The magic mold itself.
On February 12th 1941, Florey injected Alexander with a large dose of penicillin, and the results were considered miraculous.  By the next morning Alexander's temperature had returned to normal and he had even regained his appetite.  He was cured!  But there was a problem.  To fight off the infection Alexander required an injection of about a gram of penicillin a day, and there just wasn't that much penicillin to go around, no matter how fastidiously the three or four 'penicillin girls' tended to their mold vats.

Enter urine.  The enterprising scientists collected Alexander's urine and processed it, retrieving whatever penicillin Alexander happened to piss out.  This was duly injected into Alexander again, and for nearly a week, his infection was beaten back.

But it was not beaten.  After five days, with their reserves of penicillin completely depleted and no more retrievable from Alexander's urine, the scientists could do nothing further, and Alexander was left with his infection.  He succumbed to it on March 14th.

Albert Alexander did not die in vain.  His initial miraculous recovery was proof that penicillin actually worked in humans, and more--it proved non-toxic to people.  The next person to be treated with the magic bullet, a teenager whose temperature had shot up to almost 100 degrees as a result of an infected hip, was back to normal in two days.  A new era opened up in human history--one where we didn't ever have to worry about death from rose-thorn scratches.  In part, for this we have to thank that first brave medical guinea pig, Albert Alexander, the constable from Oxford County who had an unfortunate pruning accident.

I heard about the case of Albert Alexander from Dr. Karl's Podcast.  My other sources are an interview with Norman Heatley on Science Watch, and the article the Discovery of Penicillin from the American Chemical Society.

Wednesday, August 8, 2012

Plato's Less-Than Ideal Arithmetic

Philosophy is nothing more than footnotes to this guy, so they say.
Plato rightly deserves his central place in the Western canon.  He founded a school--the Akademia--from which we get both the word and the inspiration for the modern academy.  His insistence on the immortality of the soul so suffused the Greek-speaking world--including the authors of the New Testament--that Nietzsche dismissed Christianity as mere 'Platonism for the masses.'  Plato wrote over thirty dialogues that survive as masterpieces of argument and storytelling--a feat made all the more striking by the fact that back when Plato lived there were no paper mills, no printers, no bookstores, and no pens.  Plato pretty much set the aims, the methods, and the questions of philosophy for the next two thousand years.

But despite his heavyweight resume, Plato seems to have flubbed his math a bit.

Here's Plato calculating the exact amount that the philosopher's life is better than the tyrant's, from Book nine of the Republic.
Or if some person measures the interval by which the king is parted from the tyrant in truth of pleasure, he will find him, when the multiplication is complete, living 729 times more pleasantly, and the tyrant more painfully by this same interval.

What a wonderful calculation! And how enormous is the distance which separates the just from the unjust in regard to pleasure and pain!

Yet a true calculation, and a number which nearly concerns human life, if human beings are concerned with days and nights and months and years.
Plato's math, according to the footnotes in the second edition of the Grube translation of the Republic, is "hard to follow."  Here's a try.  The tyrant experiences only two-dimensional pleasures, while the philosopher experiences three-dimensional pleasures.  Additionally, the philosopher is nine times removed from the tyrant in terms of pleasure, so the philosopher's pleasure is represented by a nine-unit cube, while the tyrant's pleasure is represented by a one-unit square.  But Plato flubbed things getting to the number 729, which was sacred to the Pythagoreans.  He miscounted the number of times removed the tyrant was from the philosopher (it should have been five, not six) and multiplied where he should have merely added.  Sadly, it turns out that the philosopher is only 125 times happier than the tyrant!
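However you count the removes, the cube-arithmetic itself is quick to check--Plato's ninefold remove cubed against the footnote's corrected fivefold remove cubed (variable names are mine):

```python
# Plato's version: a ninefold remove, cubed into three dimensions.
platos_answer = 9 ** 3
# The footnote's correction: a fivefold remove, cubed the same way.
corrected_answer = 5 ** 3

print(platos_answer)      # 729 -- the number sacred to the Pythagoreans
print(corrected_answer)   # 125 -- the philosopher's more modest advantage
```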

But we can't blame Plato for having trouble with his sums.  In Plato's time, before zero, before calculators, before arithmetic notation, math was decidedly hard to do.  Here's another example of Plato doing math, from the Republic, Book 8:
Now that which is of divine birth has a period which is contained in a perfect number, but the period of human birth is comprehended in a number in which first increments by involution and evolution, obtaining three intervals and four terms of like and unlike, waxing and waning numbers, make all the terms commensurable and agreeable to one another. The base of these with a third added when combined with five and raised to the third power furnishes two harmonies; the first a square which is a hundred times as great, and the other a figure having one side equal to the former, but oblong, consisting of a hundred numbers squared upon rational diameters of a square (i. e. omitting fractions), the side of which is five, each of them being less by one or less by two perfect squares of irrational diameters ; and a hundred cubes of three. Now this number represents a geometrical figure which has control over the good and evil of births.
What Plato's trying to say--again according to the Grube Edition's footnotes--is that the human number is the product of three, four and five raised to the power of four, or (3*4*5)^4, which comes to 12,960,000.  This can be shown geometrically in two ways:  first, as the area of a square with sides of 3600, or as a rectangle with sides 4800 and 2700.  Simple enough for us moderns.  But we have the ease of working with Arabic numerals.  You can see how Plato--even Plato!--can be forgiven for messing up his math.
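The footnote's unpacking does check out; a quick verification of its arithmetic:

```python
# Plato's 'human number', per the Grube footnote: (3 * 4 * 5)^4.
human_number = (3 * 4 * 5) ** 4
print(human_number)                  # 12960000

# It can be drawn as a square, or as an oblong rectangle:
print(3600 ** 2 == human_number)     # True
print(4800 * 2700 == human_number)   # True
```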

And you thought math was hard in high school!  Sacrifice a cock to Asclepius in thanks that you were never a math student in ancient Athens.