In 1942, when I was a rambunctious lad of seven, I was diagnosed with tuberculosis. The prescription for my recovery called for naps at ten and two, bedtime at seven—and plenty of rest in between. Bad news for a kid, but something akin to Doomsday for my long-suffering mother, who the previous fall had sent me, the last of five kids, to first grade, and thus gained a little wedge of free time in which to re-enter life as an adult.
But this sainted woman was as resourceful as she was wise. In her role as secretary of our church, she produced the weekly bulletin on a manual typewriter, and duplicated it on a mimeograph machine. “Tell you what let’s do,” she said to me one September morning. “Since you need some things to keep you entertained, let’s publish a newspaper. We’ll call it the Bugle. When your friends come by to visit after school, you’ll interview them, get all the news and gossip, and then I’ll teach you how to make stories that we can type up and print on the mimeo.”
Thus began my personal Print Age—my introduction to reading and writing as self-generated pleasures, to the painful necessities of editing and rewriting, to the messy fun of putting ink to paper, and to the intoxicating thrill of seeing front-page news under my byline. The awe and wonder eventually turned to pride of craft, then drudgery, then boredom—but I have never forgotten the sense of empowerment I got from that first opportunity to learn adult skills.
Still, that was child’s play. On the big screen of history, the real Print Age began when Johann Gutenberg invented the printing press in 1439 (just 500 years before my mother’s own act of desperation exposed me to it). Many scholars still regard this singular achievement by a forty-year-old German goldsmith as the greatest invention of all time.
It’s doubtful that the scattered legions of calligrapher monks whose ranks were decimated by Gutenberg’s wood and metal contraption held him in high esteem after that. Who needs mass production, the monks must have wondered, when we can inscribe words more beautifully, and even make multiple copies? The monks may have been the first human subset to suffer from cacoethes scribendi—a common malady with no known cure, now called “the itch to write.” From their time to the present, a tiny but significant fraction of humanity (surely less than one percent at any given time) has dutifully followed the impulse to gather words on papyrus or parchment or paper or some other portable surface and share them with others.
It was the fate of the writer-monks to be swept away in the floodtide of Gutenberg’s first “big book,” a limited edition of the Latin Bible, exquisitely printed and bound in 1455. Though sales were modest, the event itself was monumental: for the first time, the medium was the message. Within a few years, a mere blink of the temporal eye, the world became thoroughly accustomed to the revolutionary innovation of mechanically produced writing, and the monks presumably went back to making wine and cheese (which in the modern age would become, ironically, the staple servings at book signings and public appearances by authors).
The printing press seemed destined to deliver words in a perpetual cascade, and in a myriad of previously unimaginable forms: broadsheets, posters, letters, journals, books, newspapers, gazettes, almanacs. The mechanical device gave prominence and permanence to the decrees of kings and the holy writ of popes. In time, it also spawned legal briefs (seldom brief) and academic arcana and the endless prescriptions and proscriptions of lawmakers. And here is another fact worth noting: since Gutenberg, the ever-widening circle of readers and listeners has never had to deal with word rationing; in good times and bad, a vast army of wordsmiths has always kept the public well supplied.
There is in all this a consistent pattern of stability, a yin-yang, a mystical equilibrium between listeners and readers at one end of the see-saw and writers/editors/publishers at the other. This balance has persisted since Gutenberg pink-slipped the monks. Now, in the dawning decades of an electronic revolution that already bids to make printing presses obsolete, the future of books and magazines and newspapers has been cast into doubt—but would that necessarily drive down the demand for writers? Almost certainly not; on the contrary, they may be more sought-after than ever, simply because more and more content will be needed to feed the word-devouring beasts of cyberspace—and more than enough writers will always be willing to sell or even give away their words in return for a byline and some online exposure. (How many of them will manage to earn a living from the craft, or be accorded professional status in the manner of the monks, is an altogether different question.)
I have a crude theory, a sort of jackleg hypothesis, about the existence and perpetuation of a critical mass of writers in this country. I call it the Scrivener Quotient (SQ). You could probably apply it all the way back to Gutenberg, when the Americas were still deep in prehistory—or just go back to, say, 1875 and find another forty-year-old genius, Mark Twain (a typesetter and printer in his youth), who, by his own account, became the first person to create a book on a typewriter when he rolled The Adventures of Tom Sawyer out of his Remington manual. (It’s just a guess, but I venture to say that probably no more than a hundred new volumes of fiction and nonfiction—later referred to collectively as trade books—were published in the U.S. that year.)
But I’ll use 1970 as the starting point for my explication of the Scrivener Quotient, for two reasons: first, I can base my analysis on the availability of reliable data, and second, that year happens to mark the beginning of a second personal Print Age for me, having to do with the publication of my first book, A Mind to Stay Here.
According to R. R. Bowker’s annual compendium, Books in Print, U.S. publishers issued about 40,000 new trade-book titles in 1970. The next part of this equation is admittedly more of a collective guess than a number based on hard data, but it was a broadly accepted rule of thumb among editors and publishers during the last half of the twentieth century. Think of it as the fifty-to-one chance: for every fifty manuscripts offered to established publishing houses, only one became a published book.
To restate the point for emphasis: in the heyday of American book publishing, 1950 to 2000, a writer had one chance in fifty (better or worse, depending on any prior track record), that an established publishing house would agree to turn his or her manuscript into a book. (There is no way even to guess the larger number that were written but not submitted.) This means that the 40,000 new books of 1970 were the harvested wheat, and almost two million submissions—fifty times as many—were the chaff.
I won’t dwell upon the depressing fact that eighty percent of the 40,000 published titles were financial failures for both the publisher and the author, leaving only 8,000—out of two million submissions!—to break even or produce income. But my own precarious ledge on that beggar’s mountain of words is illustrative enough to merit a sentence or two. I was among the lucky writers—first by beating the fifty-to-one odds when my work was chosen, and then by being in the fortunate twenty percent who came away with a published book and some pocket money. I got a $5,000 advance, and all of the 5,000 copies of A Mind to Stay Here that Macmillan printed were sold or given away. In effect, I was paid one dollar for each of them. No additional copies in either hardback or paperback were ever produced.
If two million manuscripts yielded only 8,000 “break-even” or profitable books, then just one in every 250 submitters entered that select circle. Those were the odds in 1970—and if I had to guess, I’d say they didn’t change much from then until the turn of the century. That is the essence of the Scrivener Quotient.
Now we are a decade into the cyberbook revolution. Amazon, once an online marketer of hardcover and softcover books (while not collecting state and local sales taxes, an advantage over hometown bookshops), now sells more e-books than paper ones—and its e-book reader, the Kindle, is the monopoly portal for all Amazon e-books. And now the second shoe is dropping: Amazon is signing authors to publishing contracts, bypassing the traditional publishing houses altogether. This is known as the corporate Golden Rule: those who have the gold make the rules. Google is also getting into electronic publishing, and Apple is headed there too.
All the Nashville-based Christian book publishers have added e-book departments. So have practically all university presses, many regional publishers and—belatedly, in self-defense—most of the traditional New York publishing houses. Nashville-born Ingram Book Company, once strictly a wholesale distributor of books on paper, was a national pathfinder, more than a decade ago, in the exploration and adoption of print-on-demand and e-book technology. Across the country, new companies seem to spring up almost every week, offering prospective authors help in “building platforms” to produce and market their wares. Remember the 40,000 new book titles of 1970? By 2010, that number had soared beyond 300,000. Independently produced titles—a blanket euphemism that covers everything from do-it-yourself jobs to vanity-press output—now account for well over half of all new titles with International Standard Book Numbers, or ISBNs, on their copyright pages.
Has there been a similar increase in the number of financially and critically successful books? Far from it. Data from Nielsen BookScan, which keeps track of sales, consistently indicates that only two percent of all trade-book titles sold in a given year (exclusive of used books) sell as many as 5,000 copies. But as the major publishing houses in New York and elsewhere search for ways to cut costs by issuing fewer new books while raising the number and percentage of profitable titles, a wave of new and old regional and specialty publishers, independent companies, vanity presses, and self-publishers has created a bull market that is converting manuscripts into books (online and on paper) at a frantic pace.
It’s easier than ever to get a new book into the marketplace, but it may be as hard as ever to make that book good by any measure—its physical qualities, its editorial standard, its literary worth, its praise from critics and general readers, or its return on investment. Traditional publishing houses have never had an efficient and effective system of determining which of the manuscripts submitted to them were “the best.” Such choices are, by their nature, subjective. But some vetting is better than none, and critical reading by people who do that for a living, be they editors or college professors or proofreaders, is bound to yield better results than can be had in the vast new electronic marketplace of independent publishing, where the author is the sole judge of quality.
There is no doubt in my mind that new technology has already reshaped the book industry (we won’t even bring up newspapers and magazines), and the transformation is only a decade old. Last month the Association of American Publishers announced that e-book downloads had surpassed hardcover sales for the first time. Year by year, more books will be out there in one form or another, simply because technology makes it possible—and because there will always be enough writers to feed the ravenous appetite of computers and their wired progeny. Jonathan Winters once described television as “a glass furnace.” Precisely. And so it is now with publishing: the cyberstove craves fuel.
Not just books as we have known them are affected; so too are bookstores, libraries, collectors, designers, printers, readers, the cadre of reviewers and critics. In the Post-Print Age, tech-savvy explorers will maintain a huge advantage over the hard-copy survivors of the cut-and-paste generation. What we are witnessing now is a change in the world of words that is as encompassing and profound as what happened when that German goldsmith sent the calligrapher monks of Europe in search of another line of work.
And what will not change now? I can think of a couple of things (though I may just be dreaming):
First, as long as there is a world, bound books on paper will be available. Even if they should cease to be printed—especially if that should happen—nostalgia and longing will make the remaining ones more desirable and valuable.
And second, in the best and worst of times, a tiny fraction of the population will always aspire to tell their stories; a minute number of writers will actually follow through and complete manuscripts; a mere handful of those in-vitro books will go on to have their own lasting identity; and a small remnant of the surviving volumes, whether printed and bound or digitized and flat-screened, will have enough of a healthy history to be remembered as something more than a flash of light and a puff of smoke.
Copyright (c) 2012 by John Egerton. All rights reserved. Beginning in high school in the 1950s, through two years in the U.S. Army, five years earning two college degrees, five more as a reporter for the college news bureau, six as a magazine writer, and for the past forty-two years as an independent journalist and author, John Egerton has seldom strayed far from his life’s work: following the social and cultural, political and economic trends that forever have made the American South the unique place that it is, for better and worse.