Content

3. Reliable metadata

Do we throw out metadata, then?

Of course not. Metadata can be quite useful, if taken with a sufficiently large pinch of salt. The meta-utopia will never come into being, but metadata is often a good means of making rough assumptions about the information that floats through the Internet.

Certain kinds of implicit metadata are awfully useful, in fact. Google exploits metadata about the structure of the World Wide Web: by examining the number of links pointing at a page (and the number of links pointing at each linker), Google can derive statistics about the number of Web-authors who believe that that page is important enough to link to, and hence make extremely reliable guesses about how reputable the information on that page is.
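For the technically inclined, the link-counting trick can be sketched in a few lines of Python. This is a simplified version of the PageRank idea; the tiny link graph and the 0.85 damping factor are illustrative assumptions, not Google's actual data or parameters.

```python
# A minimal sketch of link-based reputation scoring in the spirit of
# PageRank: a page's score is the share of score flowing in from the
# pages that link to it. The graph and damping factor are made up.

def rank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    scores = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new = {p: (1.0 - damping) / n for p in pages}
        for page, outgoing in links.items():
            if outgoing:
                # Split this page's endorsement among its link targets.
                share = damping * scores[page] / len(outgoing)
                for target in outgoing:
                    new[target] += share
            else:
                # Dangling page: spread its score evenly over everyone.
                for target in pages:
                    new[target] += damping * scores[page] / n
        scores = new
    return scores

graph = {
    "a": ["b"],
    "b": ["c"],
    "c": ["b"],  # "b" is endorsed by both "a" and "c"
}
scores = rank(graph)
```

Run it and the heavily linked-to page "b" comes out on top, even though nobody declared it important; its reputation was derived entirely from the observed link structure.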

This sort of observational metadata is far more reliable than the stuff that human beings create for the purposes of having their documents found. It cuts through the marketing bullshit, the self-delusion, and the vocabulary collisions.

Taken more broadly, this kind of metadata can be thought of as a pedigree: who thinks that this document is valuable? How closely correlated have this person’s value judgments been with mine in times gone by? This kind of implicit endorsement of information is a far better candidate for an information-retrieval panacea than all the world’s schema combined.
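The "how closely correlated have this person's value judgments been with mine" test can itself be sketched as code: weight another reader's endorsement by the correlation of their past ratings with yours. The rating data below is invented for illustration.

```python
# A toy sketch of the "pedigree" idea: score another reader's
# recommendations by how well their past ratings correlate with mine
# (Pearson correlation). All rating data here is made up.

from math import sqrt

def correlation(mine, theirs):
    """Pearson correlation over the documents both readers rated."""
    shared = [d for d in mine if d in theirs]
    if len(shared) < 2:
        return 0.0  # not enough history to judge
    xs = [mine[d] for d in shared]
    ys = [theirs[d] for d in shared]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sqrt(sum((x - mx) ** 2 for x in xs))
    vy = sqrt(sum((y - my) ** 2 for y in ys))
    if vx == 0 or vy == 0:
        return 0.0
    return cov / (vx * vy)

my_ratings = {"doc1": 5, "doc2": 1, "doc3": 4}
kindred = {"doc1": 4, "doc2": 2, "doc3": 5}     # tracks my tastes
contrarian = {"doc1": 1, "doc2": 5, "doc3": 2}  # inverts them
```

A kindred spirit's endorsement comes out strongly positive and a contrarian's strongly negative, so a recommender can amplify the former and discount (or even invert) the latter.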

$$$$

Amish for QWERTY

(Originally published on the O’Reilly Network, 07/09/2003)

I learned to type before I learned to write. The QWERTY keyboard layout is hard-wired to my brain, such that I can’t write anything of significance unless I have a 101-key keyboard in front of me. This has always been a badge of geek pride: unlike the creaking pen-and-ink dinosaurs that I grew up reading, I’m well adapted to the modern reality of technology. There’s a secret elitist pride in touch-typing on a laptop while staring off into space, fingers flourishing and caressing the keys.

But last week, my pride got pricked. I was brung low by a phone. Some very nice people from Nokia loaned me the very latest-and-greatest cameraphone, the kind of gadget I’ve described in my science fiction stories. As I prodded at the little 12-key interface, I felt like my father, a 60s-vintage computer scientist who can’t get his wireless network to work, must feel. Like a creaking dino. Like history was passing me by. I’m 31, and I’m obsolete. Or at least Amish.

People think the Amish are technophobes. Far from it. They’re ideologues. They have a concept of what right-living consists of, and they’ll use any technology that serves that ideal — and mercilessly eschew any technology that would subvert it. There’s nothing wrong with driving the wagon to the next farm when you want to hear from your son, so there’s no need to put a phone in the kitchen. On the other hand, there’s nothing right about your livestock dying for lack of care, so a cellphone that can call the veterinarian can certainly find a home in the horse barn.

For me, right-living is the 101-key, QWERTY, computer-centric mediated lifestyle. It’s having a bulky laptop in my bag, crouching by the toilets at a strange airport with my AC adapter plugged into the always-awkwardly-placed power source, running software that I chose and installed, communicating over the wireless network. I use a network that has no incremental cost for communication, and a device that lets me install any software without permission from anyone else. Right-living is the highly mutated, commodity-hardware-based, public and free Internet. I’m QWERTY-Amish, in other words.

I’m the kind of perennial early adopter who would gladly volunteer to beta test a neural interface, but I find myself in a moral panic when confronted with the 12-button keypad on a cellie, even though that interface is one that has been greedily adopted by billions of people worldwide, from strap-hanging Japanese schoolgirls to Kenyan electoral scrutineers to Filipino guerrillas in the bush. The idea of paying for every message makes my hackles tumesce and evokes a reflexive moral conviction that text-messaging is inherently undemocratic, at least compared to free-as-air email. The idea of only running the software that big-brother telco has permitted me on my handset makes me want to run for the hills.

The thumb-generation who can tap out a text-message under their desks while taking notes with the other hand — they’re in for it, too. The pace of accelerated change means that we’re all of us becoming wed to interfaces — ways of communicating with our tools and our world — that are doomed, doomed, doomed. The 12-buttoners are marrying the phone company, marrying a centrally controlled network that requires permission to use and improve, a Stalinist technology whose centralized choke points are subject to regulation and the vagaries of the telcos. Long after the phone companies have been outcompeted by the pure and open Internet (if such a glorious day comes to pass), the kids of today will be bound by its interface and its conventions.

The sole certainty about the future is its Amishness. We will all bend our brains to suit an interface that we will either have to abandon or be left behind. Choose your interface — and the values it implies — carefully, then, before you wed your thought processes to your fingers’ dance. It may be the one you’re stuck with.

$$$$

Ebooks: Neither E, Nor Books

(Paper for the O’Reilly Emerging Technologies Conference, San Diego, February 12, 2004)

Forematter:

This talk was initially given at the O’Reilly Emerging Technology Conference [ http://conferences.oreillynet.com/et2004/ ], along with a set of slides that, for copyright reasons (ironic!), can’t be released alongside this file. However, you will find, interspersed in this text, notations describing the places where new slides should be loaded, in [square-brackets].

For starters, let me try to summarize the lessons and intuitions I’ve had about ebooks from my release of two novels and most of a short story collection online under a Creative Commons license. A parodist who published a list of alternate titles for the presentations at this event called this talk, “eBooks Suck Right Now,” [eBooks suck right now] and as funny as that is, I don’t think it’s true.

No, if I had to come up with another title for this talk, I’d call it: “Ebooks: You’re Soaking in Them.” [Ebooks: You’re Soaking in Them] That’s because I think that the shape of ebooks to come is almost visible in the way that people interact with text today, and that the job of authors who want to become rich and famous is to come to a better understanding of that shape.

I haven’t come to a perfect understanding. I don’t know what the future of the book looks like. But I have ideas, and I’ll share them with you:

1. Ebooks aren’t marketing. [Ebooks aren’t marketing] OK, so ebooks are marketing: that is to say that giving away ebooks sells more books. Baen Books, who do a lot of series publishing, have found that giving away electronic editions of the previous installments in their series to coincide with the release of a new volume sells the hell out of the new book — and the backlist. And the number of people who wrote to me to tell me about how much they dug the ebook and so bought the paper-book far exceeds the number of people who wrote to me and said, “Ha, ha, you hippie, I read your book for free and now I’m not gonna buy it.” But ebooks shouldn’t be just about marketing: ebooks are a goal unto themselves. In the final analysis, more people will read more words off more screens and fewer words off fewer pages and when those two lines cross, ebooks are gonna have to be the way that writers earn their keep, not the way that they promote the dead-tree editions.

2. Ebooks complement paper books. [Ebooks complement paper books]. Having an ebook is good. Having a paper book is good. Having both is even better. One reader wrote to me and said that he read half my first novel from the bound book, and printed the other half on scrap-paper to read at the beach. Students write to me to say that it’s easier to do their term papers if they can copy and paste their quotations into their word-processors. Baen readers use the electronic editions of their favorite series to build concordances of characters, places and events.

3. Unless you own the ebook, you don’t 0wn the book [Unless you own the ebook, you don’t 0wn the book]. I take the view that the book is a “practice” — a collection of social and economic and artistic activities — and not an “object.” Viewing the book as a “practice” instead of an object is a pretty radical notion, and it raises the question: just what the hell is a book? Good question. I write all of my books in a text-editor [TEXT EDITOR SCREENGRAB] (BBEdit, from Barebones Software — as fine a text-editor as I could hope for). From there, I can convert them into a formatted two-column PDF [TWO-UP SCREENGRAB]. I can turn them into an HTML file [BROWSER SCREENGRAB]. I can turn them over to my publisher, who can turn them into galleys, advanced review copies, hardcovers and paperbacks. I can turn them over to my readers, who can convert them to a bewildering array of formats [DOWNLOAD PAGE SCREENGRAB]. Brewster Kahle’s Internet Bookmobile can convert a digital book into a four-color, full-bleed, perfect-bound, laminated-cover, printed-spine paper book in ten minutes, for about a dollar. Try converting a paper book to a PDF or an HTML file or a text file or a RocketBook or a printout for a buck in ten minutes! It’s ironic, because one of the frequently cited reasons for preferring paper to ebooks is that paper books confer a sense of ownership of a physical object. Before the dust settles on this ebook thing, owning a paper book is going to feel less like ownership than having an open digital edition of the text.

4. Ebooks are a better deal for writers. [Ebooks are a better deal for writers] The compensation for writers is pretty thin on the ground. Amazing Stories, Hugo Gernsback’s original science fiction magazine, paid a couple cents a word. Today, science fiction magazines pay…a couple cents a word. The sums involved are so minuscule, they’re not even insulting: they’re quaint and historical, like the WHISKEY 5 CENTS sign over the bar at a pioneer village. Some writers do make it big, but they’re rounding errors as compared to the total population of sf writers earning some of their living at the trade. Almost all of us could be making more money elsewhere (though we may dream of earning a stephenkingload of money, and of course, no one would play the lotto if there were no winners). The primary incentive for writing has to be artistic satisfaction, egoboo, and a desire for posterity. Ebooks get you that. Ebooks become a part of the corpus of human knowledge because they get indexed by search engines and replicated by the hundreds, thousands or millions. They can be googled.

Even better: they level the playing field between writers and trolls. When Amazon kicked off, many writers got their knickers in a tight and powerful knot at the idea that axe-grinding yahoos were filling the Amazon message-boards with ill-considered slams at their work — for, if a personal recommendation is the best way to sell a book, then certainly a personal condemnation is the best way to not sell a book. Today, the trolls are still with us, but now, the readers get to decide for themselves. Here’s a bit of a review of Down and Out in the Magic Kingdom that was recently posted to Amazon by “A reader from Redwood City, CA”:

[QUOTED TEXT]

> I am really not sure what kind of drugs critics are smoking, or what kind of payola may be involved. But regardless of what Entertainment Weekly says, whatever this newspaper or that magazine says, you shouldn’t waste your money. Download it for free from Corey’s (sic) site, read the first page, and look away in disgust — this book is for people who think Dan Brown’s Da Vinci Code is great writing.

Back in the old days, this kind of thing would have really pissed me off. Axe-grinding, mouth-breathing yahoos, defaming my good name! My stars and mittens! But take a closer look at that damning passage:

[PULL-QUOTE]

> Download it for free from Corey’s site, read the first page

You see that? Hell, this guy is working for me! [ADDITIONAL PULL QUOTES] Someone accuses a writer I’m thinking of reading of paying off Entertainment Weekly to say nice things about his novel, “a surprisingly bad writer,” no less, whose writing is “stiff, amateurish, and uninspired!” I wanna check that writer out. And I can. In one click. And then I can make up my own mind.

You don’t get far in the arts without healthy doses of both ego and insecurity, and the downside of being able to google up all the things that people are saying about your book is that it can play right into your insecurities — “all these people will have it in their minds not to bother with my book because they’ve read the negative interweb reviews!” But the flipside of that is the ego: “If only they’d give it a shot, they’d see how good it is.” And the more scathing the review is, the more likely they are to give it a shot. Any press is good press, so long as they spell your URL right (and even if they spell your name wrong!).

5. Ebooks need to embrace their nature. [Ebooks need to embrace their nature.] The distinctive value of ebooks is orthogonal to the value of paper books, and it revolves around the mix-ability and send-ability of electronic text. The more you constrain an ebook’s distinctive value propositions — that is, the more you restrict a reader’s ability to copy, transport or transform an ebook — the more it has to be valued on the same axes as a paper-book. Ebooks fail on those axes. Ebooks don’t beat paper-books for sophisticated typography, they can’t match them for quality of paper or the smell of the glue. But just try sending a paper book to a friend in Brazil, for free, in less than a second. Or loading a thousand paper books into a little stick of flash-memory dangling from your keychain. Or searching a paper book for every instance of a character’s name to find a beloved passage. Hell, try clipping a pithy passage out of a paper book and pasting it into your sig-file.

6. Ebooks demand a different attention span (but not a shorter one). [Ebooks demand a different attention span (but not a shorter one).] Artists are always disappointed by their audience’s attention-spans. Go back far enough and you’ll find cuneiform etchings bemoaning the current Sumerian go-go lifestyle with its insistence on myths with plotlines and characters and action, not like we had in the old days. As artists, it would be a hell of a lot easier if our audiences were more tolerant of our penchant for boring them. We’d get to explore a lot more ideas without worrying about tarting them up with easy-to-swallow chocolate coatings of entertainment. We like to think of shortened attention spans as a product of the information age, but check this out:

[Nietzsche quote]

> To be sure one thing necessary above all: if one is to practice reading as an art in this way, something needs to be un-learned most thoroughly in these days.

In other words, if my book is too boring, it’s because you’re not paying enough attention. Writers say this stuff all the time, but this quote isn’t from this century or the last. [Nietzsche quote with attribution] It’s from the preface to Nietzsche’s “Genealogy of Morals,” published in 1887.

Yeah, our attention-spans are different today, but they aren’t necessarily shorter. Warren Ellis’s fans managed to hold the storyline for Transmetropolitan [Transmet cover] in their minds for five years while the story trickled out in monthly funnybook installments. J.K. Rowling’s installments in the Harry Potter series get fatter and fatter with each new volume. Entire forests are sacrificed to long-running series fiction like Robert Jordan’s Wheel of Time books, each of which is approximately 20,000 pages long (I may be off by an order of magnitude one way or another here). Sure, presidential debates are conducted in soundbites today and not the days-long oratory extravaganzas of the Lincoln-Douglas debates, but people manage to pay attention to the 24-month-long presidential campaigns from start to finish.

7. We need all the ebooks. [We need all the ebooks] The vast majority of the words ever penned are lost to posterity. No one library collects all the still-extant books ever written and no one person could hope to make a dent in that corpus of written work. None of us will ever read more than the tiniest sliver of human literature. But that doesn’t mean that we can stick with just the most popular texts and get a proper ebook revolution.

For starters, we’re all edge-cases. Sure, we all have the shared desire for the core canon of literature, but each of us wants to complete that collection with different texts that are as distinctive and individualistic as fingerprints. If we all look like we’re doing the same thing when we read, or listen to music, or hang out in a chatroom, that’s because we’re not looking closely enough. The shared-ness of our experience is only present at a coarse level of measurement: once you get into really granular observation, there are as many differences in our “shared” experience as there are similarities.

More than that, though, is the way that a large collection of electronic text differs from a small one: it’s the difference between a single book, a shelf full of books and a library of books. Scale makes things different. Take the Web: none of us can hope to read even a fraction of all the pages on the Web, but by analyzing the link structures that bind all those pages together, Google is able to actually tease out machine-generated conclusions about the relative relevance of different pages to different queries. None of us will ever eat the whole corpus, but Google can digest it for us and excrete the steaming nuggets of goodness that make it the search-engine miracle it is today.

8. Ebooks are like paper books. [Ebooks are like paper books]. To round out this talk, I’d like to go over the ways that ebooks are more like paper books than you’d expect. One of the truisms of retail theory is that purchasers need to come into contact with a good several times before they buy — seven contacts is tossed around as the magic number. That means that my readers have to hear the title, see the cover, pick up the book, read a review, and so forth, seven times, on average, before they’re ready to buy.

There’s a temptation to view downloading a book as comparable to bringing it home from the store, but that’s the wrong metaphor. Some of the time, maybe most of the time, downloading the text of the book is like taking it off the shelf at the store and looking at the cover and reading the blurbs (with the advantage of not having to come into contact with the residual DNA and burger king left behind by everyone else who browsed the book before you). Some writers are horrified at the idea that three hundred thousand copies of my first novel were downloaded and “only” ten thousand or so were sold so far. If it were the case that for every copy sold, thirty were taken home from the store, that would be a horrifying outcome, for sure. But look at it another way: if one out of every thirty people who glanced at the cover of my book bought it, I’d be a happy author. And I am. Those downloads cost me no more than glances at the cover in a bookstore, and the sales are healthy.

We also like to think of physical books as being inherently countable in a way that digital books aren’t (an irony, since computers are damned good at counting things!). This is important, because writers get paid on the basis of the number of copies of their books that sell, so having a good count makes a difference. And indeed, my royalty statements contain precise numbers for copies printed, shipped, returned and sold.

But that’s a false precision. When the printer does a run of a book, it always runs a few extra at the start and finish of the run to make sure that the setup is right and to account for the occasional rip, drop, or spill. The actual total number of books printed is approximately the number of books ordered, but never exactly — if you’ve ever ordered 500 wedding invitations, chances are you received 500-and-a-few back from the printer and that’s why.

And the numbers just get fuzzier from there. Copies are stolen. Copies are dropped. Shipping people get the count wrong. Some copies end up in the wrong box and go to a bookstore that didn’t order them and isn’t invoiced for them and end up on a sale table or in the trash. Some copies are returned as damaged. Some are returned as unsold. Some come back to the store the next morning accompanied by a whack of buyer’s remorse. Some go to the place where the spare sock in the dryer ends up.

The numbers on a royalty statement are actuarial, not actual. They represent a kind of best-guess approximation of the copies shipped, sold, returned and so forth. Actuarial accounting works pretty well: well enough to run the juggernaut banking, insurance, and gambling industries on. It’s good enough for divvying up the royalties paid by musical rights societies for radio airplay and live performance. And it’s good enough for counting how many copies of a book are distributed online or off.

Counts of paper books are differently precise from counts of electronic books, sure: but neither one is inherently countable.

And finally, of course, there’s the matter of selling books. However an author earns her living from her words, printed or encoded, she has as her first and hardest task to find her audience. There are more competitors for our attention than we can possibly reconcile, prioritize or make sense of. Getting a book under the right person’s nose, with the right pitch, is the hardest and most important task any writer faces.

*

I care about books, a lot. I started working in libraries and bookstores at the age of 12 and kept at it for a decade, until I was lured away by the siren song of the tech world. I knew I wanted to be a writer at the age of 12, and now, 20 years later, I have three novels, a short story collection and a nonfiction book out, two more novels under contract, and another book in the works. [BOOK COVERS] I’ve won a major award in my genre, science fiction, [CAMPBELL AWARD] and I’m nominated for another one, the 2003 Nebula Award for best novelette. [NEBULA]

I own a *lot* of books. Easily more than 10,000 of them, in storage on both coasts of the North American continent [LIBRARY LADDER]. I have to own them, since they’re the tools of my trade: the reference works I refer to as a novelist and writer today. Most of the literature I dig is very short-lived: it disappears from the shelf after just a few months, usually for good. Science fiction is inherently ephemeral. [ACE DOUBLES]

Now, as much as I love books, I love computers, too. Computers are fundamentally different from modern books in the same way that printed books are different from monastic Bibles: they are malleable. Time was, a “book” was something produced by many months’ labor by a scribe, usually a monk, on some kind of durable and sexy substrate like foetal lambskin. [ILLUMINATED BIBLE] Gutenberg’s xerox machine changed all that, changed a book into something that could be simply run off a press in a few minutes’ time, on substrate more suitable to ass-wiping than exaltation in a place of honor in the cathedral. The Gutenberg press meant that rather than owning one or two books, a member of the ruling class could amass a library, and that rather than picking only a few subjects for enshrinement in print, a huge variety of subjects could be addressed on paper and handed from person to person. [KAPITAL/TIJUANA BIBLE]

Most new ideas start with a precious few certainties and a lot of speculation. I’ve been doing a bunch of digging for certainties and a lot of speculating lately, and the purpose of this talk is to lay out both categories of ideas.

This all starts with my first novel, Down and Out in the Magic Kingdom [COVER], which came out on January 9, 2003. At that time, there was a lot of talk in my professional circles about, on the one hand, the dismal failure of ebooks, and, on the other, the new and scary practice of ebook “piracy.” [alt.binaries.e-books screengrab] It was strikingly weird that no one seemed to notice that the idea of ebooks as a “failure” was at strong odds with the notion that electronic book “piracy” was worth worrying about: I mean, if ebooks are a failure, then who gives a rat’s ass if intarweb dweebs are trading them on Usenet?

A brief digression here, on the double meaning of “ebooks.” One meaning for that word is “legitimate” ebook ventures, that is to say, rightsholder-authorized editions of the texts of books, released in a proprietary, use-restricted format, sometimes for use on a general-purpose PC and sometimes for use on a special-purpose hardware device like the nuvoMedia Rocketbook [ROCKETBOOK]. The other meaning for ebook is a “pirate” or unauthorized electronic edition of a book, usually made by cutting the binding off of a book and scanning it a page at a time, then running the resulting bitmaps through an optical character recognition app to convert them into ASCII text, to be cleaned up by hand. These books are pretty buggy, full of errors introduced by the OCR. A lot of my colleagues worry that these books also have deliberate errors, created by mischievous book-rippers who cut, add or change text in order to “improve” the work. Frankly, I have never seen any evidence that any book-ripper is interested in doing this, and until I do, I think that this is the last thing anyone should be worrying about.

Back to Down and Out in the Magic Kingdom [COVER]. Well, not yet. I want to convey to you the depth of the panic in my field over ebook piracy, or “bookwarez” as it is known in book-ripper circles. Writers were joining the discussion on alt.binaries.ebooks using assumed names, claiming fear of retaliation from scary hax0r kids who would presumably screw up their credit-ratings in retaliation for being called thieves. My editor, a blogger, hacker and guy-in-charge-of-the-largest-sf-line-in-the-world named Patrick Nielsen Hayden posted to one of the threads in the newsgroup, saying, in part [SCREENGRAB]:

> Pirating copyrighted etext on Usenet and elsewhere is going to happen more and more, for the same reasons that everyday folks make audio cassettes from vinyl LPs and audio CDs, and videocassette copies of store-bought videotapes. Partly it’s greed; partly it’s annoyance over retail prices; partly it’s the desire to Share Cool Stuff (a motivation usually underrated by the victims of this kind of small-time hand-level piracy).
>
> Instantly going to Defcon One over it and claiming it’s morally tantamount to mugging little old ladies in the street will make it kind of difficult to move forward from that position when it doesn’t work. In the 1970s, the record industry shrieked that “home taping is killing music.” It’s hard for ordinary folks to avoid noticing that music didn’t die. But the record industry’s credibility on the subject wasn’t exactly enhanced.

Patrick and I have a long relationship, starting when I was 18 years old and he kicked in toward a scholarship fund to send me to a writers’ workshop, continuing to a fateful lunch in New York in the mid-Nineties when I showed him a bunch of Project Gutenberg texts on my Palm Pilot and inspired him to start licensing Tor’s titles for PDAs [PEANUTPRESS SCREENGRAB], to the turn-of-the-millennium when he bought and then published my first novel (he’s bought three more since — I really like Patrick!).

Right as bookwarez newsgroups were taking off, I was shocked silly by legal action by one of my colleagues against AOL/Time-Warner for carrying the alt.binaries.ebooks newsgroup. This writer alleged that AOL should have a duty to remove this newsgroup, since it carried so many infringing files, and that its failure to do so made it a contributory infringer, and so liable for the incredibly stiff penalties afforded by our newly minted copyright laws like the No Electronic Theft Act and the loathsome Digital Millennium Copyright Act or DMCA.

Now there was a scary thought: there were people out there who thought the world would be a better place if ISPs were given the duty of actively policing and censoring the websites and newsfeeds their customers had access to, including a requirement that ISPs needed to determine, all on their own, what was an unlawful copyright infringement — something more usually left up to judges in the light of extensive amicus briefings from esteemed copyright scholars [WIND DONE GONE GRAPHIC].

This was a stupendously dumb idea, and it offended me down to my boots. Writers are supposed to be advocates of free expression, not censorship. It seemed that some of my colleagues loved the First Amendment, but they were reluctant to share it with the rest of the world.

Well, dammit, I had a book coming out, and it seemed to be an opportunity to try to figure out a little more about this ebook stuff. On the one hand, ebooks were a dismal failure. On the other hand, there were more books posted to alt.binaries.ebooks every day.

This leads me into the two certainties I have about ebooks:

1. More people are reading more words off more screens every day [GRAPHIC]

2. Fewer people are reading fewer words off fewer pages every day [GRAPHIC]

These two certainties raised a lot of questions.

[CHART: EBOOK FAILINGS]

* Screen resolutions are too low to effectively replace paper

* People want to own physical books because of their visceral appeal (often this is accompanied by a little sermonette on how good books smell, or how good they look on a bookshelf, or how evocative an old curry stain in the margin can be)

* You can’t take your ebook into the tub

* You can’t read an ebook without power and a computer

* File-formats go obsolete, paper has lasted for a long time

None of these seemed like very good explanations for the “failure” of ebooks to me. If screen resolutions are too low to replace paper, then how come everyone I know spends more time reading off a screen every year, up to and including my sainted grandmother (geeks have a really crappy tendency to argue that certain technologies aren’t ready for primetime because their grandmothers won’t use them — well, my grandmother sends me email all the time. She types 70 words per minute, and loves to show off grandsonular email to her pals around the pool at her Florida retirement condo)?

The other arguments were a lot more interesting, though. It seemed to me that electronic books are different from paper books, and have different virtues and failings. Let’s think a little about what the book has gone through in years gone by. This is interesting because the history of the book is the history of the Enlightenment, the Reformation, the Pilgrims, and, ultimately the colonizing of the Americas and the American Revolution.

Broadly speaking, there was a time when books were hand-printed on rare leather by monks. The only people who could read them were priests, who got a regular eyeful of the really cool cartoons the monks drew in the margins. The priests read the books aloud, in Latin [LATIN BIBLE] (to a predominantly non-Latin-speaking audience) in cathedrals, wreathed in pricey incense that rose from censers swung by altar boys.

Then Johannes Gutenberg invented the printing press. Martin Luther turned that press into a revolution. [LUTHER BIBLE] He printed Bibles in languages that non-priests could read, and distributed them to normal people who got to read the word of God all on their own. The rest, as they say, is history.

Here are some interesting things to note about the advent of the printing press:

[CHART: LUTHER VERSUS THE MONKS]

* Luther Bibles lacked the manufacturing quality of the illuminated Bibles. They were comparatively cheap and lacked the typographical expressiveness that a really talented monk could bring to bear when writing out the word of God

* Luther Bibles were utterly unsuited to the traditional use-case for Bibles. A good Bible was supposed to reinforce the authority of the man at the pulpit. It needed heft, it needed impressiveness, and most of all, it needed rarity.

* The user-experience of Luther Bibles sucked. There was no incense, no altar boys, and who (apart from the priesthood) knew that reading was so friggin’ hard on the eyes?

* Luther Bibles were a lot less trustworthy than the illuminated numbers. Anyone with a press could run one off, subbing in any apocryphal text he wanted — and who knew how accurate that translation was? Monks had an entire Papacy behind them, running a quality-assurance operation that had stood Europe in good stead for centuries.

In the late nineties, I went to conferences where music execs patiently explained that Napster was doomed, because you didn’t get any cover-art or liner-notes with it, you couldn’t know if the rip was any good, and sometimes the connection would drop mid-download. I’m sure that many Cardinals espoused the points raised above with equal certainty.

What the record execs and the cardinals missed was all the ways that Luther Bibles kicked ass:

[CHART: WHY LUTHER BIBLES KICKED ASS]

* They were cheap and fast. Loads of people could acquire them without having to subject themselves to the authority and approval of the Church

* They were in languages that non-priests could read. You no longer had to take the Church’s word for it when its priests explained what God really meant

* They birthed a printing-press ecosystem in which lots of books flourished. New kinds of fiction, poetry, politics, scholarship and so on were all enabled by the printing presses whose initial popularity was spurred by Luther’s ideas about religion.

Note that all of these virtues are orthogonal to the virtues of a monkish Bible. That is, none of the things that made the Gutenberg press a success were the things that made monk-Bibles a success.

By the same token, the reasons to love ebooks have precious little to do with the reasons to love paper books.

[CHART: WHY EBOOKS KICK ASS]

* They are easy to share. Divine Secrets of the Ya-Ya Sisterhood went from a midlist title to a bestseller by being passed from hand to hand by women in reading circles. Slashdorks and other netizens have social lives as rich as reading-circlites’, but they don’t ever get to see each other face to face; the only kind of book they can pass from hand to hand is an ebook. What’s more, the single factor most correlated with a purchase is a recommendation from a friend — getting a book recommended by a pal is more likely to sell you on it than having read and enjoyed the preceding volume in a series!

* They are easy to slice and dice. This is where the Mac evangelist in me comes out — minority platforms matter. It’s a truism of the Napsterverse that most of the files downloaded are bog-standard top-40 tracks, like 90 percent or so, and I believe it. We all like popular music. That’s why it’s popular. But the interesting thing is the other ten percent. Bill Gates told the New York Times that Microsoft lost the search wars by doing “a good job on the 80 percent of common queries and ignor[ing] the other stuff. But it’s the remaining 20 percent that counts, because that’s where the quality perception is.” Why did Napster captivate so many of us? Not because it could get us the top-40 tracks that we could hear just by snapping on the radio: it was because 80 percent of the music ever recorded wasn’t available for sale anywhere in the world, and in that 80 percent were all the songs that had ever touched us, all the earworms that had been lodged in our hindbrains, all the stuff that made us smile when we heard it. Those songs are different for all of us, but they share the trait of making the difference between a compelling service and, well, top-40 Clearchannel radio programming. It was the minority of tracks that appealed to the majority of us. By the same token, the malleability of electronic text means that it can be readily repurposed: you can throw it on a webserver or convert it to a format for your favorite PDA; you can ask your computer to read it aloud or you can search the text for a quotation to cite in a book report or to use in your sig.
In other words, most people who download the book do so for the predictable reason, and in a predictable format — say, to sample a chapter in the HTML format before deciding whether to buy the book — but the thing that differentiates a boring e-text experience from an exciting one is the minority use — printing out a couple chapters of the book to bring to the beach rather than risk getting the hardcopy wet and salty.

Tool-makers and software designers are increasingly aware of the notion of “affordances” in design. You can bash a nail into the wall with any heavy, heftable object from a rock to a hammer to a cast-iron skillet. However, there’s something about a hammer that cries out for nail-bashing: it has affordances that tilt its holder towards swinging it. And, as we all know, when all you have is a hammer, everything starts to look like a nail.

The affordance of a computer — the thing it’s designed to do — is to slice-and-dice collections of bits. The affordance of the Internet is to move bits at very high speed around the world at little-to-no cost. It follows from this that the center of the ebook experience is going to involve slicing and dicing text and sending it around.

Copyright lawyers have a word for these activities: infringement. That’s because copyright gives creators a near-total monopoly over copying and remixing of their work, pretty much forever (theoretically, copyright expires, but in actual practice, copyright gets extended every time the early Mickey Mouse cartoons are about to enter the public domain, because Disney swings a very big stick on the Hill).

This is a huge problem. The biggest possible problem. Here’s why:

[CHART: HOW BROKEN COPYRIGHT SCREWS EVERYONE]

* Authors freak out. Authors have been schooled by their peers that strong copyright is the only thing that keeps them from getting savagely rogered in the marketplace. This is pretty much true: it’s strong copyright that often defends authors from their publishers’ worst excesses. However, it doesn’t follow that strong copyright protects you from your readers.

* Readers get indignant over being called crooks. Seriously. You’re a small businessperson. Readers are your customers. Calling them crooks is bad for business.

* Publishers freak out. Publishers freak out, because they’re in the business of grabbing as much copyright as they can and hanging onto it for dear life because, dammit, you never know. This is why science fiction magazines try to trick writers into signing over improbable rights for things like theme park rides and action figures based on their work — it’s also why literary agents are now asking for copyright-long commissions on the books they represent: copyright covers so much ground and takes so long to shake off, who wouldn’t want a piece of it?

* Liability goes through the roof. Copyright infringement, especially on the Net, is a supercrime. It carries penalties of $150,000 per infringement, and aggrieved rightsholders and their representatives have all kinds of special powers, like the ability to force an ISP to turn over your personal information before showing evidence of your alleged infringement to a judge. This means that anyone who suspects that he might be on the wrong side of copyright law is going to be terribly risk-averse: publishers non-negotiably force their authors to indemnify them from infringement claims and go one better, forcing writers to prove that they have “cleared” any material they quote, even in the case of brief fair-use quotations, like song-titles at the opening of chapters. The result is that authors end up assuming potentially life-destroying liability, are chilled from quoting material around them, and are scared off of public domain texts because an honest mistake about the public-domain status of a work carries such a terrible price.

* Posterity vanishes. In the Eldred v. Ashcroft Supreme Court hearing last year, the court found that 98 percent of the works in copyright are no longer earning money for anyone, but that figuring out who these old works belong to with the degree of certainty that you’d want when one mistake means total economic apocalypse would cost more than you could ever possibly earn on them. That means that 98 percent of works will largely expire long before the copyright on them does. Today, the names of science fiction’s ancestral founders — Mary Shelley, Arthur Conan Doyle, Edgar Allan Poe, Jules Verne, HG Wells — are still known, their work still a part of the discourse. Their spiritual descendants from Hugo Gernsback onward may not be so lucky — if their work continues to be “protected” by copyright, it might just vanish from the face of the earth before it reverts to the public domain.

This isn’t to say that copyright is bad, but that there’s such a thing as good copyright and bad copyright, and that sometimes, too much good copyright is a bad thing. It’s like chilis in soup: a little goes a long way, and too much spoils the broth.

From the Luther Bible to the first phonorecords, from radio to the pulps, from cable to MP3, the world has shown that its first preference for new media is its “democratic-ness” — the ease with which it can be reproduced.

(And please, before we get any further, forget all that business about how the Internet’s copying model is more disruptive than the technologies that preceded it. For Christ’s sake, the Vaudeville performers who sued Marconi for inventing the radio had to go from a regime where they had one hundred percent control over who could get into the theater and hear them perform to a regime where they had zero percent control over who could build or acquire a radio and tune into a recording of them performing. For that matter, look at the difference between a monkish Bible and a Luther Bible — next to that phase-change, Napster is peanuts)

Back to democratic-ness. Every successful new medium has traded off its artifact-ness — the degree to which it was populated by bespoke hunks of atoms, cleverly nailed together by master craftspeople — for ease of reproduction. Piano rolls weren’t as expressive as good piano players, but they scaled better — as did radio broadcasts, pulp magazines, and MP3s. Liner notes, hand illumination and leather bindings are nice, but they pale in comparison to the ability of an individual to actually get a copy of her own.

Which isn’t to say that old media die. Artists still hand-illuminate books; master pianists still stride the boards at Carnegie Hall, and the shelves burst with tell-all biographies of musicians that are richer in detail than any liner-notes booklet. The thing is, when all you’ve got is monks, every book takes on the character of a monkish Bible. Once you invent the printing press, all the books that are better-suited to movable type migrate into that new form. What’s left behind are those items that are best suited to the old production scheme: the plays that need to be plays, the books that are especially lovely on creamy paper stitched between covers, the music that is most enjoyable performed live and experienced in a throng of humanity.

Increased democratic-ness translates into decreased control: it’s a lot harder to control who can copy a book once there’s a photocopier on every corner than it is when you need a monastery and several years to copy a Bible. And that decreased control demands a new copyright regime that rebalances the rights of creators with their audiences.

For example, when the VCR was invented, the courts affirmed a new copyright exemption for time-shifting; when the radio was invented, the Congress granted an anti-trust exemption to the record labels in order to secure a blanket license; when cable TV was invented, the government just ordered the broadcasters to sell the cable-operators access to programming at a fixed rate.

Copyright is perennially out of date, because its latest rev was generated in response to the last generation of technology. The temptation to treat copyright as though it came down off the mountain on two stone tablets (or worse, as “just like” real property) is deeply flawed, since, by definition, current copyright only considers the last generation of tech.

So, are bookwarez in violation of copyright law? Duh. Is this the end of the world? Duh. If the Catholic church can survive the printing press, science fiction will certainly weather the advent of bookwarez.

*

Lagniappe [Lagniappe]

We’re almost done here, but there’s one more thing I’d like to do before I get off the stage. [Lagniappe: an unexpected bonus or extra] Think of it as a “lagniappe” — a little something extra to thank you for your patience.

About a year ago, I released my first novel, Down and Out in the Magic Kingdom, on the net, under the terms of the most restrictive Creative Commons license available. All it allowed my readers to do was send around copies of the book. I was cautiously dipping my toe into the water, though at the time, it felt like I was taking a plunge.

Now I’m going to take a plunge. Today, I will re-license the text of Down and Out in the Magic Kingdom under a Creative Commons “Attribution-ShareAlike-Derivs-Noncommercial” license [HUMAN READABLE LICENSE], which means that as of today, you have my blessing to create derivative works from my first book. You can make movies, audiobooks, translations, fanfiction, slash fiction (God help us) [GEEK HIERARCHY], furry slash fiction [GEEK HIERARCHY DETAIL], poetry, t-shirts, you name it, with two provisos: one, you have to allow everyone else to rip, mix and burn your creations in the same way you’re hacking mine; and two, you’ve got to do it noncommercially.

The sky didn’t fall when I dipped my toe in. Let’s see what happens when I get in up to my knees.

The text with the new license will be online before the end of the day. Check craphound.com/down for details.

Oh, and I’m also releasing the text of this speech under a Creative Commons Public Domain dedication, [Public domain dedication] giving it away to the world to do with as it sees fit. It’ll be linked off my blog, Boing Boing, before the day is through.

$$$$

Free(konomic) E-books

(Originally published in Locus Magazine, September 2007)

Can giving away free electronic books really sell printed books? I think so. As I explained in my March column (“You Do Like Reading Off a Computer Screen”), I don’t believe that most readers want to read long-form works off a screen, and I don’t believe that they will ever want to read long-form works off a screen. As I say in the column, the problem with reading off a screen isn’t resolution, eyestrain, or compatibility with reading in the bathtub: it’s that computers are seductive, they tempt us to do other things, making concentrating on a long-form work impractical.

Sure, some readers have the cognitive quirk necessary to read full-length works off screens, or are motivated to do so by other circumstances (such as being so broke that they could never hope to buy the printed work). Those readers are the exception, though, so the rational question isn’t, “Will giving away free e-books cost me sales?” but rather, “Will giving away free e-books win me more sales than it costs me?”

This is a very hard proposition to evaluate in a quantitative way. Books aren’t lattes or cable-knit sweaters: each book sells (or doesn’t) due to factors that are unique to that title. It’s hard to imagine an empirical, controlled study in which two “equivalent” books are published, and one is also available as a free download, the other not, and the difference calculated as a means of “proving” whether e-books hurt or help sales in the long run.

I’ve released all of my novels as free downloads simultaneous with their print publication. If I had a time machine, I could re-release them without the free downloads and compare the royalty statements. Lacking such a device, I’m forced to draw conclusions from qualitative, anecdotal evidence, and I’ve collected plenty of that:

* Many writers have tried free e-book releases to tie in with the print release of their works. To the best of my knowledge, every writer who’s tried this has repeated the experiment with future works, suggesting a high degree of satisfaction with the outcomes

* A writer friend of mine had his first novel come out at the same time as mine. We write similar material and are often compared to one another by critics and reviewers. My first novel had a free download, his didn’t. We compared sales figures and I was doing substantially better than him — he subsequently convinced his publisher to let him follow suit

* Baen Books has a pretty good handle on expected sales for new volumes in long-running series; having sold many such series, they have lots of data to use in sales estimates. If Volume N sells X copies, we expect Volume N+1 to sell Y copies. They report that they have seen a measurable uptick in sales following from free e-book releases of previous and current volumes

* David Blackburn, a Harvard PhD candidate in economics, published a paper in 2004 in which he calculated that, for music, “piracy” results in a net increase in sales for all titles in the 75th percentile and lower; negligible change in sales for the “middle class” of titles between the 75th percentile and the 97th percentile; and a small drag on the “super-rich” in the 97th percentile and higher. Publisher Tim O’Reilly describes this as “piracy’s progressive taxation,” apportioning a small wealth-redistribution to the vast majority of works, no net change to the middle, and a small cost on the richest few

* Speaking of Tim O’Reilly, he has just published a detailed, quantitative study of the effect of free downloads on a single title. O’Reilly Media published Asterisk: The Future of Telephony in November 2005, simultaneously releasing the book as a free download. By March 2007, they had a pretty detailed picture of the sales-cycle of this book — and, thanks to industry standard metrics like those provided by Bookscan, they could compare it, apples-to-apples style, against the performance of competing books covering the same subject. O’Reilly’s conclusion: downloads didn’t cause a decline in sales, and appear to have resulted in a lift in sales. This is particularly noteworthy because the book in question is a technical reference work, exclusively consumed by computer programmers who are by definition disposed to read off screens. Also, this is a reference work and therefore is more likely to be useful in electronic form, where it can be easily searched

* In my case, my publishers have gone back to press repeatedly for my books. The print runs for each edition are modest — I’m a midlist writer in a world with a shrinking midlist — but publishers print what they think they can sell, and they’re outselling their expectations

* The new opportunities arising from my free downloads are so numerous as to be uncountable — foreign rights deals, comic book licenses, speaking engagements, article commissions — I’ve made more money in these secondary markets than I have in royalties

* More anecdotes: I’ve had literally thousands of people approach me by e-mail and at signings and cons to say, “I found your work online for free, got hooked, and started buying it.” By contrast, I’ve had all of five e-mails from people saying, “Hey, idiot, thanks for the free book, now I don’t have to buy the print edition, ha ha!”

Many of us have assumed, a priori, that electronic books substitute for print books. While I don’t have controlled, quantitative data to refute the proposition, I do have plenty of experience with this stuff, and all that experience leads me to believe that giving away my books is selling the hell out of them.

More importantly, the free e-book skeptics have no evidence to offer in support of their position — just hand-waving and dark muttering about a mythological future when book-lovers give up their printed books for electronic book-readers (as opposed to the much more plausible future where book lovers go on buying their fetish objects and carry books around on their electronic devices).

I started giving away e-books after I witnessed the early days of the “bookwarez” scene, wherein fans cut the binding off their favorite books, scanned them, ran them through optical character recognition software, and manually proofread them to eliminate the digitization errors. These fans were easily spending 80 hours to rip their favorite books, and they were only ripping their favorite books, books they loved and wanted to share. (The 80-hour figure comes from my own attempt to do this — I’m sure that rippers get faster with practice.)

I thought to myself that 80 hours’ free promotional effort would be a good thing to have at my disposal when my books entered the market. What if I gave my readers clean, canonical electronic editions of my works, saving them the bother of ripping them, and so freed them up to promote my work to their friends?

After all, it’s not like there’s any conceivable way to stop people from putting books on scanners if they really want to. Scanners aren’t going to get more expensive or slower. The Internet isn’t going to get harder to use. Better to confront this challenge head on, turn it into an opportunity, than to rail against the future (I’m a science fiction writer — tuning into the future is supposed to be my metier).

The timing couldn’t have been better. Just as my first novel was being published, a new, high-tech project for promoting sharing of creative works launched: the Creative Commons project (CC). CC offers a set of tools that make it easy to mark works with whatever freedoms the author wants to give away. CC launched in 2003 and today, more than 160,000,000 works have been released under its licenses.

My next column will go into more detail on what CC is, what licenses it offers, and how to use them — but for now, check them out online at creativecommons.org.

$$$$

The Progressive Apocalypse and Other Futurismic Delights

(Originally published in Locus Magazine, July 2007)

Of course, science fiction is a literature of the present. Many’s the science fiction writer who uses the future as a warped mirror for reflecting back the present day, angled to illustrate the hidden strangeness buried by our invisible assumptions: Orwell turned 1948 into Nineteen Eighty-Four. But even when the fictional future isn’t a parable about the present day, it is necessarily a creation of the present day, since it reflects the present day biases that infuse the author. Hence Asimov’s Foundation, a New Deal-esque project to think humanity out of its tribulations through social interventionism.

Bold SF writers eschew the future altogether, embracing a futuristic account of the present day. William Gibson’s forthcoming Spook Country is an act of “speculative presentism,” a book so futuristic it could only have been set in 2006, a book that exploits retrospective historical distance to let us glimpse just how alien and futuristic our present day is.

Science fiction writers aren’t the only people in the business of predicting the future. Futurists — consultants, technology columnists, analysts, venture capitalists, and entrepreneurial pitchmen — spill a lot of ink, phosphors, and caffeinated hot air in describing a vision for a future where we’ll get more and more of whatever it is they want to sell us or warn us away from. Tomorrow will feature faster, cheaper processors, more Internet users, ubiquitous RFID tags, radically democratic political processes dominated by bloggers, massively multiplayer games whose virtual economies dwarf the physical economy.

There’s a lovely neologism to describe these visions: “futurismic.” Futurismic media is that which depicts futurism, not the future. It is often self-serving — think of the self-lacing Nikes in Back to the Future Part II — and it generally doesn’t hold up well to scrutiny.

SF films and TV are great fonts of futurismic imagery: R2D2 is a fully conscious AI, can hack the firewall of the Death Star, and is equipped with a range of holographic projectors and antipersonnel devices — but no one has installed a $15 sound card and some text-to-speech software on him, so he has to whistle like Harpo Marx. Or take the Starship Enterprise, with a transporter capable of constituting matter from digitally stored plans, and radios that can breach the speed of light.

The non-futurismic version of NCC-1701 would be the size of a softball (or whatever the minimum size for a warp drive, transporter, and subspace radio would be). It would zip around the galaxy at FTL speeds under remote control. When it reached an interesting planet, it would beam a stored copy of a landing party onto the surface, and when their mission was over, it would beam them back into storage, annihilating their physical selves until they reached the next stopping point. If a member of the landing party were eaten by a green-skinned interspatial hippie or giant toga-wearing galactic tyrant, that member would be recovered from backup by the transporter beam. Hell, the entire landing party could consist of multiple copies of the most effective crewmember onboard: no redshirts, just a half-dozen instances of Kirk operating in clonal harmony.

Futurism has a psychological explanation, as recounted in Harvard clinical psych prof Daniel Gilbert’s 2006 book, Stumbling on Happiness. Our memories and our projections of the future are necessarily imperfect. Our memories consist of those observations our brains have bothered to keep records of, woven together with inference and whatever else is lying around handy when we try to remember something. Ask someone who’s eating a great lunch how breakfast was, and odds are she’ll tell you it was delicious. Ask the same question of someone eating rubbery airplane food, and he’ll tell you his breakfast was awful. We weave the past out of our imperfect memories and our observable present.

We make the future in much the same way: we use reasoning and evidence to predict what we can, and whenever we bump up against uncertainty, we fill the void with the present day. Hence the injunction on women soldiers in the future of Starship Troopers, or the bizarre, glassed-over “Progressland” city diorama at the end of the 1964 World’s Fair exhibit The Carousel of Progress, which Disney built for GE.

Lapsarianism — the idea of a paradise lost, a fall from grace that makes each year worse than the last — is the predominant future feeling for many people. It’s easy to see why: an imperfectly remembered golden childhood gives way to the worries of adulthood and physical senescence. Surely the world is getting worse: nothing tastes as good as it did when we were six, everything hurts all the time, and our matured gonads drive us into frenzies of bizarre, self-destructive behavior.

Lapsarianism dominates the Abrahamic faiths. I have an Orthodox Jewish friend whose tradition holds that each generation of rabbis is necessarily less perfect than the rabbis that came before, since each generation is more removed from the perfection of the Garden. Therefore, no rabbi is allowed to overturn any of his forebears’ wisdom, since they are all, by definition, smarter than him.

The natural endpoint of Lapsarianism is apocalypse. If things get worse, and worse, and worse, eventually they’ll just run out of worseness. Eventually, they’ll bottom out, a kind of rotten death of the universe when Lapsarian entropy hits the nadir and takes us all with it.

Running counter to Lapsarianism is progressivism: the Enlightenment ideal of a world of great people standing on the shoulders of giants. Each of us contributes to improving the world’s storehouse of knowledge (and thus its capacity for bringing joy to all of us), and our descendants and proteges take our work and improve on it. The very idea of “progress” runs counter to the idea of Lapsarianism and the fall: it is the idea that we, as a species, are falling in reverse, combing back the wild tangle of entropy into a neat, tidy braid.

Of course, progress must also have a boundary condition — if only because we eventually run out of imaginary ways that the human condition can improve. And science fiction has a name for the upper bound of progress, a name for the progressive apocalypse:

We call it the Singularity.

Vernor Vinge’s Singularity takes place when our technology reaches a stage that allows us to “upload” our minds into software, run them at faster, hotter speeds than our neurological wetware substrate allows for, and create multiple, parallel instances of ourselves. After the Singularity, nothing is predictable because everything is possible. We will cease to be human and become (as the title of Rudy Rucker’s next novel would have it) Postsingular.

The Singularity is what happens when we have so much progress that we run out of progress. It’s the apocalypse that ends the human race in rapture and joy. Indeed, Ken MacLeod calls the Singularity “the rapture of the nerds,” an apt description for the mirror-world progressive version of the Lapsarian apocalypse.

At the end of the day, both progress and the fall from grace are illusions. The central thesis of Stumbling on Happiness is that human beings are remarkably bad at predicting what will make us happy. Our predictions are skewed by our imperfect memories and our capacity for filling the future with the present day.

The future is gnarlier than futurism. NCC-1701 probably wouldn’t send out transporter-equipped drones — instead, it would likely find itself on missions whose ethos, mores, and rationale are largely incomprehensible to us, and so obvious to its crew that they couldn’t hope to explain them.

Science fiction is the literature of the present, and the present is the only era that we can hope to understand, because it’s the only era that lets us check our observations and predictions against reality.

$$$$

When the Singularity is More Than a Literary Device: An Interview with Futurist-Inventor Ray Kurzweil

(Originally published in Asimov’s Science Fiction Magazine, June 2005)

It’s not clear to me whether the Singularity is a technical belief system or a spiritual one.

The Singularity — a notion that’s crept into a lot of skiffy, and whose most articulate in-genre spokesmodel is Vernor Vinge — describes the black hole in history that will be created at the moment when human intelligence can be digitized. When the speed and scope of our cognition is hitched to the price-performance curve of microprocessors, our “progress” will double every eighteen months, and then every twelve months, and then every ten, and eventually, every five seconds.

Singularities are, literally, holes in space whence no information can emerge, and so SF writers occasionally mutter about how hard it is to tell a story set after the information Singularity. Everything will be different. What it means to be human will be so different that what it means to be in danger, or happy, or sad, or any of the other elements that make up the squeeze-and-release tension in a good yarn will be unrecognizable to us pre-Singletons.

It’s a neat conceit to write around. I’ve committed Singularity a couple of times, usually in collaboration with gonzo Singleton Charlie Stross, the mad antipope of the Singularity. But those stories have the same relation to futurism as romance novels do to love: a shared jumping-off point, but radically different morphologies.

Of course, the Singularity isn’t just a conceit for noodling with in the pages of the pulps: it’s the subject of serious-minded punditry, futurism, and even science.

Ray Kurzweil is one such pundit-futurist-scientist. He’s a serial entrepreneur who founded successful businesses that advanced the fields of optical character recognition (machine-reading) software, text-to-speech synthesis, synthetic musical instrument simulation, computer-based speech recognition, and stock-market analysis. He cured his own Type-II diabetes through a careful review of the literature and the judicious application of first principles and reason. To a casual observer, Kurzweil appears to be the star of some kind of Heinlein novel, stealing fire from the gods and embarking on a quest to bring his maverick ideas to the public despite the dismissals of the establishment, getting rich in the process.

Kurzweil believes in the Singularity. In his 1990 manifesto, “The Age of Intelligent Machines,” Kurzweil persuasively argued that we were on the brink of meaningful machine intelligence. A decade later, he continued the argument in a book called The Age of Spiritual Machines, whose most audacious claim is that the world’s computational capacity has been slowly doubling since the crust first cooled (and before!), and that the doubling interval has been growing shorter and shorter with each passing year, so that now we see it reflected in the computer industry’s Moore’s Law, which predicts that microprocessors will get twice as powerful for half the cost about every eighteen months. The breathtaking sweep of this trend has an obvious conclusion: computers more powerful than people; more powerful than we can comprehend.

Now Kurzweil has published two more books, The Singularity Is Near: When Humans Transcend Biology (Viking, Spring 2005) and Fantastic Voyage: Live Long Enough to Live Forever (with Terry Grossman, Rodale, November 2004). The former is a technological roadmap for creating the conditions necessary for ascent into Singularity; the latter is a book about life-prolonging technologies that will assist baby-boomers in living long enough to see the day when technological immortality is achieved.

See what I meant about his being a Heinlein hero?

I still don’t know if the Singularity is a spiritual or a technological belief system. It has all the trappings of spirituality, to be sure. If you are pure and kosher, if you live right and if your society is just, then you will live to see a moment of Rapture when your flesh will slough away leaving nothing behind but your ka, your soul, your consciousness, to ascend to an immortal and pure state.

I wrote a novel called Down and Out in the Magic Kingdom where characters could make backups of themselves and recover from them if something bad happened, like catching a cold or being assassinated. It raises a lot of existential questions, most prominently: are you still you when you’ve been restored from backup?

The traditional AI answer is the Turing Test, invented by Alan Turing, the gay pioneer of cryptography and artificial intelligence who was forced by the British government to take hormone treatments to “cure” him of his homosexuality, culminating in his suicide in 1954. Turing cut through the existentialism about measuring whether a machine is intelligent by proposing a parlor game: a computer sits behind a locked door with a chat program, and a person sits behind another locked door with his own chat program, and they both try to convince a judge that they are real people. If the computer fools a human judge into thinking that it’s a person, then to all intents and purposes, it’s a person.

So how do you know if the backed-up you that you’ve restored into a new body — or a jar with a speaker attached to it — is really you? Well, you can ask it some questions, and if it answers the same way that you do, you’re talking to a faithful copy of yourself.

Sounds good. But the me who sent his first story into Asimov’s seventeen years ago couldn’t answer the question, “Write a story for Asimov’s” the same way the me of today could. Does that mean I’m not me anymore?

Kurzweil has the answer.

“If you follow that logic, then if you were to take me ten years ago, I could not pass for myself in a Ray Kurzweil Turing Test. But once the requisite uploading technology becomes available a few decades hence, you could make a perfect-enough copy of me, and it would pass the Ray Kurzweil Turing Test. The copy doesn’t have to match the quantum state of my every neuron, either: if you meet me the next day, I’d pass the Ray Kurzweil Turing Test. Nevertheless, none of the quantum states in my brain would be the same. There are quite a few changes that each of us undergoes from day to day; we don’t closely examine the assumption that we are the same person.

“We gradually change our pattern of atoms and neurons but we very rapidly change the particles the pattern is made up of. We used to think that in the brain — the physical part of us most closely associated with our identity — cells change very slowly, but it turns out that the components of the neurons, the tubules and so forth, turn over in only days. I’m a completely different set of particles from what I was a week ago.

“Consciousness is a difficult subject, and I’m always surprised by how many people talk about consciousness routinely as if it could be easily and readily tested scientifically. But we can’t postulate a consciousness detector that does not have some assumptions about consciousness built into it.

“Science is about objective third party observations and logical deductions from them. Consciousness is about first-person, subjective experience, and there’s a fundamental gap there. We live in a world of assumptions about consciousness. We share the assumption that other human beings are conscious, for example. But that breaks down when we go outside of humans, when we consider, for example, animals. Some say only humans are conscious and animals are instinctive and machinelike. Others see humanlike behavior in an animal and consider the animal conscious, but even these observers don’t generally attribute consciousness to animals that aren’t humanlike.

“When machines are complex enough to have responses recognizable as emotions, those machines will be more humanlike than animals.”

The Kurzweil Singularity goes like this: computers get better and smaller. Our ability to measure the world gains precision and grows ever cheaper. Eventually, we can measure the world inside the brain and make a copy of it in a computer that’s as fast and complex as a brain, and voila, intelligence.

Here in the twenty-first century we like to view ourselves as ambulatory brains, plugged into meat-puppets that lug our precious grey matter from place to place. We tend to think of that grey matter as transcendently complex, and we think of it as being the bit that makes us us.

But brains aren’t that complex, Kurzweil says. Already, we’re starting to unravel their mysteries.

“We seem to have found one area of the brain closely associated with higher-level emotions, the spindle cells, deeply embedded in the brain. There are tens of thousands of them, spanning the whole brain (maybe eighty thousand in total), which is an incredibly small number. Babies don’t have any, most animals don’t have any, and they likely only evolved over the last million years or so. Some of the high-level emotions that are deeply human come from these.

“Turing had the right insight: base the test for intelligence on written language. Turing Tests really work. A novel is based on language: with language you can conjure up any reality, much more so than with images. Turing almost lived to see computers doing a good job of performing in fields like math, medical diagnosis and so on, but those tasks were easier for a machine than demonstrating even a child’s mastery of language. Language is the true embodiment of human intelligence.”

If we’re not so complex, then it’s only a matter of time until computers are more complex than us. When that comes, our brains will be model-able in a computer and that’s when the fun begins. That’s the thesis of Spiritual Machines, which even includes a (Heinlein-style) timeline leading up to this day.

Now, it may be that a human brain contains n logic-gates and runs at x cycles per second and stores z petabytes, and that n and x and z are all within reach. It may be that we can take a brain apart and record the position and relationships of all the neurons and sub-neuronal elements that constitute a brain.

But there are also a nearly infinite number of ways of modeling a brain in a computer, and only a finite (or possibly nonexistent) fraction of that space will yield a conscious copy of the original meat-brain. Science fiction writers usually hand-wave this step: in Heinlein’s The Moon Is a Harsh Mistress, the gimmick is that once the computer becomes complex enough, with enough “random numbers,” it just wakes up.

Computer programmers are a little more skeptical. Computers have never been known for their skill at programming themselves — they tend to be no smarter than the people who write their software.

But there are techniques for getting computers to program themselves, based on evolution and natural selection. A programmer creates a system that spits out lots — thousands or even millions — of randomly generated programs. Each one is given the opportunity to perform a computational task (say, sorting a list of numbers from greatest to least) and the ones that solve the problem best are kept aside while the others are erased. Now the survivors are used as the basis for a new generation of randomly mutated descendants, each based on elements of the code that preceded them. By running many instances of a randomly varied program at once, and by culling the least successful and regenerating the population from the winners very quickly, it is possible to evolve effective software that performs as well or better than the code written by human authors.

Indeed, evolutionary computing is a promising and exciting field that’s realizing real returns through cool offshoots like “ant colony optimization” and similar approaches that are showing good results in fields as diverse as piloting military UAVs and efficiently provisioning car-painting robots at automotive plants.

So if you buy Kurzweil’s premise that computation is getting cheaper and more plentiful than ever, then why not just use evolutionary algorithms to evolve the best way to model a scanned-in human brain such that it “wakes up” like Heinlein’s Mike computer?

Indeed, this is the crux of Kurzweil’s argument in Spiritual Machines: if we have computation to spare and a detailed model of a human brain, we need only combine them and out will pop the mechanism whereby we may upload our consciousness to digital storage media and transcend our weak and bothersome meat forever.

But it’s a cheat. Evolutionary algorithms depend on the same mechanisms as real-world evolution: heritable variation of candidates and a system that culls the least-suitable candidates. This latter — the fitness-factor that determines which individuals in a cohort breed and which vanish — is the key to a successful evolutionary system. Without it, there’s no pressure for the system to achieve the desired goal: merely mutation and more mutation.

But how can a machine evaluate which of a trillion models of a human brain is “most like” a conscious mind? Or better still: which one is most like the individual whose brain is being modeled?

“It is a sleight of hand in Spiritual Machines,” Kurzweil admits. “But in The Singularity Is Near, I have an in-depth discussion about what we know about the brain and how to model it. Our tools for understanding the brain are subject to the Law of Accelerating Returns, and we’ve made more progress in reverse-engineering the human brain than most people realize.” This is a tasty Kurzweilism that observes that improvements in technology yield tools for improving technology, round and round, so that the thing that progress begets more than anything is more and yet faster progress.

“Scanning resolution of human tissue — both spatial and temporal — is doubling every year, and so is our knowledge of the workings of the brain. The brain is not one big neural net, the brain is several hundred different regions, and we can understand each region, we can model the regions with mathematics, most of which have some nexus with chaos and self-organizing systems. This has already been done for a couple dozen regions out of the several hundred.

“We have a good model of a dozen or so regions of the auditory and visual cortex, how we strip images down to very low-resolution movies based on pattern recognition. Interestingly, we don’t actually see things, we essentially hallucinate them in detail from what we see from these low resolution cues. Past the early phases of the visual cortex, detail doesn’t reach the brain.

“We are getting exponentially more knowledge. We can get detailed scans of neurons working in vivo, and are beginning to understand the chaotic algorithms underlying human intelligence. In some cases, we are getting comparable performance of brain regions in simulation. These tools will continue to grow in detail and sophistication.

“We can have confidence of reverse-engineering the brain in twenty years or so. The reason that brain reverse engineering has not contributed much to artificial intelligence is that up until recently we didn’t have the right tools. If I gave you a computer and a few magnetic sensors and asked you to reverse-engineer it, you might figure out that there’s a magnetic device spinning when a file is saved, but you’d never get at the instruction set. Once you reverse-engineer the computer fully, however, you can express its principles of operation in just a few dozen pages.

“Now there are new tools that let us see the interneuronal connections and their signaling, in vivo, and in real-time. We’re just now getting these tools and there’s very rapid application of the tools to obtain the data.

“Twenty years from now we will have realistic simulations and models of all the regions of the brain and [we will] understand how they work. We won’t blindly or mindlessly copy those methods, we will understand them and use them to improve our AI toolkit. So we’ll learn how the brain works and then apply the sophisticated tools that we will obtain, as we discover how the brain works.

“Once we understand a subtle science principle, we can isolate, amplify, and expand it. Air goes faster over a curved surface: from that insight we isolated, amplified, and expanded the idea and invented air travel. We’ll do the same with intelligence.

“Progress is exponential — not just a measure of power of computation, number of Internet nodes, and magnetic spots on a hard disk — the rate of paradigm shift is itself accelerating, doubling every decade. Scientists look at a problem and they intuitively conclude that since we’ve solved 1 percent over the last year, it’ll therefore be one hundred years until the problem is exhausted: but the rate of progress doubles every decade, and the power of the information tools (in price-performance, resolution, bandwidth, and so on) doubles every year. People, even scientists, don’t grasp exponential growth. During the first decade of the human genome project, we only solved 2 percent of the problem, but we solved the remaining 98 percent in five years.”
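The genome arithmetic in that last claim is easy to check. Here is a tiny illustrative sketch (the function and its parameters are invented for this example) that simulates a task whose completion rate doubles on a fixed schedule:

```python
def years_to_finish(done, yearly_rate, doubling_years=1.0):
    """Years until `done` reaches 100%, if the rate itself
    doubles every `doubling_years` (simulated in quarter-years)."""
    years = 0.0
    while done < 100.0:
        done += yearly_rate * 0.25
        years += 0.25
        yearly_rate *= 2 ** (0.25 / doubling_years)
    return years

# Linear intuition: 98% left at 1%/year means roughly 98 more years.
# With the rate doubling annually, it finishes in a handful of years.
print(round(years_to_finish(done=2.0, yearly_rate=1.0), 2))
```

That is the shape of Kurzweil’s point: under exponential improvement, the pace of the first 2 percent tells you almost nothing about how long the remaining 98 percent will take.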

But Kurzweil doesn’t think that the future will arrive in a rush. As William Gibson observed, “The future is here, it’s just not evenly distributed.”

“Sure, it’d be interesting to take a human brain, scan it, reinstantiate the brain, and run it on another substrate. That will ultimately happen.”

“But the most salient scenario is that we’ll gradually merge with our technology. We’ll use nanobots to kill pathogens, then to kill cancer cells, and then they’ll go into our brain and do benign things there like augment our memory, and very gradually they’ll get more and more sophisticated. There’s no single great leap, but there is ultimately a great leap comprised of many small steps.

“In The Singularity Is Near, I describe the radically different world of 2040, and how we’ll get there one benign change at a time. The Singularity will be gradual, smooth.

“Really, this is about augmenting our biological thinking with nonbiological thinking. We have a capacity of 10^26 to 10^29 calculations per second (cps) in the approximately 10^10 biological human brains on Earth and that number won’t change much in fifty years, but nonbiological thinking will just crash through that. By 2049, nonbiological thinking capacity will be on the order of a billion times that. We’ll get to the point where bio thinking is relatively insignificant.

“People didn’t throw their typewriters away when word-processing started. There’s always an overlap — it’ll take time before we realize how much more powerful nonbiological thinking will ultimately be.”

It’s well and good to talk about all the stuff we can do with technology, but it’s a lot more important to talk about the stuff we’ll be allowed to do with technology. Think of the global freak-out caused by the relatively trivial advent of peer-to-peer file-sharing tools: Universities are wiretapping their campuses and disciplining computer science students for writing legitimate, general purpose software; grandmothers and twelve-year-olds are losing their life savings; privacy and due process have sailed out the window without so much as a by-your-leave.

Even P2P’s worst enemies admit that this is a general-purpose technology with good and bad uses, but when new tech comes along it often engenders a response that countenances punishing an infinite number of innocent people to get at the guilty.

What’s going to happen when the new technology paradigm isn’t song-swapping, but transcendent super-intelligence? Will the reactionary forces be justified in razing the whole ecosystem to eliminate a few parasites who are doing negative things with the new tools?

“Complex ecosystems will always have parasites. Malware [malicious software] is the most important battlefield today.

“Everything will become software — objects will be malleable, we’ll spend lots of time in VR, and computhought will be orders of magnitude more important than biothought.

“Software is already complex enough that we have an ecological terrain that has emerged just as it did in the bioworld.

“That’s partly because technology is unregulated and people have access to the tools to create malware and the medicine to treat it. Today’s software viruses are clever and stealthy and not simpleminded. Very clever.

“But here’s the thing: you don’t see people advocating shutting down the Internet because malware is so destructive. I mean, malware is potentially more than a nuisance — emergency systems, air traffic control, and nuclear reactors all run on vulnerable software. It’s an important issue, but the potential damage is still a tiny fraction of the benefit we get from the Internet.

“I hope it’ll remain that way — that the Internet won’t become a regulated space like medicine. Malware’s not the most important issue facing human society today. Designer bioviruses are. People are concerned about WMDs, but the most daunting WMD would be a designed biological virus. The means exist in college labs to create destructive viruses that erupt and spread silently with long incubation periods.

“Importantly, a would-be bio-terrorist doesn’t have to put malware through the FDA’s regulatory approval process, but scientists working to fix bio-malware do.

“In Huxley’s Brave New World, the rationale for the totalitarian system was that technology was too dangerous and needed to be controlled. But that just pushes technology underground where it becomes less stable. Regulation gives the edge of power to the irresponsible who won’t listen to the regulators anyway.

“The way to put more stones on the defense side of the scale is to put more resources into defensive technologies, not create a totalitarian regime of Draconian control.

“I advocate a one hundred billion dollar program to accelerate the development of anti-biological virus technology. The way to combat this is to develop broad tools to destroy viruses. We have tools like RNA interference, just discovered in the past two years, to block gene expression. We could develop means to sequence the genes of a new virus (SARS only took thirty-one days) and respond to it in a matter of days.

“Think about it. There’s no FDA for software, no certification for programmers. The government is thinking about it, though! The reason the FCC is contemplating Trusted Computing mandates,” — a system to restrict what a computer can do by means of hardware locks embedded on the motherboard — “is that computing technology is broadening to cover everything. So now you have communications bureaucrats, biology bureaucrats, all wanting to regulate computers.

“Biology would be a lot more stable if we moved away from regulation — which is extremely irrational and onerous and doesn’t appropriately balance risks. Many medications are not available today even though they should be. The FDA always wants to know what happens if we approve this and will it turn into a thalidomide situation that embarrasses us on CNN?

“Nobody asks about the harm that will certainly accrue from delaying a treatment for one or more years. There’s no political weight at all, people have been dying from diseases like heart disease and cancer for as long as we’ve been alive. Attributable risks get 100-1000 times more weight than unattributable risks.”

Is this spirituality or science? Perhaps it is the melding of both — more shades of Heinlein, this time the weird religions founded by people who took Stranger in a Strange Land way too seriously.

After all, this is a system of belief that dictates a means by which we can care for our bodies virtuously and live long enough to transcend them. It is a system of belief that concerns itself with the meddling of non-believers, who work to undermine its goals through irrational systems predicated on their disbelief. It is a system of belief that asks and answers the question of what it means to be human.

It’s no wonder that the Singularity has come to occupy so much of the science fiction narrative in these years. Science or spirituality, you could hardly ask for a subject better tailored to technological speculation and drama.

$$$$

Wikipedia: a genuine Hitchhiker’s Guide to the Galaxy — minus the editors

(Originally published in The Anthology at the End of the Universe, April 2005)

“Mostly Harmless” — a phrase so funny that Adams actually titled a book after it. Not that there’s a lot of comedy inherent in those two words: rather, they’re the punchline to a joke that anyone who’s ever written for publication can really get behind.

Ford Prefect, a researcher for the Hitchhiker’s Guide to the Galaxy, has been stationed on Earth for years, painstakingly compiling an authoritative, insightful entry on Terran geography, science and culture, excerpts from which appear throughout the H2G2 books. His entry improved upon the old one, which noted that Earth was, simply, “Harmless.”

However, the Guide has limited space, and when Ford submits his entry to his editors, it is trimmed to fit:

“What? Harmless? Is that all it’s got to say? Harmless! One word!”

Ford shrugged. “Well, there are a hundred billion stars in the Galaxy, and only a limited amount of space in the book’s microprocessors,” he said, “and no one knew much about the Earth of course.”

“Well for God’s sake I hope you managed to rectify that a bit.”

“Oh yes, well I managed to transmit a new entry off to the editor. He had to trim it a bit, but it’s still an improvement.”

“And what does it say now?” asked Arthur.

“Mostly harmless,” admitted Ford with a slightly embarrassed cough.

[fn: My lifestyle is as gypsy and fancy-free as the characters in H2G2, and as a result my copies of the Adams books are thousands of miles away in storage in other countries, and this essay was penned on public transit and in cheap hotel rooms in Chile, Boston, London, Geneva, Brussels, Bergen, Geneva (again), Toronto, Edinburgh, and Helsinki. Luckily, I was able to download a dodgy, re-keyed version of the Adams books from a peer-to-peer network, which network I accessed via an open wireless network on a random street-corner in an anonymous city, a fact that I note here as testimony to the power of the Internet to do what the Guide does for Ford and Arthur: put all the information I need at my fingertips, wherever I am. However, these texts are a little on the dodgy side, as noted, so you might want to confirm these quotes before, say, uttering them before an Adams truefan.]

And there’s the humor: every writer knows the pain of laboring over a piece for days, infusing it with diverse interesting factoids and insights, only to have it cut to ribbons by some distant editor (I once wrote thirty drafts of a 5,000-word article for an editor who ended up running it in three paragraphs as accompaniment for what he decided should be a photo essay with minimal verbiage.)

Since the dawn of the Internet, H2G2 geeks have taken it upon themselves to attempt to make a Guide on the Internet. Volunteers wrote and submitted essays on such subjects as would be likely to appear in a good encyclopedia, infusing them with equal measures of humor and thoughtfulness, and they were edited together by the collective effort of the contributors. These projects — Everything2, H2G2 (which was overseen by Adams himself), and others — are like a barn-raising in which a team of dedicated volunteers organize the labors of casual contributors, piecing together a free and open user-generated encyclopedia.

These encyclopedias have one up on Adams’s Guide: they have no shortage of space on their “microprocessors” (the first volume of the Guide was clearly written before Adams became conversant with PCs!). The ability of humans to generate verbiage is far outstripped by the ability of technologists to generate low-cost, reliable storage to contain it. For example, Brewster Kahle’s Internet Archive project (archive.org) has been making a copy of the Web — the whole Web, give or take — every couple of days since 1996. Using the Archive’s Wayback Machine, you can now go and see what any page looked like on a given day.

The Archive doesn’t even bother throwing away copies of pages that haven’t changed since the last time they were scraped: with storage as cheap as it is — and it is very cheap for the Archive, which runs the largest database in the history of the universe off of a collection of white-box commodity PCs stacked up on packing skids in the basement of a disused armory in San Francisco’s Presidio — there’s no reason not to just keep them around. In fact, the Archive has just spawned two “mirror” Archives, one located under the rebuilt Library of Alexandria and the other in Amsterdam. [fn: Brewster Kahle says that he was nervous about keeping his only copy of the “repository of all human knowledge” on the San Andreas fault, but keeping your backups in a censorship-happy Amnesty International watchlist state and/or in a floodplain below sea level is probably not such a good idea either!]

So these systems did not see articles trimmed for lack of space; for on the Internet, the idea of “running out of space” is meaningless. But they were trimmed, by editorial cliques, and rewritten for clarity and style. Some entries were rejected as being too thin, while others were sent back to the author for extensive rewrites.

This traditional separation of editor and writer mirrors the creative process itself, in which authors are exhorted to concentrate on either composing or revising, but not both at the same time, for the application of the critical mind to the creative process strangles it. So you write, and then you edit. Even when you write for your own consumption, it seems you have to answer to an editor.

The early days of the Internet saw much experimentation with alternatives to traditional editor/author divisions. Slashdot, a nerdy news-site of surpassing popularity [fn: Having a link to one’s website posted to Slashdot will almost inevitably overwhelm your server with traffic, knocking all but the best-provisioned hosts offline within minutes; this is commonly referred to as “the Slashdot Effect.”], has a baroque system for “community moderation” of the responses to the articles that are posted to its front pages. Readers, chosen at random, are given five “moderator points” that they can use to raise or lower the score of posts on the Slashdot message boards. Subsequent readers can filter their views of these boards to show only highly ranked posts. Other readers are randomly presented with posts and their rankings and are asked to rate the fairness of each moderator’s moderation. Moderators who moderate fairly are given more opportunities to moderate; likewise message-board posters whose messages are consistently highly rated.
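Stripped of detail, the scheme amounts to per-post scores nudged by moderators, plus a reader-side filter. A schematic Python model follows — the names and numbers are invented for illustration, and real Slashdot adds karma, metamoderation, and per-reader point budgets on top of this skeleton:

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    score: int = 1  # posts start at 1; convention clamps scores to -1..5

def moderate(post, delta):
    # A moderator spends a point to nudge a post up (+1) or down (-1).
    post.score = max(-1, min(5, post.score + delta))

def filtered(posts, threshold):
    # Readers filter the board to show only highly ranked posts.
    return [p for p in posts if p.score >= threshold]

board = [Post("insightful analysis"), Post("flamebait")]
moderate(board[0], +1)
moderate(board[0], +1)
moderate(board[1], -1)
print([p.text for p in filtered(board, threshold=3)])  # ['insightful analysis']
```

The metamoderation layer — readers rating the raters — then decides who gets handed moderator points in the future, which is the check-and-balance the next paragraph describes.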

It is thought that this system rewards good “citizenship” on the Slashdot boards through checks and balances that reward good messages and fair editorial practices. And in the main, the Slashdot moderation system works [fn: as do variants on it, like the system in place at Kuro5hin.org (pronounced “corrosion”)]. If you dial your filter up to show you highly scored messages, you will generally get well-reasoned, or funny, or genuinely useful posts in your browser.

This community moderation scheme and ones like it have been heralded as a good alternative to traditional editorship. The need for the Internet to “edit itself” is best understood in relation to the old shibboleth, “On the Internet, everyone is a slushreader.” [fn: “Slush” is the term for generally execrable unsolicited manuscripts that fetch up in publishers’ offices — these are typically so bad that the most junior people on staff are drafted into reading (and, usually, rejecting) them]. When the Internet’s radical transformative properties were first bandied about in publishing circles, many reassured themselves that even if printing’s importance was de-emphasized, good editors would always be needed, and doubly so online, where any mouth-breather with a modem could publish his words. Someone would need to separate the wheat from the chaff and help keep us from drowning in information.

One of the best-capitalized businesses in the history of the world, Yahoo!, went public on the strength of this notion, proposing to use an army of researchers to catalog every single page on the Web even as it was created, serving as a comprehensive guide to all human knowledge. Less than a decade later, Yahoo! is all but out of that business: the ability of the human race to generate new pages far outstrips Yahoo!‘s ability to read, review, rank and categorize them.

Hence Slashdot, a system of distributed slushreading. Rather than professionalizing the editorship role, Slashdot invites contributors to identify good stuff when they see it, turning editorship into a reward for good behavior.

But as well as Slashdot works, it has this signal failing: nearly every conversation that takes place on Slashdot is shot through with discussion, griping and gaming on the moderation system itself. The core task of Slashdot has become editorship, not the putative subjects of Slashdot posts. The fact that the central task of Slashdot is to rate other Slashdotters creates a tenor of meanness in the discussion. Imagine if the subtext of every discussion you had in the real world was a kind of running, pedantic nitpickery in which every point was explicitly weighed and judged and commented upon. You’d be an unpleasant, unlikable jerk, the kind of person that is sometimes referred to as a “slashdork.”

As radical as Yahoo!‘s conceit was, Slashdot’s was more radical. But as radical as Slashdot’s is, it is still inherently conservative in that it presumes that editorship is necessary, and that it further requires human judgment and intervention.

Google’s a lot more radical. Instead of editors, it has an algorithm. Not the kind of algorithm that dominated the early search engines like Altavista, in which laughably bad artificial intelligence engines attempted to automatically understand the content, context and value of every page on the Web so that a search for “Dog” would turn up the pages most relevant to the query.

Google’s algorithm is predicated on the idea that people are good at understanding things and computers are good at counting things. Google counts up all the links on the Web and affords more authority to those pages that have been linked to by the most other pages. The rationale is that if a page has been linked to by many web-authors, then they must have seen some merit in that page. This system works remarkably well — so well that it’s nearly inconceivable that any search-engine would order its rankings by any other means. What’s more, it doesn’t pervert the tenor of the discussions and pages that it catalogs by turning each one into a performance for a group of ranking peers. [fn: Or at least, it didn’t. Today, dedicated web-writers, such as bloggers, are keenly aware of the way that Google will interpret their choices about linking and page-structure. One popular sport is “googlebombing,” in which web-writers collude to link to a given page using a humorous keyword so that the page becomes the top result for that word — which is why, for a time, the top result for “more evil than Satan” was Microsoft.com. Likewise, there’s “blogspamming,” in which unscrupulous spammers post links to their webpages in the message boards of various blogs, so that Google is tricked into thinking that a wide variety of sites have conferred some authority onto their penis-enlargement page.]

But even Google is conservative in assuming that there is a need for editorship as distinct from composition. Is there a way we can dispense with editorship altogether and just use composition to refine our ideas? Can we merge composition and editorship into a single role, fusing our creative and critical selves?

You betcha.

“Wikis” [fn: Hawai’ian for “fast”] are websites that can be edited by anyone. They were invented by Ward Cunningham in 1995, and they have become one of the dominant tools for Internet collaboration in the present day. Indeed, there is a sort of Internet geek who throws up a Wiki in the same way that ants make anthills: reflexively, unconsciously.

Here’s how a Wiki works. You put up a page:

Welcome to my Wiki. It is rad.

There are OtherWikis that inspired me.

Click “publish” and bam, the page is live. The word “OtherWikis” will be underlined, having automatically been turned into a link to a blank page titled “OtherWikis” (Wiki software recognizes words with capital letters in the middle of them as links to other pages. Wiki people call this “camel-case,” because the capital letters in the middle of words make them look like humped camels.) At the bottom of it appears this legend: “Edit this page.”
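The camel-case trick is simple enough to sketch in a few lines. The pattern and URL scheme below are assumptions for illustration, not taken from any real wiki engine:

```python
import re

# A rough sketch of camel-case link detection: a word with a capital
# letter in the middle becomes a link to a page of the same name.
CAMEL_CASE = re.compile(r'\b([A-Z][a-z]+(?:[A-Z][a-z]+)+)\b')

def linkify(text):
    # Wrap each CamelCase word in a link to the page of the same name.
    return CAMEL_CASE.sub(r'<a href="/wiki/\1">\1</a>', text)

print(linkify("There are OtherWikis that inspired me."))
```

Ordinary capitalized words like “There” don’t match, because the pattern demands at least two humps.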

Click on “Edit this page” and the text appears in an editable field. Revise the text to your heart’s content and click “Publish” and your revisions are live. Anyone who visits a Wiki can edit any of its pages, adding to it, improving on it, adding camel-cased links to new subjects, or even defacing or deleting it.

It is authorship without editorship. Or authorship fused with editorship. Whichever, it works, though it requires effort. The Internet, like all human places and things, is fraught with spoilers and vandals who deface whatever they can. Wiki pages are routinely replaced with obscenities, with links to spammers’ websites, with junk and crap and flames.

But Wikis have self-defense mechanisms, too. Anyone can “subscribe” to a Wiki page, and be notified when it is updated. Those who create Wiki pages generally opt to act as “gardeners” for them, ensuring that they are on hand to undo the work of the spoilers.

In this labor, they are aided by another useful Wiki feature: the “history” link. Every change to every Wiki page is logged and recorded. Anyone can page back through every revision, and anyone can revert the current version to a previous one. That means that vandalism only lasts as long as it takes for a gardener to come by and, with one or two clicks, set things to right.
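The history mechanism can be modeled as nothing more than an append-only list of revisions, where a revert simply re-appends an old version — so even the vandalism stays in the log. A toy sketch, not any real wiki engine’s implementation:

```python
# A toy model of a wiki page's "history" feature.
class WikiPage:
    def __init__(self, text):
        self.history = [text]          # every revision, oldest first

    @property
    def current(self):
        return self.history[-1]        # the live version is the newest

    def edit(self, new_text):
        self.history.append(new_text)

    def revert_to(self, index):
        # Reverting is itself an edit, so the defacement stays on record.
        self.history.append(self.history[index])

page = WikiPage("Welcome to my Wiki. It is rad.")
page.edit("OBSCENE GRAFFITI")          # a vandal strikes
page.revert_to(0)                      # a gardener sets things to right
```

After the revert, the page reads as it did originally, but the history still shows all three revisions.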

This is a powerful and wildly successful model for collaboration, and there is no better example of this than the Wikipedia, a free, Wiki-based encyclopedia with more than one million entries, which has been translated into 198 languages. [fn: That is, one or more Wikipedia entries have been translated into 198 languages; more than 15 languages have 10,000 or more entries]

Wikipedia is built entirely out of Wiki pages created by self-appointed experts. Contributors research and write up subjects, or produce articles on subjects that they are familiar with.

This is authorship, but what of editorship? For if there is one thing a Guide or an encyclopedia must have, it is authority. It must be vetted by trustworthy, neutral parties, who present something that is either The Truth or simply A Truth, but truth nevertheless.

The Wikipedia has its skeptics. Al Fasoldt, a writer for the Syracuse Post-Standard, apologized to his readers for having recommended that they consult Wikipedia. A reader of his, a librarian, wrote in and told him that his recommendation had been irresponsible, for Wikipedia articles are often defaced or worse still, rewritten with incorrect information. When another journalist from the Techdirt website wrote to Fasoldt to correct this impression, Fasoldt responded with an increasingly patronizing and hysterical series of messages in which he described Wikipedia as “outrageous,” “repugnant” and “dangerous,” insulting the Techdirt writer and storming off in a huff. [fn: see http://techdirt.com/articles/20040827/0132238_F.shtml for more]

Spurred on by this exchange, many of Wikipedia’s supporters decided to empirically investigate the accuracy and resilience of the system. Alex Halavais made changes to 13 different pages, ranging from obvious to subtle. Every single change was found and corrected within hours. [fn: see http://alex.halavais.net/news/index.php?p=794 for more] Then legendary Princeton engineer Ed Felten ran side-by-side comparisons of Wikipedia entries on areas in which he had deep expertise with their counterparts in the current electronic edition of the Encyclopedia Britannica. His conclusion? “Wikipedia’s advantage is in having more, longer, and more current entries. If it weren’t for the Microsoft-case entry, Wikipedia would have been the winner hands down. Britannica’s advantage is in having lower variance in the quality of its entries.” [fn: see http://www.freedom-to-tinker.com/archives/000675.html for more] Not a complete win for Wikipedia, but hardly “outrageous,” “repugnant” and “dangerous.” (Poor Fasoldt — his idiotic hyperbole will surely haunt him through the whole of his career — I mean, “repugnant?!”)

There has been one very damning and even frightening indictment of Wikipedia, which came from Ethan Zuckerman, the founder of the GeekCorps group, which sends volunteers to poor countries to help establish Internet Service Providers and do other good works through technology.

Zuckerman, a Harvard Berkman Center Fellow, is concerned with the “systemic bias” in a collaborative encyclopedia whose contributors must be conversant with technology and in possession of same in order to improve on the work there. Zuckerman reasonably observes that Internet users skew towards wealth, residence in the world’s richest countries, and a technological bent. This means that the Wikipedia, too, is skewed to subjects of interest to that group — subjects where that group already has expertise and interest.

The result is tragicomical. The entry on the Congo Civil War, the largest military conflict the world has seen since WWII, which has claimed over three million lives, has only a fraction of the verbiage devoted to the War of the Ents, a fictional war fought between sentient trees in JRR Tolkien’s Lord of the Rings.

Zuckerman issued a public call to arms to rectify this, challenging Wikipedia contributors to seek out information on subjects like Africa’s military conflicts, nursing and agriculture and write these subjects up in the same loving detail given over to science fiction novels and contemporary youth culture. His call has been answered well. What remains is to infiltrate the Wikipedia into the academe so that term papers, Masters and Doctoral theses on these subjects find themselves in whole or in part on the Wikipedia. [fn: See http://en.wikipedia.org/wiki/User:Xed/CROSSBOW for more on this]

But if Wikipedia is authoritative, how does it get there? What alchemy turns the maunderings of “mouth-breathers with modems” into valid, useful encyclopedia entries?

It all comes down to the way that disputes are deliberated over and resolved. Take the entry on Israel. At one point, it characterized Israel as a beleaguered state set upon by terrorists who would drive its citizens into the sea. Not long after, the entry was deleted holus-bolus and replaced with one that described Israel as an illegal state practicing Apartheid on an oppressed ethnic minority.

Back and forth the editors went, each overwriting the other’s with his or her own doctrine. But eventually, one of them blinked. An editor moderated the doctrine just a little, conceding a single point to the other. And the other responded in kind. In this way, turn by turn, all those with a strong opinion on the matter negotiated a kind of Truth, a collection of statements that everyone could agree represented as neutral a depiction of Israel as was likely to emerge. Whereupon, the joint authors of this marvelous document joined forces and fought back to back to resist the revisions of other doctrinaires who came later, preserving their hard-won peace. [fn: This process was just repeated in microcosm in the Wikipedia entry on the author of this paper, which was replaced by a rather disparaging and untrue entry that characterized his books as critical and commercial failures — there ensued several editorial volleys, culminating in an uneasy peace that couches the anonymous detractor’s skepticism in context and qualifiers that make it clear what the facts are and what is speculation]

What’s most fascinating about these entries isn’t their “final” text as currently present on Wikipedia. It is the history page for each, blow-by-blow revision lists that make it utterly transparent where the bodies were buried on the way to arriving at whatever Truth has emerged. This is a neat solution to the problem of authority — if you want to know what the fully rounded view of opinions on any controversial subject look like, you need only consult its entry’s history page for a blistering eyeful of thorough debate on the subject.

And here, finally, is the answer to the “Mostly harmless” problem. Ford’s editor can trim his verbiage to two words, but they need not stay there — Arthur, or any other user of the Guide as we know it today [fn: that is, in the era where we understand enough about technology to know the difference between a microprocessor and a hard-drive] can revert to Ford’s glorious and exhaustive version.

Think of it: a Guide without space restrictions and without editors, where any Vogon can publish to his heart’s content.

Lovely.

$$$$

Warhol is Turning in His Grave

(Originally published in The Guardian, November 13, 2007)

The excellent little programme book for the National Portrait Gallery’s current show POPARTPORTRAITS has a lot to say about the pictures hung on the walls, about the diverse source material the artists drew from in producing their provocative works. They cut up magazines, copied comic books, drew in trademarked cartoon characters like Minnie Mouse, reproduced covers from Time magazine, made ironic use of the cartoon figure of Charles Atlas, painted over an iconic photo of James Dean or Elvis Presley — and that’s just in the first room of seven.

The programme book describes the aesthetic experience of seeing these repositioned icons of culture high and low, the art created by the celebrated artists Poons, Rauschenberg, Warhol, et al by nicking the work of others, without permission, and remaking it to make statements and evoke emotions never countenanced by the original creators.

However, the book does not say a word about copyright. Can you blame it? A treatise on the way that copyright and trademark were — had to be — trammeled to make these works could fill volumes. Reading the programme book, you have to assume that the curators’ only message about copyright is that where free expression is concerned, the rights of the creators of the original source material appropriated by the pop school take a back seat.

There is, however, another message about copyright in the National Portrait Gallery: it’s implicit in the “No Photography” signs prominently placed throughout the halls, including one right by the entrance of the POPARTPORTRAITS exhibition. This isn’t intended to protect the works from the depredations of camera-flashes (it would read NO FLASH PHOTOGRAPHY if this were so). No, the ban on pictures is in place to safeguard the copyright in the works hung on the walls — a fact that every gallery staffer I spoke to instantly affirmed when I asked about the policy.

Indeed, it seems that every square centimeter of the Portrait Gallery is under some form of copyright. I wasn’t even allowed to photograph the NO PHOTOGRAPHS sign. A museum staffer explained that she’d been told that the typography and layout of the NO PHOTOGRAPHS legend was, itself, copyrighted. If this is true, then presumably, the same rules would prevent anyone from taking any pictures in any public place — unless you could somehow contrive to get a shot of Leicester Square without any writing, logos, architectural facades, or images in it. I doubt Warhol could have done it.

What’s the message of the show, then? Is it a celebration of remix culture, reveling in the endless possibilities opened up by appropriating and re-using without permission?

Or is it the epitaph on the tombstone of the sweet days before the UN’s chartering of the World Intellectual Property Organization and the ensuing mania for turning everything that can be sensed and recorded into someone’s property?

Does this show — paid for with public money, with some works that are themselves owned by public institutions — seek to inspire us to become 21st century pops, armed with cameraphones, websites and mixers, or is it supposed to inform us that our chance has passed, and we’d best settle for a life as information serfs, who can’t even make free use of what our eyes see, our ears hear, of the streets we walk upon?

Perhaps, just perhaps, it’s actually a Dadaist show masquerading as a pop art show! Perhaps the point is to titillate us with the delicious irony of celebrating copyright infringement while simultaneously taking the view that even the NO PHOTOGRAPHY sign is a form of property, not to be reproduced without the permission that can never be had.

$$$$

The Future of Ignoring Things

(Originally published on InformationWeek’s Internet Evolution, October 3, 2007)

For decades, computers have been helping us to remember, but now it’s time for them to help us to ignore.

Take email: Endless engineer-hours are poured into stopping spam, but virtually no attention is paid to our interaction with our non-spam messages. Our mailer may strive to learn from our ratings what is and is not spam, but it expends practically no effort on figuring out which of the non-spam emails are important and which ones can be safely ignored, dropped into archival folders, or deleted unread.

For example, I’m forever getting cc’d on busy threads by well-meaning colleagues who want to loop me in on some discussion in which I have little interest. Maybe the initial group invitation to a dinner (that I’ll be out of town for) was something I needed to see, but now that I’ve declined, I really don’t need to read the 300+ messages that follow debating the best place to eat.

I could write a mail-rule to ignore the thread, of course. But mail-rule editors are clunky, and once your rule-list grows very long, it becomes increasingly unmanageable. Mail-rules are where bookmarks were before the bookmark site del.icio.us showed up — built for people who might want to ensure that messages from the boss show up in red, but not intended to be used as a gigantic storehouse of a million filters, a crude means for telling the computers what we don’t want to see.

Rael Dornfest, the former chairman of the O’Reilly Emerging Tech conference and founder of the startup IWantSandy, once proposed an “ignore thread” feature for mailers: Flag a thread as uninteresting, and your mailer will start to hide messages with that subject-line or thread-ID for a week, unless those messages contain your name. The problem is that threads mutate. Last week’s dinner plans become this week’s discussion of next year’s group holiday. If the thread is still going after a week, the messages flow back into your inbox — and a single click takes you back through all the messages you missed.
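Dornfest’s proposed rule is simple enough to sketch: hide messages on a muted thread for a week, unless they mention you by name. The message format and names below are assumptions for illustration, not anything from a real mailer:

```python
from datetime import datetime, timedelta

# A sketch of the proposed "ignore thread" rule described above.
MUTE_DURATION = timedelta(days=7)

def should_hide(message, muted_at, my_name, now=None):
    now = now or datetime.now()
    if now - muted_at > MUTE_DURATION:
        return False   # the mute has expired; the thread flows back in
    if my_name in message["body"]:
        return False   # messages that mention you always get through
    return True        # otherwise, hide it for the duration
```

A mailer would run every incoming message on a flagged thread through a check like this before deciding whether to show it.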

We need a million measures like this, adaptive systems that create a gray zone between “delete on sight” and “show this to me right away.”

RSS readers are a great way to keep up with the torrent of new items posted on high-turnover sites like Digg, but they’re even better at keeping up with sites that are sporadic, like your friend’s brilliant journal that she only updates twice a year. But RSS readers don’t distinguish between the rare and miraculous appearance of a new item in an occasional journal and the latest click-fodder from Slashdot. They don’t even sort your RSS feeds according to the sites that you click-through the most.

There was a time when I could read the whole of Usenet — not just because I was a student looking for an excuse to avoid my assignments, but because Usenet was once tractable, readable by a single determined person. Today, I can’t even keep up with a single high-traffic message-board. I can’t read all my email. I can’t read every item posted to every site I like. I certainly can’t plough through the entire edit-history of every Wikipedia entry I read. I’ve come to grips with this — with acquiring information on a probabilistic basis, instead of the old, deterministic, cover-to-cover approach I learned in the offline world.

It’s as though there’s a cognitive style built into TCP/IP. Just as the network only does best-effort delivery of packets, not worrying so much about the bits that fall on the floor, TCP/IP users also do best-effort sweeps of the Internet, focusing on learning from the good stuff they find, rather than lamenting the stuff they don’t have time to see.

The network won’t ever become more tractable. There will never be fewer things vying for our online attention. The only answer is better ways and new technology to ignore stuff — a field that’s just being born, with plenty of room to grow.

$$$$

Facebook’s Faceplant

(Originally published as “How Your Creepy Ex-Co-Workers Will Kill Facebook,” in InformationWeek, November 26, 2007)

Facebook’s “platform” strategy has sparked much online debate and controversy. No one wants to see a return to the miserable days of walled gardens, when you couldn’t send a message to an AOL subscriber unless you, too, were a subscriber, and when the only services that made it were the ones that AOL management approved. Those of us on the “real” Internet regarded AOL with a species of superstitious dread, a hive of clueless noobs waiting to swamp our beloved Usenet with dumb flamewars (we fiercely guarded our erudite flamewars as being of a palpably superior grade).

Facebook is no paragon of virtue. It bears the hallmarks of the kind of pump-and-dump service that sees us as sticky, monetizable eyeballs in need of pimping. The clue is in the steady stream of emails you get from Facebook: “So-and-so has sent you a message.” Yeah, what is it? Facebook isn’t telling — you have to visit Facebook to find out, generate a banner impression, and read and write your messages using the halt-and-lame Facebook interface, which lags even end-of-lifed email clients like Eudora for composing, reading, filtering, archiving and searching. Emails from Facebook aren’t helpful messages, they’re eyeball bait, intended to send you off to the Facebook site, only to discover that Fred wrote “Hi again!” on your “wall.” Like other “social” apps (cough eVite cough), Facebook has all the social graces of a nose-picking, hyperactive six-year-old, standing at the threshold of your attention and chanting, “I know something, I know something, I know something, won’t tell you what it is!”

If there was any doubt about Facebook’s lack of qualification to displace the Internet with a benevolent dictatorship/walled garden, it was removed when Facebook unveiled its new advertising campaign. Now, Facebook will allow its advertisers to use the profile pictures of Facebook users to advertise their products, without permission or compensation. Even if you’re the kind of person who likes the sound of a “benevolent dictatorship,” this clearly isn’t one.

Many of my colleagues wonder if Facebook can be redeemed by opening up the platform, letting anyone write any app for the service, easily exporting and importing their data, and so on (this is the kind of thing Google is doing with its OpenSocial Alliance). Perhaps if Facebook takes on some of the characteristics that made the Web work — openness, decentralization, standardization — it will become like the Web itself, but with the added pixie dust of “social,” the indefinable characteristic that makes Facebook into pure crack for a significant proportion of Internet users.

The debate about redeeming Facebook starts from the assumption that Facebook is snowballing toward critical mass, the point at which it begins to define “the Internet” for a large slice of the world’s netizens, growing steadily every day. But I think that this is far from a sure thing. Sure, networks generally follow Metcalfe’s Law: “the value of a telecommunications network is proportional to the square of the number of users of the system.” This law is best understood through the analogy of the fax machine: a world with one fax machine has no use for faxes, but every machine you add increases the value of every machine already on the network, so that the number of possible send/receive combinations grows with the square of the number of machines (Alice can fax Bob or Carol or Don; Bob can fax Alice, Carol and Don; Carol can fax Alice, Bob and Don, etc).
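The quadratic growth in the fax-machine analogy is easy to check: n machines yield n × (n − 1) ordered sender/receiver pairs. A quick sketch, using the names from the example above:

```python
from itertools import permutations

# Metcalfe's law via the fax-machine analogy: every ordered pair of
# distinct machines is a possible send/receive combination, so n
# machines yield n * (n - 1) pairs -- growth proportional to n squared.
def fax_pairs(machines):
    return list(permutations(machines, 2))

print(len(fax_pairs(["Alice", "Bob", "Carol", "Don"])))  # 12 pairs
print(len(fax_pairs(range(100))))                        # 9900 pairs
```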

But Metcalfe’s law presumes that creating more communications pathways increases the value of the system, and that’s not always true (see Brooks’s Law: “Adding manpower to a late software project makes it later”).

Having watched the rise and fall of SixDegrees, Friendster, and the many other proto-hominids that make up the evolutionary chain leading to Facebook, MySpace, et al, I’m inclined to think that these systems are subject to a Brooks’s-law parallel: “Adding more users to a social network increases the probability that it will put you in an awkward social circumstance.” Perhaps we can call this “boyd’s Law” [fn: danah boyd spells her name in lower-case] for danah boyd, the social scientist who has studied many of these networks from the inside as a keen-eyed net-anthropologist and who has described the many ways in which social software does violence to sociability in a series of sharp papers.

Here’s one of boyd’s examples, a true story: a young woman, an elementary school teacher, joins Friendster after some of her Burning Man buddies send her an invite. All is well until her students sign up and notice that all the friends in her profile are sunburnt, drug-addled techno-pagans whose own profiles are adorned with digital photos of their painted genitals flapping over the Playa. The teacher inveigles her friends to clean up their profiles, and all is well again until her boss, the school principal, signs up to the service and demands to be added to her friends list. The fact that she doesn’t like her boss doesn’t really matter: in the social world of Friendster and its progeny, it’s perfectly valid to demand to be “friended” in an explicit fashion that most of us left behind in the fourth grade. Now that her boss is on her friends list, our teacher-friend’s buddies naturally assume that she is one of the tribe and begin to send her lascivious Friendster-grams, inviting her to all sorts of dirty funtimes.

In the real world, we don’t articulate our social networks. Imagine how creepy it would be to wander into a co-worker’s cubicle and discover the wall covered with tiny photos of everyone in the office, ranked by “friend” and “foe,” with the top eight friends elevated to a small shrine decorated with Post-It roses and hearts. And yet, there’s an undeniable attraction to corralling all your friends and friendly acquaintances, charting them and their relationship to you. Maybe it’s evolutionary, some quirk of the neocortex dating from our evolution into social animals who gained advantage by dividing up the work of survival but acquired the tricky job of watching all the other monkeys so as to be sure that everyone was pulling their weight and not, e.g., napping in the treetops instead of watching for predators, emerging only to eat the fruit the rest of us have foraged.

Keeping track of our social relationships is a serious piece of work that runs a heavy cognitive load. It’s natural to seek out some neural prosthesis for assistance in this chore. My fiancee once proposed a “social scheduling” application that would watch your phone and email and IM to figure out who your pals were and give you a little alert if too much time passed without your reaching out to say hello and keep the coals of your relationship aglow. By the time you’ve reached your forties, chances are you’re out-of-touch with more friends than you’re in-touch with, old summer-camp chums, high-school mates, ex-spouses and their families, former co-workers, college roomies, dot-com veterans… Getting all those people back into your life is a full-time job and then some.

You’d think that Facebook would be the perfect tool for handling all this. It’s not. For every long-lost chum who reaches out to me on Facebook, there’s a guy who beat me up on a weekly basis through the whole seventh grade but now wants to be my buddy; or the crazy person who was fun in college but is now kind of sad; or the creepy ex-co-worker who I’d cross the street to avoid but who now wants to know, “Am I your friend?” yes or no, this instant, please.

It’s not just Facebook and it’s not just me. Every “social networking service” has had this problem and every user I’ve spoken to has been frustrated by it. I think that’s why these services are so volatile: why we’re so willing to flee from Friendster and into MySpace’s loving arms; from MySpace to Facebook. It’s socially awkward to refuse to add someone to your friends list — but removing someone from your friend-list is practically a declaration of war. The least-awkward way to get back to a friends list with nothing but friends on it is to reboot: create a new identity on a new system and send out some invites (of course, chances are at least one of those invites will go to someone who’ll groan and wonder why we’re dumb enough to think that we’re pals).

That’s why I don’t worry about Facebook taking over the net. As more users flock to it, the chance that the person who precipitates your exodus will find you increases. Once that happens, poof, away you go — and Facebook joins SixDegrees, Friendster and their pals on the scrapheap of net.history.

$$$$

The Future of Internet Immune Systems

(Originally published on InformationWeek’s Internet Evolution, November 19, 2007)

Bunhill Cemetery is just down the road from my flat in London. It’s a handsome old boneyard, a former plague pit (“Bone hill” — as in, there are so many bones under there that the ground is actually kind of humped up into a hill). There are plenty of luminaries buried there — John “Pilgrim’s Progress” Bunyan, William Blake, Daniel Defoe, and assorted Cromwells. But my favorite tomb is that of Thomas Bayes, the 18th-century statistician for whom Bayesian filtering is named.

Bayesian filtering is plenty useful. Here’s a simple example of how you might use a Bayesian filter. First, get a giant load of non-spam emails and feed them into a Bayesian program that counts how many times each word in their vocabulary appears, producing a statistical breakdown of the word-frequency in good emails.

Then, point the filter at a giant load of spam (if you’re having a hard time getting a hold of one, I have plenty to spare), and count the words in it. Now, for each new message that arrives in your inbox, have the filter count the relative word-frequencies and make a statistical prediction about whether the new message is spam or not (there are plenty of wrinkles in this formula, but this is the general idea).

The beauty of this approach is that you needn’t dream up “The Big Exhaustive List of Words and Phrases That Indicate a Message Is/Is Not Spam.” The filter naively calculates a statistical fingerprint for spam and not-spam, and checks the new messages against them.
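In code, the idea fits in a few lines. The sketch below is a bare-bones illustration of the statistical fingerprinting described above, with add-one smoothing standing in for the many wrinkles a real filter needs:

```python
import math
from collections import Counter

# Count word frequencies in a corpus of known-good or known-bad mail.
def train(messages):
    counts = Counter()
    for msg in messages:
        counts.update(msg.lower().split())
    return counts

# Score a new message: positive leans spam, negative leans not-spam.
def score(message, spam_counts, ham_counts):
    spam_total = sum(spam_counts.values())
    ham_total = sum(ham_counts.values())
    s = 0.0
    for word in message.lower().split():
        # Add-one smoothing so unseen words don't dominate the score.
        p_spam = (spam_counts[word] + 1) / (spam_total + 1)
        p_ham = (ham_counts[word] + 1) / (ham_total + 1)
        s += math.log(p_spam / p_ham)
    return s

spam = train(["buy cheap pills now", "cheap pills cheap"])
ham = train(["lunch at noon tomorrow", "the meeting is at noon"])
print(score("cheap pills", spam, ham) > 0)  # looks spammy
```

No hand-built list of bad words appears anywhere: the fingerprint falls out of the counting, which is the whole beauty of the approach.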

This approach — and similar ones — is evolving into an immune system for the Internet, and like all immune systems, a little bit goes a long way, and too much makes you break out in hives.

ISPs are loading up their network centers with intrusion detection systems and tripwires that are supposed to stop attacks before they happen. For example, there’s the filter at the hotel I once stayed at in Jacksonville, Fla. Five minutes after I logged in, the network locked me out again. After an hour on the phone with tech support, it transpired that the network had noticed that the videogame I was playing systematically polled the other hosts on the network to check if they were running servers that I could join and play on. The network decided that this was a malicious port-scan and that it had better kick me off before I did anything naughty.

It only took five minutes for the software to lock me out, but it took well over an hour to find someone in tech support who understood what had happened and could reset the router so that I could get back online.

And right there is an example of the autoimmune disorder. Our network defenses are automated, instantaneous, and sweeping. But our fallback and oversight systems are slow, understaffed, and unresponsive. It takes a millionth of a second for the Transportation Security Administration’s body-cavity-search roulette wheel to decide that you’re a potential terrorist and stick you on a no-fly list, but getting un-Tuttle-Buttled is a nightmarish, months-long procedure that makes Orwell look like an optimist.

The tripwire that locks you out was fired-and-forgotten two years ago by an anonymous sysadmin with root access on the whole network. The outsourced help-desk schlub who unlocks your account can’t even spell “tripwire.” The same goes for the algorithm that cut off your credit card because you got on an airplane to a different part of the world and then had the audacity to spend your money. (I’ve resigned myself to spending $50 on long-distance calls with Citibank every time I cross a border if I want to use my debit card while abroad.)

This problem exists in macro- and microcosm across the whole of our technologically mediated society. The “spamigation bots” run by the Business Software Alliance and the Music and Film Industry Association of America (MAFIAA) entertainment groups send out tens of thousands of automated copyright takedown notices to ISPs at a cost of pennies, with little or no human oversight. The people who get erroneously fingered as pirates (as a Recording Industry Association of America (RIAA) spokesperson charmingly puts it, “When you go fishing with a dragnet, sometimes you catch a dolphin.”) spend days or weeks convincing their ISPs that they had the right to post their videos, music, and text-files.

We need an immune system. There are plenty of bad guys out there, and technology gives them force-multipliers (like the hackers who run 250,000-PC botnets). Still, there’s a terrible asymmetry in a world where defensive takedowns are automatic, but correcting mistaken takedowns is done by hand.

$$$$

All Complex Ecosystems Have Parasites

(Paper delivered at the O’Reilly Emerging Technology Conference, San Diego, California, 16 March 2005)

AOL hates spam. AOL could eliminate nearly 100 percent of its subscribers’ spam with one easy change: it could simply shut off its internet gateway. Then, as of yore, the only email an AOL subscriber could receive would come from another AOL subscriber. If an AOL subscriber sent a spam to another AOL subscriber and AOL found out about it, they could terminate the spammer’s account. Spam costs AOL millions, and represents a substantial disincentive for AOL customers to remain with the service, and yet AOL chooses to permit virtually anyone who can connect to the Internet, anywhere in the world, to send email to its customers, with any software at all.

Email is a sloppy, complicated ecosystem. It has organisms of sufficient diversity and sheer number as to beggar the imagination: thousands of SMTP agents, millions of mail-servers, hundreds of millions of users. That richness and diversity lets all kinds of innovative stuff happen: if you go to nytimes.com and “send a story to a friend,” the NYT can convincingly spoof your return address on the email it sends to your friend, so that it appears that the email originated on your computer. Also: a spammer can harvest your email and use it as a fake return address on the spam he sends to your friend. Sysadmins have server processes that send them mail to secret pager-addresses when something goes wrong, and GPLed mailing-list software gets used by spammers and people running high-volume mailing lists alike.

You could stop spam by simplifying email: centralize functions like identity verification, limit the number of authorized mail agents and refuse service to unauthorized agents, even set up tollbooths where small sums of money are collected for every email, ensuring that sending ten million messages is too expensive to contemplate without a damned high expectation of return on investment. If you did all these things, you’d solve spam.

By breaking email.

Small server processes that mail a logfile to five sysadmins every hour just in case would be prohibitively expensive. Convincing the soviet that your bulk-mailer was only useful to legit mailing lists and not spammers could take months, and there’s no guarantee that it would get their stamp of approval at all. With verified identity, the NYTimes couldn’t impersonate you when it forwarded stories on your behalf — and Chinese dissidents couldn’t send out their samizdata via disposable Gmail accounts.

An email system that can be controlled is an email system without complexity. Complex ecosystems are influenced, not controlled.

The Hollywood studios are conniving to create a global network of regulatory mandates over entertainment devices. Here they call it the Broadcast Flag; in Europe, Asia, Australia and Latin America it’s called DVB Copy Protection Content Management. These systems purport to solve the problem of indiscriminate redistribution of broadcast programming via the Internet, but their answer to the problem, such as it is, is to require that everyone who wants to build a device that touches video must first get permission.

If you want to make a TV, a screen, a video-card, a high-speed bus, an analog-to-digital converter, a tuner card, a DVD burner — any tool that you hope to be lawful for use in connection with digital TV signals — you’ll have to go on bended knee to get permission to deploy it. You’ll have to convince FCC bureaucrats or a panel of Hollywood companies and their sellout IT and consumer electronics toadies that the thing you’re going to bring to market will not disrupt their business models.

That’s how DVD works today: if you want to make a DVD player, you need to ask permission from a shadowy organization called the DVD-CCA. They don’t give permission if you plan on adding new features — that’s why they’re suing Kaleidescape for building a DVD jukebox that can play back your movies from a hard-drive archive instead of the original discs.

CD has a rich ecosystem, filled with parasites — entrepreneurial organisms that move to fill every available niche. If you spent a thousand bucks on CDs ten years ago, the ecosystem for CDs still rewards you handsomely. In the intervening decade, parasites have found opportunity after opportunity to suck value out of the products on offer from the labels and the dupe houses, offering you the tools to convert your CDs to ringtones, karaoke, MP3s, MP3s on iPods and other players, MP3s on CDs that hold a thousand percent more music — and on and on.

DVDs live in a simpler, slower ecosystem, like a terrarium in a bottle where a million species have been pared away to a manageable handful. DVDs pay no such dividend. A thousand dollars’ worth of ten-year-old DVDs are good for just what they were good for ten years ago: watching. You can’t put your kid into her favorite cartoon, you can’t downsample the video to something that plays on your phone, and you certainly can’t lawfully make a hard-drive-based jukebox from your discs.

The yearning for simple ecosystems is endemic among people who want to “fix” some problem of bad actors on the networks.

Take interoperability: you might sell me a database in the expectation that I’ll only communicate with it using your authorized database agents. That way you can charge vendors a license fee in exchange for permission to make a client, and you can ensure that the clients are well-behaved and don’t trigger any of your nasty bugs.

But you can’t meaningfully enforce that. EDS and other titanic software companies earn their bread and butter by producing fake database clients that impersonate the real thing as they iterate through every record and write it to a text file — or that simply provide a compatibility layer between systems from two different vendors. These companies produce software that lies — parasite software that fills niches left behind by other organisms, sometimes to those organisms’ detriment.

So we have “Trusted Computing,” a system that’s supposed to let software detect other programs’ lies and refuse to play with them if they get caught out fibbing. It’s a system that’s based on torching the rainforest with all its glorious anarchy of tools and systems and replacing it with neat rows of tame and planted trees, each one approved by The Man as safe for use with his products.

For Trusted Computing to accomplish this, everyone who makes a video-card, keyboard, or logic-board must receive a key from some certifying body that will see to it that the key is stored in a way that prevents end-users from extracting it and using it to fake signatures.

But if one keyboard vendor doesn’t store his keys securely, the system will be useless for fighting keyloggers. If one video-card vendor lets a key leak, the system will be no good for stopping screenlogging. If one logic-board vendor lets a key slip, the whole thing goes out the window. That’s how DVD DRM got hacked: one vendor, Xing, left its keys in a place where users could get at them, and then anyone could break the DRM on any DVD.

Not only is the Trusted Computing advocates’ goal — producing a simpler software ecosystem — wrongheaded, but the methodology is doomed. Fly-by-night keyboard vendors in distant free trade zones just won’t be 100 percent compliant, and Trusted Computing requires no less than perfect compliance.

The whole of DRM is a macrocosm of Trusted Computing. The DVB Copy Protection system relies on a set of rules for translating every one of its restriction states — such as “copy once” and “copy never” — to states in other DRM systems that are licensed to receive its output. That means that they’re signing up to review, approve and write special rules for every single entertainment technology in existence and every technology that will be invented in the future.

Madness: shrinking the ecosystem of everything you can plug into your TV down to the subset that these self-appointed arbiters of technology approve is a recipe for turning the electronics, IT and telecoms industries into something as small and unimportant as Hollywood. Hollywood — which is a tenth the size of IT, itself a tenth the size of telecoms.

In Hollywood, your ability to make a movie depends on the approval of a few power-brokers who have signing authority over the two-hundred-million-dollar budgets for making films. As far as Hollywood is concerned, this is a feature, not a bug. Two weeks ago, I heard the VP of Technology for Warners give a presentation in Dublin on the need to adopt DRM for digital TV, and his money-shot, his big convincer of a slide went like this:

“With advances in processing power, storage capacity and broadband access… EVERYBODY BECOMES A BROADCASTER!”

Heaven forfend.

Simple ecosystems are the goal of proceedings like CARP, the panel that set out the ruinously high royalties for webcasters. The recording industry set the rates as high as they did so that the teeming millions of webcasters would be rendered economically extinct, leaving behind a tiny handful of giant companies that could be negotiated with around a board room table, rather than dealt with by blanket legislation.

The razing of the rainforest has a cost. It’s harder to send a legitimate email today than it ever was — thanks to a world of closed SMTP relays. The cries for a mail-server monoculture grow more shrill with every passing moment. Just last week, it was a call for every mail-administrator to ban the “vacation” program that sends out automatic responses informing senders that the recipient is away from email for a few days, because mailboxes that run vacation can cause “spam blowback” where accounts send their vacation notices to the hapless individuals whose email addresses the spammers have substituted on the email’s Reply-To line.
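The blowback mechanism itself is simple enough to sketch. The snippet below is a toy model, not any real mail server’s behavior, and every name and address in it is invented: a naive autoresponder trusts whatever Reply-To header the sender supplies.

```python
# Toy model of "spam blowback" from a vacation autoresponder.
# All addresses here are invented for illustration.

def vacation_autoreply(incoming):
    """A naive autoresponder: reply to whatever address the sender supplied."""
    reply_to = incoming.get("Reply-To") or incoming["From"]
    return {"To": reply_to, "Body": "I'm away from email for a few days."}

# A spammer substitutes a stranger's address on the Reply-To line...
spam = {
    "From": "bulkmailer@spammer.example",
    "Reply-To": "innocent@example.org",  # the hapless blowback victim
    "Body": "Buy now!",
}

# ...so the vacation notice goes to the innocent party, not the spammer.
notice = vacation_autoreply(spam)
print(notice["To"])  # innocent@example.org
```

Multiply that by a few million forged spams and the innocent address drowns in vacation notices — which is the failure the mail-administrators proposed to cure by banning the program outright.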

And yet there is more spam than there ever was. All the costs we’ve paid for fighting spam have added up to no benefit: the network is still overrun and sometimes even overwhelmed by spam. We’ve let the network’s neutrality and diversity be compromised, without receiving the promised benefit of spam-free inboxes.

Likewise, DRM has exacted a punishing toll wherever it has come into play, costing us innovation, free speech, research and the public’s rights in copyright. And likewise, DRM has not stopped infringement: today, infringement is more widespread than ever. All those costs borne by society in the name of protecting artists and stopping infringement, and not a penny put into an artist’s pocket, not a single DRM-restricted file that can’t be downloaded for free and without encumbrance from a P2P network.

Everywhere we look, we find people who should know better calling for a parasite-free Internet. Science fiction writers are supposed to be forward looking, but they’re wasting their time demanding that Amazon and Google make it harder to piece together whole books from the page-previews one can get via the look-inside-the-book programs. They’re even cooking up programs to spoof deliberately corrupted ebooks into the P2P networks, presumably to assure the few readers the field has left that reading science fiction is a mug’s game.

The amazing thing about the failure of parasite-elimination programs is that their proponents have concluded that the problem is that they haven’t tried hard enough — with just a few more species eliminated, a few more policies imposed, paradise will spring into being. Their answer to an unsuccessful strategy for fixing the Internet is to try the same strategy, only moreso — only fill those niches in the ecology that you can sanction. Hunt and kill more parasites, no matter what the cost.

We are proud parasites, we Emerging Techers. We’re engaged in perl whirling, pythoneering, lightweight javarey — we hack our cars and we hack our PCs. We’re the rich humus carpeting the jungle floor and the tiny frogs living in the bromeliads.

The long tail — Chris Anderson’s name for the 95% of media that isn’t top sellers, but which, in aggregate, accounts for more than half the money on the table for media vendors — is the tail of bottom-feeders and improbable denizens of the ocean’s thermal vents. We’re unexpected guests at the dinner table and we have the nerve to demand a full helping.

Your ideas are cool and you should go and make them real, even if they demand the kind of ecological diversity that seems to be disappearing around us.

You may succeed — provided that your plans don’t call for a simple ecosystem where only you get to provide value and no one else gets to play.

$$$$

READ CAREFULLY

(Originally published as “Shrinkwrap Licenses: An Epidemic Of Lawsuits Waiting To Happen” in InformationWeek, February 3, 2007)

READ CAREFULLY. By reading this article, you agree, on behalf of your employer, to release me from all obligations and waivers arising from any and all NON-NEGOTIATED agreements, licenses, terms-of-service, shrinkwrap, clickwrap, browsewrap, confidentiality, non-disclosure, non-compete and acceptable use policies (“BOGUS AGREEMENTS”) that I have entered into with your employer, its partners, licensors, agents and assigns, in perpetuity, without prejudice to my ongoing rights and privileges. You further represent that you have the authority to release me from any BOGUS AGREEMENTS on behalf of your employer.

READ CAREFULLY — all in caps, and what it means is, “IGNORE THIS.” That’s because the small print in the clickwrap, shrinkwrap, browsewrap and other non-negotiated agreements is both immutable and outrageous.

Why read the “agreement” if you know that:

1) No sane person would agree to its text, and

2) Even if you disagree, no one will negotiate a better agreement with you?

We seem to have sunk to a kind of playground system of forming contracts. There are those who will tell you that you can form a binding agreement just by following a link, stepping into a store, buying a product, or receiving an email. By standing there, shaking your head, shouting “NO NO NO I DO NOT AGREE,” you agree to let me come over to your house, clean out your fridge, wear your underwear and make some long-distance calls.

If you buy a downloadable movie from Amazon Unbox, you agree to let them install spyware on your computer, delete any file they don’t like on your hard-drive, and cancel your viewing privileges for any reason. Of course, it goes without saying that Amazon reserves the right to modify the agreement at any time.

The worst offenders are people who sell you movies and music. Running a close second are the people who sell you software, or who provide services over the Internet. There’s a rubric to this — you’re getting a discount in exchange for signing onto an abusive agreement, but just try to find the software that doesn’t come with one of these “agreements” — at any price.

For example, Vista, Microsoft’s new operating system, comes in a rainbow of flavors varying in price from $99 to $399, but all of them come with the same crummy terms of service, which state that “you may not work around any technical limitations in the software,” and that Windows Defender, the bundled anti-malware program, can delete any program from your hard drive that Microsoft doesn’t like, even if it breaks your computer.

It’s bad enough when this stuff comes to us through deliberate malice, but it seems that bogus agreements can spread almost without human intervention. Google any obnoxious term or phrase from a EULA, and you’ll find that the same phrase appears in dozens — perhaps thousands — of EULAs around the Internet. Like snippets of DNA being passed from one virus to another as they infect the world’s corporations in a pandemic of idiocy, terms of service are semi-autonomous entities.

Indeed, when rocker Billy Bragg read the fine print on the MySpace user agreement, he discovered that it appeared that site owner Rupert Murdoch was laying claim to copyrights in every song uploaded to the site, in a silent, sinister land-grab that turned the media baron into the world’s most prolific and indiscriminate hoarder of garage-band tunes.

However, the EULA that got Bragg upset wasn’t a Murdoch innovation — it dates back to the earliest days of the service. It seems to have been posted at a time when the garage entrepreneurs who built MySpace were in no position to hire pricey counsel — something borne out by the fact that the old MySpace EULA appears nearly verbatim on many other services around the Internet. It’s not going out very far on a limb to speculate that MySpace’s founders merely copied a EULA they found somewhere else, without even reading it, and that when Murdoch’s due diligence attorneys were preparing to give these lucky fellows $600,000,000, they couldn’t be bothered to read the terms of service anyway.

In their defense, EULAese is so mind-numbingly boring that it’s a kind of torture to read these things. You can hardly blame them.

But it does raise the question — why are we playing host to these infectious agents? If they’re not read by customers or companies, why bother with them?

If you wanted to really be careful about this stuff, you’d prohibit every employee at your office from clicking on any link, installing any program, creating accounts, signing for parcels — even doing a run to Best Buy for some CD blanks (have you seen the fine print on their credit-card slips?). After all, these people are entering into “agreements” on behalf of their employer — agreements to allow spyware onto your network, to not “work around any technical limitations in their software,” to let malicious software delete arbitrary files from their systems.

So far, very few of us have been really bitten in the ass by EULAs, but that’s because EULAs are generally associated with companies who have products or services they’re hoping you’ll use, and enforcing their EULAs could cost them business.

But that was the theory with patents, too. So long as everyone with a huge portfolio of unexamined, overlapping, generous patents was competing with similarly situated manufacturers, there was a mutually assured destruction — a kind of detente represented by cross-licensing deals for patent portfolios.

But the rise of the patent troll changed all that. Patent trolls don’t make products. They make lawsuits. They buy up the ridiculous patents of failed companies and sue the everloving hell out of everyone they can find, building up a war-chest from easy victories against little guys that can be used to fund more serious campaigns against larger organizations. Since there are no products to disrupt with a countersuit, there’s no mutually assured destruction.

If a shakedown artist can buy up some bogus patents and use them to put the screws to you, then it’s only a matter of time until the same grifters latch onto the innumerable “agreements” that your company has formed with a desperate dot-bomb looking for an exit strategy.

More importantly, these “agreements” make a mockery of the law and of the very idea of forming agreements. Civilization starts with the idea of a real agreement — for example, “We crap here and we sleep there, OK?” — and if we reduce the noble agreement to a schoolyard game of no-takebacks, we erode the bedrock of civilization itself.

$$$$

World of Democracycraft

(Originally published as “Why Online Games Are Dictatorships,” InformationWeek, April 16, 2007)

Can you be a citizen of a virtual world? That’s the question that I keep asking myself, whenever anyone tells me about the wonder of multiplayer online games, especially Second Life, the virtual world that is more creative playground than game.

These worlds invite us to take up residence in them, to invest time (and sometimes money) in them. Second Life encourages you to make stuff using their scripting engine and sell it in the game. You Own Your Own Mods — it’s the rallying cry of the new generation of virtual worlds, an updated version of the old BBS adage from the WELL: You Own Your Own Words.

I spend a lot of time in Disney parks. I even own a share of Disney stock. But I don’t flatter myself that I’m a citizen of Disney World. I know that when I go to Orlando, the Mouse is going to fingerprint me and search my bags, because the Fourth Amendment isn’t a “Disney value.”

Disney even has its own virtual currency, symbolic tokens called Disney Dollars that you can spend or exchange at any Disney park. I’m reasonably confident that if Disney refused to turn my Mickeybucks back into US Treasury Department-issue greenbacks that I could make life unpleasant for them in a court of law.

But is the same true of a game? The money in your real-world bank-account and in your in-game bank-account is really just a pointer in a database. But if the bank moves the pointer around arbitrarily (depositing a billion dollars in your account, or wiping you out), they face a regulator. If a game wants to wipe you out, well, you probably agreed to let them do that when you signed up.

Can you amass wealth in such a world? Well, sure. There are rich people in dictatorships all over the world. Stalin’s favorites had great big dachas and drove fancy cars. You don’t need democratic rights to get rich.

But you do need democratic freedoms to stay rich. In-world wealth is like a Stalin-era dacha, or the diamond fortunes of Apartheid South Africa: valuable, even portable (to a limited extent), but not really yours, not in any stable, long-term sense.

Here are some examples of the difference between being a citizen and a customer:

In January 2006, a World of Warcraft moderator shut down an advertisement for a “GLBT-friendly” guild. This was a virtual club that players could join, whose mission was to be “friendly” to “Gay/Lesbian/Bi/Transgendered” players. The WoW moderator — and Blizzard management — cited a bizarre reason for the shut-down:

“While we appreciate and understand your point of view, we do feel that the advertisement of a ‘GLBT friendly’ guild is very likely to result in harassment for players that may not have existed otherwise. If you will look at our policy, you will notice the suggested penalty for violating the Sexual Orientation Harassment Policy is to ‘be temporarily suspended from the game.’ However, as there was clearly no malicious intent on your part, this penalty was reduced to a warning.”

Sara Andrews, the guild’s creator, made a stink and embarrassed Blizzard (the game’s publisher) into reversing the decision.

In 2004, a player in the MMO EVE Online declared that the game’s creators had stacked the deck against him, calling EVE “a poorly designed game which rewards the greedy and violent, and punishes the hardworking and honest.” He was upset over a change in the game dynamics which made it easier to play a pirate and harder to play a merchant.

The player, “Dentara Rask,” wrote those words in the preamble to a tell-all memoir detailing an elaborate Ponzi scheme that he and an accomplice had perpetrated in EVE. The two of them had bilked EVE’s merchants out of a substantial fraction of the game’s total GDP and then resigned their accounts. The objective was to punish the game’s owners for their gameplay decisions by crashing the game’s economy.

In both of these instances, players — residents of virtual worlds — resolved their conflicts with game management through customer activism. That works in the real world, too, but when it fails, we have the rule of law. We can sue. We can elect new leaders. When all else fails, we can withdraw all our money from the bank, sell our houses, and move to a different country.

But in virtual worlds, these recourses are off-limits. Virtual worlds can and do freeze players’ wealth for “cheating” (amassing gold by exploiting loopholes in the system), for participating in real-world gold-for-cash exchanges (eBay recently put an end to this practice on its service), or for violating some other rule. The rules of virtual worlds are embodied in EULAs, not Constitutions, and are always “subject to change without notice.”

So what does it mean to be “rich” in Second Life? Sure, you can have a thriving virtual penis business in game, one that returns a healthy sum of cash every month. You can even protect your profits by regularly converting them to real money. But if you lose an argument with Second Life’s parent company, your business vanishes. In other words, the only stable in-game wealth is the wealth you take out of the game. Your virtual capital investments are totally contingent. Piss off the wrong exec at Linden Labs, Blizzard, Sony Online Entertainment, or Sulake and your little in-world business could disappear for good.

Well, what of it? Why not just create a “democratic” game that has a constitution, full citizenship for players, and all the prerequisites for stable wealth? Such a game would be open source (so that other, interoperable “nations” could be established for you to emigrate to if you don’t like the will of the majority in one game-world), and run by elected representatives who would instruct the administrators and programmers as to how to run the virtual world. In the real world, the TSA sets the rules for aviation — in a virtual world, the equivalent agency would determine the physics of flight.

The question is, would this game be any fun? Well, democracy itself is pretty fun — where “fun” means “engrossing and engaging.” Lots of people like to play the democracy game, whether by voting every four years or by moving to K Street and setting up a lobbying operation.

But video games aren’t quite the same thing. Gameplay conventions like “grinding” (repeating a task), “leveling up” (attaining a higher level of accomplishment), “questing” and so on are functions of artificial scarcity. The difference between a character with 10,000,000 gold pieces and a giant, rare, terrifying crossbow and a newbie player is which pointers are associated with each character’s database entry. If the elected representatives direct that every player should have the shiniest armor, best space-ships, and largest bank-balances possible (this sounds like a pretty good election platform to me!), then what’s left to do?
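The argument can be made concrete with a toy character database. The schema and values below are invented for illustration; the point is that in-game “scarcity” is nothing but column values, which a government of the players could equalize with a single statement.

```python
import sqlite3

# A toy MMO character table -- invented schema, invented values.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE characters (name TEXT, gold INTEGER, item TEXT)")
db.execute("INSERT INTO characters VALUES ('veteran', 10000000, 'rare crossbow')")
db.execute("INSERT INTO characters VALUES ('newbie', 12, 'rusty dagger')")

# The whole difference between the two players is a pair of column values,
# so an elected government could abolish scarcity with one UPDATE:
db.execute("UPDATE characters SET gold = 10000000, item = 'rare crossbow'")

rows = db.execute("SELECT DISTINCT gold, item FROM characters").fetchall()
print(rows)  # [(10000000, 'rare crossbow')] -- everyone identical, nothing left to grind for
```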

Oh sure, in Second Life they have an interesting crafting economy based on creating and exchanging virtual objects. But these objects are also artificially scarce — that is, the ability of these objects to propagate freely throughout the world is limited only by the software that supports them. It’s basically the same economics of the music industry, but applied to every field of human endeavor in the entire (virtual) world.

Fun matters. Real world currencies rise and fall based, in part, on the economic might of the nations that issue them. Virtual world currencies are more strongly tied to whether there’s any reason to spend the virtual currency on the objects that are denominated in it. 10,000 EverQuest golds might trade for $100 on a day when that same sum will buy you a magic EQ sword that enables you to play alongside the most interesting people online, running the most fun missions online. But if all those players out-migrate to World of Warcraft, and word gets around that Warlord’s Command is way more fun than anything in poor old creaky EverQuest, your EverQuest gold turns into Weimar Deutschemarks, a devalued currency that you can’t even give away.

This is where the plausibility of my democratic, co-operative, open source virtual world starts to break down. Elected governments can field armies, run schools, provide health care (I’m a Canadian), and bring acid lakes back to health. But I’ve never done anything run by a government agency that was a lot of fun. It’s my sneaking suspicion that the only people who’d enjoy playing World of Democracycraft would be the people running for office there. The players would soon find themselves playing IRSQuest, Second Notice of Proposed Rulemaking Life, and Caves of 27 Stroke B.

Maybe I’m wrong. Maybe customership is enough of a rock to build a platform of sustainable industry upon. It’s not as though entrepreneurs in Dubai have a lot of recourse if they get on the wrong side of the Emir, or as though Singaporeans get to appeal the decisions of President Nathan, and yet there’s plenty of industry in both places.

And hell, maybe bureaucracies have hidden reserves of fun that have been lurking there, waiting for the chance to bust out and surprise us all.

I sure hope so. These online worlds are endlessly diverting places. It’d be a shame if it turned out that cyberspace was a dictatorship — benevolent or otherwise.

$$$$

Snitchtown

(Originally published in Forbes.com, June 2007)

The 12-story Hotel Torni was the tallest building in central Helsinki in the years after World War II, making it a natural choice to house the Soviet-led Control Commission that oversaw postwar Finland, and the KGB operations that came with it. Today, it bears a plaque testifying to its checkered past, and also noting the curious fact that the Finns pulled 40 kilometers of wiretap cable out of the walls after the KGB left. The wire was solid evidence of each operative’s mistrustful surveillance of his fellow agents.

The East German Stasi also engaged in rampant surveillance, using a network of snitches to assemble secret files on every resident of East Berlin. They knew who was telling subversive jokes—but missed the fact that the Wall was about to come down.

When you watch everyone, you watch no one.

This seems to have escaped the operators of the digital surveillance technologies that are taking over our cities. In the brave new world of doorbell cams, wi-fi sniffers, RFID passes, bag searches at the subway and photo lookups at office security desks, universal surveillance is seen as the universal solution to all urban ills. But the truth is that ubiquitous cameras only serve to violate the social contract that makes cities work.

The key to living in a city and peacefully co-existing as a social animal in tight quarters is to set a delicate balance of seeing and not seeing. You take care not to step on the heels of the woman in front of you on the way out of the subway, and you might take passing note of her most excellent handbag. But you don’t make eye contact and exchange a nod. Or even if you do, you make sure that it’s as fleeting as it can be.

Checking your mirrors is good practice even in stopped traffic, but staring and pointing at the schmuck next to you who’s got his finger so far up his nostril he’s in danger of lobotomizing himself is bad form—worse form than picking your nose, even.

I once asked a Japanese friend to explain why so many people on the Tokyo subway wore surgical masks. Are they extreme germophobes? Conscientious folks getting over a cold? Oh, yes, he said, yes, of course, but that’s only the rubric. The real reason to wear the mask is to spare others the discomfort of seeing your facial expression, to make your face into a disengaged, unreadable blank—to spare others the discomfort of firing up their mirror neurons in order to model your mood based on your outward expression. To make it possible to see without seeing.

There is one city dweller that doesn’t respect this delicate social contract: the closed-circuit television camera. Ubiquitous and demanding, CCTVs don’t have any visible owners. They … occur. They exist in the passive voice, the “mistakes were made” voice: “The camera recorded you.”

They are like an emergent property of the system, of being afraid and looking for cheap answers. And they are everywhere: In London, residents are photographed more than 300 times a day.

The irony of security cameras is that they watch, but nobody cares that they’re looking. Junkies don’t worry about CCTVs. Crazed rapists and other purveyors of sudden, senseless violence aren’t deterred. I was mugged twice on my old block in San Francisco by the crack dealers on my corner, within sight of two CCTVs and a police station. My rental car was robbed by a junkie in a Gastown garage in Vancouver in sight of a CCTV.

Three mad kids followed my friend out of the Tube in London last year and murdered him on his doorstep.

Crazy, desperate, violent people don’t apply rational calculus to their lives. Anyone who becomes a junkie, crack dealer, or cellphone-stealing stickup artist is obviously bad at making life decisions. They’re not deterred by surveillance.

Yet the cameras proliferate, and replace human eyes. The cops on my block in San Francisco stayed in their cars and let the cameras do the watching. The Tube station didn’t have any human guards after dark, just a CCTV to record the fare evaders.

Now London city councils are installing new CCTVs with loudspeakers, operated by remote coppers who can lean in and make a speaker bark at you, “Citizen, pick up your litter.” “Stop leering at that woman.” “Move along.”

Yeah, that’ll work.

Every day the glass-domed cameras proliferate, and the gate-guarded mentality of the deep suburbs threatens to invade our cities. More doorbell webcams, more mailbox cams, more cams in our cars.

The city of the future is shaping up to be a neighborly Panopticon, leeched of the cosmopolitan ability to see, and not be seen, where every nose pick is noted and logged and uploaded to the Internet. You don’t have anything to hide, sure, but there’s a reason we close the door to the bathroom before we drop our drawers. Everyone poops, but it takes a special kind of person to want to do it in public.

The trick now is to contain the creeping cameras of the law. When the city surveils its citizens, it legitimizes our mutual surveillance—what’s the difference between the cops watching your every move, the mall owners watching you, and you watching the guy next door?

I’m an optimist. I think our social contracts are stronger than our technology. They’re the strongest bonds we have. We don’t aim telescopes through each other’s windows, because only creeps do that.

But we need to reclaim the right to record our own lives as they proceed. We need to reverse decisions like the one that allowed the New York Metropolitan Transit Authority to line subway platforms with terrorism cameras, but said riders may not take snapshots in the station. We need to win back the right to photograph our human heritage in museums and galleries, and we need to beat back the snitch-cams rent-a-cops use to make our cameras stay in our pockets.

They’re our cities and our institutions. And we choose the future we want to live in.

$$$$

Hope you enjoyed it! The actual, physical object that corresponds to this book is superbly designed, portable, and makes a great gift:

http://craphound.com/content/buy

If you would rather make a donation, you can buy a copy of the book for a worthy school, library or other institution of your choosing:

http://craphound.com/content/donate

$$$$

About the Author

Cory Doctorow (craphound.com) is an award-winning novelist, activist, blogger and journalist. He is the co-editor of Boing Boing (boingboing.net), one of the most popular blogs in the world, and has contributed to The New York Times Sunday Magazine, The Economist, Forbes, Popular Science, Wired, Make, InformationWeek, Locus, Salon, Radar, and many other magazines, newspapers and websites.

His novels and short story collections include Someone Comes to Town, Someone Leaves Town, Down and Out in the Magic Kingdom, Overclocked: Stories of the Future Present and his most recent novel, a political thriller for young adults called Little Brother, published by Tor Books in May 2008. All of his novels and short story collections are available as free downloads under the terms of various Creative Commons licenses.

Doctorow is the former European Director of the Electronic Frontier Foundation (eff.org) and has participated in many treaty-making, standards-setting and regulatory and legal battles in countries all over the world. In 2006/2007, he was the inaugural Canada/US Fulbright Chair in Public Diplomacy at the Annenberg Center at the University of Southern California. In 2007, he was also named one of the World Economic Forum’s “Young Global Leaders” and one of Forbes Magazine’s top 25 “Web Celebrities.”

Born in Toronto, Canada in 1971, he is a four-time university dropout. He now resides in London, England with his wife and baby daughter, where he does his best to avoid the ubiquitous surveillance cameras while roaming the world, speaking on copyright, freedom and the future.

$$$$
