Pluralism and progress

Kim Cameron makes an excellent riposte to Doc Searls’s comment that “two ways to do identity is one too many.”

Kim writes: “The problem is that - at the same time - one way to do identity is too few.  And this is what explains why the creation of a universal system of identity is one of the greatest challenges blocking the evolution of technology and the virtual world.”

Kim goes on to cite the idea of pluralism as a law relating to identity systems, but it is more widely applicable than that. One way is always too few ways to do anything. Evolution demands diversity, survival all the more so. Doc goes into this at length, adding:

First, politics hasn’t spoiled identity yet, because identity has hardly started as a topic, much less as a useful service for anybody on the Net. For all the effort that’s gone into identity, it’s still newer than RSS was when it still stood for Rich Site Summary (or whatever it was) rather than the Really Simple Syndication all of us bloggers employ today. And while the Liberty Alliance and its participants deserve credit for all their work around federation, that’s still mostly BigCo stuff that benefits us as individuals mostly on the back ends of our relationships with various companies. We need something that works for us, as individuals, in a simple and obvious way.

Second, identity won’t happen as a service unless it comes up from the grass roots, from independent developers, the users who support them, and the big guys who follow indie developers and users into the marketplace. (Think about how the big publishers have deployed RSS, for example.)

I argue that politics is precisely what is saving identity from BigCo solutions.

In a situation where BigCo solutions are eschewed, whether for their failure to convince users or for users’ concern about ceding power to a centralized entity, it is grassroots dissent that prevents a victory by those entities. It would be a good idea to recognize that pluralism is politics and to embrace the debate as essentially good. I know what Doc means by “politics,” but that isn’t politics; it’s market power, which is unrepresentative when existing influence is used to keep competitors from introducing alternative options into technical standards or products.

If we get past damning politics by recognizing that the structure of the debate is what defines its value—an open exchange is far more egalitarian than a closed, industry-led one—we can tap the human passion for participation to reinvigorate standards setting and much more.

I will now remove my Semantics Police hat.

Now, from my identity thinking cap: An identity infrastructure needs a fundamental schema, but it must be extensible, so that every system introduced can communicate basic information while adding functionality. The debate need not revolve around what is included; it is best conducted in terms of what should be excluded, reducing that fundamental schema to the absolute essentials. The result will be a system in which we move between identity infrastructures, becoming more or less identifiable in different contexts; the idea that some of us will be identified one way and others another—that a competition between infrastructures will produce a winner—is misguided.
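The minimal-core-plus-extensions idea can be sketched in a few lines of Python. All field names here are hypothetical illustrations, not drawn from any real identity standard:

```python
# Every identity system shares a tiny required schema; each
# infrastructure layers its own namespaced extensions on top.

CORE_FIELDS = {"subject_id", "issuer"}  # the irreducible essentials

def make_identity(subject_id, issuer, **extensions):
    """Build an identity record: a fixed core plus arbitrary extensions."""
    record = {"subject_id": subject_id, "issuer": issuer}
    # Extensions are namespaced so competing systems cannot collide.
    record.update({f"ext:{key}": value for key, value in extensions.items()})
    return record

def core_view(record):
    """What any system can read, regardless of which extensions it knows."""
    return {k: v for k, v in record.items() if k in CORE_FIELDS}

# A record from one hypothetical infrastructure: its extensions travel
# along, but any other system can still read the core.
alice = make_identity("alice", "example-idp",
                      display_name="Alice", age_over_18=True)
```

The point of the sketch is the asymmetry: systems compete on extensions, but interoperate on the stripped-down core, so no single infrastructure has to win.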

Creative and apart, but part of the world

I like this. Hugh MacLeod does a nice job of explaining quickly and entertainingly how hard it is to be creative. I particularly agree with this:

“Don’t try to stand out from the crowd; avoid crowds altogether.”

Avoid labels and joining, too, while living in community. Folks have asked what my email handle, godsdog, means. It was originally “coyote@well.com,” and somewhere along the way, when I moved to Netcom, the handle “coyote” was taken, so I chose “godsdog,” the Hopi nickname for the coyote. There’s a trickster connotation, which is what most people think of when I explain this, but the real reason is that a coyote is a shy dog that stays at the periphery, watching everything. It’s not that I’m anti-social; I’ve just always agreed with Groucho Marx’s maxim: “I don’t care to belong to a club that accepts people like me as members.”

Two more Herring postings on tech business in 2005

For your consideration at The Red Herring:

  • Apple’s one-percent solution: Apple Computer’s market share is still meager in contrast to those of Dell, Hewlett-Packard, and others, but the company has reinvented its computer business on the shoulders of the little iPod. During 2005, Apple clawed back slightly more than 1 percent of U.S. PC market share and is projected to finish the year at 4.9 percent of PCs sold.
  • Microsoft’s very bad year: Microsoft, the largest and most feared software developer, was stopped in its tracks this year by the hobbits, elves, and free people of the IT world. While it isn’t in the intensive care unit, the Redmond, Washington-based behemoth has scars from the battle and is looking ahead to 2006 as a turnaround year as important as its legendary turn to address the World Wide Web in 1995.

My brief decapitation to cap a decade of savings!

The site was temporarily inaccessible for about 20 hours as I switched to a new DSL service. For the first time since 1996, I don’t have a synchronous T-1-level connection to the Internet from my home, but I’ve also seen savings over that time that, in these tight economic times, justify surrendering some upstream throughput. In fact, I am now paying 93 percent less for connectivity than back then.

I moved from a $310-a-month 1.5 Mbps/1.5 Mbps DSL connection to a 3.0 Mbps/768 Kbps DSL connection that costs only $110 a month. For about $2,400 a year in savings, I can live with the slower upstream connection; by offloading some fileserving to a hosted site I have, I can provide the same or better site performance for you.

Back when I got my first T-1, in summer 1996, the circuit cost $1,700 a month, or $20,400 a year. It paid off in spades, allowing me to experiment with Web sites while offering my writing and research customers very fast turnaround on work, because I was not sitting and waiting for a dial-up connection to load pages. Where another writer might have to wait five or ten seconds for a page to load—longer with a lot of graphics—I was getting pages in a second. Over the course of 1,000 pages of surfing, I might pick up 100 minutes of extra time that my competition lost to waiting for information. That allowed me to write more and, thus, earn more.

The fast connection, for instance, allowed me to take on producing a daily news site, ZDY2K, for Ziff-Davis, from home. When I joined ON24, it allowed me to work remotely a day or two a week, keeping my family in the Northwest, where we wanted to be, rather than picking up and going back to San Francisco. Only when I joined an industry that valued face time over delivery of intelligence—investment banking—did my remoteness become an issue (along with the general decline in the business, so it wasn’t a singular problem), and I moved on, back to living over the connection rather than flying to SFO every Monday and home every Thursday, as I’d been doing for many of the years since 1993.

Early on, I learned that synchronous connections were powerful because they offered you a voice equal to or larger than a corporation employing hundreds. So why abandon synchrony now? Simple: with the distributed computing facilitated by the Net, I can put files on servers with faster full-time connections while benefiting from an almost 100 percent increase in download speed—for $200 less a month. In less than a decade, my home broadband expenses have fallen by $19,080 a year—93.5 percent. That’s pretty phenomenal, and awfully bad news for the folks who build and provision broadband services, since if this pace continues it suggests that by 2014 the typical home broadband connection will cost somewhere on the order of $16 a month.
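The arithmetic behind those figures, using only the dollar amounts given above:

```python
# Checking the connectivity-cost figures in the post.
# The monthly prices come from the text; the rest is arithmetic.

t1_monthly = 1700   # 1996 T-1 circuit, dollars per month
dsl_old = 310       # 1.5/1.5 Mbps DSL, dollars per month
dsl_new = 110       # 3.0 Mbps / 768 Kbps DSL, dollars per month

# Savings from the DSL-to-DSL switch, per year.
annual_dsl_savings = (dsl_old - dsl_new) * 12

# The decade-long drop: T-1 in 1996 versus DSL today, per year.
decade_annual_drop = (t1_monthly - dsl_new) * 12

# The same drop expressed as a percentage of the original cost.
percent_drop = (t1_monthly - dsl_new) / t1_monthly * 100

print(annual_dsl_savings)       # 2400
print(decade_annual_drop)       # 19080
print(round(percent_drop, 1))   # 93.5
```

The $2,400, $19,080, and 93.5 percent figures all check out against the stated prices.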

No one has a single connection to the Net anymore, because we maintain many virtual connections through hosted services. There’s an important distinction in this: I am not talking about keeping separate accounts to connect, as we did with dial-up and dedicated circuits in 1996 and still do with wireless, dial-up, dedicated and roaming services today. Today, we also maintain presence—access to our knowledge and ourselves—through Web services.

The cool trick now is learning to distribute your upstream communication to maximize the performance for a lower price. At a time when connectivity is completely commoditized, plainly evident in the drastic slope of my cost curve, it’s not how fast your connection is that matters; it’s how you use multiple connections to stay jacked in to the things you need to know and, more importantly, how you allow others to stay jacked in to you.

Personal Grid Computing

Luc Julia, the founder and CTO of Orb Networks, has engaged me in a good-natured dialogue about my posting last week, which said that the coverage his company is winning sets it up for a promise to the consumer it can’t keep. He says I need to see the technology work to believe it, and so I must, but I also found out more that should be put out for consideration.

The “streaming live TV” demo touted by the company and by AlwaysOn’s Tony Perkins—the crux of my concern that the company is over-promising, because it depends on too many network hops outside Orb’s control—requires a Media Center PC, and also works with a PC with an unsupported Hauppauge tuner card, but it is more or less the fluff on what is a meaningful system for sharing media and handling DRM issues. The demo described by Tony Perkins did take place over a 44 Kbps wireless network connection from Cingular, according to Luc Julia.

As I’ve explained, there are other players moving into this market, some with service ambitions similar to Orb’s and others with the goal of being acquired by a Cisco that wants to build media-sharing services into its products.

I also made the point, which Luc Julia agreed with, that describing this system as “distributed computing” is somewhat problematic, because Orb’s server does play a role—both as a sort of router and as a provider of key-management services that can allow secure access to subscription content. The Orb network offloads encoding of audio and video to the user’s PC, but that is not truly distributed computing, because the system doesn’t work if Orb’s server isn’t available.

Here’s where I think we are going with all these systems: Personal Grid Computing. All the computational devices we own will work in concert, with a conductor orchestrating the process (Orb’s server plays that role here) of allocating processor cycles to specific tasks, such as calculating optimal screen dimensions and bit depth and encoding the file, to get a video from a PC to a handheld.
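A toy sketch of what such a conductor might do. The device names, capability sets, and task steps here are all hypothetical, not Orb’s actual design:

```python
# A personal grid: the conductor assigns each step of a media task to
# whichever of the user's devices can handle it, preferring the most
# capable one. Everything here is an illustrative assumption.

devices = {
    "desktop_pc":  {"cpu_score": 10, "can": {"transcode", "resize"}},
    "home_server": {"cpu_score": 6,  "can": {"transcode"}},
    "handheld":    {"cpu_score": 1,  "can": {"playback"}},
}

def conduct(task_steps):
    """Map each step to the strongest device that supports it."""
    plan = []
    for step in task_steps:
        capable = [name for name, spec in devices.items()
                   if step in spec["can"]]
        if not capable:
            raise RuntimeError(f"no device can perform {step!r}")
        best = max(capable, key=lambda name: devices[name]["cpu_score"])
        plan.append((step, best))
    return plan

# Getting a video from a PC to a handheld: resize, transcode, play back.
plan = conduct(["resize", "transcode", "playback"])
```

Here the heavy steps land on the desktop PC and only playback lands on the handheld, which is the whole appeal: the weakest device in the grid still gets the result.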

As I’ve written here and here, this is a market that will quickly commodify, and I wonder about Orb’s business model. But as Luc Julia said, it’s not likely that the first business model at any company will be exactly right, and I’ll give him that. Just don’t hang the business on live TV streaming; there’s so much more that personal grid computing will do. And keep an eye on this business, because it will grow—both in the number of competitors and the amount of money invested—during 2005.

Forbes columnist wants his video iPod

Arik Hesseldahl, a columnist at Forbes, wants Apple to “solve” the video-player problem. He describes an 80-GB iPod capable of playing back programs recorded or downloaded at his leisure. I still say that capturing video is more important than merely playing it, as evidenced by people’s use of camera phones. They not only want to see what they want when they want it (The Sopranos in the palm of your hand); they want to film their lives (look at the waves of tsunami video today, all shot by tourists) and share it.

Think about your eyes. What do you do with them all day long? Look at a TV screen or look around? We do both, but we do the latter more, and a device that enhances those daily experiences will capture public attention.

The iPod touch wheel is an ideal tool for quickly setting insertion points in video, editing it, and posting it to a site or sharing it with someone over the network.

Tsunamis and U.S. media

One of my most vivid early memories is the time my bed chased me out of my bedroom, crashing into the door frame behind me as I stumbled down the hall toward my parents’ room. The floor was shaking and I couldn’t keep from lurching into the wall. That was March 27, 1964, when part of the Pacific plate slipped, causing the earthquake that yesterday’s Sumatra quake is being compared to (“the worst earthquake in 40 years”). I was almost 1,500 miles from the epicenter of that quake, and it is burned into my memory. Having lived through the Loma Prieta quake in San Francisco, too, these tectonic events are important benchmarks for me that tell me a lot about the nature of media and storytelling.

Before all that, though, the tsunamis that devastated the coastlines of six countries are a terrible tragedy. Give generously to the Red Cross and other relief organizations.

What stood out for me as I watched the news after the quake and tsunami yesterday on U.S., British and Canadian television, was the U.S. media’s reliance on a few uninformative images.

For several hours after the waves struck Thailand, Indonesia, Sri Lanka, India, Malaysia and Bangladesh, the lead news on U.S. media was a combination of the inevitable end-of-the-year retrospective reports and Reggie White’s sudden death at 43, certainly a tragic story, but, by comparison to an event that would lead to the death of tens of thousands of people, pretty small stuff. Meanwhile, Canadian and British television was talking through what was known, without video footage, relying on a few still images while describing the scope of the devastation in terms that made clear something of massive import had happened. It was serious coverage that commanded attention.

The story took the lead on most U.S. networks only when the first video images of people dying arrived (a clip of a group being swept into the floodwaters from a walkway or retaining wall, repeated ceaselessly), and even then the story was covered poorly, with a few minutes per hour run over the same images. The only thing that changed was the death toll, which kept rising.

Throughout the day it became clear how under-covered the Indian Ocean region really is, as the only U.S. news people on the ground seemed to be reporters or producers on vacation there rather than covering a beat. By contrast, the BBC, albeit the news organ of the former colonial power in the region, was all over the story, doing a great job.

The U.S. media continues today to focus almost exclusively on images of people being swept away and waves striking the beaches, rather than on the growing human crisis—disease and homelessness—that will likely not be covered at all once the initial shock of the event passes. Yet it is the global response to that massive human disaster that will ultimately define this event. We probably won’t see that story, except in passing.

Americans are blind to much of the world. You can see it in the fact that, even today, the morning news shows are relying on videophone connections to one or two newspeople in the region rather than on bureaus able to cover the story effectively. Keep in mind that, except for a few hundred yards of shoreline in most of the affected region (packed with people, both the poor and tourists) and much wider swaths in low-lying areas, the infrastructure was not destroyed, and any news organization with a sincere interest in covering the story would not have been crippled by the tsunamis.

The U.S. networks just aren’t there. They rely on stringers without real support to cover anything but the highlights of a disaster, or on regional news organizations that do not have the resources a CNN, CBS or ABC has.

Looking back at the Alaska quake, which killed fewer than one percent as many people as yesterday’s disaster even though destruction and death reached Hawaii and the coasts of Oregon and California, you can see how much media has changed. There are a few photographs of the destruction available, but little coverage. Yet this was an event that happened in the United States.

When San Francisco was struck by a quake as the World Series began, every network went live for days. Living there, it was oppressive to watch the CNN feed from a rooftop downtown describing “fires in the Marina District” that we could see as a glow in the night sky. But the media didn’t blink and there are literally thousands of hours of coverage and tens of thousands of images. I remember being in shock from the coverage as much as the earthquake itself, finally tearing myself and my wife away when her father appeared at our front door three days later.

We are used to 24/7 coverage of spectacular murder trials or the round-the-clock coverage of a death like that of John F. Kennedy Jr. or Ronald Reagan. These events seem to be more important than the death of at least 24,000 people; based on the way the stories are covered, Scott Peterson’s trial, which the networks covered non-stop in its final days, is immensely more important. We need to realize that when we program (in the scheduling sense) people’s attention, we are educating them about the world, telling them not to give much weight to a life in Chennai or Phuket. (I wonder if this story would be getting the coverage it is if a lot of Western tourists hadn’t been present, mainly because they provide the networks with English-language accounts that are convenient to present on-air.)

The morning shows (I watched CNN) drifted from top-of-the-hour coverage of the tsunamis to fluff pieces about the coolest Christmas gift and, again, end-of-the-year wrap-ups. The anchors segued from laughing about something trivial to the solemn faces intended to convey the gravity of the situation in the Indian Ocean and Bay of Bengal, only to get right back to laughing about the headaches endured by holiday travelers. What’s the point of the news? To make us happy or to inform us? On days like yesterday and today, the shortcomings of the institutional media are clear.

I wonder, though, how a civic journalism can be organized to respond to those shortcomings. We’ll see. Another encouraging development on that front came to my attention yesterday: Global Voices. Endless interviews with seismologists about the source of tsunamis, and the apparently blameful question “Why don’t they have a warning system?”, certainly aren’t the answer. An organized effort to cover the recovery from this disaster with locals working out of Internet cafes would be a great start toward hearing the stories of the people who will live with the ongoing consequences; together with a collective blog by tourists who were there but have gone home and struggle with the memories, these would be grand workshops that help turn this disaster into a triumph.

Perhaps the purpose of the civic media is precisely that, to transform observation into participation.

MORE: At about 8:15 this morning, the quake and tsunami became “Tsunami Disaster” on CNN. No reference to locale; a sort of sanitized subject matter that allows a lot of “can it happen here” coverage. One anchor continues to refer to the tsunami as “the storm.”

The Tsunamihelp blog is a community blogging effort by Dina Mehta and others in the region to address coverage and aid efforts.

BoingBoing has a good summary of blogs by people in the region (natives and visitors)—the self-reported coverage is getting much deeper. People with cameras are contributing thousands of their own pictures of the disaster.