Health


Obamacare matters. But the debate about it also misdirects attention away from massive collateral damage to patients. How massive? Dig To Make Hospitals Less Deadly, a Dose of Data, by Tina Rosenberg in The New York Times. She writes,

Until very recently, health care experts believed that preventable hospital error caused some 98,000 deaths a year in the United States — a figure based on 1984 data. But a new report from the Journal of Patient Safety using updated data holds such error responsible for many more deaths — probably around some 440,000 per year. That’s one-sixth of all deaths nationally, making preventable hospital error the third leading cause of death in the United States. And 10 to 20 times that many people suffer nonlethal but serious harm as a result of hospital mistakes.

The bold-facing is mine. In 2003, one of those statistics was my mother. I too came close in 2008, though the mistake in that case wasn’t a hospital’s, but rather a consequence of incompatibility between different silo’d systems for viewing MRIs, and an ill-informed rush into a diagnostic procedure that proved unnecessary and caused pancreatitis (which happens in 5% of those performed — I happened to be that one in twenty). That event, my doctors told me, increased my long-term risk of pancreatic cancer.

Risk is the game we’re playing here: the weighing of costs and benefits, based on available information. Thus health care is primarily the risk-weighing business we call insurance. For generations, the primary customers of health care — the ones who pay for the services — have been insurance companies. Their business is selling bets on outcomes to us, to our employers, or both. They play that game, to a large extent, by knowing more than we do. Asymmetrical knowledge R them.

Now think about the data involved. Insurance companies live in a world of data. That world is getting bigger and bigger. And yet, McKinsey tells us, it’s not big enough. In The big-data revolution in US health care: Accelerating value and innovation (subtitle: Big data could transform the health-care sector, but the industry must undergo fundamental changes before stakeholders can capture its full value), McKinsey writes,

Fiscal concerns, perhaps more than any other factor, are driving the demand for big-data applications. After more than 20 years of steady increases, health-care expenses now represent 17.6 percent of GDP—nearly $600 billion more than the expected benchmark for a nation of the United States’s size and wealth. To discourage overutilization, many payors have shifted from fee-for-service compensation, which rewards physicians for treatment volume, to risk-sharing arrangements that prioritize outcomes. Under the new schemes, when treatments deliver the desired results, provider compensation may be less than before. Payors are also entering similar agreements with pharmaceutical companies and basing reimbursement on a drug’s ability to improve patient health. In this new environment, health-care stakeholders have greater incentives to compile and exchange information.

While health-care costs may be paramount in big data’s rise, clinical trends also play a role. Physicians have traditionally used their judgment when making treatment decisions, but in the last few years there has been a move toward evidence-based medicine, which involves systematically reviewing clinical data and making treatment decisions based on the best available information. Aggregating individual data sets into big-data algorithms often provides the most robust evidence, since nuances in subpopulations (such as the presence of patients with gluten allergies) may be so rare that they are not readily apparent in small samples.

Although the health-care industry has lagged behind sectors like retail and banking in the use of big data—partly because of concerns about patient confidentiality—it could soon catch up. First movers in the data sphere are already achieving positive results, which is prompting other stakeholders to take action, lest they be left behind. These developments are encouraging, but they also raise an important question: is the health-care industry prepared to capture big data’s full potential, or are there roadblocks that will hamper its use?

The word “patient” appears nowhere in that long passage. The word “stakeholder” appears twice, plus eight more times in the whole piece. Still, McKinsey shows the patient some respect, though more as a metric zone than as a holder of a stake in outcomes:

Health-care stakeholders are well versed in capturing value and have developed many levers to assist with this goal. But traditional tools do not always take complete advantage of the insights that big data can provide. Unit-price discounts, for instance, are based primarily on contracting and negotiating leverage. And like most other well-established health-care value levers, they focus solely on reducing costs rather than improving patient outcomes. Although these tools will continue to play an important role, stakeholders will only benefit from big data if they take a more holistic, patient-centered approach to value, one that focuses equally on health-care spending and treatment outcomes.

McKinsey’s customers are not you and me. They are business executives, many of whom work in health care. As players in their game, we have zero influence. As voters in the democracy game, however, we have a bit more. That’s one reason we elected Barack Obama.

So, viewed from the level at which it plays out, the debate over health care, at least in the U.S., is between those who believe in addressing problems with business (especially the big kind) and those who believe in addressing problems with policy (especially the big kind, such as Obamacare).

Big business has been winning, mostly. This is why Obamacare turned out to be a set of policy tweaks on a business that was already highly regulated, mostly by captive lawmakers and regulators.

Meanwhile we have this irony to contemplate: while dying of bad data at a rate rivaling war and plague, our physical bodies are being doubled into digital ones. It is now possible to know one’s entire genome, including clear markers of risks such as cancer and dementia. That’s in addition to being able to know one’s quantified self (QS), plus one’s health care history.

Yet all of that data is scattered and silo’d. This is why it is hard to integrate all our available QS data, and nearly impossible to integrate all our health care history. After I left the Harvard University Health Services (HUHS) system in 2010, my doctor at the time (Richard Donohue, MD, whom I recommend highly) obtained and handed over to me the entirety of my records from HUHS. It’s not data, however. It’s a pile of paper, as thick as the Manhattan phone book. Its utility to other doctors verges on nil. Such is the nature of the bizarre information asymmetry (and burial) in the current system.

On top of that, our health care system incentivizes us to conceal our history, especially if any of that history puts us in a higher risk category, where we are sure to pay more in health insurance premiums.

But what happens when we solve these problems, and our digital selves become fully knowable — by both our selves and our health care providers? What happens to the risk calculation business we have today, which rationalizes more than 400,000 snuffed souls per annum as collateral damage? Do we go to single-payer then, for the simple reason that the best risk calculations are based on the nation’s entire population?
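(The actuarial premise behind that question is simple arithmetic, sketched here with made-up numbers: treat each insured life as an independent bet with some average annual cost and some spread around that average. The law of large numbers says the relative spread of a pool’s average cost shrinks with the square root of the number of lives in it, so a pool of 300 million people is about a thousand times more predictable than a pool of 300, because the square root of a million is a thousand.)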

I don’t know.

I do know the current system doesn’t want to go there, on either the business or the policy side. But it will. Inevitably.

At the end of whatever day this is, our physical selves will know our data selves better than any system built to hoard and manage our personal data for their interests more than for ours. When that happens the current system will break, and another one will take its place.

How many more of us will die needlessly in the meantime? And does knowing (or guessing at) that number make any difference? It hasn’t so far.

But that shouldn’t stop us. Hats off to leadership in the direction of actually solving these problems, starting with Adrian Gropper, ePatient Dave, Patient Privacy Rights, Brian Behlendorf, Esther Dyson, John Wilbanks, Tom Munnecke and countless other good people and organizations who have been pushing this rock up a hill for a long time, and aren’t about to stop. (Send me more names or add them in the comments below.)

A monofocal intraocular lens.

“I see,” we say, when we mean “I understand.” To make something “clear” is to make it vivid and unmistakable to the mind’s eye. There are no limits to the ways sight serves as metaphor for many good and necessary things in life. The importance of vision, even for the sightless (who still use language), is beyond full accounting. As creatures we are exceptionally dependent on vision. For us upright walkers sight is, literally and figuratively, our topmost sense.

It is also through our eyes that we express ourselves and make connections with each other. That eyes are windows of the soul is so well understood, and so often said, that no one author gets credit for it.

Yet some of us are more visual than others. Me, for example. One might think me an auditory or kinesthetic type, but in fact I am a highly visual learner. That’s one reason photography is so important to me. Of the many ways I study the world, vision is foremost, and always has been.

But my vision has been less than ideal for most of my adult life. When I was a kid it was exceptional. I liked to show off my ability to read license plates at great distances. But in college, when I finally developed strong study habits, I began getting nearsighted. By the time I graduated, I needed glasses. At 40 I was past minus-2 dioptres for both eyes, which is worse than 20/150. That was when I decided that myopia, at least in my case, was adaptive, and I stopped wearing glasses as much as possible. Gradually my vision improved. In 1999, when the title photo of this blog was taken, I was down to about minus 1.25 dioptres, or 20/70. A decade later I passed eye tests at the DMV and no longer required corrective lenses to drive. (Though I still wore them, with only a half-dioptre or so of correction, plus about the same for a slight astigmatism. The eye charts said I was then at about 20/25 in both eyes.)

My various eye doctors over the years told me reversal of myopia was likely due to cataracts in my lenses. Whether or not that was the case, my cataracts gradually got worse, especially in my right eye, and something finally needed to be done.

So yesterday the lens in my right eye was replaced. That one was, in the words of the surgeon, “mature.” Meaning not much light was getting through it. The left eye is still quite functional, and the cataract remains, for now, mild.

Cataract surgery has become a routine outpatient procedure. The prep takes about an hour, but the work itself is over in fifteen minutes, if nothing goes wrong, which it usually doesn’t. But my case was slightly unusual, because I have a condition called pseudoexfoliation syndrome, or PEX, which presents some challenges to the surgery itself.

As I understand it, PEX is dandruff of the cornea, and the flakes do various uncool things, such as clog up the accordion-like pleats of the iris, so the eye sometimes doesn’t dilate quickly or well in response to changing light levels. But the bigger risk is that these flakes sometimes weaken zonules, which are what hold the lens in place. Should those fail, the lens may drop into the back of the eye, where a far more scary and complicated procedure is required to remove it, after which normal cataract surgery becomes impossible.

In the normal version, the surgeon makes a small incision at the edge of the cornea, and then destroys and removes the old lens through a process called phacoemulsification. He or she then inserts an intraocular lens, or IOL, like the one above. In most cases, it’s a monofocal lens. This means you no longer have the capacity to focus, so you need to choose the primary purpose you would like your new lens to support. Most choose looking at distant things, although some choose reading or using a computer screen. Some choose to set one eye for distance and the other for close work. Either way you’ll probably end up wearing glasses for some or all purposes. I chose distance, because I like to drive and fly and look at stars and movie screens and other stuff in the world that isn’t reading-distance away.

The doctor’s office measured the dimensions of my eye and found that I wouldn’t need any special corrections in the new lens, such as for astigmatism — that in fact, my eyes, except for the lens, are ideally shaped and quite normal. It was just the lenses that looked bad. They also found no evidence of glaucoma or other conditions that sometimes accompany PEX. Still, I worried about it, which turned out to be a waste, because the whole thing went perfectly. (It did take a while to get my iris to fully dilate, but that was the only hitch.)

What’s weird about the surgery is that you’re awake and staring straight forward while they do all this. They numb the eye with topical anesthetic, and finally apply a layer of jelly. (They actually call it that. “Okay, now layer on the jelly,” the doctor says.) Thanks to intravenous drugs, I gave a smaller shit than I normally would have, but I was fully conscious the whole time. More strangely, I had the clear sense of standing there on my retina, looking up at the action as if in the Pantheon, watching the hole in its dome. I could see and hear the old lens being emulsified and sucked away, and then saw the new lens arriving like a scroll in a tube, all curled up. As the doctor put it in place, I could see the lens unfurl, and studied one of the curved hair-like springs that holds it in place. Shortly after that, the doctor pronounced the thing done. Nurses cleaned me up, taped a clear shield over my eye, and I was ready to go.

By evening the vision through that eye became clearer than through my “good” left eye. By morning everything looked crystalline. In my follow-up visit, just 24 hours after the surgery, my vision was 20/20. Then, after the doctor relieved a bit of pressure that had built up inside the eye, it was better than that — meaning the bottom line of the eye chart was perfectly clear.

Now it’s evening of Day 2, and I continue to be amazed at how well it’s going. My fixed eye is like a new toy. It’s not perfect yet, and may never be; but it’s so much clearer than what I saw before — and still see with my left eye — that I’m constantly looking at stuff, just to see the changes.

The only nit right now is the little rays around points of light, such as stars. But the surgeon says this is due to a bit of distortion in my cornea, and that it will vanish in a week or so.

The biggest difference I notice is color. It is now obvious that I haven’t seen pure white in years. When I compare my left and right eyes, everything through my left — the one with the remaining cataract — has a sepia tint. It’s like the difference between an old LCD screen and a new LED one. As with LED screens, whites and blues are especially vivid.

Amazingly, my computer and reading glasses work well enough, since the correction for my left eye is still accurate and the one for my right one isn’t too far off. For driving I removed the right lenses from my distance glasses, since only the left eye now needs correction.

But the experience of being inside my eye, watching repairs from within the space of the eye itself, sticks with me. All vision is in the brain, of course, and the world we see is largely a set of descriptions we project from the portfolio of things we already know. We can see how this works when we disconnect raw sensory perception from our descriptive engines. This is what happens with LSD. As I understand it (through study and not experience, alas), LSD disconnects the world we perceive from the nouns and verbs we use to describe it. So do other hallucinogens.

So did I actually see what I thought I saw? I believe so, but I don’t know. I had studied the surgical procedure before going into it, so I knew much of what was going on. Maybe I projected it. Either way, that’s over. Now I don’t see that new lens, but rather the world of light refracting through it. That world is more interesting than my own, by a wider margin than before yesterday. It’s a gift I’m enjoying to the fullest.

Uninstalled is Michael O’Connor Clarke’s blog — a title that always creeped me out a bit, kind of the way Warren Zevon‘s My Ride’s Here did, carrying more than a hint of prophecy. Though I think Michael meant something else with it. I forget, and now it doesn’t matter because he’s gone: uninstalled yesterday. Esophageal cancer. A bad end for a good man.

All that matters, of course, is his life. Michael was smart and funny and loving and wise far beyond his years. We bonded as blogging buddies back when most blogs were journals and not shingles of “content” built for carrying payloads of advertising. Start to finish, he was a terrific writer. Enviable, even. He always wrote for the good it did and not the money it brought. (Which, in his case, like mine and most other friends in the ‘sphere, was squat.) I’ll honor that, his memory, and many good causes at once by sharing most of one of his last blog posts:

Leaky Algorithmic Marketing Efforts or Why Social Advertising Sucks

Posted on May 9, 2012

A couple of days ago, the estimable JP Rangaswami posted a piece in response to a rather weird ad he saw pop up on Facebook. You should go read the full post for the context, but here’s the really quick version.

JP had posted a quick Facebook comment about reading some very entertainingly snarky Amazon.com reviews for absurdly over-priced speaker cables.

Something lurking deep in the dark heart of the giant, steam-belching, Heath Robinson contraption that powers Facebook’s social advertising engine took a shine to JP’s drive-by comment, snarfled it up, and spat it back out again with an advert attached. A rather… odd choice of “ad inventory unit”, to say the least. Here’s how it showed up in one of JP’s friends’ Facebook news feeds:

I saw JP post about this on Facebook and commented. The more I thought about the weirdness of this, the longer my comment became – to the point where I figured it deserved to spill over into a full-blown blog rant. Strap in… you have been warned.

I’ve seen a lot of this kind of thing happening in the past several months. Recently I’ve been tweeting and Facebooking my frustration with social sharing apps that behave in similar ways. You know the kind of thing – those ridiculous cluewalls implemented by Yahoo!, SocialCam, Viddy, and several big newspapers. You see an interesting link posted by one of your friends, click to read the article, and next thing you know you’re expected to grant permission to some rotten app to start spamming all your friends every time you read something online. Ack.

The brilliant Matthew Inman, genius behind The Oatmeal, had a very smart, beautifully simple take on all this social reader stupidity.

It’s the spread of this kind of leaky algorithmic marketing that is starting to really discourage me from sharing or, sometimes, even consuming content. And I’m a sharer by nature – I’ve been willingly sharing and participating in all this social bollocks for a heck of a long time now.

But now… well, I’m really starting to worry about the path we seem to be headed down. Or should I say, the path we’re being led down.

Apps that want me to hand over the keys to my FB account before I can read the news or watch another dopey cat video just make me uncomfortable. If I inadvertently click through an interesting link only to find that SocialCam or Viddy or somesuch malarkey wants me to accept its one-sided Terms of Service, then I nope the hell out of there pretty darn fast.

How can this be good for the Web? It deprives content creators of traffic and views, and ensures that I *won’t* engage with their ideas, no matter how good they might be.

All these examples are bad cases of Leaky Algorithmic Marketing Efforts (or L.A.M.E. for short). It’s a case of developers trying to be smart in applying their algorithms to user-generated content – attempting to nail the sweet spot of personal recommendations by guessing what kind of ad inventory to attach to an individual comment, status update, or tweet.

It results in unsubtle, bloody-minded marketing leaking across into personal conversations. Kinda like the loud, drunken sales rep at the cocktail party, shoe-horning a pitch for education savings plans into a discussion about your choice of school for your kids.

Perhaps I wouldn’t mind so much if it wasn’t so awfully bloody cack-handed as a marketing tactic. I mean – take another look at the ad unit served up to run alongside JP’s status update. What the hell has an ad for motorbike holidays got to do with him linking to snarky reviews of fancyass (and possibly fictional) speaker cables? Where’s the contextual connection?

Mr. Marketer: your algorithm is bad, and you should feel bad.

As you see, Michael was one of those rare people who beat the shit out of marketing from the inside. Bless him for that. It’s not a welcome calling, and Lord knows marketing needs it, now more than ever.

Here are some memorial posts from other old friends. I’ll add to the list as I spot them.

And here is his Facebook page. Much to mull and say there too. Also at a new memorial page there.

It’s good, while it lasts, that our presences persist on Facebook after we’re gone. I still visit departed friends there: Gil Templeton, Ray Simone, R.L. “Bob” Morgan, Nick Givotovsky.

SupportMichaelOCC.ca is still up, and should stay up, to help provide support for his family.

His Twitter stream lives here. Last tweet: 26 September. Here’s that conversation.

Geologists have an informal name for the history of human influence on the Earth. They call it the Anthropocene. It makes sense. We have been raiding the earth for its contents, and polluting its atmosphere, land and oceans for as long as we’ve been here, and it shows. By any objective perspective other than our own, we are a pestilential species. We consume, waste and fail to replace everything we can, with little regard for consequences beyond our own immediate short-term needs and wants. Between excavation, erosion, dredgings, landfills and countless other alterations of the lithosphere, evidence of human agency in the cumulative effects studied by geology is both clear and non-trivial.

As for raiding resources, I could list a hundred things we’ll drill, mine or harvest out of the planet and never replace — as if it were in our power to do so — but instead I’ll point to just one small member of the periodic table: helium. Next to hydrogen, it’s the second lightest element, with just two electrons and two protons. Also, next to hydrogen, it is the second most abundant, comprising nearly a quarter of the universe’s elemental mass.  It is also one of the first elements to be created out of the big bang, and remains essential to growing and lighting up stars.

Helium is made in two places: burning stars and rotting rock. Humans can do lots of great stuff, but so far making helium isn’t one of them. Still, naturally, we’ve been using that up: extracting it away, like we do so much else. Eventually, we’ll run out.

Heavy elements are also in short supply. When a planet forms, the heaviest elements sink to the core. The main reason we have gold, nickel, platinum, tungsten, titanium and many other attractive and helpful elements lying around the surface or within mine-able distance below is that meteorites put them there, long ago. At our current rate of consumption, we’ll be mining the moon and asteroids for them. If we’re still around.

Meanwhile the planet’s climates are heating up. Whether or not one ascribes this to human influence matters less than the fact that it is happening. NASA has been doing a fine job of examining symptoms and causes. Among the symptoms are the melting of Greenland and the Arctic. Lots of bad things are bound to happen. Seas rising. Droughts and floods. Methane releases. Bill McKibben is another good source of data and worry. He’s the main dude behind 350.org, named after what many scientists believe is the safe upper limit for carbon dioxide in the atmosphere: 350 parts per million. We’re over that now, at about 392. (Bonus link.)

The main thing to expect, in the short term — the next few dozen or hundreds of years — is rising sea levels, which will move coastlines far inland for much of the world, change ecosystems pretty much everywhere, and alter the way the whole food web works.

Here in the U.S., neither major political party has paid much attention to this. On the whole the Republicans are skeptical about it. The Democrats care about it, but don’t want to make a big issue of it. The White House has nice things to say, but has to reconcile present economic growth imperatives with the need to save the planet from humans in the long run.

I’m not going to tell you how to vote, or how I’m going to vote, because I don’t want this to be about that. What I’m talking about here is evolution, not election. That’s the issue. Can we evolve to be symbiotic with the rest of the species on Earth? Or will we remain a plague?

Politics is for seasons. Evolution is inevitable. One way or another.

(The photo at the top is one among many I’ve shot flying over Greenland — a place that’s changing faster, perhaps, than any other large landform on Earth.)

[18 September...] I met and got some great hang time with Michael Schwartz (@Sustainism) of Sustainism fame at PICNIC in Amsterdam, where we found ourselves of one, or at least overlapping, mind on many things. I don’t want to let the connection drop, so I’m putting a quick shout-out here, before moving on to the next, and much-belated, post.

Also, speaking of the anthropocene, dig The ‘Anthropocene’ as Environmental Meme and/or Geological Epoch, in Dot Earth, by Andrew Revkin, in The New York Times. I met him at an event several years ago and let the contact go slack. Now I’m reeling it in a bit. :-) Here’s why his work is especially germane to the topic of this here post:  “Largely because of my early writing on humans as a geological force, I am a member of a working group on the Anthropocene established by the Subcommission on Quaternary Stratigraphy.” Keep up the good work, Andy.

When I was a kid I had near-perfect vision. I remember being able to read street signs and license plates at a distance, and feeling good about that. But I don’t think that was exceptional. Unless we are damaged in some way, the eyes we are born with tend to be optically correct. Until… what?

In my case it was my junior year in college. That’s when I finally became a good student, spending long hours reading and writing in my carrel in the library basement, in bad fluorescent light, cramping my vision at a single distance the whole time. Then, when I’d walk out at the end of the day or the evening, I’d notice that things were a little blurry at a distance. After a few minutes, my distance vision would gradually clear up. By the end of the year, however, my vision had begun to clear up less and less. By the end of my senior year, I needed glasses for distance: I had become myopic. Nearsighted. I remember the prescription well: -0.75 dioptres for my left eye and -1.00 dioptres for my right.

I then began the life of a writer, with lots of sitting still, reading things and writing on a typewriter or (much later) a computer. Since I tended to wear glasses full-time, the blurred distance vision when work was done — and then the gradual recovery over the following minutes or hours — continued. And my myopia gradually increased. So, by the time I reached my forties, I was down to -3 dioptres of correction for both eyes.

A digression into optics… “Reading” glasses, for hyperopia, or farsightedness, are in positive dioptres: +1, +2, etc. As magnifiers, they tend toward the convex, thicker in the middle and thinner toward the edges, or frames. Corrections for myopia tend toward the concave, thicker on the edges. You can sort-of see the thick edges of my frames in the YouTube video above, shot in June, 1988, when I was a month away from turning 41 (and looked much younger, which I wish was still the case). My glasses were Bill Gates-style aviators.
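(To put rough numbers on that digression, as a back-of-the-envelope sketch rather than anything from my own charts: a dioptre is just the reciprocal of focal length in meters, so a +2 reading lens has a focal length of half a meter. Run the same arithmetic the other way and a myopic correction tells you the uncorrected far point: at -3 dioptres, everything beyond about a third of a meter starts to blur, which is why that much correction matters for driving and movie screens but not for a book or a typewriter.)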

I also began to conclude that myopia, at least in my case, was adaptive. It made sense to me that the most studious kids — the ones who read the most, and for the longest times each day — wore glasses, almost always for myopia.

So I decided to avoid wearing glasses as much as I could. I would wear none while writing and reading (when I didn’t need them), and only wear them for driving, or at other times when distance vision mattered, such as when watching movies or attending sports events. Over the years, my vision improved. By the time I was 55, I could pass the eye test at the DMV, and no longer required glasses for driving. In another few years my vision was 20/25 in one eye and 20/30 in the other. I still had distance glasses (mostly for driving), but rarely used them otherwise.

I’ve been told by my last two optometrists that most likely my changes were brought on by the onset of cataracts (which I now have, though mostly in my right eye), and maybe that was a factor, but I know of at least two other cases like mine, in which myopia was reduced by avoiding correction for it. And no optometrist or ophthalmologist I visited in my forties or fifties noted cataracts during eye examinations. But all have doubted my self-diagnosis of adaptive myopia.

Now I read stories like, “Why Up to 90% of Asian Schoolchildren Are Nearsighted: Researchers say the culprit is academic ambition: spending too much time studying indoors and not enough hours in bright sunlight is ruining kids’ eyesight“… and the provisional conclusion of my one-case empirical study seems, possibly, validated.

It also seems to me that the prevalence of myopia, worldwide, is high enough to make one wonder if it’s a feature of civilization, like cutting hair and wearing shoes.

I also wonder whether Lasik is a good idea, especially when I look at the large number of old glasses, all with different prescriptions, in my office drawer at home. What’s to stop one’s eyes from changing anyway, after Lasik? Maybe Lasik itself? I know many people who have had Lasik procedures, and none of them are unhappy with the results. Still, I gotta wonder.


Michael O’Connor Clarke is one of the world’s truly great guys. Besides being smart, funny, caring, hard-working, a good husband and father — and pretty much all the other positive stuff you could pack into a bio — Michael was one of the first people to not only dig Cluetrain, but to grok it thoroughly at every level, including the multiple ironies at all of them. And to continue doing so through all the years since.

Like three of Cluetrain’s authors, Michael was a marketing guy who was never fully comfortable with the label or the role, and broke every mold that failed to contain him. Unlike those three, however, he continued to labor inside the business, which still needs many more like him. Because, from the start, Michael has always stood up for the user, the customer, the individual whose reach should rightly exceed others’ grasp.

His labors are suspended, however, while he takes on a personal battle with cancer.

Friends of Michael’s have put up SupportMichaelOCC.ca, so all of us who care about him and his family can easily lend support. He’s a sole breadwinner with four kids, so this is a tall order. Whether you know Michael or not, please do what you can.


“When I’m Sixty-Four” is 44 years old. I was 20 when it came out, in the summer of 1967,  one among thirteen perfect tracks on The Beatles‘ Sgt. Pepper’s Lonely Hearts Club Band album. For all the years since, I’ve thought the song began, “When I get older, losing my head…” But yesterday, on the eve of actually turning sixty-four, I watched this video animation of the song (by theClemmer) and found that Paul McCartney actually sang, “… losing my hair.”

Well, that’s true. I’m not bald yet, but the bare spot in the back and the thin zone in the front are advancing toward each other, while my face continues falling down below.

In July 2006, my old friend Tom Guild put Doc Searls explains driftwood of the land up on YouTube. It’s an improvisational comedy riff that Tom shot with his huge new shoulder-fire video camera at our friend Steve Tulsky’s house on a Marin County hillside in June, 1988. It was a reunion of sorts. Tom, Steve and I had all worked in radio together in North Carolina. I was forty at the time, and looked about half that age. When my ten-year-old kid saw it, he said “Papa, you don’t look like that.” I replied, “No, I do look like that. I don’t look like this,” pointing to my face.

Today it would be nice if I still looked like I did five years ago. The shot in the banner at the top of this blog was taken in the summer of 1999 (here’s the original), when I was fifty-two and looked half that age. The one on the right was taken last summer (the shades on my forehead masking a scalp that now reflects light), when I was a few days short of sixty-three. By then I was finally looking my age.

A couple months back I gave a talk at the Personal Democracy Forum where I was warmly introduced as one of those elders we should all listen to. That was nice, but here’s the strange part: when it comes to what I do in the world, I’m still young. Most of the people I hang and work with are half my age or less, yet I rarely notice or think about that, because it’s irrelevant. My job is changing the world, and that’s a calling that tends to involve smart, young, energetic people. The difference for a few of us is that we’ve been young a lot longer.

But I don’t have illusions about the facts of life. It’s in one’s sixties that the croak rate starts to angle north on the Y axis as age ticks east on the X. Still, I’m in no less hurry to make things happen than I ever was. I’m just more patient. That’s because one of the things I’ve learned is that now is always earlier than it seems. None of the future has happened yet, and it’s always bigger than the past.

We are what we do.

We are more than that, of course, but it helps to have answers to the questions “What do you do?” and “What have you done?”

Among many other notable things Persephone Miel did was survive breast cancer. It was a subject that came up often during the year we shared as fellows at the Berkman Center. It may not have been a defining thing, but it helped build her already strong character. Persephone also said she knew that her personal war with the disease might not be over. The risks for survivors are always there.

So it was not just by awful chance that Persephone showed up at a Berkman event this Spring wearing a turban. She was on chemo, she said, but optimistic. Thin and frail, she was still pressing on with work, carrying the same good humor, toughness, intelligence and determination.

The next time I saw her, in early June, she looked worse. Then, on June 24, Ethan Zuckerman sent an email to Berkman friends, letting us know that Persephone’s health was diminishing quickly, and that she “probably will not live through July.” He also said that she had moved to a hospice, but was doing well enough to read email and accept a few visitors — and that he had hoped to visit her on July 6. Just five days later, Ethan wrote to say that Persephone had died the night before. I had been working in slow motion on an email to her — thinking, I guess, that Ethan’s July 6 date was an appointment she would keep. This post began as that email.

Persephone is gone, but her work isn’t, and that’s what I want to talk about. It’s a subject I wanted to bring up with her, and one I’m sure all her friends care about. We all should.

What I want to talk about is not “carrying on” the work of the deceased in the usual way that eulogizers do. What I’m talking about is keeping Persephone’s public archives in a published, accessible and easily found state. I fear that if we don’t make an effort to do that — for everybody — we’ll lose them.

The Web went commercial in 1995, and has only become more so since. Today it is a boundless live public marketplace, searched mostly through one company’s engine, which continues to adapt accordingly. While Google’s original mission (“to organize the world’s information and make it universally accessible and useful”) persists, its commercial imperatives cannot help but subordinate its noncommercial ones.

In my own case I’m finding it harder and harder to use Google (or any search engine) to find my own archived work, even if there are links to it. The Live Web, which I first wrote about in 2005, has come to be known as the “real time” Web, which is associated with Twitter and Facebook as well as Google. What’s live, what’s real time, is now. Not then.

Today almost no time passes between the publishing of anything and its indexing by Google. This is good, but it is also aligned with commercial imperatives that emphasize the present and dismiss the past. No seller has an interest in publishing last week’s offerings, much less last year’s or last decade’s. What would be the point?

It would help if there were competition among search engines, or more specialized ones, but there’s not much hope for that. Bing’s business model is the same as Google’s. And the original Live Web search engines — Technorati, PubSub, Blogpulse, among others — are gone or moved on to other missions. Perhaps ironically, Technorati maintained an archive of all blogging for half a decade. But I’ve been told that’s gone. The site itself is still there, but re-cast as a news engine. Only IceRocket persists as a straightforward Live Web engine, sustained, I suppose, by Mark Cuban‘s largesse. (For which I thank him. IceRocket is outstanding.)

For archives we have two things, it seems. One is search engines concerned mostly about the here and now, and the other is Archive.org. The latter does an amazing job, but finding stuff there is a chore if you don’t start with a domain name.

Meanwhile I have no idea how long tweets last, and no expectation that Twitter (or anybody other than a few individuals) will maintain them for the long term. Nor do I have a sense of how long anything will (or should) last inside Facebook, Linkedin or any other commercial walled garden.

To be fair, everything on the Web is rented, starting with domain names. I “own” my domain name only for as long as I keep paying a domain registrar for the rights to use it. Will it stay around after I’m gone? For how long? All of us rent our servers, even if we own them, simply because they use electricity, take up space and need to be maintained. Who will do that after their paid-for purposes expire? Why? And again, for how long?

Persephone worked for years at Internews.org. I assume her work there will last as long as the organization does. Here’s the Google cache of her Key Staff bio. Her tweets (her last was June 9th) will persist as long as Twitter doesn’t bother to get rid of them, I suppose. Here’s a Google search for her name. Here’s her Berkman alum page. Here’s her Linkedin. Here are her Delicious bookmarks. More to the point of this post, here’s her Media Re:public blog, with many links out to other sources, including her own. Here’s the Media Re:public report she led. And here’s an Internews search for Persephone, which has five pages of results.

All of this urges us toward a topic and cause that was close to Persephone’s mind and heart: journalism. If we’re serious about practicing journalism on the Web, we need to preserve it at least as well as we publish it.


News Without the Narrative Needed to Make Sense of the News: What I Will Say at SXSW is where and how Jay Rosen lays out his current thinking on new agendas for whatever journalism will become after we’re done with the current transition.

He has long been concerned with how explanation is “under-emphasized in the modern newsroom” and offers excellent examples of how explaining should work, as well as ideas about how to institutionalize it. For example, “The goal is to surface the hidden demand for explanation and create a kind of user-driven assignment desk for the explainer genre, which is itself under-developed in pro journalism”. He adds, “Are there other ways to surface this kind of demand?”

I’d call attention to the imperatives of stories, and the role that might be played by new sets of well-explained facts that can help frame or re-frame a story.

See, stories are what assignment editors want. They’re also what readers want. And stories are different to some degree from the current vogue-word narrative. They do overlap, but they are different.

A few months back I visited the subject of story in What’s right with Wikipedia? — a piece I wrote in response to a What’s Wrong With Wikipedia story that had run in the Wall Street Journal. I don’t know if that story was part of the WSJ’s GOP-aligned “What’s Wrong With Everything Liberals Do” narrative, but in any case I felt the matter needed explaining. Some Wikipedians did a good job of showing how there wasn’t much of a story there (read the piece to see how). For my part, I felt the need to explain what stories are actually about, which is problems, or struggles. Said I,

Three elements make stories interesting: 1) a protagonist we know, or is at least interesting; 2) a struggle of some kind; and 3) movement (or possible movement) toward a resolution. Struggle is at the heart of a story. There has to be a problem (what to do with Afghanistan), a conflict (a game between good teams, going to the final seconds), a mystery (wtf was Tiger Woods’ accident all about?), a wealth of complications (Brad and Angelina), a crazy success (the iPhone), failings of the mighty (Nixon and Watergate). The Journal‘s Wikipedia story is of the Mighty Falling variety.

In his piece Jay mentions what a good job the Giant Pool of Money episode of This American Life did of bringing sense to the country’s financial crisis. This gave rise to the PlanetMoney podcast, which is also terrific at explaining things. PlanetMoney feeds some of its best stuff to NPR’s news flow as well. One good example is Accidents of History Created U.S. Health System, which made it clear how we got to our wacky employer-supported health insurance system. Go listen to it and see if you don’t have a much better grasp on the challenge, if not of the solutions, currently on the table.

My point here, or one of them, is that the real story isn’t Obama vs. Intransigent Republicans (the Dems’ narrative) or Sensible Americans against Government Takeover (the Reps’ narrative), but that we’ve got a health care system that burdens employers almost exclusively, rather than individuals, government (save for the VA, Medicare and Medicaid), or other institutions. It’s an open question whether or not that’s screwed up, but at least it’s a question that ought to be at the center of the table, or of the “debate” that has been both boring and appalling.

This is consistent with what Matt Thompson says in The three key parts of news stories you usually don’t get, #2 of which is WHAT WE MISS (1): The longstanding facts. But we also miss seeing the role that longstanding overlooked facts might play amongst the three story elements: protagonist, problem and movement. Take the problem of employer responsibility as a structural premise for health care. By itself, the problem just sits there. We need a protagonist and a sense that the story has movement. In the absence of either, we look for other defaults. Thus we cast Obama and his opponents as the protagonists, or we make characterization the issue when the topic gets logjammed, which it has been for a while. So we hear about problems with the president’s character. He’s not leading. Or … whatever. You can fill in the blanks.

Meanwhile, we live in a world where employers are almost nothing like they were when the current health care system solidified at the end of World War II. In many towns (Santa Barbara, for example) the (or at least a) leading employer is “self”. Tried to get insurance for your self-employed butt lately? How about if you’re older than a child and have a medical history that’s other than perfect? Scary shit. Does the Obama plan make things better for you? According to this story in CNN, “Health insurance exchanges would be created to make it easier for small businesses, the self-employed and unemployed to pool resources and purchase less expensive coverage.” Hmm. “Easier” doesn’t sound like much relief. But doing nothing doesn’t sound good either.

So the easy thing is to go back to covering the compromise bill’s chances in Congress, and the politics surrounding it. That at least makes some kind of sense. We have all our story elements in place. It’s all politics from here on. Bring in the sports and war metaphors and let automated processes carry the rest. Don’t dig, just dine. The sausage-machine rocks on.

As Matt says, “… rarely do we acknowledge what we’re pursuing. When our questions make it into the coverage at all, they have to appear in the mouths of our sources, resulting in paltry, contorted pieces like this one, from the AP. Or they’re attributed to no one, weaseled into a headline that says only, ‘[Such-and-such] raises questions.’ Whose questions? Not ours, certainly.”

I also wonder if we’re barking up the wrong tree (or down the wrong hole) when we obsess about “curation” of news — a favorite topic of mainstream media preservationists. Maybe what we need is to see explainers as advocates of our curiosity about the deep questions, or deep facts, such that they might become unavoidable in news coverage.

This, of course, begs the creation of whole new institutions. Which is the job that Jay has taken up here. Let’s help him out with it.

[Later...] An additional thought: statistics aren’t stories.

I remember hearing about what were later called the killing fields of Cambodia, after refugees reported Pol Pot and the Khmer Rouge were murdering what eventually became more than a million people. Hughes Rudd delivered the story on the CBS Morning News, as I recall between items on the Super Bowl and Patty Hearst. He said that perhaps half a million people were already dead. But the story wasn’t a story. It was an item. It wasn’t until Sydney Schanberg ran “The Death and Life of Dith Pran” in the New York Times’ Sunday Magazine that the story got real. It got human. It had a protagonist. It became a movie.

I thought about this when I noticed there were exactly no comments following my Gendercide post. Here’s the fact that matters: countless baby girls are being killed, right now. But that’s not a story. Not yet. Not even with help from The Economist. I think the job here isn’t just to get more facts, or even to get the right name and the right face. The story needs its Dith Pran, and doesn’t have her yet. (Or, if it does, news hasn’t spread.)


Years ago, before Flickr came into my life and provided incentives for hyper-identifying everything about every photograph, I had a brief-lived series of photographic teases called Where in the World? — or something like that. (Can’t find the links right now. Maybe later.)

So I thought I’d fire it up again for the shot above, which I took recently on a road trip. Can anybody guess what this is? Bonus points if you can say exactly where.

