
Shall we commit advercide?

On our mailing list, there is a suggestion that we need a browser that kills all the advertising it sees on the Web. Not just the rude kind, or the tracking-based kind. The idea is to waste it all. The business model is, “$10 a month for a browser which guarantees no adverts, ever. If you see an advert, you file a bug report.”

I dismissed the idea a few years ago, when it first came up, for what seemed good and obvious reasons: that lots of advertising is informative and useful, that good and honest (e.g. non-tracking-based) advertising supports most of the world’s journalism, and so on.

But now most advertising on the Web is tracking-based (“programmatic” mostly means tracking-based), and most of the businesses involved seem hellbent on keeping it that way.

As for regulations, the GDPR and CCPA mean well, but they’ve done little to stop tracking, and much to make it worse. Search for gdpr+compliance on Google right now and see how many results you get. (I get way over a billion.) Nearly all of the results you’ll see are pitches for ways sites and services can obey the letter of the GDPR while screwing its spirit. In other words, the GDPR and the CCPA have created a giant market for working around them.

Clearly the final market for goods and services on the Net—that’s you and me, ordinary human beings—doesn’t like being tracked like marked animals, or all the lost privacy that tracking involves. And hell, ad blocking alone was the biggest boycott in world history, way back in 2015. That says plenty.

So why not give our market a way to speak? Why not incentivize publishers to start making money in ways that respect everyone’s privacy?

Also, we’re not alone. Dig CheckMyAds.org and their efforts, such as this one.

Comments work on this blog again, so feel free to weigh in.

 

How the Web sucks

This spectrum of emojis is a map of the Web’s main occupants (the middle three) and outliers (the two on the flanks). It provides a way of examining who is involved, where regulation fits, and where money gets invested and made. Yes, it’s overly broad, but I think it’s helpful in understanding where things went wrong and why. So let’s start.

Wizards are tech experts who likely run their own servers and keep private by isolating themselves and communicating with crypto. They enjoy the highest degrees of privacy possible on and around the Web, and their approach to evangelizing their methods is to say “do as I do” (which most of us, being Muggles, don’t). Relatively speaking, not much money gets made by or invested in Wizards, but much money gets made because of Wizards’ inventions. Those inventions include the Internet, the Web, free and open source software, and much more. Without Wizards, little of what we enjoy in the digital world today would be possible. However, it’s hard to migrate their methods into the muggle population.

Muggles are the non-Wizards who surf the Web and live much of their digital lives there, using Web-based services on mobile apps and browsers on computers. Most of the money flowing into the webbed economy comes from Muggles. Still, there is little investment in providing Muggles with tools for operating or engaging independently and at scale across the websites and services of the world. Browsers and email clients are about it, and the most popular of those (Chrome, Safari, Edge) are by the grace of corporate giants. Almost everything Muggles do on the Web and mobile devices is on apps and tools that are what the trade calls silos or walled gardens: private spaces run by the websites and services of the world.

Sites. This category also includes clouds and the machinery of e-commerce. These are at the heart of the Web: a client-server (aka calf-cow) top-down, master-slave environment where servers rule and clients obey. It is in this category that most of the money on the Web (and e-commerce in general) gets made, and into which most investment money flows. It is also here that nearly all development in the connected world today happens.

Ad-tech, aka adtech, is the home of surveillance capitalism, which relies on advertisers and their agents knowing all that can be known about every Muggle. This business also relies on absent Muggle agency, and uses that absence as an excuse for abusing the privilege of committing privacy violations that would be rude or criminal in the natural world. Also involved in this systematic compromise are adtech’s dependents in the websites and Web services of the world, which are typically employed by adtech to inject tracking beacons in Muggles’ browsers and apps. It is to the overlap between adtech and sites that all privacy regulation is addressed. This is why the GDPR sees Muggles as mere “data subjects,” and assigns responsibility for Muggles’ privacy to websites and services the regulation calls “data controllers” and “data processors.” The regulation barely imagines that Muggles could perform either of those roles, even though personal computing was invented so every person can do both. (By the way, the adtech business and many of its dependents in publishing like to say the Web is free because advertising pays for it. But the Web is as free by nature as are air and sunlight. And most of the money Google makes, for example, comes from plain old search advertising, which can get along fine without tracking. There is also nothing about advertising itself that requires tracking.)

Crime happens on the Web, but its center of gravity is outside, on the dark web. This is home to botnets, illegal porn, terrorist activity, ransom attacks, cyber espionage, and so on. There is a lot of overlap between crime and adtech, however, given the moral compromises required for adtech to function, plus the countless ways that bots, malware and other types of fraud are endemic to the adtech business. (Of course, being an expert criminal on the dark web requires a high degree of wizardry. So one could arrange these categories in a circle, with an overlap between Wizards and criminals.)

I offer this set of distinctions for several reasons. One is to invite conversation about how we have failed the Web and the Web has failed us—the Muggles of the world—even though we enjoy apparently infinite goodness from the Web and handy services there. Another is to explain why ProjectVRM has been more aspirational than productive in the fifteen years it has been working toward empowering people on the commercial Net. (Though there has been ample productivity.) But mostly it is to explain why I believe we will be far more productive if we start working outside the Web itself. This is why our spinoff, Customer Commons, is pushing forward with the Byway toward i-commerce. Check it out.

Finally, I owe the idea for this visualization to Iain Henderson, who has been with ProjectVRM since before it started. (His other current involvements are with JLINC and Customer Commons.) Hope it proves useful.

Is being less tasty vegetables our best strategy?

We are now being farmed by business. The pretense of the “customer is king” is now more like “the customer is a vegetable” — Adrian Gropper

That’s a vivid way to put the problem.

There are many approaches to solutions as well. One is suggested today in the latest by @_KarenHao in MIT Technology Review, titled

How to poison the data that Big Tech uses to surveil you:
Algorithms are meaningless without good data. The public can exploit that to demand change.

An excerpt:

In a new paper being presented at the Association for Computing Machinery’s Fairness, Accountability, and Transparency conference next week, researchers including PhD students Nicholas Vincent and Hanlin Li propose three ways the public can exploit this to their advantage:
Data strikes, inspired by the idea of labor strikes, which involve withholding or deleting your data so a tech firm cannot use it—leaving a platform or installing privacy tools, for instance.
Data poisoning, which involves contributing meaningless or harmful data. AdNauseam, for example, is a browser extension that clicks on every single ad served to you, thus confusing Google’s ad-targeting algorithms.
Conscious data contribution, which involves giving meaningful data to the competitor of a platform you want to protest, such as by uploading your Facebook photos to Tumblr instead.
People already use many of these tactics to protect their own privacy. If you’ve ever used an ad blocker or another browser extension that modifies your search results to exclude certain websites, you’ve engaged in data striking and reclaimed some agency over the use of your data. But as Hill found, sporadic individual actions like these don’t do much to get tech giants to change their behaviors.
What if millions of people were to coordinate to poison a tech giant’s data well, though? That might just give them some leverage to assert their demands.
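For the code-minded, here’s a toy sketch (mine, not the researchers’) of why data poisoning works: a tracker infers your “top interest” from click counts, and AdNauseam-style decoy clicks dilute the real signal into noise. All names and numbers below are made up for illustration.

```python
import random
from collections import Counter

def infer_top_interest(clicks):
    """The tracker's view: guess the user's interest from observed clicks."""
    return Counter(clicks).most_common(1)[0][0]

def browse(real_interest, n_real, n_decoy, categories, rng):
    """Real clicks on one topic, plus decoy clicks spread over all topics."""
    clicks = [real_interest] * n_real
    clicks += [rng.choice(categories) for _ in range(n_decoy)]
    rng.shuffle(clicks)
    return clicks

rng = random.Random(42)
categories = ["shoes", "travel", "finance", "health", "autos", "politics"]

# Without poisoning: the tracker nails the profile.
clean = browse("shoes", n_real=20, n_decoy=0, categories=categories, rng=rng)
print(infer_top_interest(clean))  # shoes

# With heavy poisoning: the 20 real clicks are buried in 600 decoys,
# so "shoes" becomes a small minority of the trail and the profile
# degrades into everything-at-once noise.
noisy = browse("shoes", n_real=20, n_decoy=600, categories=categories, rng=rng)
print(noisy.count("shoes") / len(noisy))  # real signal diluted to a small share
```

The point isn’t that the tracker guesses wrong; it’s that a poisoned profile stops being worth anything to target against.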

The sourced paper* is titled Data Leverage: A Framework for Empowering the Public in its Relationship with Technology Companies, and concludes,

In this paper, we presented a framework for using “data leverage” to give the public more influence over technology company behavior. Drawing on a variety of research areas, we described and assessed the “data levers” available to the public. We highlighted key areas where researchers and policymakers can amplify data leverage and work to ensure data leverage distributes power more broadly than is the case in the status quo.

I am all for screwing with overlords, and the authors suggest some fun approaches. Hell, we should all be doing whatever it takes, lawfully (and there is a lot of easement around that) to stop rampant violation of our privacy—and not just by technology companies. The customers of those companies, which include every website that puts up a cookie notice that nudges visitors into agreeing to be tracked all over the Web (in observance of the letter of the GDPR, while screwing its spirit), are also deserving of corrective measures. Same goes for governments who harvest private data themselves, or gather it from others without our knowledge or permission.

My problem with the framing of the paper and the story is that both start with the assumption that we are all so weak and disadvantaged that our only choices are: 1) to screw with the status quo to reduce its harms; and 2) to seek relief from policymakers.  While those choices are good, they are hardly the only ones.

Some context: wanton privacy violations in our digital world have only been going on for a little more than a decade, and that world is itself barely more than a couple dozen years old (dating from the appearance of e-commerce in 1995). We will also remain digital as well as physical beings for the next few decades or centuries.

So we need more than these kinds of prescriptive solutions. For example, real privacy tech of our own, that starts with giving us the digital versions of the privacy protections we have enjoyed in the physical world for millennia: clothing, shelter, doors with locks, and windows with curtains or shutters.

We have been on that case with ProjectVRM since 2006, and there are many developments in progress. Some even comport with our Privacy Manifesto (a work in progress that welcomes improvement).

As we work on those, and think about throwing spanners into the works of overlords, it may also help to bear in mind one of Craig Burton’s aphorisms: “Resistance creates existence.” What he means is that you can give strength to an opponent by fighting it directly. He applied that advice in the ’80s at Novell by embracing 3Com, Microsoft and other market opponents, inventing approaches that marginalized or obsolesced their businesses.

I doubt that will happen in this case. Resisting privacy violations has already had lots of positive results. But we do have a looong way to go.

Personally, I welcome throwing a Theia.


* The full list of authors is Nicholas Vincent, Hanlin Li (@hanlinliii), Nicole Tilly and Brent Hecht (@bhecht) of Northwestern University, and Stevie Chancellor (@snchencellor) of the University of Minnesota.

What if we called cookies “worms”?

While you ponder that, read Exclusive: New York Times phasing out all 3rd-party advertising data, by Sara Fischer in Axios.

The cynic in me translates the headline as “Leading publishers cut out the middle creep to go direct with tracking-based advertising.” In other words, same can, nicer worms.

But maybe that’s wrong. Maybe we’ll only be tracked enough to get put into one of those “45 new proprietary first-party audience segments” or  “at least 30 more interest segments.” And maybe only tracked on site.

But we will be tracked, presumably. Something needs to put readers into segments. What else will do that?
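For illustration, here’s a guess at roughly how first-party segmentation might work—my sketch, not the Times’ actual system, and the section names and threshold are invented: a reader lands in an “interest segment” purely from what they read on the site itself, no off-site spying required.

```python
from collections import Counter

def assign_segments(pageviews, min_share=0.25):
    """Put a reader into interest segments from on-site reading history alone.
    A reader joins a segment when that section accounts for at least
    `min_share` of everything they've read here."""
    counts = Counter(section for section, _url in pageviews)
    total = sum(counts.values())
    return sorted(s for s, n in counts.items() if n / total >= min_share)

# Hypothetical reading history: (section, url) pairs on one publisher's site.
history = [
    ("cooking", "/recipes/soup"),
    ("cooking", "/recipes/bread"),
    ("tech", "/2021/gadgets"),
    ("cooking", "/recipes/pasta"),
    ("politics", "/2021/election"),
]
print(assign_segments(history))  # ['cooking'] -- 3 of 5 views
```

Even this toy version shows the trade: no third parties involved, but the site still has to watch what you do to make it work.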

So, here’s another question: Will these publishers track readers off-site to spy on their “interests” elsewhere? Or will tracking be confined to just what the reader does while using the site?

Anyone know?

In a post on the ProjectVRM list, Adrian Gropper says this about the GDPR (in response to what I posted here): “GDPR, like HIPAA before it, fails because it allows an unlimited number of dossiers of our personal data to be made by unlimited number of entities. Whether these copies were made with consent or without consent through re-identification, the effect is the same, a lack of transparency and of agency.”

So perhaps it’s progress that these publishers (the Axios story mentions The Washington Post and Vox as well as the NYTimes) are only keeping limited dossiers on their readers alone.

But that’s not progress enough.

We need global ways to say to every publisher how little we wish them to know about us. Also ways to keep track of what they actually do with the information they have. (And we’re working on those.)

Being able to have one’s data back (e.g. via the CCPA) is a kind of progress (as is the law’s discouragement of collection in the first place), but we need technical as well as legal mechanisms for projecting personal agency online. (Models for this are Archimedes and Marvel heroes.)  Not just more ways to opt out of being observed more than we’d like—especially when we still lack ways to audit what others do with the permissions we give them.

That’s the only way we’ll get rid of the worms.

Bonus link.

On privacy fundamentalism

This is a post about journalism, privacy, and the common assumption that we can’t have one without sacrificing at least some of the other, because (the assumption goes), the business model for journalism is tracking-based advertising, aka adtech.

I’ve been fighting that assumption for a long time. People vs. Adtech is a collection of 129 pieces I’ve written about it since 2008.  At the top of that collection, I explain,

I have two purposes here:

  1. To replace tracking-based advertising (aka adtech) with advertising that sponsors journalism, doesn’t frack our heads for the oil of personal data, and respects personal freedom and agency.

  2. To encourage journalists to grab the third rail of their own publications’ participation in adtech.

I bring that up because Farhad Manjoo (@fmanjoo) of The New York Times grabbed that third rail, in a piece titled I Visited 47 Sites. Hundreds of Trackers Followed Me. He grabbed it right here:

News sites were the worst

Among all the sites I visited, news sites, including The New York Times and The Washington Post, had the most tracking resources. This is partly because the sites serve more ads, which load more resources and additional trackers. But news sites often engage in more tracking than other industries, according to a study from Princeton.

Bravo.

That piece is one in a series called the  Privacy Project, which picks up where the What They Know series in The Wall Street Journal left off in 2013. (The Journal for years had a nice shortlink to that series: wsj.com/wtk. It’s gone now, but I hope they bring it back. Julia Angwin, who led the project, has her own list.)

Knowing how much I’ve been looking forward to that rail-grab, people  have been pointing me both to Farhad’s piece and a critique of it by  Ben Thompson in Stratechery, titled Privacy Fundamentalism. On Farhad’s side is the idealist’s outrage at all the tracking that’s going on, and on Ben’s side is the realist’s call for compromise. Or, in his words, trade-offs.

I’m one of those privacy fundamentalists (with a Manifesto, even), so you might say this post is my push-back on Ben’s push-back. But what I’m looking for here is not a volley of opinion. It’s to enlist help, including Ben’s, in the hard work of actually saving journalism, which requires defeating tracking-based adtech, without which we wouldn’t have most of the tracking that outrages Farhad. I explain why in Brands need to fire adtech:

Let’s be clear about all the differences between adtech and real advertising. It’s adtech that spies on people and violates their privacy. It’s adtech that’s full of fraud and a vector for malware. It’s adtech that incentivizes publications to prioritize “content generation” over journalism. It’s adtech that gives fake news a business model, because fake news is easier to produce than the real kind, and adtech will pay anybody a bounty for hauling in eyeballs.

Real advertising doesn’t do any of those things, because it’s not personal. It is aimed at populations selected by the media they choose to watch, listen to or read. To reach those people with real ads, you buy space or time on those media. You sponsor those media because those media also have brand value.

With real advertising, you have brands supporting brands.

Brands can’t sponsor media through adtech because adtech isn’t built for that. On the contrary, adtech is built to undermine the brand value of all the media it uses, because it cares about eyeballs more than media.

Adtech is magic in this literal sense: it’s all about misdirection. You think you’re getting one thing while you’re really getting another. It’s why brands think they’re placing ads in media, while the systems they hire chase eyeballs. Since adtech systems are automated and biased toward finding the cheapest ways to hit sought-after eyeballs with ads, some ads show up on unsavory sites. And, let’s face it, even good eyeballs go to bad places.

This is why the media, the UK government, the brands, and even Google are all shocked. They all think adtech is advertising. Which makes sense: it looks like advertising and gets called advertising. But it is profoundly different in almost every other respect. I explain those differences in Separating Advertising’s Wheat and Chaff.

To fight adtech, it’s natural to look for help in the form of policy. And we already have some of that, with the GDPR, and soon the CCPA as well. But really we need the tech first. I explain why here:

In the physical world we got privacy tech and norms before we got privacy law. In the networked world we got the law first. That’s why the GDPR has caused so much confusion. It’s the regulatory cart in front of the technology horse. In the absence of privacy tech, we also failed to get the norms that would normally and naturally guide lawmaking.

So let’s get the tech horse back in front of the lawmaking cart. With the tech working, the market for personal data will be one we control. For real.

If we don’t do that first, adtech will stay in control. And we know how that movie goes, because it’s a horror show and we’re living in it now.

The tech horse is a collection of tools that provide each of us with ways both to protect our privacy and to signal to others what’s okay and what’s not okay, and to do both at scale. Browsers, for example, are a good model for that. They give each of us, as users, scale across all the websites of the world. We didn’t have that when the online world for ordinary folk was a choice of Compuserve, AOL, Prodigy and other private networks. And we don’t have it today in a networked world where providing “choices” about being tracked are the separate responsibilities of every different site we visit, each with its own ways of recording our “consents,” none of which are remembered, much less controlled, by any tool we possess. You don’t need to be a privacy fundamentalist to know that’s just fucked.
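To make that concrete, here’s one hedged sketch of such a tool: a user-side “consent ledger” that remembers, in one place, what we’ve told every site. Nothing like this is a standard; every name and field below is my invention, not an existing API.

```python
import json
import time

class ConsentLedger:
    """A user-side record of what each site has been told.
    (Hypothetical tool: the point is that *our* software keeps the
    terms, instead of every site keeping its own copy of our 'consent'.)"""

    def __init__(self):
        self.grants = {}

    def set_terms(self, site, *, tracking=False, first_party_ads=True):
        """Record the terms we extend to one site."""
        self.grants[site] = {
            "tracking": tracking,
            "first_party_ads": first_party_ads,
            "recorded": int(time.time()),
        }

    def allows(self, site, purpose):
        # Default-deny: a site we've never dealt with gets nothing.
        return self.grants.get(site, {}).get(purpose, False)

    def export(self):
        """One auditable record of every term we've ever extended."""
        return json.dumps(self.grants, indent=2)

ledger = ConsentLedger()
ledger.set_terms("example-news.com", tracking=False, first_party_ads=True)
print(ledger.allows("example-news.com", "first_party_ads"))  # True
print(ledger.allows("example-news.com", "tracking"))         # False
print(ledger.allows("random-adtech.net", "tracking"))        # False: never asked
```

Note the inversion: the default answer is no, and the single exportable record is ours to audit—exactly what today’s per-site cookie notices don’t give us.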

But that’s all me, and what I’m after. Let’s go to Ben’s case:

…my critique of Manjoo’s article specifically and the ongoing privacy hysteria broadly…

Let’s pause there. Concern about privacy is not hysteria. It’s simple, legitimate, and personal. As Don Marti and I (he first) pointed out, way back in 2015, ad blocking didn’t become the biggest boycott in world history in a vacuum. Its rise correlated with the “interactive” advertising business giving the middle finger to Do Not Track, which was never anything more than a polite request not to be followed away from a website:

Retargeting (aka behavioral retargeting) is the most obvious evidence that you’re being tracked. (The Onion: Woman Stalked Across Eight Websites By Obsessed Shoe Advertisement.)

Likewise, people wearing clothing or locking doors are not “hysterical” about privacy. That people don’t like their naked digital selves being taken advantage of is also not hysterical.

Back to Ben…

…is not simply about definitions or philosophy. It’s about fundamental assumptions. The default state of the Internet is the endless propagation and collection of data: you have to do work to not collect data on one hand, or leave a data trail on the other.

Right. So let’s do the work. We haven’t started yet.

This is the exact opposite of how things work in the physical world: there data collection is an explicit positive action, and anonymity the default.

Good point, but does this excuse awful manners in the online world? Does it take off the table all the ways manners work well in the offline world—ways that ought to inform developments in the online world? I say no.

That is not to say that there shouldn’t be a debate about this data collection, and how it is used. Even that latter question, though, requires an appreciation of just how different the digital world is from the analog one.

Consider it appreciated. (In my own case I’ve been reveling in the wonders of networked life since the 80s. Examples of that are this, this and this.)

…the popular imagination about the danger this data collection poses, though, too often seems derived from the former [Stasi collecting highly personal information about individuals for very icky purposes] instead of the fundamentally different assumptions of the latter [Google and Facebook compiling massive amounts of data to be read by machines, mostly for non- or barely-icky purposes]. This, by extension, leads to privacy demands that exacerbate some of the Internet’s worst problems.

Such as—

• Facebook’s crackdown on API access after Cambridge Analytica has severely hampered research into the effects of social media, the spread of disinformation, etc.

True.

• Privacy legislation like GDPR has strengthened incumbents like Facebook and Google, and made it more difficult for challengers to succeed.

True.

Another bad effect of the GDPR is urging the websites of the world to throw insincere and misleading cookie notices in front of visitors, usually to extract “consent” that isn’t consent at all, to exactly what the GDPR was meant to thwart.

• Criminal networks from terrorism to child abuse can flourish on social networks, but while content can be stamped out, private companies, particularly domestically, are often limited as to how proactively they can go to law enforcement; this is exacerbated once encryption enters the picture.

True.

Again, this is not to say that privacy isn’t important: it is one of many things that are important. That, though, means that online privacy in particular should not be the end-all be-all but rather one part of a difficult set of trade-offs that need to be made when it comes to dealing with this new reality that is the Internet. Being an absolutist will lead to bad policy (although encryption may be the exception that proves the rule).

It can also lead to good tech, which in turn can prevent bad policy. Or encourage good policy.

Towards Trade-offs
The point of this article is not to argue that companies like Google and Facebook are in the right, and Apple in the wrong — or, for that matter, to argue my self-interest. The truth, as is so often the case, is somewhere in the middle, in the gray.

Wearing pants so nobody can see your crotch is not gray. That an x-ray machine can see your crotch doesn’t make personal privacy gray. Wrong is wrong.

To that end, I believe the privacy debate needs to be reset around these three assumptions:
• Accept that privacy online entails trade-offs; the corollary is that an absolutist approach to privacy is a surefire way to get policy wrong.

No. We need to accept that simple and universally accepted personal and social assumptions about privacy offline (for example, having the ability to signal what’s acceptable and what is not) are a good model for online as well.

I’ll put it another way: people need pants online. This is not an absolutist position, or even a fundamentalist one. The ability to cover one’s private parts, and to signal what’s okay and what’s not okay for respecting personal privacy are simple assumptions people make in the physical world, and should be able to make in the connected one. That it hasn’t been done yet is no reason to say it can’t or shouldn’t be done. So let’s do it.

• Keep in mind that the widespread creation and spread of data is inherent to computers and the Internet,

Likewise, the widespread creation and spread of gossip is inherent to life in the physical world. But that doesn’t mean we can’t have civilized ways of dealing with it.

and that these qualities have positive as well as negative implications; be wary of what good ideas and positive outcomes are extinguished in the pursuit to stomp out the negative ones.

Tracking people everywhere so their eyes can be stabbed with “relevant” and “interest-based” advertising, in oblivity to negative externalities, is not a good idea or a positive outcome (beyond the money that’s made from it).  Let’s at least get that straight before we worry about what might be extinguished by full agency for ordinary human beings.

To be clear, I know Ben isn’t talking here about full agency for people. I’m sure he’s fine with that. He’s talking about policy in general and specifically about the GDPR. I agree with what he says about that, and roughly about this too:

• Focus policy on the physical and digital divide. Our behavior online is one thing: we both benefit from the spread of data and should in turn be more wary of those implications. Making what is offline online is quite another.

Still, that doesn’t mean we can’t use what’s offline to inform what’s online. We need to appreciate and harmonize the virtues of both—mindful that the online world is still very new, and that many of the civilized and civilizing graces of life offline are good to have online as well. Privacy among them.

Finally, before getting to the work that energizes us here at ProjectVRM (meaning all the developments we’ve been encouraging for thirteen years), I want to say one final thing about privacy: it’s a moral matter. From Oxford, via Google: “concerned with the principles of right and wrong behavior” and “holding or manifesting high principles for proper conduct.”

Tracking people without their clear invitation or a court order is simply wrong. And the fact that tracking people is normative online today doesn’t make it right.

Shoshana Zuboff’s new book, The Age of Surveillance Capitalism, does the best job I know of explaining why tracking people online became normative—and why it’s wrong. The book is thick as a brick and twice as large, but fortunately Shoshana offers an abbreviated reason in her three laws, authored more than two decades ago:

First, that everything that can be automated will be automated. Second, that everything that can be informated will be informated. And most important to us now, the third law: In the absence of countervailing restrictions and sanctions, every digital application that can be used for surveillance and control will be used for surveillance and control, irrespective of its originating intention.

I don’t believe government restrictions and sanctions are the only ways to countervail surveillance capitalism (though uncomplicated laws such as this one might help). We need tech that gives people agency and companies better customers and consumers. From our wiki, here’s what’s already going on. And, from our punch list, here are some exciting TBDs, including many already in the works:

I’m hoping Farhad, Ben, and others in a position to help can get behind those too.

The Wurst of the Web

Don’t think about what’s wrong on the Web. Think about what pays for it. Better yet, look at it.

Start by installing Privacy Badger in your browser. Then look at what it tells you about every site you visit. With very few exceptions (e.g. Internet Archive and Wikipedia), all are putting tracking beacons (the wurst cookie flavor) in your browser. These then announce your presence to many third parties, mostly unknown and all unseen, at nearly every subsequent site you visit, so you can be followed and profiled and advertised at. And your profile might be used for purposes other than advertising. There’s no way to tell.
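If you want the mechanics, here’s a toy simulation (my own, much simplified) of why those beacons matter: the same third-party identifier rides along on every site that embeds the tracker, so otherwise unrelated visits get joined into one profile. The site and browser names are made up.

```python
import uuid

class ThirdPartyTracker:
    """Toy tracker: one cookie per browser, logged at every embedding site."""

    def __init__(self):
        self.cookies = {}   # browser -> the tracker's own ID for it
        self.log = []       # (tracker_id, site) pairs

    def beacon(self, browser, site):
        # The first visit anywhere sets the cookie; every later
        # site that embeds the beacon reuses the same ID.
        tid = self.cookies.setdefault(browser, str(uuid.uuid4()))
        self.log.append((tid, site))
        return tid

    def profile(self, browser):
        """Everything the tracker has joined up about one browser."""
        tid = self.cookies.get(browser)
        return [site for t, site in self.log if t == tid]

tracker = ThirdPartyTracker()
# Three unrelated first-party sites all embed the same tracker.
for site in ["news.example", "shoes.example", "health.example"]:
    tracker.beacon("alices-browser", site)

# None of those sites shared anything with each other, yet the
# tracker now holds Alice's cross-site history in one place.
print(tracker.profile("alices-browser"))
```

That joining-up step is the whole game, and it’s what tools like Privacy Badger exist to break.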

This practice—tracking people without their invitation or knowledge—is at the dark heart and sold soul of what Shoshana Zuboff calls Surveillance Capitalism and Brett Frischmann and Evan Selinger call Re-engineering Humanity. (The italicized links go to books on the topic, both of which came out in the last year. Buy them.)

While that system’s business is innocuously and misleadingly called advertising, the surveilling part of it is called adtech. The most direct ancestor of adtech is not old-fashioned brand advertising. It’s direct marketing, best known as junk mail. (I explain the difference in Separating Advertising’s Wheat and Chaff.)

In the online world, brand advertising and adtech look the same, but underneath they are as different as bread and dirt. While brand advertising is aimed at broad populations and sponsors media it considers worthwhile, adtech does neither. Like junk mail, adtech wants to be personal, wants a direct response, and ignores massive negative externalities. It also uses media to mark, track and advertise at eyeballs, wherever those eyeballs might show up. (This is how, for example, a Wall Street Journal reader’s eyeballs get shot with an ad for, say, Warby Parker, on Breitbart.) So adtech follows people, profiles them, and adjusts its offerings to maximize engagement, meaning getting a click. It also works constantly to put better crosshairs on the brains of its human targets; and it does this for both advertisers and other entities interested in influencing people. (For example, to swing an election.)

For most reporters covering this, the main objects of interest are the two biggest advertising intermediaries in the world: Facebook and Google. That’s understandable, but they’re just the tip of the wurstberg.  Also, in the case of Facebook, it’s quite possible that it can’t fix itself. See here:

How easy do you think it is for Facebook to change: to respond positively to market and regulatory pressures?

Consider this possibility: it can’t.

One reason is structural. Facebook comprises many data centers, each the size of a Walmart or few, scattered around the world and costing many $billions to build and maintain. Those data centers maintain a vast and closed habitat where more than two billion human beings share all kinds of revealing personal shit about themselves and each other, while providing countless ways for anybody on Earth, at any budget level, to micro-target ads at highly characterized human targets, using up to millions of different combinations of targeting characteristics (including ones provided by parties outside Facebook, such as Cambridge Analytica, which have deep psychological profiles of millions of Facebook members). Hey, what could go wrong?

In three words, the whole thing.

The other reason is operational. We can see that in how Facebook has handed fixing what’s wrong with it over to thousands of human beings, all hired to do what The Wall Street Journal calls “The Worst Job in Technology: Staring at Human Depravity to Keep It Off Facebook.” Note that this is not the job of robots, AI, ML or any of the other forms of computing magic you’d like to think Facebook would be good at. Alas, even Facebook is still a long way from teaching machines to know what’s unconscionable. And it can’t in the long run, because machines don’t have a conscience, much less an able one.

You know Goethe’s (or hell, Disney’s) story of The Sorcerer’s Apprentice? Look it up. It’ll help. Because Mark Zuckerberg is both the sorcerer and the apprentice in the Facebook version of the story. Worse, Zuck doesn’t have the mastery level of either one.

Nobody, not even Zuck, has enough power to control the evil spirits released by giant machines designed to violate personal privacy, produce echo chambers beyond counting and amplify tribal prejudices (including genocidal ones)—besides whatever good those machines do for users and advertisers.

The hard work here is solving the problems that corrupted Facebook so thoroughly, and are doing the same to all the media that depend on surveillance capitalism to re-engineer us all.

Meanwhile, because lawmaking is moving apace in any case, we should also come up with model laws and regulations that insist on respect for private spaces online. The browser is a private space, so let’s start there.

Here’s one constructive suggestion: get the browser makers to meet next month at IIW, an unconference that convenes twice a year at the Computer History Museum in Silicon Valley, and work this out.

Ann Cavoukian (@AnnCavoukian) got things going on the organizational side with Privacy By Design, which is now also embodied in the GDPR. She has also made clear that the same principles should apply on the individual’s side.  So let’s call the challenge there Privacy By Default. And let’s have it work the same in all browsers.

I think it’s really pretty simple: the default is no. If we want to be tracked for targeted advertising or other marketing purposes, we should have ways to opt into that. But not some modification of the ways we have now, where every @#$%& website has its own methods, policies and terms, none of which we can track or audit. That is broken beyond repair and needs to be pushed off a cliff.

Among the capabilities we need on our side are 1) knowing what we have opted into, and 2) ways to audit what is done with information we have given to organizations, or has been gleaned about us in the course of our actions in the digital world. Until we have ways of doing both,  we need to zero-base the way targeted advertising and marketing is done in the digital world. Because spying on people without an invitation or a court order is just as wrong in the digital world as it is in the natural one. And you don’t need spying to target.
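To make those two capabilities a bit more concrete, here is a minimal sketch, in Python, of a consent ledger kept on the individual’s side: a record that answers “what have I opted into?” in one call, with revocation built in. All the names and fields here are hypothetical illustrations, not any existing standard (though efforts like Kantara’s consent-receipt work point in the same direction):

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ConsentGrant:
    """One opt-in, recorded on the individual's side."""
    party: str       # who we granted access to
    purpose: str     # what we allowed, e.g. "first-party analytics"
    granted_at: str  # ISO 8601 timestamp
    revoked: bool = False

@dataclass
class ConsentLedger:
    """The individual's own record of every opt-in. The default is no: an empty ledger."""
    grants: list = field(default_factory=list)

    def opt_in(self, party: str, purpose: str) -> None:
        self.grants.append(ConsentGrant(
            party, purpose, datetime.now(timezone.utc).isoformat()))

    def revoke(self, party: str) -> None:
        for g in self.grants:
            if g.party == party:
                g.revoked = True

    def active(self) -> list:
        """Answer 'what have I opted into?' in one call."""
        return [g for g in self.grants if not g.revoked]

ledger = ConsentLedger()  # default is no: nothing granted
ledger.opt_in("vanityfair.com", "first-party analytics")
ledger.opt_in("example-adtech.net", "ad targeting")
ledger.revoke("example-adtech.net")
print([g.party for g in ledger.active()])  # → ['vanityfair.com']
```

The point of the sketch is where the record lives: on our side, in one place, rather than scattered across every site’s own methods, policies and terms.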

And don’t worry about lost business. There are larger markets to be made on the other side of that line in the sand than in the world we have now, where more than 2 billion people block ads, giving reasons that include “Ads might compromise my online privacy” and “Stop ads being personalized.”

Those markets will be larger because incentives will be aligned around customer agency. And they’ll want a lot more from the market’s supply side than surveillance-based sausage, looking for clicks.

The only path from subscription hell to subscription heaven

I subscribe to Vanity Fair. I also get one of its newsletters, replicated on a website called The Hive. At the top of the latest Hive is this come-on: “For all that and more, don’t forget to sign up for our metered paywall, the greatest innovation since Nitroglycerin, the Allman Brothers, and the Hangzhou Grand Canal.”

When I clicked on the metered paywall link, it took me to a plain old subscription page. So I thought, “Hey, since they have tracking cruft appended to that link, shouldn’t it take me to a page that says something like, ‘Hi, Doc! Thanks for clicking, but we know you’re already a paying subscriber, so don’t worry about the paywall’?”

So I clicked on the Customer Care link to make that suggestion. This took me to a login page, where my password manager filled in the blanks with one of my secondary email addresses. That got me to my account, which says my Condé Nast subscriptions look like this:

Oddly, the email address at the bottom there is my primary one, not the one I just logged in with.  (Also oddly, I still get Wired.)

So I went to the Vanity Fair home page, found myself logged in there, and clicked on “My Account.” This took me to a page that said my email address was my primary one, and provided a way to change my password, to subscribe or unsubscribe to four newsletters, and a way to “Receive a weekly digest of stories featuring the players you care about the most.” The link below said “Start following people.” No way to check my account itself.

So I logged out from the account page I reached through the Customer Care link, and logged in with my primary email address, again using my password manager. That got me to an account page with the same account information you see above.

It’s interesting that I have two logins for one account. But that’s beside more important points, one of which I made with this message I wrote for Customer Care in the box provided for that:

Curious to know where I stand with this new “metered paywall” thing mentioned in the latest Hive newsletter. When I go to the link there — https://subscribe.condenastdigital.com/subscribe/splits/vanityfair/ — I get an apparently standard subscription page. I’m guessing I’m covered, but I don’t know. Also, even as a subscriber I’m being followed online by 20 or more trackers (reports Privacy Badger), supposedly for personalized advertising purposes, but likely also for other purposes by Condé Nast’s third parties. (Meaning not just Google, Facebook and Amazon, but Parsely and indexww, which I’ve never heard of and don’t trust. And frankly I don’t trust those first three either.) As a subscriber I’d want to be followed only by Vanity Fair and Condé Nast for their own service-providing and analytic purposes, and not by who-knows-what by all those others. If you could pass that request along, I thank you. Cheers, Doc

When I clicked on the Submit button, I got this:

An error occurred while processing your request.An error occurred while processing your request.

Please call our Customer Care Department at 1-800-667-0015 for immediate assistance or visit Vanity Fair Customer Care online.

Invalid logging session ID (lsid) passed in on the URL. Unable to serve the servlet you’ve requested.

So there ya go: one among a zillion other examples of subscription hell, differing only in details.

Fortunately, there is a better way. Read on.

The Path

The only way to pave a path from subscription and customer service hell to the heaven we’ve never had is by normalizing the ways both work, across all of business. And we can only do this from the customer’s side. There is no other way. We need standard VRM tools to deal with the CRM and CX systems that exist on the providers’ side.

We’ve done this before.

We fixed networking, publishing and mailing online with the simple and open standards that gave us the Internet, the Web and email. All those standards were easy for everyone to work with, supported boundless economic and social benefits, and began with the assumption that individuals are full-privilege agents in the world.

The standards we need here should make each individual subscriber the single point of integration for their own data, and the responsible party for changing that data across multiple entities. (That’s basically the heart of VRM.)

This will give each of us a single way to see and manage many subscriptions, see notifications of changes by providers, and make changes across the board with one move. VRM + CRM.
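To picture “one move” across the board, here is a rough sketch of a customer-side tool that propagates a single profile change to every subscription at once. The `Provider` update interface here is invented purely for illustration; the whole point of the post is that a real, common interface like it does not yet exist and needs to be standardized:

```python
from typing import Callable, Dict

class SubscriptionManager:
    """Customer-side (VRM) registry: the individual is the single
    point of integration for their own subscription data."""

    def __init__(self, email: str):
        self.profile = {"email": email}
        # Each provider exposes one hypothetical update endpoint to this tool.
        self.providers: Dict[str, Callable[[str, str], None]] = {}

    def register(self, name: str, update_fn: Callable[[str, str], None]) -> None:
        self.providers[name] = update_fn

    def change(self, field: str, value: str) -> None:
        """One move by the customer, propagated to every provider."""
        self.profile[field] = value
        for update in self.providers.values():
            update(field, value)  # every provider receives the same change

# Toy providers that simply record what they were told.
records: Dict[str, Dict[str, str]] = {}
mgr = SubscriptionManager("doc@example.com")
for name in ("vanityfair", "wired"):
    records[name] = {}
    mgr.register(name, lambda f, v, n=name: records[n].__setitem__(f, v))

mgr.change("email", "newdoc@example.com")
print(records)  # both providers now hold the new address
```

Under this arrangement the customer’s record is canonical and the providers’ copies follow from it, which is the reverse of how CRM-only systems work today.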

The same goes for customer care service requests. These should be normalized the same way.

In the absence of normalizing how people manage subscription and customer care relationships, all the companies in the world with customers will have as many different ways of doing both as there are companies. And we’ll languish in the login/password hell we’re in now.

The VRM+CRM cost savings to those companies will also be enormous. For a sense of that, just multiply what I went through above by as many people there are in the world with subscriptions, and  multiply that result by the number of subscriptions those people have — and then do the same for customer service.

We can’t fix this inside the separate CRM systems of the world. There are too many of them, competing in too many silo’d ways to provide similar services that work differently for every customer, even when they use the same back-ends from Oracle, Salesforce, SugarCRM or whomever.

Fortunately, CRM systems are programmable. So I challenge everybody who will be at Salesforce’s Dreamforce conference next week to think about how much easier it will be when individual customers’ VRM meets Salesforce B2B customers’ CRM. I know a number of VRM people  who will be there, including Iain Henderson, of the bonus link below. Let me know you’re interested and I’ll make the connection.

And come work with us on standards. Here’s one.

Bonus link: Me-commerce — from push to pull, by Iain Henderson (@iaianh1)

Actual chat with an Internet Disservice Provider


After failing to get a useful answer from Verizon about FiOS availability at a Manhattan address (via http://fios.verizon.com/fios-coverage.html), I engaged the site’s chat agent system, and had this dialog:

Jessica: Hi! I am a Verizon specialist, can I help you today?

You: I am trying to help a friend moving into ______ in New York City. The Web interface here gives a choice of three addresses, two of which are that address, but it doesn’t seem to work. She wants to know if the Gigabit deal — internet only (she doesn’t watch TV or want a phone) — is available there.
Jessica: By chatting with us, you grant us permission to review your services during the chat to offer the best value. Refusing to chat will not affect your current services. It is your right and our duty to protect your account information. For quality, we may monitor and/or review this chat.

You: sure.
Jessica: Hey there! My name is Jessica. Happy to help!

Jessica: Thank you for considering Verizon services. I would be glad to assist you with Verizon services.

You: Did you see my question?
Jessica: Thank you for sharing the address, please allow me a moment to check this for you.

Jessica: Yes, please allow me a moment to check this for you.

Jessica: I appreciate your patience.

Jessica: Do you live in the apartment?

You: No. I am looking for a friend who is moving into that building.
You: I had FiOS where I used to live near Boston and was pleased with it.
Jessica: Thank you for your consideration.

Jessica: The address where your friend will be moving require to enter the apartment number.

You: hang on
Jessica: Sure, take your time.

You: 5B
You: When we are done I
Jessica: Thank you, one more moment please.

You: would also like you to check my building as well.
Jessica: Sure, allow me a moment.

Jessica: I appreciate your patience.

Jessica: I’m extremely sorry to share this, currently at your friend’s location we don’t have Fios services.

You: Okay. How about _________ ?
You: Still there?
Jessica: Yes, I’m checking for this.

Jessica: Please stay connected.

Jessica has left the chat
You are being transferred, please hold…
You are now chatting with LOUIS
LOUIS: Good morning. I’ll be happy to assist you today. May I start by asking for your name, the phone number we are going to be working with today, and your account pin please?

You: I want to know if FiOS is available at _________.
You: __________. It is not a landline and I do not have an account.
LOUIS: Hello. You’ve reached our Verizon Wireless chat services. I don’t have an option to check on our Fios services for your area. You are able to contact our Fios sister company at the number 1-800-483-3000

You: this makes no sense. I was transfered to you by Jessica in FiOS.
LOUIS: Looks like Jessica is one of our chat agents, but we are with Verizon Wireless. Fios is our sister company, which is a different entity than us

You: Well, send some feedback to whoever or whatever is in charge. Not sure what the problem is, but it’s a fail in this round. Best to you. I now your job isn’t easy.
LOUIS: I do apologize about this, I will certainly relay this feedback on this matter. Here is a link to Verizon Communications for your residential services:https://www.verizon.com/support/residential/contact-us/index.htm

You: Thanks.
LOUIS: I want to thank you for chatting with me today. Hope you have a great day! You can find answers to additional questions at vzw.com/support. Please click on the “X” or “End Chat” button to end this chat.

You: Thanks agin.

The only way to fix this, as we’ve said here countless times, is from the customer’s side. Meanwhile, please dig Despair.com, source of the image above. For so many companies, it remains too true.

Let’s give some @VRM help to the @CFPB

The Consumer Financial Protection Bureau (@CFPB) is looking to help you help them—plus everybody else who uses financial services.

They explain:

Many new financial innovations rely on people choosing to give a company access to their digital financial records held by another company. If you’re using these kinds of services, we’d love to hear from you…

Make your voice heard. Share your comments on Facebook or Twitter. If you want to give us more details, you can share your story with us through our website. To see and respond to the full range of questions we’re interested in learning about, visit our formal Request for Information.

For example,

Services that rely on consumers granting access to their financial records include:

  • Budgeting analysis and advice:  Some tools let people set budgets and analyze their spending activity.  The tools organize your purchases across multiple accounts into categories like food, health care, and entertainment so you can see trends. Some services send a text or email notification when a spending category is close to being over-budget.

  • Product recommendations: Some tools may make recommendations for new financial products based on your financial history. For example, if your records show that you have a lot of ATM fees, a tool might recommend other checking accounts with lower or no ATM fees.

  • Account verification: Many companies need you to verify your identity and bank account information. Access to your financial records can speed that process.

  • Loan applications: Some lenders may access your financial records to confirm your income and other information on your loan application.

  • Automatic or motivational savings: Some companies analyze your records to provide you with automatic savings programs and messages to keep you motivated to save.

  • Bill payment: Some services may collect your bills and help you organize your payments in a timely manner.

  • Fraud and identity theft protection: Some services analyze your records across various accounts to alert you about potentially fraudulent transactions.

  • Investment management: Some services use your account records to help you manage your investments.
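The first bullet, budgeting analysis, is easy to picture in code. Here is a minimal sketch, with invented category rules and an assumed 90% warning threshold, of how such a tool might bucket transactions from your records and flag a category that is close to over-budget:

```python
# Hypothetical merchant-type-to-category rules; a real tool would
# derive these from merchant and transaction data across accounts.
RULES = {"grocery": "food", "restaurant": "food",
         "pharmacy": "health care", "cinema": "entertainment"}

def categorize(transactions):
    """Bucket (merchant_type, amount) pairs into spending categories."""
    totals = {}
    for merchant_type, amount in transactions:
        cat = RULES.get(merchant_type, "other")
        totals[cat] = totals.get(cat, 0) + amount
    return totals

def alerts(totals, budgets, warn_at=0.9):
    """Flag budgeted categories at or past 90% of their budget."""
    return [cat for cat, spent in totals.items()
            if cat in budgets and spent >= warn_at * budgets[cat]]

spending = categorize([("grocery", 120), ("restaurant", 80),
                       ("pharmacy", 40), ("cinema", 15)])
print(alerts(spending, {"food": 210, "health care": 100}))  # → ['food']
```

The mechanics are simple; what the CFPB is asking about is the access question: all of this only works if a company can read your financial records in the first place.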

A little more about the CFPB:

Our job is to put consumers first and help them take more control over their financial lives. We’re the one federal agency with the sole mission of protecting consumers in the financial marketplace. We want to make sure that consumer financial products and services are helping people rather than harming them.

A hat tip to @GeneKoo (an old Berkman Klein colleague) at the CFPB,  who sees our work with ProjectVRM as especially relevant to what they’re doing.  Of course, we agree. So let’s help them help us, and everybody else in the process.


If it weren’t for retargeting, we might not have ad blocking

This is a shopping vs. advertising story that starts with the JBL Flip 2 portable speaker I bought last year, when Radio Shack was going bankrupt and unloading gear in “Everything Must Go!” sales. I got it half-off for $50, choosing it over competing units on the same half-bare shelves, mostly because of the JBL name, which I’ve respected for decades. Before that I’d never even listened to one.

The battery life wasn’t great, but the sound it produced was much better than anything my laptop, phone or tablet put out. It was also small, about the size of a  beer can, so I could easily take it with me on the road. Which I did. A lot.

Alas, like too many other small devices, the Flip 2’s power jack was USB micro-b. That’s the tiny flat one that all but requires a magnifying glass to see which side is up, and tends to damage the socket if you don’t slip it in exactly right, or if you force it somehow. While micro-b jacks are all design-flawed that way, the one in my Flip 2 was so awful that it took great concentration to make sure the plug jacked in without buggering the socket.

Which happened anyway. One day, at an AirBnB in Maine, the Flip 2’s USB socket finally failed. The charger cable would fit into the socket, but the socket was loose, and the speaker wouldn’t take a charge. After efforts at resuscitation failed, I declared the Flip 2 dead.

But I was still open to buying another one. So, to replace it, I did what most of us do: I went to Amazon. Naturally, there were plenty of choices, including JBL Flip 2s and newer Flip 3s, at attractive prices. But Consumer Reports told me the best of the bunch was the Bose Soundlink Color, for $116.

So I bought a white Bose, because my wife liked that better than the red JBL.

The Bose filled Consumer Reports’ promise. While it isn’t stereo, it sounds much better than the JBL (voice quality and bass notes are remarkable). It’s also about the same size (though with a boxy rather than a cylindrical shape), has better battery life, and a better user interface. I hate that it  charges through a micro-b jack, but at least this one is easier to plug and unplug than the Flip 2 had been. So that story had a happy beginning, at least for me and Bose.

It was not happy, however, for me and Amazon.

Remember when Amazon product pages were no longer than they needed to be? Those days are gone. Now pages for every product seem to get longer and longer, and can take forever to load. Worse, Amazon’s index page is now encrusted with promotional jive. Seems like nearly everything “above the fold” (before you scroll down) is now a promo for Amazon Fashion, the latest Kindle, Amazon Prime, or the company credit card—plus rows of stuff “inspired by your shopping trends” and “related to items you’ve viewed.”

But at least that stuff risks being useful. What happens when you leave the site, however, isn’t. That’s because, unless you’re running an ad blocker or tracking protection, Amazon ads for stuff you just viewed, or put in your shopping cart, follow you from one ad-supported site to another, barking at you like a crazed dog. For example:


I lost count of how many times, and in how many places, I saw this Amazon ad, or one like it, for one speaker, the other, or both, after I finished shopping and put the Bose speaker in my cart.

Why would Amazon advertise something at me that I’ve already bought, along with a competing product I obviously chose not to buy? Why would Amazon think it’s okay to follow me around when I’m not in their store? And why would they think that kind of harassment is required, or even okay, especially when the target has been a devoted customer for more than two decades, and sure to return and buy all kinds of stuff anyway?  Jeez, they have my business!

And why would they go out of their way to appear both stupid and robotic?

The answers, whatever they are, are sure to be both fully rationalized and psychotic, meaning disconnected from reality, which is the marketplace where real customers live, and get pissed off.

And Amazon is hardly alone in this. In fact the practice is so common that it became an Onion story in October 2018: Woman Stalked Across 8 Websites By Obsessed Shoe Advertisement.

The ad industry calls this kind of stalking “retargeting,” and it is the most obvious evidence that we are being tracked on the Net. The manners behind this are completely at odds with those in the physical world, where no store would place a tracking beacon on your body and use it to follow you everywhere you go after you leave. But doing exactly that is pro forma for marketing in the digital world.

When you click on that little triangular symbol in the corner of the ad, you can see how the “interactive” wing of the advertising business, generally called adtech, rationalizes surveillance:

The program is called AdChoices, and it’s a creation of those entities in the lower right corner. The delusional conceits behind AdChoices are many:

  1. That AdChoices is “yours.” It’s not. It’s theirs.
  2. That “right ads” exist, and that we want them to find us, at all times.
  3. That making the choices they provide actually gives us control of advertising online.
  4. That our personal agency—the power to act with full effect in the world—is a grace of marketers, and not of our own independent selves.

Not long after I did that little bit of shopping on Amazon, I also did a friend the favor of looking for clothes washers, since the one in her basement crapped out and she’s one of those few people who don’t use the Internet and never will. Again I consulted Consumer Reports, which recommended a certain LG washer in my friend’s price range. I looked for it on the Web and found the best price was at Home Depot. So I told her about it, and that was that.

For me that should have been the end of it. But it wasn’t, because now I was being followed by Home Depot ads for the same LG washer and other products I wasn’t going to buy, from Home Depot or anybody else. Here’s one:


Needless to say, this didn’t endear me to Home Depot, to LG, or to any of the sites where I got hit with these ads.

All these parties failed not only in their mission to sell me something, but to enhance their own brands. Instead they subtracted value for everybody in the supply chain of unwelcome tracking and unwanted message targeting. They also explain (as Don Marti does here) why ad blocking has grown exactly in pace with growth in retargeting.

I subjected myself to all this by experimentally turning off tracking protection and ad blockers on one of my browsers, so I could see how the commercial Web works for the shrinking percentage of people who don’t protect themselves from this kind of abuse. I do a lot of that, as part of my work with ProjectVRM. I also experiment a lot with different kinds of tracking protection and ad blocking, because the developers of those tools are encouraged by that same work here.

For those new to the project, VRM stands for Vendor Relationship Management, the customer-side counterpart of Customer Relationship Management, the many-$billion business by which companies manage their dealings with customers—or try to.

Our purpose with ProjectVRM is to encourage development of tools that give us both independence from the companies we engage with and better ways of engaging than CRM alone provides: ways of engaging that we own, that are under our control, and that relate to the CRM systems of the world as well. Our goal is VRM+CRM, not VRM vs. CRM.

Ad blocking and tracking protection are today at the leading edge of VRM development, because they are popular and give us independence. Engagement, however, isn’t here yet—at least not at the same level of popularity. And it probably won’t get here until we finish curing business of the brain cancer that adtech has become.

[Later…] After reading this, a friend familiar with the adtech business told me he was sure Bose’s and JBL’s agencies paid Amazon’s system for showing ads to “qualified leads,” and that Amazon’s system preferred to treat me as a qualified lead rather than as a customer who had already bought the Bose speaker (from Amazon!). In other words, Amazon was, in a way, screwing Bose and JBL. If anyone has hard facts about this, please send them along. Until then I’ll consider this worth sharing but still unproven.


© 2024 ProjectVRM
