
Archive for the 'Teaching' Category

The Last Post

Tuesday, September 1st, 2015

(or, I’m Moving My Blogging to Other Platforms.)

After a great run of six full years, I’ve decided to retire this blog. It worked well, but increasingly I find that most of the readership for my writing comes from my blogging at The Social Media Collective and occasionally at other venues like The Huffington Post and Wired.

Thanks so much for reading this. I’ll still be blogging and I hope that you’ll keep reading after I move things over there.

In the unlikely event that I launch any new standalone blogs I’ll be sure to alert you via my homepage.

2015 Advice For Your 865-Year-Old Ph.D.

Wednesday, August 5th, 2015

(or, What’s New About Getting an Old Degree?)

I’m delighted to be teaching an intro seminar for all the new Ph.D. students in my department’s graduate program. One of my goals is to give these students a place to talk about the environment of graduate school itself. How does getting a Ph.D. work? What do you need to know?

This task has made me reflective. At first I thought I should pass along readings that had been inspirational for me during grad school. That sure didn’t work. Here is the advice I apparently once loved:

Once you have identified some [thesis] topics you are interested in, you can research them rapidly by spending a few hours on the telephone calling up experts in the field and pumping them for information…although it may cost you a few dollars in long-distance bills…  —Getting What You Came For: The Smart Student’s Guide to Earning a Master’s or Ph.D., p. 182

Or:

I wrote the paper with which this book begins on a microcomputer. Though this first experience with one frightened me a little at first, writing soon seemed so much less work that I wondered how I had managed before. —Writing for Social Scientists, p. 151

Or:

Having surveyed the basics…it’s time to consider the role that electronic communication can play. The most important thing is to employ electronic media consciously and deliberately as part of a larger strategy for your career. —Networking on the Network: A Guide to Professional Skills for PhD Students

Or:

Fortunately, these days every legitimate library has a copy machine, and each copy costs about a dime. —How to Write a Thesis, p. 86

The process of getting a Ph.D. is very old. Wikipedia claims the first Ph.D. was awarded in Paris in 1150. I thought Ph.D. advice would be more likely to stand the test of time.

These days you’ll find better dissertation advice on Tumblr. Or at least you’ll find some comic relief from Tumblrs like When in Academia:

when someone asks you how the diss is going

(That’s some great tagging.)

The upshot is that it looks like a fair amount of the advice about how to get a Ph.D. has to do with the available communication technology of the time.  Both the stuff that’s in everyday use, and also the scholarly communication infrastructure (which I’ve also blogged about recently).

Has anyone reading this ever attended a conference paper sale? (No, that’s not about buying pre-written term papers.) Or have you ever received an academic journal article "reprint request postcard"? Here’s an image of one:

[Image: a reprint request postcard]

Source: Google Scholar Blog.

So far I’ve come up with a list of things that seem to still be helpful. Caveats: I’m aiming to help the social science and humanities students interested in communication and information. Our first year students won’t be teaching yet, so I am not focusing on teaching with this list.

Hopefully there are some readers who will find this list useful too.

How to Get a Ph.D. — The Draft Reading List

Agre, P. (2002). Networking on the Network: A Guide to Professional Skills for PhD Students. http://vlsicad.ucsd.edu/Research/Advice/network.html. I’ll excerpt the following sections:

  • Building a Professional Identity
    • Socializing at Conferences
    • Publication and Credit
    • Recognizing Difference
  • Your Dissertation
  • Academic Language

anonymous. (ed.) (2015). “When in Academia.” http://wheninacademia.tumblr.com/

Becker, H. S. (2007). Writing for Social Scientists. Chicago: University of Chicago Press. — Don’t let the title of this book fool you: it is equally applicable to graduate students in the humanities and professional programs. I’m excerpting the following sections:

  • Freshman English for Graduate Students
  • Persona and Authority
  • Learning to Write as a Professional
  • Risk
  • Terrorized by the Literature

Cham, J. (2013, January 21). “Your Conference Presentation.” (image.) PhD Comics.

Edwards, P. N. (2014). “How to Give an Academic Talk.” http://pne.people.si.umich.edu/PDF/howtotalk.pdf (13 pp.)

Germano, W. (2013) From Dissertation to Book. (2nd ed.) Chicago: University of Chicago Press. — Note: “Passive Is Spoken Here” is a great section heading. I’ll excerpt the chapter:

  • Making Prose Speak

Sterne, J. (2014). How to Peer Review Something You Hate. ICA Newsletter. (2 pp.)

Shore, B. M. (2014). The Graduate Advisor Handbook. Chicago: University of Chicago Press. I’ll excerpt:

  • Mutual Expectations for Research Advising (pp. 143-146)

Strunk, W., Jr. & White, E. B. (2000). The Elements of Style. (4th ed.) New York: Longman. (Important: You must avoid any “Original Edition” or public domain reprint that does not include E. B. White as a co-author. The version without E. B. White is a different book.)

@yourpapersucks (ed.) (2015). “Shit My Reviewers Say.”   http://shitmyreviewerssay.tumblr.com/

…however…

I see that it’s a list woefully lacking in anything like “social media savvy for Ph.D. students” or “How new forms of scholarly communication are changing the dissertation.” I’m sure there are other newish domains I’ve left out, too. What am I missing? Can anyone help me out?  Please add a comment or e-mail me.

Yours in futurity.


(this blog post was cross-posted to The Social Media Collective.)

Corrupt Personalization

Thursday, June 26th, 2014

(“And also Bud Light.”)

In my last two posts I’ve been writing about my attempt to convince a group of sophomores with no background in my field that there has been a shift to the algorithmic allocation of attention — and that this is important. In this post I’ll respond to a student question. My favorite: “Sandvig says that algorithms are dangerous, but what are the most serious repercussions that he envisions?” What is the coming social media apocalypse we should be worried about?

[Image: "google flames"]

This is an important question because people who study this stuff are NOT as interested in this student question as they should be. Frankly, we are specialists who study media and computers and things — therefore we care about how algorithms allocate attention among cultural products almost for its own sake. Because this is the central thing that we study, we don’t spend a lot of time justifying it.

And our field’s most common response to the query “what are the dangers?” often lacks the required sense of danger. The most frequent response is: “extensive personalization is bad for democracy.” (a.k.a. Pariser’s “filter bubble,” Sunstein’s “egocentric” Internet, and so on). This framing lacks a certain house-on-fire urgency, doesn’t it?

(sarcastic tone:) “Oh, no! I’m getting to watch, hear, and read exactly what I want. Help me! Somebody do something!”

Sometimes (as Hindman points out) the contention is the opposite, that Internet-based concentration is bad for democracy.  But remember that I’m not speaking to political science majors here. The average person may not be as moved by an abstract, long-term peril to democracy as the average political science professor. As David Weinberger once said after I warned about the increasing reliance on recommendation algorithms, “So what?” Personalization sounds like a good thing.

As a side note, the second most frequent response I see is that algorithms are now everywhere, and they work differently than what came before. This also lacks a required sense of danger! Yes, they’re everywhere, but if they are a good thing, why would that be a problem?

So I really like this question “what are the most serious repercussions?” because I think there are some elements of the shift to attention-sorting algorithms that are genuinely “dangerous.” I can think of at least two, probably more, and they don’t get enough attention. In the rest of this post I’ll spell out the first one, which I’ll call “corrupt personalization.”

Here we go.

Common-sense reasoning about algorithms and culture tells us that the purveyors of personalized content have the same interests we do. That is, if Netflix started recommending only movies we hate or Google started returning only useless search results we would stop using them. However: Common sense is wrong in this case. Our interests are often not the same as the providers of these selection algorithms.  As in my last post, let’s work through a few concrete examples to make the case.

In this post I’ll use Facebook examples, but the general problem of corrupt personalization is present on all of the widely used media platforms that employ algorithmic selection of content.

(1) Facebook “Like” Recycling

[Image: Facebook screenshot]

(Image from ReadWriteWeb.)

On Facebook, in addition to advertisements along the side of the interface, perhaps you’ve noticed “featured,” “sponsored,” or “suggested” stories that appear inside your news feed, intermingled with status updates from your friends. It could be argued that this is not in your interest as a user (did you ever say, “gee, I’d like ads to look just like messages from my friends”?), but I have bigger fish to fry.

Many ads on Facebook resemble status updates in that there can be messages endorsing the ads with “likes.” For instance, here is an older screenshot from ReadWriteWeb:

[Image: "Pages You May Like" suggestions on Facebook]

Another example: a “suggested” post was mixed into my news feed just this morning, recommending World Cup coverage on Facebook itself. It’s a Facebook ad for Facebook, in other words. It had this intriguing addendum:

[Image: "(friends' names censored) like Facebook"]

So, wait… I have hundreds of friends and eleven of them “like” Facebook?  Did they go to http://www.facebook.com and click on a button like this:

[Image: a magnified Facebook "Like" button]

But facebook.com doesn’t even have a “Like” button!  Did they go to Facebook’s own Facebook page (yes, there is one) and click “Like”? I know these people and that seems unlikely. And does Nicolala really like Walmart? Hmmm…

What does this “like” statement mean? Welcome to the strange world of “like” recycling. Facebook has defined “like” in ways that depart from English usage.  For instance, in the past Facebook has determined that:

  1. Anyone who clicks on a “like” button is considered to have “liked” all future content from that source. So if you clicked a “like” button because someone shared a “Fashion Don’t” from Vice magazine, you may be surprised when your dad logs into Facebook three years later and is shown a current sponsored story from Vice.com like “Happy Masturbation Month!” or “How to Make it in Porn” with the endorsement that you like it. (Vice.com example is from Craig Condon [NSFW].) (A toy sketch of this recycling logic appears below.)
  2. Anyone who “likes” a comment on a shared link is considered to “like” wherever that link points to, a.k.a. “liking a share.” So if you see a (real) FB status update from a (real) friend and it says: “Yuck! The McLobster is a disgusting product idea!” and your (real) friend includes a (real) link like this one — that means if you clicked “like” your friends may see McDonald’s ads in the future that include the phrase “(Your Name) likes McDonalds.” (This example is from ReadWriteWeb.)

[Image: a recycled "like" attached to a McDonald's ad]
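To make the mechanics concrete, here is a minimal Python sketch of rule #1. Everything in it (the event log, the function, the names) is my own invention for illustration; Facebook’s actual system is proprietary and surely far more complex.

```python
# A toy model of "like" recycling: one old click becomes a standing
# endorsement for ANY future content from the same source.
# Hypothetical data and names; the real system is proprietary.

like_events = [
    # Recorded once, reused indefinitely.
    {"user": "you", "source": "vice.com", "year": 2011},
    {"user": "Nicolala", "source": "walmart.com", "year": 2012},
]

def recycled_endorsers(ad_source):
    """Users whose old clicks can be attached to new ads from ad_source."""
    return [e["user"] for e in like_events if e["source"] == ad_source]

# Three years later, a brand-new Vice story is shown to your dad
# with your name attached, even though you never saw it:
new_ad = {"source": "vice.com", "headline": "How to Make it in Porn"}
for name in recycled_endorsers(new_ad["source"]):
    print(f'{name} likes this: "{new_ad["headline"]}"')
```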

This has led to some interesting results, like dead people “liking” current news stories on Facebook.

There is already controversy about advertiser “like” inflation, “like” spam, and fake “likes” — and these things may be a problem too, but that’s not what we are talking about here. In the examples above the system is working as Facebook designed it to. A further caveat: note that the definition of “like” in Facebook’s software changes periodically, and when they are sued. Facebook now has an opt-out setting for the above two “features.”

But these incendiary examples are exceptional fiascoes — on the whole the system probably works well. You likely didn’t know that your “like” clicks are merrily producing ads on your friends’ pages and in your name, because you cannot see them. These “stories” do not appear on your news feed and cannot be individually deleted.

Unlike the examples from my last post, you can’t quickly reproduce these results with certainty on your own account. Still, if you want to try, make a new Facebook account under a fake name (warning! dangerous!) and friend your real account. Then use the new account to watch your status updates.

Why would Facebook do this? Obviously it is a controversial practice that is not going to be popular with users. Yet Facebook’s business model is to produce attention for advertisers, not to help you — silly rabbit. So they must have felt that using your reputation to produce more ad traffic from your friends was worth the risk of irritating you. Or perhaps they thought that the practice could be successfully hidden from users — that strategy has mostly worked!

In sum this is a personalization scheme that does not serve your goals; it serves Facebook’s goals at your expense.

(2) “Organic” Content

This second group of examples concerns content that we consider to be “not advertising,” a.k.a. “organic” content. Funnily enough, algorithmic culture has produced this new use of the word “organic” — but has also made the boundary between “advertising” and “not advertising” very blurry.

[Image: a parody "organic" food ad]

The general problem is that there are many ways in which algorithms act as mixing valves between things that can be easily valued with money (like ads) and things that can’t. And this kind of mixing is a normative problem (what should we do) and not a technical problem (how do we do it).

For instance, for years Facebook has encouraged nonprofits, community-based organizations, student clubs, other groups, and really anyone to host content on facebook.com.  If an organization creates a Facebook page for itself, the managers can update the page as though it were a profile.

Most page managers expect that people who “like” that page get to see the updates… which was true until January of this year. At that time Facebook modified its algorithm so that text updates from organizations were not widely shared. This is interesting for our purposes because Facebook clearly states that it wants page operators to run Facebook ad campaigns, and not to count on getting traffic from “organic” status updates, as it will no longer distribute as many of them.
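As a thought experiment, the change can be modeled in a few lines. This sketch is purely illustrative: the reach rates and the function are my own assumptions, since Facebook has never published the real numbers.

```python
# A toy model of "organic reach" throttling. All numbers invented.

def audience_reached(followers, paid, organic_rate=0.02, paid_rate=0.30):
    """Estimate how many followers actually see a page's status update.
    organic_rate: fraction shown an unpaid ("organic") post.
    paid_rate: fraction shown a post backed by an ad budget."""
    return int(followers * (paid_rate if paid else organic_rate))

# The same status update from a 500-member school club's page:
print(audience_reached(500, paid=False))  # -> 10 people
print(audience_reached(500, paid=True))   # -> 150 people
```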

This change likely has a very different effect on, say, Nike‘s Facebook page, a small local business‘s Facebook page, Greenpeace International‘s Facebook page, and a small local church congregation‘s Facebook page. If you start a Facebook page for a school club, you might be surprised that you are spending your labor writing status updates that are never shown to anyone. Maybe you should buy an ad. Here’s an analytics graphic for a page I manage:

[Image: Facebook page analytics, "This Week"]

The impact isn’t just about size — at some level businesses might expect to have to insert themselves into conversations via persuasive advertising that they pay for, but it is not as clear that people expect Facebook to work this way for their local church or other domains of their lives. It’s as if on Facebook, people were using the yellow pages but they thought they were using the white pages.  And also there are no white pages.

(Oh, wait. No one knows what yellow pages and white pages are anymore. Scratch that reference, then.)

No need to stop here: in the future perhaps Facebook can monetize my family relationships. It could suggest that if I really want anyone to know about the birth of my child, or if I really want my “insightful” status updates to reach anyone, I should turn to Facebook advertising.

Let me also emphasize that this mixing problem extends to the content of our personal social media conversations as well. A few months back, I posted a Facebook status update that I thought was humorous. I shared a link highlighting the hilarious product reviews for the Bic “Cristal For Her” ballpoint pen on Amazon. It’s a pen designed just for women.

[Image: the Bic "Cristal For Her" ballpoint pen on Amazon]

The funny thing is that I happened to look at a friend’s Facebook feed over their shoulder, and my status update didn’t go away. It remained, pegged at the top of my friend’s news feed, for as long as 14 days in one instance. What great exposure for my humor, right? But it did seem a little odd… I queried my other friends on Facebook and some confirmed that the post was also pegged at the top of their news feeds.

I was unknowingly participating in another Facebook program that converts organic status updates into ads. It does this by changing their order in the news feed and adding the text “Sponsored” in light gray, which is very hard to see. The update is otherwise unchanged. I suspect Facebook’s algorithm thought I was advertising Amazon (since that’s where the link pointed), but I am not sure.

This is similar to Twitter’s “Promoted Tweets” but there is one big difference.  In the Facebook case the advertiser promotes content — my content — that they did not write. In effect Facebook is re-ordering your conversations with your friends and family on the basis of whether or not someone mentioned Coke, Levi’s, and Anheuser Busch (confirmed advertisers in the program).
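To picture the mechanics, here is a toy sketch of that kind of re-ordering. The advertiser list, the scoring rule, and all the names are my assumptions; I have no inside knowledge of Facebook’s implementation.

```python
# Toy model: re-rank friends' posts so that posts mentioning paying
# advertisers are pinned to the top as "Sponsored" stories.
# The advertiser list and ranking rule are hypothetical.

PAYING_ADVERTISERS = {"coke", "levi's", "anheuser busch", "amazon"}

def rank_feed(posts):
    """Newest first, except advertiser-mentioning posts float to the top."""
    def sort_key(post):
        sponsored = any(b in post["text"].lower() for b in PAYING_ADVERTISERS)
        return (not sponsored, -post["timestamp"])
    return sorted(posts, key=sort_key)

feed = [
    {"text": "My cousin got engaged!", "timestamp": 100},
    {"text": "These pen reviews on Amazon are hilarious", "timestamp": 1},
]
for post in rank_feed(feed):
    print(post["text"])
# The stale Amazon link now outranks the fresh engagement news.
```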

Sounds like a great personal social media strategy there–if you really want people to know about your forthcoming wedding, maybe just drop a few names? Luckily the algorithms aren’t too clever about this yet, so you can mix up the word order for humorous effect.

(Facebook status update:) “I am so delighted to be engaged to this wonderful woman that I am sitting here in my Michelob drinking a Docker’s Khaki Collection. And also Coke.”

Be sure to use links. The interesting thing about this mixing of the commercial and non-commercial is that it sounds to my ears like some corny, unrealistic science fiction scenario, and yet with the current Facebook platform I believe the above example would work. We are living in the future.

So to recap, if Nike makes a Facebook page and posts status updates to it, that’s “organic” content because they did not pay Facebook to distribute it. Although any rational human being would see it as an ad. If my school group does the same thing, that’s also organic content, but they are encouraged to buy distribution — which would make it inorganic. If I post a status update or click “like” in reaction to something that happens in my life and that happens to involve a commercial product, my action starts out as organic, but then it becomes inorganic (paid for) because a company can buy my words and likes and show them to other people without telling me. Got it? This paragraph feels like we are rethinking CHEM 402.

The upshot is that control of the content selection algorithm is used by Facebook to get people to pay for things they wouldn’t expect to pay for, and to show people personalized things that they don’t think are paid for. But these things were in fact paid for. In sum this is again a scheme that does not serve your goals; it serves Facebook’s goals at your expense.

The Danger: Corrupt Personalization

With these concrete examples behind us, I can now more clearly answer this student question. What are the most serious repercussions of the algorithmic allocation of attention?

I’ll call this first repercussion “corrupt personalization” after C. Edwin Baker. (Baker, a distinguished legal philosopher, coined the phrase “corrupt segmentation” in 1998 as an extension of the theories of philosopher Jürgen Habermas.)

Here’s how it works: You have legitimate interests that we’ll call “authentic.” These interests arise from your values, your community, your work, your family, how you spend your time, and so on. A good example might be that as a person who is enrolled in college you might identify with the category “student,” among your many other affiliations. As a student, you might be authentically interested in an upcoming tuition increase or, more broadly, about the contention that “there are powerful forces at work in our society that are actively hostile to the college ideal.”

However, you might also be authentically interested in the fact that your cousin is getting married. Or in pictures of kittens.

[Image: Grumpy Cat meme]

Corrupt personalization is the process by which your attention is drawn to interests that are not your own. This is a little tricky because it is impossible to clearly define an “authentic” interest. However, let’s put that off for the moment.

In the prior examples we saw some (I hope) obvious places where my interests diverged from those of algorithmic social media systems. Highlights for me were:

  • When I express my opinion about something to my friends and family, I do not want that opinion re-sold without my knowledge or consent.
  • When I explicitly endorse something, I don’t want that endorsement applied to other things that I did not endorse.
  • If I want to read a list of personalized status updates about my friends and family, I do not want my friends and family sorted by how often they mention advertisers.
  • If a list of things is chosen for me, I want the results organized by some measure of goodness for me, not by how much money someone has paid.
  • I want paid content to be clearly identified.
  • I do not want my information technology to sort my life into commercial and non-commercial content and systematically de-emphasize the noncommercial things that I do, or turn these things toward commercial purposes.

More generally, I think the danger of corrupt personalization is manifest in three ways.

  1. Things that are not necessarily commercial become commercial because of the organization of the system. (Merton called this “pseudo-gemeinschaft”; Habermas called it “colonization of the lifeworld.”)
  2. Money is used as a proxy for “best” and it does not work. That is, those with the most money to spend can prevail over those with the most useful information. The creation of a salable audience takes priority over your authentic interests. (Smythe called this the “audience commodity”; it is Baker’s “market filter.”)
  3. Over time, if people are offered things that are not aligned with their interests often enough, they can be taught what to want. That is, they may come to wrongly believe that these are their authentic interests, and it may be difficult to see the world any other way. (Similar to Chomsky and Herman’s [not Lippmann’s] arguments about “manufacturing consent.”)

There is nothing inherent in the technologies of algorithmic allocation that is doing this to us; instead, the economic organization of the system is producing these pressures. In fact, we could design a system to support our authentic interests, but we would then need to fund it. (Thanks, late capitalism!)

To conclude, let’s get some historical perspective. What are the other options, anyway? If cultural selection is governed by computer algorithms now, you might answer, “who cares?” It’s always going to be governed somehow. If I said in a talk about “algorithmic culture” that I don’t like the Netflix recommender algorithm, what is supposed to replace it?

This all sounds pretty bad, so you might think I am asking for a return to “pre-algorithmic” culture: Let’s reanimate the corpse of Louis B. Mayer and he can decide what I watch. That doesn’t seem good either and I’m not recommending it. We’ve always had selection systems and we could even call some of the earlier ones “algorithms” if we want to.  However, we are constructing something new and largely unprecedented here and it isn’t ideal. It isn’t that I think algorithms are inherently dangerous, or bad — quite the contrary. To me this seems like a case of squandered potential.

With algorithmic culture, computers and algorithms are allowing a new level of real-time personalization and content selection on an individual basis that just wasn’t possible before. But rather than use these tools to serve our authentic interests, we have built a system that often serves a commercial interest that is often at odds with our interests — that’s corrupt personalization.

If I use the dominant forms of communication online today (Facebook, Google, Twitter, YouTube, etc.) I can expect content customized for others to use my name and my words without my consent, in ways I wouldn’t approve of. Content “personalized” for me includes material I don’t want, and obscures material that I do want. And it does so in a way that I may not be aware of.

This isn’t an abstract problem like a long-term threat to democracy, it’s more like a mugging — or at least a confidence game or a fraud. It’s violence being done to you right now, under your nose. Just click “like.”

In answer to your question, dear student, that’s my first danger.

* * *

ADDENDUM:

This blog post is already too long, but here is a TL;DR addendum for people who already know about all this stuff.

I’m calling this corrupt personalization because I can’t just apply Baker’s excellent ideas about corrupt segments — the world has changed since he wrote them. Although this post’s reasoning is an extension of Baker, it is not a straightforward extension.

Algorithmic attention is a big deal because we used to think about media and identity using categories, but the algorithms in wide use are not natively organized that way. Baker’s ideas were premised on the difference between authentic and inauthentic categories (“segments”), yet segments are just not that important anymore. Bermejo calls this the era of post-demographics.

Advertisers used to group demographics together to make audiences comprehensible, but it may no longer be necessary to buy and sell demographics or categories as they are a crude proxy for purchasing behavior. If I want to sell a Subaru, why buy access to “Brite Lights, Li’l City” (My PRIZM marketing demographic from the 1990s) when I can directly detect “intent to purchase a station wagon” or “shopping for a Subaru right now”? This complicates Baker’s idea of authentic segments quite a bit. See also Gillespie’s concept of calculated publics.

Also Baker was writing in an era where content was inextricably linked to advertising because it was not feasible to decouple them. But today algorithmic attention sorting has often completely decoupled advertising from content. Online we see ads from networks that are based on user behavior over time, rather than what content the user is looking at right now. The relationship between advertising support and content is therefore more subtle than in the previous era, and this bears more investigation.

Okay, okay I’ll stop now.

(This post was cross-posted to The Social Media Collective.)

Show and Tell: Algorithmic Culture

Tuesday, March 25th, 2014

(or, What you need to know about “puppy dog hate”)

(or, “It’s not that I’m uninterested in hygiene…”)

Last week I tried to get a group of random sophomores to care about algorithmic culture. I argued that software algorithms are transforming communication and knowledge. The jury is still out on my success at that, but in this post I’ll continue the theme by reviewing the interactive examples I used to make my point. I’m sharing them because they are fun to try. I’m also hoping the excellent readers of this blog can think of a few more.

I’ll call my three examples “puppy dog hate,” “top stories fail,” and “your DoubleClick cookie filling.”  They should highlight the ways in which algorithms online are selecting content for your attention. And ideally they will be good fodder for discussion. Let’s begin:

Three Ways to Demonstrate Algorithmic Culture

(1.) puppy dog hate (Google Instant)

You’ll want to read the instructions fully before trying this. Go to http://www.google.com/ and type “puppy”, then [space], then “dog”, then [space], but don’t hit [Enter]. That means you should have typed “puppy dog ” (with a trailing space). Results should appear without the need to press [Enter]; I got a full page of instant results for “puppy dog”.

Now repeat the above instructions but instead of “puppy” use the word “bitch” (so: “bitch dog ”). Right now you’ll get nothing. I got nothing. No matter how many words you type, if one of the words is “bitch” you’ll get no instant results.

What’s happening? Google Instant is the Google service that displays results while you are still typing your query. In the algorithm for Google Instant, it appears that your query is checked against a list of forbidden words. If the query contains one of the forbidden words (like “bitch”) no “instant” results will be shown, but you can still search Google the old-fashioned way by pressing [Enter].
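Based only on the observed behavior, the logic seems consistent with something as crude as the following sketch. The word list and all the names here are mine, not Google’s.

```python
# A guess at Google Instant's suppression logic, inferred from
# observed behavior. The blacklist and names are mine, not Google's.

FORBIDDEN = {"bitch", "hate"}  # the 2600-documented list is much longer

def search(query):
    """Stand-in for an ordinary search backend."""
    return [f"result for {query!r}"]

def instant_results(partial_query):
    """Show live results while typing, unless any word is blacklisted."""
    if any(word in FORBIDDEN for word in partial_query.lower().split()):
        return []  # show nothing; the user must press [Enter] instead
    return search(partial_query)

print(instant_results("puppy dog "))       # results appear as you type
print(instant_results("bitch dog "))       # -> [] (suppressed)
print(instant_results("puppy dog hate "))  # -> [] (also suppressed)
```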

This is an interesting example because it is incredibly mild censorship, and that is typical of algorithmic sorting on the Internet. Things aren’t made to be impossible; some things are just a little harder than others. We can discuss whether or not this actually matters to anyone. After all, you could still search for anything you wanted to, but some searches are made slightly more time-consuming because you will have to press [Enter] and you do not receive real-time feedback as you construct your search query.

It’s also a good example that makes clear how problematic algorithmic censorship can be. The hackers over at 2600 reverse engineered Google Instant’s blacklist (NSFW) and it makes absolutely no sense. The blocked words I tried (like “bitch”) produce perfectly inoffensive search results (sometimes because of other censorship algorithms, like Google SafeSearch). It is not clear to me why they should be blocked. For instance, anatomical terms for some parts of the female anatomy are blocked while other parts of the female anatomy are not blocked.

Some of the blocking is just silly. For instance, “hate” is blocked. This means you can make the Google Instant results disappear by adding “hate” to the end of an otherwise acceptable query. e.g., “puppy dog hate ” will make the search results I got earlier disappear as soon as I type the trailing space. (Remember not to press [Enter].)

This is such a simple implementation that it barely qualifies as an algorithm. It also differs from my other examples because it appears that an actual human compiled this list of blocked words. That might be useful to highlight because we typically think that companies like Google do everything with complicated math and not site-by-site or word-by-word rules — they have claimed as much, but this example shows that in fact this crude sort of blacklist censorship still goes on.

Google does censor actual search results (what you get after pressing [Enter]) in a variety of ways but that is a topic for another time. This exercise with Google Instant at least gets us started thinking about algorithms, whose interests they are serving, and whether or not they are doing their job well.

(2.) Top Stories Fail (Facebook)

In this example, you’ll need a Facebook account.  Go to http://www.facebook.com/ and look for the tiny little toggle that appears under the text “News Feed.” This allows you to switch between two different sorting algorithms: the Facebook proprietary EdgeRank algorithm (this is the default), and “most recent.” (On my interface this toggle is in the upper left, but Facebook has multiple user interfaces at any given time and for some people it appears in the center of the page at the top.)

Switch this toggle back and forth and look at how your feed changes.

What’s happening? Okay, we know that among 18-29 year-old Facebook users the median number of friends is now 300. Even given that most people are not over-sharers, with some simple arithmetic it is clear that some of the things posted to Facebook may never be seen by anyone. (If your 300 friends post even once a day on average and you glance at a few dozen stories, most of what is posted must go unseen.) A status update is certainly unlikely to be seen by anywhere near your entire friend network. Facebook’s “Top Stories” (EdgeRank) algorithm is the solution to the oversupply of status updates and the undersupply of attention to them: it determines what appears on your news feed and how it is sorted.
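Facebook reportedly described EdgeRank as a product of three factors: your affinity with the poster, the weight of the story type, and a time decay. Here is a toy ranker in that spirit; every number and name below is invented for illustration.

```python
# A toy "Top Stories" ranker in the spirit of EdgeRank's reported
# factors: affinity x story-type weight x time decay. Numbers invented.

def edge_score(affinity, weight, hours_old, half_life=24.0):
    """affinity: how much you interact with this friend (0 to 1).
    weight: story type (say, photos score higher than plain text).
    The score halves every `half_life` hours."""
    return affinity * weight * 0.5 ** (hours_old / half_life)

posts = [  # (label, affinity, weight, hours_old)
    ("close friend's photo", 0.9, 3.0, 2),
    ("acquaintance's status", 0.1, 1.0, 1),
    ("silenced friend's post", 0.0, 1.0, 1),  # zero affinity: never shown
]
for label, *factors in sorted(
    posts, key=lambda p: edge_score(*p[1:]), reverse=True
):
    print(label)
```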

We know that Facebook’s “Top Stories” sorting algorithm uses a heavy hand. It is quite likely that you have people in your friend network who post to Facebook A LOT, yet Facebook has decided to filter out ALL of their posts. These might be called your “silenced Facebook friends.” Sometimes when people do this toggling-the-algorithm exercise they exclaim: “Oh, I forgot that so-and-so was even on Facebook.”

Since we don’t know the exact details of EdgeRank, it isn’t clear exactly how Facebook is deciding which of your friends you should hear from and which should be ignored. Even though the algorithm might be well-constructed, it’s interesting that when I’ve done this toggling exercise with a large group a significant number of people say that Facebook’s algorithm produces a much more interesting list of posts than “Most Recent,” while a significant number of people say the opposite — that Facebook’s algorithm makes their news feed worse. (Personally, I find “Most Recent” produces a far more interesting news feed than “Top Stories.”)

It is an interesting intellectual exercise to try and reverse-engineer Facebook’s EdgeRank on your own by doing this toggling. Why is so-and-so hidden from you? What is it they are doing that Facebook thinks you wouldn’t like? For example, I think that EdgeRank doesn’t work well for me because I select my friends carefully, then I don’t provide much feedback that counts toward EdgeRank after that. So my initial decision about who to friend works better as a sort without further filtering (“most recent”) than Facebook’s decision about what to hide. (In contrast, some people I spoke with will friend anyone, and they do a lot more “liking” than I do.)

What does it mean that your relationship to your friends is mediated by this secret algorithm? A minor note: some people have reported that if you switch to “most recent,” after a while Facebook will switch you back to the “Top Stories” algorithm without asking.

There are deeper things to say about Facebook, but this is enough to start with. Onward. 

(3.) Your DoubleClick Cookie Filling (DoubleClick)

This example will only work if you browse the Web regularly from the same Web browser on the same computer and you have cookies turned on. (That describes most people.) Go to the Google Ads settings page — the URL is a mess so here’s a shortcut: http://bit.ly/uc256google

Look at the right column, headed “Google Ads Across The Web,” then scroll down and look for the section marked “Interests.” The other parts may be interesting too, such as Google’s estimate of your Gender, Age, and the language you speak — all of which may or may not be correct.

If you have “interests” listed, click on “Edit” to see a list of topics.

What’s Happening? Google is the largest advertising clearinghouse on the Web. (It bought DoubleClick in 2007 for over $3 billion.) When you visit a Web site that runs Google Ads — this is likely quite common — your visit is noted and a pattern of all of your Web site visits is then compiled and aggregated with other personal information that Google may know about you.

What a big departure from some old media! In comparison, in most states it is illegal to gather a list of books you’ve read at the library because this would reveal too much information about you. Yet for Web sites this data collection is the norm.

This settings page won’t reveal Google’s ad placement algorithm, but it shows you part of the result: a list of the categories that the algorithm is currently using to choose advertising content to display to you. Your attention will be sold to advertisers in these categories and you will see ads that match these categories.

This list is quite volatile, which reflects the way Google hopes to connect advertisers with people who are interested in a particular topic RIGHT NOW. Unlike demographics that are presumed to change slowly (age) or not to change at all (gender), Google appears to base a lot of its algorithm on your recent browsing history. That means if you browse the Web differently you can change this list fairly quickly (in a matter of days, at least).
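One way to picture this volatility: a recency-weighted tally of the categories of the sites you visit, as in the sketch below. The category mapping, weights, and decay rate are all my assumptions; Google does not publish its method.

```python
# A toy interest profiler: recency-weighted counts of the categories
# of visited sites. Mapping, decay, and sites are all hypothetical.
from collections import defaultdict

SITE_CATEGORIES = {
    "examprep.example.com": "Standardized & Admissions Tests",
    "dicerolls.example.com": "Roleplaying Games",
    "soapfans.example.com": "Hygiene & Toiletries",
}

def interest_profile(visits, decay=0.9):
    """visits: list of (days_ago, site). Recent visits count more."""
    scores = defaultdict(float)
    for days_ago, site in visits:
        category = SITE_CATEGORIES.get(site)
        if category:
            scores[category] += decay ** days_ago
    return sorted(scores, key=scores.get, reverse=True)

visits = [(0, "dicerolls.example.com"), (1, "dicerolls.example.com"),
          (3, "examprep.example.com"), (30, "soapfans.example.com")]
print(interest_profile(visits))  # most recently reinforced interests first
```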

Many people find the list uncannily accurate, while some are surprised at how inaccurate it is. Usually it is a mixture. Note that some categories are very specific (“Currency Exchange”), while others are very broad (“Humor”). Right now it thinks I am interested in 27 things; some of them are:

  • Standardized & Admissions Tests (Yes.)
  • Roleplaying Games (Yes.)
  • Dishwashers (No.)
  • Dresses (No.)

You can also type in your own interests to save Google the trouble of profiling you.

Again this is an interesting algorithm to speculate about. I’ve been checking this for a few years and I persistently get “Hygiene & Toiletries.” I am insulted by this. It’s not that I’m uninterested in hygiene but I think I am no more interested in hygiene than the average person. I don’t visit any Web sites about hygiene or toiletries. So I’d guess this means… what exactly? I must visit Web sites that are visited by other people who visit sites about hygiene and toiletries. Not a group I really want to be a part of, to be honest.

These were three examples of algorithm-ish activities that I’ve used. Any other ideas? I was thinking of trying something with an item-to-item recommender system but I could not come up with a great example. I tried anonymized vs. normal Web searching to highlight location-specific results but I could not think of a search term that did a great job showing a contrast. I also tried personalized Twitter trends vs. location-based Twitter trends but the differences were quite subtle. Maybe you can do better.

In my next post I’ll write about how the students reacted to all this.

 

(This was also cross-posted to The Social Media Collective.)

 

Writing the Casual Games Syllabus

Monday, June 3rd, 2013

(or, “I don’t know how to skim a game.”)

Here’s my question: What is the ideal list of 16 games that, if you played them, would give you a picture of all that is possible in gaming? Oh, yeah, and they have to be fast, quick-to-learn, and mostly free (hence the “casual” in the title).

I’ll be teaching a course next Fall at the University of Michigan entitled “Play and Technology.” It’s an advanced seminar that surveys the social science and humanities literature on the idea of “play,” then applies that literature to computer-mediated communication, video games, and other kinds of what we’ll call “playful technologies.” Both the midterm and the final project ask students to craft a conceptual design for a playful technology. Hopefully we’ll learn something about people and something about designing play experiences.

Still curious? Here’s a printable flyer for the course (PDF).

In the past I’ve taught a similar course. A serious problem with it has been that people come to the topic of play and new media from such a wide variety of practical perspectives. Since it is an elective, usually everyone who enrolls likes games or play or technology — likely all three. And people like particular games A LOT. But… everyone’s a fanatic about a different thing.

So student #1 will let loose in a class discussion with what is probably a brilliant analysis of Aristotle’s Poetics as applied to Escape from Rungistan, which he/she plays religiously every evening on an Apple II emulator. But after they’ve finished speaking, since no one else in the class has ever played Escape from Rungistan (or heard of it),* there is an awkward silence.

[Image: Escape From Rungistan, c. 1982]

(*Okay actually that’s not 100% true.  I’ve played Escape from Rungistan.)

Then after a long pause, Student #2 will try to explain Piaget using an example from Farm Town. Farm Town is the game that FarmVille ripped off, by the way. So no one else — maybe no one else in this state — has ever played it except for student #2.* But student #2 knows every nuance. Every vegetable. And student #2 wants to get down and dirty in the details. Student #2 is talking about growing Chamomile vs. Quinoa and their implications for the ontological trajectory of developmental psychology, which is totally a level 112 kind of debate. Since no one else has any idea what he/she is talking about, there is an awkward silence.

[Image: Farm Town, c. 2009]

(*Okay actually that’s not 100% true.  I’ve played Farm Town.)

So what’s the solution? In the past I’ve asked students to try a specific game that we all play together.  It has often been a recognizable game (e.g., once, a long time ago, we played a version of Quake). That’s useful but it really does an injustice to the great diversity of kinds of play that are possible. We get stuck in one play mode (FPS, in this case). It also feels unfair because many students are already experts in any given mainstream title, and I find the novices resent it.

What students seem to need is a variety of ideas that they can use to template their own projects, not an in-depth, semester-long study of a mainstream title. And many mainstream games are LONG. I once required that an undergraduate class play Civilization IV. I thought it would be great (bestselling, award-winning game, right?), but a lot of students absolutely hated the fact that it was so involved.

One student summed it up by saying: “If you assign a game instead of a reading, I don’t know how to skim a game.” It takes hours and hours of work to get anything out of Civ IV. Come to think of it, it takes hours and hours of work to finish a single game of Civ IV.

So here is my challenge to you, dear reader. I have sixteen weeks in the semester. Let’s say we assign a game a week. For the reasons specified above these games would have to be short (“casual”) or at least you should be able to get the idea in the first level (or in a demo). Honestly I think these games should ideally be obscure so that everyone starts on the same page. The set of games as a whole, as befits a syllabus, would emphasize the diversity of different kinds of games that are possible.

Being required to do something can completely drain the fun out for some people. So this isn’t supposed to be a list of super fun games, since as soon as I require them I will drain the fun out (at least for some students). Instead, each game should have something unique to say about the art and science of game design. Each should have something to say about human behavior. If the game isn’t particularly fun (hello, Ian Bogost’s brilliant Cow Clicker), so what? It’s required. It’s important. There’s something to learn from it. We can have a productive conversation about it.  This is not a “T0p 16 Cazual Games EVAR!!!1!!1!!” blog post.

The games would have to be free or cheap. Just as I try to keep assigned textbook costs down, I want to keep assigned game costs down. I would feel OK if a few weeks of the class required a game purchase — we can set the game up in a computer lab for those unwilling or unable to pay. But a console title per week? Impossible. That’s a $700 textbook budget for one class.

I have some key dimensions in mind that it would be great to explore with this list: e.g., social vs. not social, narrative vs. non-narrative, violent vs. non-violent, historical vs. contemporary, etc.  But I think rather than giving you an exhaustive list I’d rather hear what you are thinking and adapt this to my own purposes.

However, to get things started here is a draft of what I am thinking about. What are the areas that I’ve left off?  What are the games that are better exemplars in their category — however you define their category?

Example Syllabus (DRAFT)

  1. Passage. A free art game that defies simple explanation and takes just 5 minutes to play.
  2. World of Tanks. Quick online team combat with strangers. Likely there’ll be some weird lobby talk (“Hetzer gonna Hetz!”). A standout in the freemium realm, it would help people experience an FPS-like game even if you suck at shooting and running around — just pick a slow tank.
  3. Escape From Rungistan. (You saw that coming, right?) The text/graphics split screen adventure game has died out. Playing it via an emulator would be an interesting way to comment on history, genre, and technological limitations of a platform. Not a particularly fast game but we can play just the first few screens and get an idea of things.
  4. SpaceChem. We have to have a puzzle game, and I think it would be interesting to put in one game that is just terribly and intentionally hard for most people. It’s a great game but it’s an interesting design choice to make a game that most players will never be able to finish. Also there’s a free demo.
  5. (or 4.5?) Lego Junkbot. Another ingenious puzzler. Could be paired with SpaceChem so that there is a simple puzzle alternative to SpaceChem’s insanity. However I can’t find Lego Junkbot online anymore. Is it dead?
  6. Diner Dash. Classic. Quick to play and you get to experience the real time management genre.  I see that I’m on a bit of an Eric Zimmerman theme now but that’s only because he is brilliant. It looks like you can play it for free with a trial subscription.
  7. (or 6.5?) Atom Zombie Smasher. Also a real time management game but quite a different take on things. And so much style! It has a free demo, at least on Steam.
  8. Façade. Fast to play, free — and a great way to talk about narrative. Can be paired with an article talking about the game.
  9. (or 8.5?) Thirty Flights of Loving. Oooh, this could be assigned along with Façade. Another interesting take on narrative. Another art-y, indie blast of freshness. Now I’m on a Brendon Chung roll here. But I may have to repeat some game designers due to their absolute brilliance. 
  10. Electro City. Simple and obscure city simulator that has a green power agenda. Free online, quick to learn, quick to play, and speaks to G4C and simulations.  Not a great game though — maybe there is something better?
  11. Some sort of children’s game that is supposed to teach you something? My gaming repertoire is too antiquated to know what to put here.  Lemonade Stand anyone? Not sure.
  12. Something from GWAP (Games With a Purpose)… maybe The ESP Game — a free online multiplayer anonymous guessing game that serves the strict master of human computation.
  13. Some kind of game of chance or gambling.  Hard to think of one that would be unfamiliar and not illegal, but this is such a big domain of human play it seems important to include.
  14. Some kind of multiplayer game with really simple rules that leads to very complex gameplay, so that we can talk about how to write rules. SiSSYFiGHT 2000 would be perfect if it is finished in time. But that would be my third Zimmerman.
  15. Habbo Hotel or another social environment without much gameplay per se. Hopefully class members will not be arrested as stalkers.
  16. Maybe another classic game included because it was historically significant in the development of games?  Hard to think of one right now.  A kind of “this was the first game to do X” kind of game. Not sure.  You can see I’m running out of ideas at #16!

I pledge to you that the most useful response submitted will receive a prize of my choosing, entirely at my discretion. I will actually mail it to you. It will be a physical object. You are welcome to submit a thought, an idea, a criticism, a single game, or an entire syllabus.

If you’d like, please include your suggestions as a comment to this post. Or if you’d prefer to do this privately, email me at casual-games@umich.edu.  Let the syllabus writing begin!

[This post was also cross-posted to The Social Media Collective.]
