XBRB – stories from the Singularity.
A Blue/Red/Brown production.
A cluster-class rant. It warms up around 2:45.
Dimple jousting is the purest form of duel. Everyone can play on equal footing, the winner is obvious, and there is no chance involved.
One of the Wikipedia projects that has been developing slowly over the past two years is the Article Feedback Tool. In its first incarnation, it let readers rate articles with a star system: 1 to 5 stars in each of four areas (well-sourced, complete, neutral, and readable).
The latest version of the tool, version 5, shifts the focus to leaving a comment and noting whether or not the reader found what they were looking for. After some iteration and tweaking, including an additional abuse filter for comments, it has recently been turned on for 10% of the articles on the English Wikipedia.
This is generating roughly 1 comment per minute, or 10 per minute if it were running on all articles. In comparison, the project gets around 1 edit per second overall; so if turned on for 100% of articles, the tool would add 15-20% to the editing activity on the site. This is clearly a powerful channel of input for readers who have something to share but aren’t drawn in by the current ‘edit’ tabs.
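A quick sanity check of the arithmetic above, as a sketch using the post's round numbers (1 comment/minute at 10% coverage, ~1 edit/second site-wide):

```python
# Back-of-envelope check of the feedback-volume estimate.
comments_per_min_at_10pct = 1
comments_per_min_full = comments_per_min_at_10pct * 10  # scale from 10% to 100% of articles
edits_per_min = 1 * 60  # ~1 edit per second overall

share = comments_per_min_full / edits_per_min
print(f"{share:.0%}")  # ~17%, consistent with the 15-20% range cited
```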
What is the community’s response? Largely critical so far. The primary criticism is that the ease of commenting encourages short, casual, random, or non-useful comments, and that it tends to be one-way communication [because there's no obvious place to find responses? this isn't necessarily so; replies could auto-generate a notice on the talk page of the related IP]. Many specific suggestions and rebuttals of the initial implementation have been made, some heard more than others. Overall, the implementation was not very sensitive to the implications for curation and follow-through.
A roadmap that included a timeframe for expanding the tool from 10% to 100% of articles was posted, without a community discussion; so a Request for Comments was started by an interested community member (rather than by the designers). This started in mid-January, and currently has a plurality of respondents asking to turn the tool off until it has addressed some of the outstanding issues.
The impression of the developers, here as with some other large, organically developing feature rollouts, was not that they were getting thorough and firm testing, but that editors were fighting over every detail, making it hard to communicate about what works and why. Likewise, there has been a shortage of good facilitators to take in all varieties of feedback and produce an orderly summary and practical solutions.
So how did things go wrong? Pete gets to the heart of it in his comment, where he asks for a clearer presentation of the project hopes and goals, measures of success, and a framework for community engagement, feedback, and approval:
I think it’s a mere mistake, but it does get frustrating because WMF has made this same mistake in other big technical projects…
What I’m looking for is the kind of basic framework that would encompass possible objections, and establish a useful way of communicating about them…
WMF managed that really well with the Strategic Planning process, and with the TOU rewrite. The organization knows how to do it. I believe if it had been done in this case, things would look very different right now…
It is our technical projects that are most likely to stumble at that stage – sometimes for many months – despite putting significant energy into communication.
Can we do something about it now? Like most of the commenters on the RfC, including those opposing the current implementation, I see a great deal of potential good in this tool, while also seeing why it frustrates many active editors. It seems close to something that could be rolled out with success, to the contentment of commenters and long-time editors alike; but perhaps not through the current process of defining and discussing features, feedback, and testing (which invites confrontational challenge/response discussions that are draining and time-consuming, and that avoid actually resolving the issues raised!).
I’ll write more about this over the coming week.
MIT looked into the problem, and some reported a link to a router configuration bug that has been recurring sporadically in recent weeks. This didn’t stop many on the Internet from seeing an omen, an intervention, or a DDoS attack related to Aaron’s death.
But there may be a connection. An hour ago, after access to most of the MIT network was restored, two specific MIT sites (cogen.mit.edu and rledev.mit.edu) were hacked by Anonymous to display a page remembering Aaron. The MIT Tech has the most up-to-date coverage (“Anonymous Hacks MIT“).
The Anonymous message said, in part:
“We tender apologies to the administrators at MIT for this temporary use of their websites. We understand that it is a time of soul-searching for all those within this great institution as much — perhaps for some involved even more so — than it is for the greater internet community.”
This just went out by email, from MIT President Reif, who was inaugurated in September:
To the members of the MIT community:
Yesterday we received the shocking and terrible news that on Friday in New York, Aaron Swartz, a gifted young man well known and admired by many in the MIT community, took his own life. With this tragedy, his family and his friends suffered an inexpressible loss, and we offer our most profound condolences. Even for those of us who did not know Aaron, the trail of his brief life shines with his brilliant creativity and idealism.
Although Aaron had no formal affiliation with MIT, I am writing to you now because he was beloved by many members of our community and because MIT played a role in the legal struggles that began for him in 2011.
I want to express very clearly that I and all of us at MIT are extremely saddened by the death of this promising young man who touched the lives of so many. It pains me to think that MIT played any role in a series of events that have ended in tragedy.
I will not attempt to summarize here the complex events of the past two years. Now is a time for everyone involved to reflect on their actions, and that includes all of us at MIT. I have asked Professor Hal Abelson to lead a thorough analysis of MIT’s involvement from the time that we first perceived unusual activity on our network in fall 2010 up to the present. I have asked that this analysis describe the options MIT had and the decisions MIT made, in order to understand and to learn from the actions MIT took. I will share the report with the MIT community when I receive it.
I hope we will all reach out to those members of our community we know who may have been affected by Aaron’s death. As always, MIT Medical is available to provide expert counseling, but there is no substitute for personal understanding and support.
With sorrow and deep sympathy,
L. Rafael Reif
And at least seventeen more. (In Canada, works enter PD 50 years after the author’s final circumvention of their mortal coil.)
Now I want to hear more… but I’m bullish on it.
Huge props to the team working on this and the underlying Parsoid. It’s still in alpha, so it’s only on the English Wikipedia this week; you have to turn it on via user prefs, and it wants good feedback. But it makes the old heart-cockles sing.
via Global Voices, the Top 10 Chinese Internet Memes of 2012.
On the power and community of open source, from the WH Blog.
This isn’t written to announce the publication of their Drupal code, which they’ve been doing for some time and will continue to do (though they do announce the creation of their own space within the Drupal community); it’s primarily about how and when open source is awesome, and why it is the way to go for many practices. A great message to send, and a small step towards more open tools for society.
Crash course in false equivalence.
This overview of pattern-creation in the guise of science and its mob effect on whole fields must be read and relished.
The Six Symptoms of Pathological Science:
- The maximum effect observed is produced by an agent of barely detectable intensity. The magnitude of the effect is largely independent of the intensity of the cause.
- The effect is of a magnitude close to the limit of detectability, or many measurements are necessary because of low statistical significance of individual results.
- There are claims of great, even extraordinary, accuracy.
- Fantastic theories contrary to experience are suggested (with enthusiasm).
- Criticisms are met by ad hoc excuses thought up on the spur of the moment (this may be contagious).
- The ratio of supporters to critics rises to somewhere near 50%, then falls gradually to zero.
Also, note that the “Allison effect” and its mechanism are the most amazing example given, and may show something different from standard pathological science: it was considered good science for over a decade, and by hundreds of practitioners.
via Mitt Romney.