Most strong statements you find in the wilds of the media are about reliability:
“The best paper shredder on the market”… “the finest Earl in all of
Christendom”… “better and longer-lasting than ever!”… “More Comprehensive than Any Other English-language Encyclopedia.”
Worse, they don’t try very hard to earn your trust; they demand it. It is awfully hard to find statements about unreliability (or even adjective-free metrics), which are, funnily enough, more reliable than the first sort. You have to go all the way up to industrial-strength copiers, scanners, and shredders before you find manufacturers telling you the mean time (in copies, scans, or page-shreds) to failure for their products.
This arrogation of trust by media and advertisers finds a parallel in individual lives, with people saying whatever is necessary to claim the trust of others, not respectfully, but as though playing a
short-term game… mentioning only positive points, lying where they think it’s safe (after all, everyone is doing it!), and avoiding discussion or measurement of unreliability. This is, broadly speaking, a serious moral and economic offense. It contributes to inefficiency and ignorance in the community at large, and stalls the development of a profession of measurers (what is the right word?) — people skilled in identifying
and evaluating metrics of every sort.
To take a practical example (practical both in terms of familiar topics and in terms of things people reading this essay can directly do something about): a growing number of published articles and works refer to Wikipedia as an implicitly reliable source, often in inappropriate contexts. As its quality improves, Wikipedia seems to be shirking a certain quiet duty to be modest; something that was not a problem back when no one would have mistaken it for a meticulously edited compilation.
Example: Ann Simmons, writing in the LA Times on a matter of British peerage earlier this summer, used the clause “according to Burke’s and Wikipedia,” something which should immediately give one pause. It seems that an editor hastily appended the clause “, an online encyclopedia,” recognizing that many readers would have no idea why this unusual name appeared at that particular point in the paragraph. The full quote:
According to Burke’s and Wikipedia, an online encyclopedia, Frederick
succeeded his father, Robert Capell, the 10th Earl, who died in June.
(The late earl was a distant cousin of the 9th Lord Essex.)
The 11th Earl is a bachelor and has no children. With no other
apparent successor in sight, Capell is the new heir to the earldom. His
aristocratic genealogy is documented in the 106th edition of “Burke’s
Peerage & Baronetage.”
Please understand me: I will be the first to tell you that you can find articles and collections on Wikipedia – including many on peerage and royalty – that are among the best overviews in the English language, if only you know where to look (and how to check the latest revisions in each article’s history). But they are overviews, and at their best they reference the most authoritative sources; they should never be used as one themselves.
The process for checking information added to Burke’s and the process for adding information to Wikipedia are vastly dissimilar. The Wikipedia article on the Earl of Essex continued to list no references two months after the above bit of news drew new attention to the articles on Frederick and Robert Capell; and while the average quality of information in that article is excellent, on any given day it might include mistakes or overlooked fabrications. [Yes, just like any publication; Wikipedia likely has fewer such fabrications and errors overall, but its standard deviation in correctness is much higher.]
It is embarrassing to imagine some newscaster, writer, lawyer, politician, student, professor, or publicist citing a random article from Wikipedia, on peerage or anything else, without somehow verifying that the source article had been carefully researched. So what can be done, short of a full-fledged drive for moderated or static views of the project? What I would like to see is an internal quality-review group that issues regular recommendations to the rest of the world. At first these recommendations would look like a brief whitelist of the categories and subfields that are really top-notch and being monitored by a healthy community of respected users. Slowly it would add various hard metrics for each of two score top-level categories: spot-check accuracy; vandalism frequency and longevity; proportion and longevity of POV and other disputes; rates of new-article creation, editing, and deletion; &c., &c.
The recommendations could go out to educational, librarian, and research bodies – including some of you reading this. They would be prominently linked to the sitewide disclaimer[s]. The metrics would be available to anyone as feedback, including those working on relevant WikiProjects. What do you think? (Original blog post | A boldfaced tip o’ the cursor to lotsofissues)