Wikipedia:Wikipedia Signpost/Single/2015-09-30
Wikimedia Foundation fundraising report, Montreal to host 2017 Wikimania
Wikimedia fundraising report
The Wikimedia Foundation has published its 2014–2015 Fundraising Report (mailing-list announcement).
Some key points:
- The Wikimedia Foundation's 2014–2015 fiscal year was "the most successful fundraising cycle in our history", with $75.5 million raised from 4.9 million donations
- This is an increase of $22.9 million over the prior year, and $17 million more than the planned total of $58.5 million indicated in the 2014–2015 Annual Plan
- Biggest day for online donations: December 3rd, 2014
- Donations of less than $100 account for 74% of total revenue, with the average donation being $15
- 18% of donations are under $10, 35% are in the $10–$30 range
The Foundation has also surveyed users to assess the perceived intrusiveness of fundraising banners of different sizes as well as users' sentiment towards Wikipedia, and has expanded its email campaign: "Readers submit their email address for future communications when they make a donation and a year later the fundraising team sends an annual reminder to donate." The Foundation sent 5,710,299 such emails, resulting in a total of $8,310,107 raised from 370,205 donations, a 90% increase over 2013–2014.
Major gifts exceeding $1,000 have grown, representing a total of $10,700,000 from 1,397 donations. In 2014, the Wikimedia Foundation also received the largest single gift in its history, "a $5 million unrestricted donation from an anonymous donor that will support $1 million worth of expenses per year, for the next five years".
The report concludes with a reminder that readership is in decline, especially in a number of key fundraising countries, with the rise in mobile readership unable to make up for the larger loss in desktop pageviews. In the United States, for example, total pageviews are expected to be down by 5% this December compared to December 2014. Another aspect of the widespread shift from desktop to mobile is that mobile readers are generally less likely to donate than desktop readers. AK
Wikimania steering committee selects Montreal to host 2017 iteration of conference
The Wikimania Steering Committee's plans for future Wikimanias—the annual conference of the Wikimedia community[1]—have been revealed in an apparent leak by Leung Chung-ming (春卷柯南) to the Wikimedia-l mailing list.
According to Google Docs files linked from the mailing list, Wikimania locations will now be determined on a three-year rotation: the first year covers Western, Northern, and Southern Europe—specifically excluding Eastern Europe—the second year the United States/Canada, and the third year somewhere in the rest of the world. Without apparent irony, the committee implies that fewer areas of the world will now be "ignored" by Wikimania.[2]
The first two, and possibly three, locations have already been determined: Italy, Canada, and South Asia.
First, Wikimania 2016 will be held in Esino Lario, a small village in the Italian Alps. This selection, made last year through the now-deprecated bidding process, was not uncontroversial.
Wikimania 2017: Montreal
Second, the Steering Committee plans to bring Wikimania to Montreal, Canada, in 2017—a choice made without public consultation, transparent planning, or announcement to other potential bidding teams, even though the committee was "happy to endorse" the location as far back as its August 2015 meeting.
The Montreal team will be led by Marc-André Pelletier (Coren), a former member of the English Wikipedia's Arbitration Committee. The draft announcement does not note that he is also a current employee of the Wikimedia Foundation.
The draft announcement says that the committee reviewed several options and talked with "several" community members. These evidently did not include planned bidders Perth, who have been preparing a bid since at least September 19, or Manila, who have long planned to revive their failed 2016 bid for 2017. Josh Lim (Sky Harbor), an organizer for the latter, wrote to Wikimedia-l that "I am at a loss for words as to how to express my utter disappointment at how this process seems to have been rammed through without any sort of consultation taking place whatsoever, despite assurances made to the contrary ... my faith in the entire Wikimania process at this point is visibly shaken."
Wikimania 2018: South Asia?
Last, the Steering Committee appears to be planning for a Wikimania 2018 in South Asia. Deror Avi—the representative remaining on the Steering Committee from Wikimania 2011 in Haifa, Israel—writes that there are "keen" individuals from South Asia who would like to host the conference, and James Forrester, the chair of the Steering Committee and a Wikimedia Foundation employee, appears ready to select the region in a comment dated August 18. Ellie Young, the Foundation's events manager, is not.
Bidding process changes
A somewhat less controversial change may lie in the move away from the old bidding process. As the Steering Committee notes in a draft message:
The existing bidding process has developed over time. It has become unwieldy and hard work for the community and staff. It demands that people pour a huge amount of effort into building local teams, contracts and institutional relationships only for rejected bids' work to be left unused. A lot of pressure is put on volunteers to try to work on logistics rather than dream about what would make a great programme for our communities. Each year, the jury has to decide on a venue based on what is presented by each group divisively, rather than what we as a community could come together and build.
The process is too short-term, setting out venue[s] much less than two years ahead (often only just more than twelve months in advance). This greatly increases expenses when other similar conferences plan locations out many years ahead. This makes it impossible for us to be strategic about location, prevents us from arranging co-location with like-minded conferences, and it means that some areas of the world are ignored when they could provide great Wikimanias.
- ^ The Wikimedia Conference, usually held in Berlin each year, is principally for affiliate organizations.
- ^ The specific quote in the new process announcement says that the current bidding process "means that some areas of the world are ignored when they could provide great Wikimanias."
Brief notes
- WMF hiring: The WMF has announced that Boryana Dineva will be the new Vice President for Human Resources. She joins the WMF from Tesla Motors. G
- Wikimedia Foundation seeks new board members: The Wikimedia Foundation's new VP of Human Resources, Boryana Dineva, issued a Call for Board nominees on September 26; nominations had to be received by September 30. A separate pdf document indicates that the Foundation seeks to add two board members by the end of 2015. The desired areas of expertise focus on governance (finance, auditing, operations, business models) and diversity (human dynamics & social behavior at scale, online cultures & culture diversity, emerging economics, emerging & global media). AK
- New functionary: Philippe Beaudette (Philippe) was appointed a CheckUser and Oversighter by the Arbitration Committee. Beaudette, an administrator since 2007, recently left the position of Director of Community Advocacy at the WMF. Arbitrators and supporters cited Beaudette's lengthy experience on Wikipedia and with the WMF, where he was one of the most senior employees, having joined the WMF in 2009. Some users expressed concern that the Committee was acting out of process with a quick motion and appointment in less than two days and that Beaudette should be appointed as part of a regular call for candidates. G
- New user group: The Affiliations Committee announced the approval of this week's newest Wikimedia movement affiliate, Wikimedia User Group Nigeria. G
- Milestones: The Wikimedia blog discusses the Swedish Wikipedia's recent passing of the two-million-article milestone. G
- Beta testing: Some editors are experiencing problems related to the Content Translation beta feature (see previous Signpost coverage). Those who have it enabled are reporting issues with other gadgets, including Twinkle and HotCat. Disabling the beta feature causes the other gadgets to act normally. G
Irish legislative editing; coffee quarrel; more sports vandalism
Jim Walsh biography: Wikipedia's revenge?
In a follow-up to an earlier story describing how an IP traced to Ireland's legislature, the Oireachtas, had removed controversial content from the biography of Irish politician Jim Walsh (see previous Signpost coverage), the Irish Independent reports (Sept. 27) that Jim Walsh has admitted making the edits, saying he believed "a person from the gay lobby groups" had edited his biography.
More than half of Walsh's 950-word biography is currently focused on "controversies" relating to his views about gay marriage and civil partnerships, including a 90-word paragraph about his attempt to remove related material from his biography.
Walsh made his comments to the Sunday Times, which provided further examples of politicians editing their own entries. AK
The flat white
The Sydney Morning Herald reports (Sept. 27) on a long-running argument over whether the flat white was invented in Australia or New Zealand. The Wikipedia article has repeatedly flip-flopped between the two theories. The Herald quotes Australian Alan Preston complaining about his putative New Zealand adversaries:
"They control the Wikipedia page [...] I go back in and change it and the next day they go back in. Tourism Australia is aware of this and sympathetic to my story."
Since the Herald article appeared (it was also carried by other outlets, including the Brisbane Times and goodfood.com.au), there has been another flurry of edits to the article, and the Australian claim is – for now – in the ascendancy. AK
When trolls attack
Sports site Fansided reports (Sept. 24) on edits made to the Wikipedia biography of Lane Kiffin that repeated various rumors circulating on social media about Kiffin's private life and his continued employment on the coaching staff of the University of Alabama's Crimson Tide football team.
Fansided's Stu White was not impressed:
"[...] the Wikipedia trolls are out in full force, facts be damned. [...] Living in a world where one of the biggest and most important sources of information can be altered to reflect unsubstantiated rumors is quite a trip. Remember that the next time one of your buddies tries to settle an argument by pulling up a Wikipedia page on his or her phone."
White also criticized some of the edits for their inherent sexism. AK
Non-profits, venues and businesses edit-a-thon in South Bend
The South Bend Tribune notes (Sept. 26) a Wikipedia editing event organized by the "South Bend Office of Innovation and enFocus, in partnership with the St. Joseph County Chamber of Commerce and University of Notre Dame Hesburgh Libraries".
The event, which took place this Tuesday, aimed "to teach residents how to edit Wikipedia pages to increase the representation of South Bend non-profits, venues, and businesses online".
Local TV station WSBT-TV covered the edit-a-thon (Sept. 29). AK
In brief
- Clinton Global Initiative: Wikimedia Foundation Executive Director Lila Tretikov was on the list of speakers for the Clinton Global Initiative (CGI) 2015 Annual Meeting (Sept. 26–29). AK
- Wiki Loves Monuments: Israeli news site Arutz Sheva highlights (Sept. 27) the international Wiki Loves Monuments photo competition. Noting that upheaval like the current situation in Syria often leads to the destruction of heritage sites, Dror Lin of Wikimedia Israel explained to Arutz Sheva: "the images are meant to preserve our culture." AK
- NSA lawsuit's day in court: Outlets including the Baltimore Sun, The Guardian and usnews.com report on the NSA lawsuit's first day in court (Sept. 25). The Obama administration has asked that the case, brought by a coalition of organizations including the Wikimedia Foundation, be dismissed, arguing that it was "speculative" and the plaintiffs lacked standing; lawyers representing the plaintiffs have asked the judge to rule against this request. The judge's ruling on the motion to dismiss is expected in a few weeks' time. The Wikimedia Foundation also provided a write-up on its blog (Sept. 28). AK
Wikipedia needs more administrators
A brief history of RfA reform
RfA was founded on 14 June 2003 by Camembert, and the first promotion via the system, that of Quercusrobur, occurred the same day. Before RfA existed, admin promotions took place on mailing lists. The first discussion on WT:RFA, a humorous one, was started by Tim Starling six days later, on 19 June. Discussions similar to the ones we have today began soon after, the first apparent one concerning election standards. The first serious complaint about the process appears to have been made by Greenmountainboy on 8 January 2004, in a thread called "Attacked by everybody", in which he stated that RfA had turned into a place where everyone attacked each other. Most disagreed with the assertion.
As long ago as 2006, Aaron Schulz had recognised in an essay the same issues that have been perennially discussed for nearly a decade. The first serious RfA reform project, known as WP:RFA2011, was launched in 2011. It was created by Kudpung in his userspace on 25 March, and upon encouragement by others he subsequently moved it to Wikipedia space. The project accumulated a task force of over forty established editors, including senior Wikimedia Foundation staff. The launch of this project followed a comment made by Jimmy Wales that March, in which he stated that RfA was a "horrible and broken process". This comment was in response to the retirement of My76Strat (now John Cline) due to his failed RfA. Large amounts of data were compiled, but unfortunately no proposals were put forth as a result of the project.
Following RFA2011, the next serious reform project occurred in 2013. It consisted of a series of three RfCs, starting in late January and ending in early April. All proposals which survived to Round 3 failed. To my knowledge, there have been no large-scale reform projects since.
We need more admins
Why?
Wikipedia currently has about 1,330 users with the sysop user right. At a glance, this seems like a large number, so some might say that we have more than enough admins. What's with all this fuss over the years about needing more? It is important to realize that the raw admin count is a deceptive number. Using the AdminStats tool, I determined that, assuming an activity standard of at least 30 admin actions in two months (adapted from this standard, except that I measured admin actions, which are more relevant, rather than simply edits), only about 250 of our admins are active. This means that of our 1,330 admins, only about one-fifth (20%) actively contribute to administrative work; the other 80% of users who hold the sysop bit are semi-inactive or inactive as admins. It may occur to some that we can fix this problem by getting inactive admins to return to activity, but many users become inactive for reasons beyond our control, such as loss of interest or inability to continue editing.
Now, some might even feel that 250 is sufficient, but the size of this website must be considered. For a small wiki, 250 admins would be more than enough. Wikipedia, however, has almost five million articles, dozens of vandals to block every day, numerous noticeboards to monitor, and administrative backlogs that are always growing. According to Alexa, we are the seventh most popular website in the world, surpassing even Twitter, which ranks ninth. We have a relatively tiny group of a couple of hundred admins to handle all this. Many of these active admins have performed hundreds, or sometimes even thousands, of admin actions within the past two months, yet the backlogs still exist. What must this mean? It can only mean that we don't have enough admins. By depending upon a relatively small group of admins to perform hundreds or thousands of actions in a short time, we first of all place too much burden upon these individuals. Second, the retirement of even a few of these admins, especially those who perform many thousands of actions within a short period, would cause a noticeable increase in work for the others. This is a WP:VOLUNTEER service; it is fairer to all users to distribute the workload more evenly.
Stats
Between January 1 and October 3, 2015, there were 47 closed RfAs. A mere 15 of these (about 32%) were successful, and 32 were unsuccessful. This means that, on average, RfA has been responsible for only 1.7 promotions per month. Such a low number was unheard of a few years ago. In fact, months with no promotions at all are becoming more common. The first month with no promotions in recent years was September 2012, and that was the first in over a decade. However, just over the past year, 3 out of 12 months (25%) have been without any promotions. The problem is simply becoming worse. If you look at WereSpielChequers' chart, you will see a total of four empty months under the "2014" and "2015" columns.
However, we have another method of getting "new" admins: admins who previously resigned can request resysopping. Since the beginning of the year, 10 users have requested resysopping at WP:BN for adminship they had lost before the start of 2015, not counting the three who regained adminship via RfA. Adding this number to the number of admins sysopped via RfA (10 + 15), we get 25.
But there are two other questions to be asked: (1) How many admins have we lost? (2) How many (re)sysopped users are actually active admins? To answer the first question, about 65 users have been desysopped this year, for varying reasons. As for the second, it turns out that although 25 users have been sysopped, only about 20 meet the activity standard of 30 actions over the past two months. Therefore, we are losing admins roughly three times faster than we are really gaining them. (After all, we haven't really gained an admin if they contribute very little.)
A few years ago, this was not a problem at all. For instance, a record 408 admins were promoted in 2007, and even before that, the promotion of a few hundred admins per year was the norm. Since 2008, however, the number of promotions has been perpetually declining. The chart at the top of this article, based upon WereSpielChequers' data mentioned above, shows the number of RfA promotions per year since 2002. The number of promotions decreased sharply in 2008 and has been in decline ever since. There has not been a single year since 2007 with a considerable increase in promotions. The last year with any increase was 2013, and even that was only by six; the difference mostly lay in February and March, when the reform RfCs were occurring, so it may have been merely a brief surge inspired by the reform efforts.
What happened?
Why has this decrease happened? In my opinion, two of the most likely reasons are: (1) Higher standards; (2) Hostile/stressful environment. It could also be a combination of these two.
I will start with the first possibility. Current (Oct. 3, 2015) data from User:Everymorning/RFA study shows that the median successful 2015 RfA candidate has eight years of experience and 41,000 edits; the average for 2015 candidates is 7.2 years of experience and 36,500 edits. (Note that I excluded Ser Amantio di Nicolao's RfA, since he had over one million edits and would therefore have a disproportionate impact upon the average.) Although the details may fluctuate slightly, these statistics broadly suggest that the typical 2015 RfA candidate has around six to nine years of experience and 30,000–50,000 edits. If this really is the standard, it is much too high. However, simple statistics such as these might not be worth much: we have no way of knowing whether the numbers above reflect the actual standards. They might, or they might not. Perhaps it is just a coincidence that users with such high statistics choose to run. The only way to find out what the experience standards are is to get a less experienced user to run.
However, simple tenure and edit-count statistics are far from the only things measured at RfA; some users with even more edits and experience than the range mentioned above have failed. Performance, such as scope of participation, accuracy rates, and behavior, is considered as well, and of course these things should be considered to a certain extent. But when they are scrutinized to an unreasonably high degree, the standards become higher, and when the standards become higher, fewer candidates pass. For instance, it is relatively common to oppose a candidate because their "hit rate" at AfD isn't good enough (how does a "hit rate" affect their ability to judge consensus?), or because they haven't made some requisite number of edits to a particular administrative page (even if they don't want to work there, or have said they will proceed very cautiously). Some users have quite stringent requirements concerning content. This has been a rather major theme of late, so I will discuss it in some detail.
It has become increasingly apparent that a lack of substantial content work can by itself cause an RfA to fail. For instance, a certain user recently said, "The purpose of admins should be to keep the riff-raff away from the content creators." He is partially correct, but this isn't the whole picture: the purpose of admins is to keep order throughout the site, and if this means blocking a content creator who is in some way causing disorder, that too is part of an admin's job. All good-faith editors have a beneficial function. Gnomes and copy editors fix errors and formatting issues that a content creator might not notice, while users dedicated to anti-vandalism (including admins) do indeed keep the riff-raff away from the content creators' articles by reverting and blocking vandals who harm articles they have written. In the early days of Wikipedia, it is true, content creation was more important than anything else, but as the website has grown in size and popularity, the importance of maintaining it has increased as well. Without admins, uncivil users would be unrestricted and could do or say whatever they wanted, and vandals could simply continue vandalizing articles no matter how many times they were reverted; without anti-vandals, content creators would have to be online 24/7 to monitor all their articles. In short, Wikipedia would plunge into ruin. Now, before I'm misunderstood: I fully support content creation, and in fact appreciate and admire the tireless content work of some users; what I oppose is the notion that other user groups are unimportant.
My ultimate point in the paragraphs above is that high standards do nothing to fix our obvious admin shortage. If we are to gain more admins, we must not be so restrictive about who becomes one.
If the !voters' opinions cannot be changed, one way to neutralize overly stringent criteria is to lower the percentage bar for passing. This is a solution I very strongly advocate. I know this has been proposed and rejected several times before, but it is high time we start again with fresh and open minds to seriously debate and consider it. Remember, RfA is currently drier than at almost any other point in Wikipedia's history. We must face the facts: our bar is unlike that of virtually any other group. In practice, it seems to sit somewhere around 75%, since most RfAs with more support than that tend to pass. Results of 70–75% (and rarely, 75–79%) sometimes produce a 'crat chat (a decision by bureaucrats), but 'crat chats are in fact quite rare; an RfA usually does not pass if it concludes in the low 70s. The United States Congress passes laws by simple majority (50%+1), and even the 67% requirement to override a presidential veto is below our bar. Of course, electing an admin for an online encyclopedia is nowhere near as consequential as making binding laws for one of the most powerful nations in existence. As another example, very few candidates in ArbCom elections get 75%+ support; had that been the standard in last year's election, only two candidates would have passed. Furthermore, the position of arbitrator carries many more responsibilities, some of which can affect the project far more than any individual admin ever could, and arbitrators gain automatic access to the checkuser and oversight tools, which have serious privacy implications.
Even if the contrasts above are inaccurate for some reason or another, there is one final issue, which is arguably the most important. Oppose !votes currently carry about three times more weight than support !votes. For instance, for every six opposers, at least eighteen supporters are required to cancel them out. Why should opposers have so much power? We should assume that the candidate is running in good faith; therefore, why give so much weight to the negative side? It makes more sense for every !vote to be given equal consideration, which would mean a 50%+1 bar for passing. Or, to preserve the discretionary range, maybe the bar could be 60% with 50%+1–59% being the discretionary range. In any case, the point here is that in comparison to virtually every body outside us, our bar is very high, and in the interest of truly giving more equal weight to both opinions, our system should not give three times as much power to a single dissenting opinion.
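To spell out the arithmetic behind that "three times more weight" claim (a worked example, using the de facto 75% bar described above):

```latex
% Passing at a 75% bar with s supports and o opposes requires
\frac{s}{s+o} \ge 0.75 \quad\Longleftrightarrow\quad s \ge 3o
% i.e. each oppose must be offset by three supports, so six opposes
% need at least eighteen supports. At a 60% bar this relaxes to
% s >= 1.5o, and at 50%+1 each oppose is cancelled by a single support.
```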
But some people object that we cannot be more lenient in passing candidates at RfA, because if they misuse the tools or are abusive, it is virtually impossible to remove them. This is simply false. There are multiple venues through which admins can be held accountable. If they are uncivil, they can be blocked like any other user. If they are bothering a particular user incessantly (e.g., WP:HOUNDING them), they can be interaction-banned like any other user. If they are generally abusing their tools, they can be taken to ArbCom, which almost never dismisses a well-founded admin-abuse case outright: it can deal with the matter quickly by motion, or open a full case in less clear situations. Of course, ArbCom does not have to desysop every admin brought before it, so the dismissal of some cases cannot be cited as "failure"; it may, for instance, decide that an incident was isolated rather than part of a general abusive pattern. We all make isolated mistakes. I would prefer that the wider community have the ability to desysop admins, but since no one can ever fully agree on a satisfactory method, we will have to use ArbCom for now. ArbCom may sometimes take considerable time to authorize the desired result, but it is generally effective at holding continually troublesome admins accountable. Whenever evidence is requested from those who assert that there is no effective method for desysopping admins, there never seems to be a clear answer. If the assertion were really true and worthy of consideration, its proponents should be able and willing to present real, solid evidence that ArbCom is chronically ineffective at dealing with patterns of abuse.
On to the second point: it is possible that potential candidates are discouraged from running because of what they perceive to be a hostile and/or stressful environment at RfA. Some recent RfAs, such as those of Montanabw, Wbm1058, and Liz, were the subject of much contention and accompanied by very lengthy talk pages. Wbm's, in particular, was one of the most intense in a long time. Virtually all recent candidates have also been asked dozens of questions within a day or two. This environment might very well be a factor in our admin shortage.
How do we fix the problem?
Fixing our admin election system would be a three-step process. First, we must discuss, and reach a consensus upon, what the major problems are. Next, we must determine how to fix them; I personally see three main candidate solutions: (1) persuade the !voters that their standards must change; (2) lower the passing bar, as suggested above; (3) completely replace the process. Finally, we implement the chosen solutions. The first two steps might require a long time and several discussions per issue, but if this discourages you, read the last paragraph of this section. The current method is very disorganized (e.g., "maybe this is it ... well, maybe not/perhaps it is ... [discussion eventually dies]"). If anything is to be done, it must be done in an orderly manner.
Second, whatever is done must be for the long term. Last year (around this time, in fact), there was a surge of nominations following some discussion of revolutionizing the process. Short-term surges do nothing to fix the long-term issue. We always fall into a vicious cycle: discuss changes → more nominations → people say, "It really does work after all!" → nominations die down again → the cycle repeats. No, it is not working: the current condition of our admin election process is resulting in its long-term failure. We must not be deceived when brief rises in the number of nominations and passes come about.
Remember that the problem will simply grow worse if we give up easily; we must continue until we find a solution. Otherwise, if events ever force us to acknowledge the problem and act quickly, we may no longer have time to undertake an organized, reasoned RfA reform process.
Notes
- Almost all of the promotion data was taken from User:WereSpielChequers/RFA by month.
Wiktionary special; newbies, conflict and tolerance; Is Wikipedia's search function inferior?
A monthly overview of recent academic research about Wikipedia and other Wikimedia projects, also published as the Wikimedia Research Newsletter.
"Teaching Philosophy by Designing a Wikipedia Page"
Wikipedia research is still not often seen in book form. Here is one of the rare exceptions: a book chapter on "Teaching Philosophy by Designing a Wikipedia Page".[1] It is an essay in which the author describes his experiences teaching a class with a "write a Wikipedia article" assignment, specifically starting the Collective intentionality page. The students worked in teams, each tasked with improving a different part of the article (from separate parts of the literature review to ensuring that the article conforms to different elements of Wikipedia's manual of style). The end result was quite successful: a well-written new Wikipedia entry (see the revision as of January 2013, when the article was last edited by the instructor), and the students expressed positive assessments, particularly with regard to having an impact on the real world (i.e. creating a publicly visible Wikipedia article). The author concludes that the students benefit both from contributing to public knowledge and from learning how public knowledge is created.
Unfortunately, it appears that (as is still too often the case) the author (Graham Hubbs of the University of Idaho, presumably Phil442 (talk · contribs)) was not aware of the Wikipedia:Education Program, as no entry for the course was created at Wikipedia:School and university projects. It may therefore be wise for editors associated with the Wiki Education Foundation (some of whom, I hope, are reading this) to pursue this and contact the author: as someone who was quite happy with his first experiment in teaching with Wikipedia, he may be glad to learn that we offer extensive support for this (at least as far as the US goes). On a final note, I observe, sadly, that neither the instructor nor any of the students seem to have kept editing Wikipedia after the course was over (apart from a single edit here), an all-too-common outcome with educational assignments in general.
Wikipedia Search Isn't Necessarily Third BESt
- Review by Trey Jones (WMF Discovery department)
What's the best way to use Wikipedia to answer questions like, "How tall is Claudia Schiffer?" or "Who has Tom Cruise been married to?"—and what tools can make this easier?
In their paper, "Expressivity and Accuracy of By-Example Structured Queries on Wikipedia,"[2] Atzori and Zaniolo seek to compare their query-answering system—"the By-Example Structured (BESt) Query paradigm implemented on the SWiPE system through the Wikipedia interface"—against "Xser, a state-of-the-art Question Answering system", and against "plain keyword search provided by the Wikipedia Search Engine." Their results on a standard set of question answering tasks from QALD put SWiPE on top, with F-measure scores for SWiPE, Xser, and Wikipedia at 0.88, 0.72, and a dismal 0.18, respectively.
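For reference, the F-measure cited here is the standard F1 score, the harmonic mean of precision and recall (the standard metric in QALD evaluations; the per-system precision and recall values are given in the paper):

```latex
F_1 = 2 \cdot \frac{P \cdot R}{P + R}
% P (precision): the fraction of returned answers that are correct
% R (recall): the fraction of correct answers that are returned
```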
Their approach is based on transforming Wikipedia infoboxes into editable templates that serve as a front end for SPARQL queries run against RDF triples (subject–predicate–object expressions) stored in DBpedia. It is a novel approach that suggests a number of other avenues for improving search and discovery on Wikipedia and elsewhere. However, their methods and results are not commensurable with either Xser or Wikipedia's native keyword search.
In an earlier paper on SWiPE,[supp 1] the authors describe the need for custom ("page-dependent") mappings from any given infobox element to the appropriate internal representations for mapping to SPARQL/RDF. These mappings appear to have all been created manually. Given these behind-the-scenes mappings from infoboxes to RDF elements, a user, working by analogy from an existing infobox, maps query concepts to the appropriate infobox element.
BESt/SWiPE thus pushes much of the language and conceptual processing—tasks at which humans excel—onto the human user: the human chooses an existing Wikipedia entry on an appropriate analogous topic, pulls out relevant entities and relationships from the text of the query, and maps them to appropriate infobox components. These tasks can be non-trivial. Answering a question like "Who has Tom Cruise been married to?", for example, requires mapping "Tom Cruise" to the relevant category of, say, actor, finding another actor to use as a template, and mapping the "married to" relationship in the query to the "Spouse(s)" element of the infobox.
Contrast this with Xser,[3] which uses natural language processing to automatically parse a given query and convert it into a structured format, which is then automatically mapped to a structured query (e.g. SPARQL) against a knowledge base (e.g., DBpedia)—all independent of any human posing or reading a given query or mapping to KB elements. The comparison is thus more properly between BESt/SWiPE + a human and Xser, in which case it is less surprising that BESt/SWiPE comes out on top.
The comparison to the "plain keyword search provided by the Wikipedia Search Engine" is similarly disingenuous. The authors extracted search terms from the set of queries they investigated, apparently manually, but without the level of insight into natural language (or Wikipedia!) that is required in the BESt/SWiPE workflow, given the mappings of infobox elements to conceptual categories and the parsing of queries to map them to infobox elements.
The authors' translation of questions into keywords for Wikipedia queries is sophisticated from a language processing point of view, but naive from a search point of view. "How tall is Claudia Schiffer?" became search terms (Claudia Schiffer, tall), though any sophisticated searcher should know that height is usually listed under "height", not "tall". (The query still works because it gets to the Claudia Schiffer wiki page, despite the distractor term "tall".) They drop the word "produce" from a question about where beer is produced, but leave it in for a producer (but don't use "producer", which is the expected specific title to be found on that person's wiki page).
More generally, when searching Wikipedia, the authors fail to note when a question is fundamentally about the basic properties of a given entity, in which case any search term other than the name of that entity is a distraction. (E.g., "How tall is Claudia Schiffer?" is about Claudia Schiffer, "Which river does the Brooklyn Bridge cross?" is about the Brooklyn Bridge, and "In which U.S. state is Mount McKinley located?" is about Mount McKinley.) No human user familiar with Wikipedia (or even a dead-tree encyclopedia) would search for "Claudia Schiffer, tall" when asked to find out how tall she is.
The authors also fail to take advantage of any knowledge about the typical structure and content of Wikipedia, and so don't search for the obvious "list of X" articles that often answer the questions with sortable tables that any frequent user of Wikipedia (much less an editor and contributor) would be very familiar with. As an example, mapping the question "Which U.S. state has the highest population density?" to the search "list of U.S. states by population density" is natural—and it happens to be an exact match to a page I'd never seen before, but surmised was likely to exist.
The authors do afford considerable sophistication to their hypothetical BESt/SWiPE user, who knows, for example, to model the query to answer "Which books by Kerouac were published by Viking Press?" on a book, rather than on an author. It makes sense in retrospect, considering the information available in a book infobox, but my first inclination was that this was a question about an author, and an author infobox is insufficient for this question.
Again, the results attained by Wikipedia + naively extracted queries and BESt/SWiPE + a sophisticated human are incommensurate. A sophisticated Wikipedian + Wikipedia would fare much better than the poor 18% F-measure reported by Atzori & Zaniolo. And it seems likely that a relatively sophisticated Wikipedian is easier to come by than someone who can map queries to example entities and their infobox components after having mapped infobox components to RDF entities and relationships.
To be fair, keyword searches on Wikipedia can't readily answer questions that do not appear on a single page in Wikipedia. Some answers would be very tedious indeed to determine, such as "Give me all people that were born in Vienna and died in Berlin", because they require collating information across many pages. But that's exactly the kind of information about relationships between entities—and even chains of relationships among different kinds of entities—that one expects to be extracted via SPARQL from RDF triples in a data store such as DBpedia or Wikidata.
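To make this concrete, here is a minimal sketch of such a structured query (our illustration, not the authors' code; it assumes the public DBpedia SPARQL endpoint, the standard dbo:birthPlace/dbo:deathPlace properties, and the SPARQLWrapper Python library):

```python
# Minimal sketch: answering "Give me all people that were born in Vienna
# and died in Berlin" against DBpedia's public SPARQL endpoint.
# Not the paper's code; assumes SPARQLWrapper (pip install sparqlwrapper).
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("https://dbpedia.org/sparql")
sparql.setQuery("""
    PREFIX dbo: <http://dbpedia.org/ontology/>
    PREFIX dbr: <http://dbpedia.org/resource/>
    SELECT DISTINCT ?person WHERE {
        ?person a dbo:Person ;
                dbo:birthPlace dbr:Vienna ;
                dbo:deathPlace dbr:Berlin .
    }
""")
sparql.setReturnFormat(JSON)
results = sparql.query().convert()

# Each binding is one person collated from two RDF triples --
# exactly the cross-page relationship a keyword search cannot express.
for row in results["results"]["bindings"]:
    print(row["person"]["value"])
```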
Finding ways for users to productively access such structured information—be it through natural language processing as with Xser, through structured by-example queries as with BESt/SWiPE, or other approaches—is a worthy goal; but it is only fair to compare approaches that operate in the same general realm in terms of available automation and necessary user sophistication.
More newbies mean more conflict, but extreme tolerance can still achieve eternal peace
An article titled "Modeling social dynamics in a collaborative environment",[4] published last year in the Data Science section of the European Physical Journal, describes a simplified numerical model for how Wikipedia's coverage of contentious topics may develop over time. It presents evidence that this model matches some aspects of real-life edit wars and debates.
The opinions of editors on a particular issue are modeled as a one-dimensional variable: "In the Liancourt Rocks territorial dispute between South Korea and Japan, for example, the values x=0,1 represent the extreme position of favoring sovereignty of the islets for a particular country". Somewhat contrary to Wikipedia's neutral point of view (NPOV) policy, which is never mentioned in the paper, the authors assert that an article's coverage of such a topic always expresses a particular opinion too, likewise modeled as a point on this scale.
The paper first considers pairwise encounters between editors ("agents") where "people with very different opinions simply do not pay attention to each other, but similar agents debate and converge their views" by a certain amount that is governed by a parameter describing how "stubborn" opinions are. This is a well-studied model of opinion dynamics, known as "bounded confidence" (for the "confidence" or "tolerance" parameter that describes the limit until which agents are similar enough to still influence each other). It also matches the description of inelastic collisions of two particles in certain kinds of gases in statistical physics.
To describe the interaction of an editor ("agent") with an article ("medium"), a second kind of dynamic is introduced in the model. Here, the equations state that editors will change an article if it differs too much from their own opinion (as defined by a second tolerance parameter), but will change their opinion towards the article's if they already have a similar opinion.
The numerical simulation of an article's history consists of discrete steps combining both dynamics: interactions between editors (for example on talk pages) and an edit made by an editor to the article.
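A minimal numerical sketch of these combined dynamics (our own toy implementation, not the authors' code; the tolerance and convergence parameter values are illustrative only):

```python
import random

# Toy bounded-confidence model with a shared medium (the "article").
# Not the paper's code; parameter values are illustrative assumptions.
EPS_AGENT = 0.2    # tolerance: agents closer than this debate and converge
EPS_MEDIUM = 0.1   # tolerance: editors accept the article within this distance
MU = 0.3           # convergence rate (higher "stubbornness" means lower MU)

def agent_agent(x_i, x_j):
    """Pairwise debate: similar agents converge, dissimilar ones ignore each other."""
    if abs(x_i - x_j) < EPS_AGENT:
        x_i, x_j = x_i + MU * (x_j - x_i), x_j + MU * (x_i - x_j)
    return x_i, x_j

def agent_medium(x, a):
    """Editor vs. article: edit the article if it is too far from one's own
    opinion, otherwise drift toward the article's position."""
    if abs(x - a) >= EPS_MEDIUM:
        a = a + MU * (x - a)        # the editor changes the article
    else:
        x = x + MU * (a - x)        # the article pulls the editor's opinion
    return x, a

opinions = [random.random() for _ in range(50)]  # opinions on a 0..1 scale
article = 0.5

for step in range(10000):
    i, j = random.sample(range(len(opinions)), 2)
    opinions[i], opinions[j] = agent_agent(opinions[i], opinions[j])
    k = random.randrange(len(opinions))
    opinions[k], article = agent_medium(opinions[k], article)

peaceful = all(abs(x - article) < EPS_MEDIUM for x in opinions)
print(f"article={article:.3f}, peaceful state reached: {peaceful}")
```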
For a "fixed agent pool" where no editors join or leave, it can be shown that "the dynamics always reaches a peaceful state where all agents' opinions lie within the tolerance of the medium". The authors note that this "contrasts drastically with the behavior of the bounded confidence mechanism alone, where consensus is never attained" (unless the tolerance parameter is large). In other words, the interaction on the article as the shared medium sets Wikipedia apart from systems that only support discussion (Usenet flamewars come to mind).
However, depending on the values of the tolerance and stubbornness parameters, this eventual "peaceful state" can take a long time to reach, with various possible dynamics, as illustrated by the authors' simulation figures. They note that "Quite surprisingly, the final consensual opinion [does not need to lie in the middle, or match] that of the initial mainstream group, but [can sometimes be] some intermediate value closer to the extremist groups at the boundaries."
Going beyond the simplified "fixed agent pool" assumption, the authors note that "in real WP articles the pool of editors tends to change frequently ... Such feature of agent renewal during the process of writing an article may destroy consensus and lead to a steady state of alternating conflict and consensus phases, which we take into account by introducing thermal noise in the model." Whether permanent consensus is still eventually reached, or how long it lasts before it is interrupted by periods of conflict, depends on the parameters, including the rate at which newbies enter the community.
In the last part of the paper, the authors compare their theoretical model with actual revision histories of articles on the English Wikipedia. They use a numerical measure of an article's "controversiality", introduced by some of the group in an earlier paper (see review: "Dynamics of edit wars"). It basically counts reverts, but weighs reverts between experienced editors higher. The development of this number over time describes periods of conflict and peace in the article. The authors state that using this metric, almost all controversial articles can be classified by three scenarios:
- (i) Single war to consensus: In most cases controversial articles can be included in this category. A single edit war emerges and reaches consensus after a while, stabilizing quickly. If the topic of the article is not particularly dynamic, the reached consensus holds for a long period of time [... Example: Jyllands-Posten Muhammad cartoons controversy]
- (ii) Multiple war-peace cycles: In cases where the topic of the article is dynamic but the rate of new events (or production of new information) is not higher than the pace to reach consensus, multiple cycles of war and peace may appear [... Example: Iran].
- (iii) Never-ending wars: Finally, when the topic of the article is greatly contested in the real world and there is a constant stream of new events associated with the subject, the article tends not to reach a consensus [... Example: Barack Obama]
For their theoretical agent/medium model, the authors define an equivalent of this controversiality measure, and find that it "closely reproduce[s its] qualitative behavior [...] for different war scenarios" in numerical simulations.
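As a rough illustration of how a revert-based score of this general shape can be computed (a toy sketch based on the description above, weighting each mutually reverting pair by the experience of its less experienced editor; this is not the exact formula of the cited paper):

```python
# Toy controversiality score: count mutual reverts, weighting each
# reverting pair by the less experienced of the two editors, so that
# wars between established editors score higher. Follows the prose
# description above, not the exact published formula.
def controversiality(revert_pairs, edit_counts):
    """revert_pairs: list of (reverter, reverted) editor names for one article.
    edit_counts: dict of editor name -> total edits (experience proxy)."""
    pairs = set(revert_pairs)
    mutual = set()
    for a, b in pairs:
        if (b, a) in pairs:                 # a reverted b AND b reverted a
            mutual.add(frozenset((a, b)))
    score = 0
    for pair in mutual:
        a, b = tuple(pair)
        score += min(edit_counts.get(a, 0), edit_counts.get(b, 0))
    return len(mutual) * score              # more warring pairs -> higher score

# Example: a war between two experienced editors dominates a drive-by revert.
pairs = [("Alice", "Bob"), ("Bob", "Alice"), ("Carol", "Alice")]
counts = {"Alice": 5000, "Bob": 3000, "Carol": 12}
print(controversiality(pairs, counts))      # 1 mutual pair, weight 3000 -> 3000
```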
A preprint letter with the same title, by the same authors and announcing some of the paper's results, was covered in our July 2012 issue.
Predicting Wikimedia pageviews to within 2%
A 2014 conference paper,[5] recently republished as part of a dissertation in computer science, analyzed more than five years of hourly traffic data published by the Wikimedia Foundation, as part of an effort to develop methods for better predicting the workloads of web servers. The authors call it "the longest server workload study we are aware of". From the abstract:
- "With descriptive statistics, time-series analysis, and polynomial splines, we study the trend and seasonality of [Wikimedia traffic], its evolution over the years, and also investigate patterns in page popularity.
- Our results indicate that the workload is highly predictable with a strong seasonality. Our short term prediction algorithm [one week ahead] is able to predict the workload with a Mean Absolute Percentage Error of around 2%.
The study decomposed the time series of pageview numbers into several components:
- a seasonality component with daily and weekly periods (without yearly parts in the presented example, as it covered only a little over a month), estimated by fitting cubic splines
- a trend line approximated with a piecewise linear function
- and a remainder modeled with an ARIMA (Autoregressive Integrated Moving Average) model using the R forecast software package.
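The sketch below illustrates this decomposition-plus-ARIMA style of forecasting on synthetic data (our own Python/statsmodels illustration, not the authors' R pipeline; the synthetic series and model orders are assumptions):

```python
# Sketch of the decompose-then-forecast approach on a synthetic hourly series.
# Illustrative only: the paper used cubic splines, a piecewise-linear trend and
# R's forecast package; here statsmodels stands in for all three steps, and the
# period=24 decomposition captures the daily cycle while the weekly cycle
# falls into the remainder handled by ARIMA.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.tsa.seasonal import seasonal_decompose

rng = np.random.default_rng(0)
hours = np.arange(6 * 7 * 24)                            # six weeks, hourly
traffic = (1000 + 0.05 * hours                           # slow upward trend
           + 200 * np.sin(2 * np.pi * hours / 24)        # daily cycle
           + 80 * np.sin(2 * np.pi * hours / (7 * 24))   # weekly cycle
           + rng.normal(0, 20, hours.size))              # noise

train, test = traffic[:-168], traffic[-168:]             # hold out one week

decomp = seasonal_decompose(train, period=24, extrapolate_trend="freq")
fit = ARIMA(decomp.resid, order=(2, 0, 1)).fit()         # order chosen ad hoc

# Recompose a one-week-ahead forecast: repeat the daily seasonal pattern,
# extend the trend linearly, and add the ARIMA forecast of the remainder.
seasonal_fc = np.tile(decomp.seasonal[-24:], 7)
slope = (decomp.trend[-1] - decomp.trend[-169]) / 168    # local trend slope
trend_fc = decomp.trend[-1] + slope * np.arange(1, 169)
forecast = seasonal_fc + trend_fc + fit.forecast(168)

mape = np.mean(np.abs((test - forecast) / test)) * 100
print(f"one-week-ahead MAPE: {mape:.1f}%")               # the paper reports ~2%
```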
(Readers might also be interested in a recently announced online traffic forecast application by the WMF Research and Data team, which likewise uses an ARIMA model, and allows predicting traffic for individual projects, but is based on coarser monthly time series.)
The study acknowledges that both the website's content and its server setup changed a lot over the examined timespan (May 2008 – October 2013), with the number of Wikipedia articles roughly tripling, the main hosting site moving from Florida to Virginia, and a separate server site in Korea closing. The authors also observe that traffic "dynamics changed tremendously during the period studied with visible steps, e.g., at the end of 2012 and early 2013" (where their diagram, Figure 2(a) on p. III.4, shows large upward spikes), which "suggests a change in the underlying process of the workload. In this case trying to build a single global model can be deceiving and inaccurate. Instead of building a single global model, we modeled smaller periods of the workload where there was no significant step." This makes the work somewhat less interesting for those seeking longer-term strategic predictions rather than short-term allocation of server resources. On the other hand, such deviations from a prediction model could potentially be used in reverse, to identify such "a change in the underlying process" (e.g. software changes affecting reader experience, or a web censorship effort) or to provide evidence for its impact on traffic. The authors' own use case requires such detection for short-term upward outliers (those that increase server load), enabling a quick change of the prediction model.
The paper discusses two examples of such unexpected spikes: the 2009 death of Michael Jackson, which overloaded WMF servers, and Super Bowl XLV in 2011.
Another chapter concerns the list of the 500 most popular pages. The authors found that it is highly volatile, with "41.58% of the top 500 pages joining and leaving the top 500 list every hour, 87.7% of them staying in the top 500 list for 24 hours or less and 95.24% of the top-pages staying in the top 500 list for a week or less."
The freely available traffic data from the Wikimedia Foundation also features prominently in a draft publication included in the same dissertation as "Paper VI",[6] which examines the performance of algorithms that automatically scale server resources to changing traffic, "using 796 distinct real workload traces from projects hosted on the Wikimedia foundations' servers". Having found in that paper that "it is not possible to design an autoscaler with good performance for all workloads and all scenarios", another draft publication (included as "Paper VII" in the dissertation)[7] "proposes WAC, a Workload Analysis and Classification tool for automatic selection of a number of implemented cloud auto-scaling methods." Using machine learning methods, this classifier is trained on several datasets, including again "798 workloads to different Wikimedia foundation projects" (mentioning the French "Wikitionary" [sic] as one example). The authors remark that "we have performed a correlation analysis on the selected [Wikimedia] workloads, and we found that they are practically not correlated".
The dissertation was defended this week at Umeå University in Sweden. A startup has been founded based on the research results, which has also patented some of them.
Wiktionary special
While most of the research featured in this newsletter examines Wikipedia, other Wikimedia Foundation projects have attracted researcher attention too. Below we present a roundup of recent research about Wiktionary, plus one older paper. See also our earlier coverage of Wiktionary-related research.
"Online dictionaries in Web 2.0 platform – Wikiszótár and Wiktionary"
This Hungarian-language paper[8] (with an English abstract) was published in December 2010 in the "Review" section of the Journal of Hungarian Terminology. Its aim is to provide an introduction to online collaborative dictionaries, Web 2.0, and the wiki platform, using the Hungarian and English Wiktionary as examples. In the section that discusses dictionary criticism, the author notes that a systematic, generally agreed-upon set of criteria for evaluating online dictionaries has yet to be developed, so he conducts the evaluation based on methods originally designed for printed dictionaries. The article describes the elements of Wiktionary's structure and content in detail, and compares the two Wiktionaries to each other and to printed dictionaries. This can be useful information for someone who is not familiar with online collaborative dictionaries, and specifically with Wiktionary. Some of the menu items have changed in the past five years, and a huge amount of content has been added, but the overall structure, while more refined, is still the same.
The detailed analysis includes: the megastructure (the navigation menu items, each listed and briefly explained); the macrostructure (the arrangement of words, and finding an entry by search, by browsing categories, or by clicking hyperlinks); the microstructure (the composition of a lemma entry, the sections within the entry, and the quality and content of each section); and the mesostructure (the system of hyperlinks and internal and external references, perhaps the most important advantage of an online dictionary). Two screenshots are provided: one for the Hungarian word "ablak" ("window") from the Hungarian Wiktionary, and one for the English word "window" from the English Wiktionary. The examples chosen are similar in their level of detail, to make the comparison valid.
The paper states that the biggest challenge of online collaborative dictionaries is the reliability of information. The content of printed dictionaries is created and reviewed by professionals. Online collaborative dictionaries can be edited by anyone. It is added, however, that even printed dictionaries contain inaccuracies, not to mention that the addition of new terminology can take years.
The conclusion is that the innovative nature of online dictionaries compared to traditional dictionaries is epoch-making, and their practical value is indisputable: not necessarily in content (although the quantity of processed information is enormous), but more in the hyperlinks (including audio files and images), ease of use, wide availability, and free access. Compared to printed dictionaries, they are dynamic: their content can in theory be increased without limit, and the information can be updated at any time.
"GLAWI, a free XML-encoded Machine-Readable Dictionary built from the French Wiktionary"
The paper[9] recaps some previous publications by the same authors and reports on the publication of yet another dataset extracted from Wiktionary, but one of unusual size. The authors, over six years, mapped six thousand templates of the French Wiktionary (Wiktionnaire) and implemented various mechanisms to standardize its content, which, together with some manual correction, allowed them to produce a machine-readable dictionary of over 1.3 million entries under a free license.
According to the authors, the dataset can be used to easily produce specialised lexicons and thesauri superior not only to the rather neglected French WordNet but even to a monstre sacré like the digital Trésor de la langue française. In fact, they report that Wiktionnaire contains only sixty-five entries with contradictory, irreconcilable information. According to the authors, Wiktionary editors may want to adopt some of their standardizations and corrections, but need not be pushed to do so, because Wiktionary serves its purpose well by imposing few constraints and maximising participation, while standardization can be performed downstream.
Sadly, it's hard to assess the added value provided by this effort, as the paper features no comparison to other efforts and proposals, such as DBpedia Wiktionary or Wikidata's own proposal for a Wiktionary data mapping. However, it's useful as confirmation of (the French) Wiktionary's quality and as promotion/redistribution of its content.
"IWNLP: Inverse Wiktionary for Natural Language Processing"
This conference paper[10] reports on the more engineering-oriented IWNLP free-software project, an XML dump parser at an earlier stage of development than GLAWI and specifically focused on the German Wiktionary (unlike its predecessor, the "Java Wiktionary Library" known as JWKTL). From the 400k entries of the 2015-04 dump, 74k words and 281k word forms were extracted, reaching higher accuracy than previous resources for the lemmatization of nouns but low accuracy for adjectives and verbs; a thesaurus has not yet been created. Interestingly, the authors made 200 edits to German Wiktionary entries in the process.
"knoWitiary: A Machine Readable Incarnation of Wiktionary"
This paper (pre-print?)[11] presents another attempt at producing an XML dump parser for Wiktionary superior to JWKTL. This effort focuses on a 2014 dump of the English Wiktionary, from which about 530k words and 550k meanings are extracted for Italian, and about 580k and 700k respectively for English. However, there is no mention of a code or dataset release, nor of whether the parser improves on previous ones; DBpedia Wiktionary is not mentioned at all. The English WordNet is shown to cover only half of said terms, with lower comprehensiveness and small overlap. Wiktionary offers some unique strengths which allow novel applications: in particular, information on etymology, compounding, and word derivation.
In short: reusability is unclear, but this is one more point in the long list of papers showing that Wiktionary is a mature or superior competitor to most expert-built dictionaries, lexicons, thesauri, etc.
"Zmorge: A German Morphological Lexicon Extracted from Wiktionary"
This conference paper[12] again features an extraction from the German Wiktionary. This time the objective is a German lexicon/finite-state morphology analyser to replace Morphisto, an unfree German resource built on SMOR. Building upon an existing module (SLES), a fully automated extraction produces a SMOR grammar lexicon with about 70 thousand entries; quality is higher than in past work, which was based on raw text, because Wiktionary features information like part of speech, stem, gender, and case. The lexicon's results are assessed against a manually annotated resource and outperform the Morphisto lexicon, while the Stuttgart lexicon is still better by a few percentage points.
The precision achieved is 1.3 percentage points higher than it would be with a dump 15 months older, and most errors are simply caused by words missing from the German Wiktionary; this suggests that such a Wiktionary-based approach will soon overtake its unfree competitors in yet another field of linguistic resources. Datasets and code have been published.
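To illustrate what such an analyser computes, here is a toy Python sketch mapping a surface form to a lemma plus SMOR-style tags. Real systems like Zmorge compile the lexicon into a finite-state transducer with SFST; the entries and tag names below are invented for the example:

# Tiny invented stand-in for a ~70,000-entry SMOR grammar lexicon.
LEXICON = {
    "kindes": ("Kind", ["+NN", "Neut", "Gen", "Sg"]),
    "kindern": ("Kind", ["+NN", "Neut", "Dat", "Pl"]),
}

def analyse(surface_form):
    """Return a lemma plus tag string, or None for unknown words."""
    entry = LEXICON.get(surface_form.lower())
    if entry is None:
        return None  # as the paper notes, most errors are missing words
    lemma, tags = entry
    return lemma + "".join("<%s>" % t for t in tags)

print(analyse("Kindes"))  # Kind<+NN><Neut><Gen><Sg>
print(analyse("Hauses"))  # None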
"Dbnary: Wiktionary as Linked Data for 12 Language Editions with Enhanced Translation Relations"
This conference paper[13] presents a free software (LGPL) tool[supp 2] to extract a lexical ontology from Wiktionary.[note 1] (See also our 2012 coverage of an earlier paper about the project: "Generating a lexical network from Wiktionary".)
Non-inflected terms in twelve languages are extracted from the respective Wiktionaries and linked by their relations (one being a translation of the other, being synonyms, etc.). The authors claim their parsing is general enough to work in those twelve languages and to resist changes in markup, but it is not clearly explained how, and quality was not assessed at all.[note 2] The work can be considered a conversion of the most basic linguistic information in Wiktionary from wikitext to RDF, interesting insofar as it is extensible to all languages, but the resulting dataset is not usable as-is without further research.
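As a hypothetical illustration of how such an RDF dataset could be queried, here is a Python sketch using rdflib; the file name, the namespace URI and the isTranslationOf property are assumptions made for the example and should be checked against the actual Dbnary ontology before reuse:

from rdflib import Graph

g = Graph()
g.parse("dbnary_fra.ttl", format="turtle")  # hypothetical dump file name

QUERY = """
PREFIX dbnary: <http://kaiko.getalp.org/dbnary#>
SELECT ?entry ?translation WHERE {
    ?translation dbnary:isTranslationOf ?entry .
} LIMIT 10
"""

# Print ten entry/translation pairs from the extracted lexical ontology.
for entry, translation in g.query(QUERY):
    print(entry, "->", translation)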
"Observing Online Dictionary Users: Studies Using Wiktionary Log Files"
This paper[14] is based on the familiar pageview data used by stats.grok.se (see FAQ) for the German Wiktionary, possibly the only general-purpose dictionary for which such data is publicly available. Out of 350k entries, of which 200k were classified as German words, the authors work on a set of 56k entries satisfying several criteria: being lemmas of the German reference corpus DeReKo, having more than 11 monthly visits, and having a sufficient definition. The excluded entries were checked and found to be mostly inflected forms, geographic and proper names, and terminological nouns. High frequency in the corpus is confirmed to be associated with high pageviews/look-ups: a set of entries selected by corpus frequency is found to have more pageviews than a set of random entries (quite a weak finding).
This method tells us little about Wiktionary, as we are not even told what portion of pageviews is covered by the set of entries in question; however, it is useful for confirming some assumptions used in compiling traditional dictionaries. Conversely, Wiktionary covers nearly all the words a traditional dictionary would. The only directly useful finding for Wiktionary is that dozens of words of the basic German vocabulary (as compiled by the Goethe-Institut for level B1[supp 3]) are still missing from this Wiktionary set; the list of red links should be posted on-wiki.
The authors then attempt to prove that entries with more than one definition ("polysemic") are more visited than entries with a single definition ("monosemic"), by noting that within groups of words of similar corpus frequency the "polysemic" entries are on average more visited than the "monosemic" ones. This reviewer's statistical knowledge is insufficient to determine whether normalizing pageviews by corpus frequency would have been more reliable than this "parallelization strategy". In any case, it is natural for entries to grow in size and number of definitions in proportion to their number of visits, whatever the merit of such growth, so this "result" is dubious.
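For concreteness, here is what the normalization alternative might look like as a Python sketch; all figures are invented toy data, not the paper's:

# Compare pageviews per corpus occurrence for entries with several senses
# ("polysemic") versus a single sense ("monosemic"). All numbers invented.
entries = [
    # (word, corpus_frequency, monthly_pageviews, senses)
    ("laufen", 12000, 900, 4),
    ("Tisch",  11800, 400, 1),
    ("Bank",    9000, 1100, 3),
    ("Stuhl",   9100, 350, 1),
]

def mean(values):
    return sum(values) / len(values)

poly = [views / freq for _, freq, views, senses in entries if senses > 1]
mono = [views / freq for _, freq, views, senses in entries if senses == 1]
print("polysemic entries, views per occurrence:", mean(poly))
print("monosemic entries, views per occurrence:", mean(mono))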
Finally, the authors unsurprisingly show that some entries have bursts of visits far beyond their trend, linked to events and news.
In brief
"Multilingual Open Relation Extraction Using Cross-lingual Projection"
Short paper[15] on the extraction of semantic statements à la Wikidata (like "Ottawa", "is capital of", "Canada") from free text, also known as open-domain relation extraction. Text is translated from the source language to English, then existing English extractors are applied; Wikipedia in French, Hindi and Russian served as example sources, and the results were manually annotated to verify accuracy: 82%, 64% and 64% respectively. It is not reported how wikitext was transformed into plain text. A dataset of samples in 60 languages was released under a free license, but accuracy is still far from that of Wikidata's AI ingester, Kian (arguably a closed-domain extractor, hence an "easier" task).
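Schematically, the pipeline looks like the following Python sketch; translate() and extract_relations() are placeholders for a machine-translation system and an existing English open-IE extractor (the paper does not name a specific API here), and the final step of projecting the triples back onto the source-language sentence via word alignments is omitted:

def translate(sentence, source_lang):
    """Placeholder for a machine-translation system."""
    raise NotImplementedError

def extract_relations(english_sentence):
    """Placeholder for an English open-IE extractor;
    returns (subject, relation, object) triples."""
    raise NotImplementedError

def open_relation_extraction(sentence, source_lang):
    english = translate(sentence, source_lang)
    # e.g. ("Ottawa", "is capital of", "Canada")
    return extract_relations(english)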
Other recent publications
A list of other recent publications that could not be covered in time for this issue – contributions are always welcome for reviewing or summarizing newly published research.
- "Methods in collaborative dictionaries"[16] Based on an examination of English and German Wiktionary. From the abstract: "We are particularly interested in the question to what extent they differ from the methods of expert lexicographers and how editorial dictionaries can leverage the user-generated data. .... For collaborative dictionaries, it is [...] essential to encourage discussion, define transparent decision workflows, and continually motivate the authors. The large user communities provide a high coverage of language varieties, translations, neologisms, as well as personal and spoken language, which often lack corpus evidence. [...] we see great potential in the cooperation between expert lexicographers and collaborative user communities."
- "How news media trigger searches and edits in Wikipedia"[17]
- "Adding High-Precision Links to Wikipedia"[18] From the abstract: "... we study how to augment Wikipedia with additional high-precision links. We present 3W, a system that identifies concept mentions in Wikipedia text, and links each mention to its referent page. ... Our experiments demonstrate that 3W can add an average of seven new links to each Wikipedia article, at a precision of 0.98."
- "Editorial Bias in Crowd-Sourced Political Information"[19] From the abstract: "By randomly assigning factually true but either positive or negative and cited or uncited information to the Wikipedia pages of U.S. senators, we uncover substantial evidence of an editorial bias toward positivity on Wikipedia: Negative facts are 36% more likely to be removed by Wikipedia editors than positive facts within 12 hours and 29% more likely within 3 days. Although citations substantially increase an edit's survival time, the editorial bias toward positivity is not eliminated by inclusion of a citation. We replicate this study on the Wikipedia pages of deceased as well as recently retired but living senators and find no evidence of an editorial bias in either. Our results demonstrate that crowd-sourced information is subject to an editorial bias that favors the politically active." (See also comments on the Wiki-research-l mailing list)
References
- ^ Graham Hubbs: "Teaching Philosophy by Designing a Wikipedia Page", book chapter of Experiential Learning in Philosophy, edited by Julinna Oxley and Ramona Ilea, Routledge 2015, ISBN 9781138927391, pp. 222–227
- ^ Maurizio Atzori & Carlo Zaniolo (2015). "Expressivity and Accuracy of By-Example Structured Queries on Wikipedia". Proceedings of the 2015 IEEE 24th International Conference on Enabling Technologies: Infrastructure for Collaborative Enterprises (WETICE): 239–244.
- ^ Kun Xu; Yansong Feng & Dongyan Zhao (2014). "Xser@QALD-4: Answering Natural Language Questions via Phrasal Semantic Parsing" (PDF). CLEF2014 Working Notes: 1260–1274.
- ^ Iñiguez, Gerardo; János Török; Taha Yasseri; Kimmo Kaski; János Kertész (2014-09-24). "Modeling social dynamics in a collaborative environment". EPJ Data Science. 3 (1): 1–20. doi:10.1140/epjds/s13688-014-0007-z. ISSN 2193-1127.
- ^ A. Ali-Eldin, A. Rezaie, A. Mehta, S. Razroev, S. Sjöstedt-de Luna, O. Seleznjev, J. Tordsson, and E. Elmroth. "How will your workload look like in 6 years? Analyzing Wikimedia's workload". In: Proceedings of the 2014 IEEE International Conference on Cloud Engineering (IC2E), pages 349–354, IEEE Computer Society, 2014. Reproduced in: Ahmed Ali-Eldin Hassan. Workload Characterization, Controller Design and Performance Evaluation for Cloud Capacity Autoscaling. PhD thesis, 2015, Department of Computing Science, Umeå University. PDF, p. 77
- ^ A. Papadopoulos, A. Ali-Eldin, J. Tordsson, K.-E. Årzén, and E. Elmroth. "PEAS: A Performance Evaluation framework for Auto-Scaling strategies in cloud applications". Submitted for journal publication. Reproduced in: Ahmed Ali-Eldin Hassan. Workload Characterization, Controller Design and Performance Evaluation for Cloud Capacity Autoscaling. PhD thesis, 2015, Department of Computing Science, Umeå University. PDF
- ^ A. Ali-Eldin, J. Tordsson, E. Elmroth, and M. Kihl. "WAC: A Workload Analysis and Classification tool for automatic selection of cloud auto-scaling methods". To be submitted. Reproduced in: Ahmed Ali-Eldin Hassan. Workload Characterization, Controller Design and Performance Evaluation for Cloud Capacity Autoscaling. PhD thesis, 2015, Department of Computing Science, Umeå University. PDF
- ^ Gaál, Péter (2010-12-21). "Online szótárak a Web 2.0 platformon – A Wikiszótár és a Wiktionary" [Online dictionaries in Web 2.0 platform – Wikiszótár and Wiktionary]. Magyar Terminológia (Journal of Hungarian Terminology). 3 (2): 251–268. doi:10.1556/MaTerm.3.2010.2.7. ISSN 2060-2774.
- ^ Franck Sajous and Nabil Hathout: "GLAWI, a free XML-encoded Machine-Readable Dictionary built from the French Wiktionary". https://elex.link/elex2015/proceedings/eLex_2015_27_Sajous+Hathout.pdf http://redac.univ-tlse2.fr/lexicons/glawi.html
- ^ Matthias Liebeck and Stefan Conrad: IWNLP: Inverse Wiktionary for Natural Language Processing. Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Short Papers), pages 414–418, Beijing, China, July 26-31, 2015. PDF
- ^ Vivi Nastase & Carlo Strapparava. "knoWitiary: A Machine Readable Incarnation of Wiktionary" (PDF). FBK-irst, Trento, Italy.
- ^ Rico Sennrich; Beat Kunz. "Zmorge: A German Morphological Lexicon Extracted from Wiktionary" (PDF). Dataset and code.
- ^ Gilles Sérasset, Andon Tchechmedjiev. "Dbnary: Wiktionary as Linked Data for 12 Language Editions with Enhanced Translation Relations". 3rd Workshop on Linked Data in Linguistics: Multilingual Knowledge Resources and Natural Language Processing, May 2014, Reykjavik, Iceland. https://hal.archives-ouvertes.fr/hal-00990876/document
- ^ Müller-Spitzer, Carolin; Sascha Wolfer; Alexander Koplenig (2015-02-10). "Observing Online Dictionary Users: Studies Using Wiktionary Log Files". International Journal of Lexicography. doi:10.1093/ijl/ecu029. ISSN 0950-3846.
- ^ Manaal Faruqui & Shankar Kumar (2015). "Multilingual Open Relation Extraction Using Cross-lingual Projection". Proceedings of NAACL; see also the accompanying blog post.
- ^ Meyer, Christian M.; Iryna Gurevych (2014). "Methoden bei kollaborativen Wörterbüchern [Methods in collaborative dictionaries / Méthodes dans le domaine des dictionnaires collaboratifs]". Lexicographica. 30 (1): 187–212. doi:10.1515/lexi-2014-0007. ISSN 1865-9403. (in German, with English abstract)
- ^ Stefan Geiß, Melanie Leidecker, and Thomas Roessing: "The interplay between media-for-monitoring and media-for-searching: How news media trigger searches and edits in Wikipedia". New Media & Society, first published online on August 21, 2015. doi:10.1177/1461444815600281
- ^ Thanapon Noraset, Chandra Bhagavatula, Doug Downey: Adding High-Precision Links to Wikipedia. Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 651–656, October 25-29, 2014, Doha, Qatar. PDF
- ^ Kalla JL, Aronow PM (2015). "Editorial Bias in Crowd-Sourced Political Information". PLoS ONE 10(9): e0136327. doi:10.1371/journal.pone.0136327
- Supplementary references and notes:
- ^ Maurizio Atzori & Carlo Zaniolo (2012). "SWiPE: Searching Wikipedia by Example". Proceedings of the 21st World Wide Web Conference, WWW 2012, Lyon, France, April 16-20, 2012 (Companion Volume): 309–312.
- ^ https://forge.imag.fr/projects/dbnary
- ^ http://www.goethe.de/lhr/pro/daz/dfz/dtz_Wortliste.pdf (PDF)
- Remarks and annotations:
- ^ The wikitext in the XML dumps is accessed with the Bliki engine and parsed by Dbnary to produce an LMF structure stored in RDF.
- ^ Wiktionary interwikis, used by the authors, don't give any information on words: they merely link entries with identical titles, i.e. homographs.
Tech news in brief
Latest tech news from the Wikimedia technical community. Please tell other users about these changes. Not all changes will affect you. Translations are available.
Recent changes
- In UploadWizard, the dialog for previewing an image has been removed, since the thumbnails already show the images as you upload them.[1]
- UploadWizard dialogs look a bit different now. They have been updated to the new OOUI look.[2]
- You can now edit music scores in VisualEditor. You can add new sheet music scores and get live updates when you edit one.[3]
- When you send an e-mail to another editor using Special:EmailUser, that user will now get a notification on the wiki as well.[4]
- You can now see 500 images when you upload images from Flickr with UploadWizard. Before this change the limit was 50.[5]
- The Wikimedia mailing lists have been upgraded.[6]
- MediaWiki developers spent a day looking at proposed code changes in Gerrit. The goal was to clean up the backlog and give feedback to volunteer developers.[7]
Problems
Changes this week
- The new version of MediaWiki will be on test wikis and MediaWiki.org from September 29. It will be on non-Wikipedia wikis from September 30. It will be on all Wikipedias from October 1 (calendar).
- There will be a new beta feature that allows editors to use Flow on their user talk page if they want to. Each wiki can decide whether to enable it.[10]
- The Content Translation tool can give translation suggestions. This feature will now be available in more languages: English, French, Spanish, Russian, Chinese, Turkish, Japanese, Italian, and Catalan.[11]
Meetings
- You can join the strategy process of the Reading department of Wikimedia Engineering.[12]
- You can join the next meeting with the VisualEditor team. During the meeting, you can tell developers which bugs are the most important. The meeting will be on 29 September at 19:00 (UTC). See how to join.
Future changes
- The Wikimedia Foundation developers want the community to decide who can use OAuth in the future. You can discuss it on Meta.
Tech news prepared by tech ambassadors and posted by bot • Contribute • Translate • Get help • Give feedback • Subscribe or unsubscribe.