Wikidata:Project chat

Mass-import policy

Hi, I suggested a new policy for imports, similar to OpenStreetMap's, in the Telegram chat yesterday and found support there.

Next steps could be:

  • Writing a draft policy
  • Deciding how many new items users are allowed to make without seeking community approval first.

The main idea is to raise the quality of existing items rather than import new ones.

I suggested that imports of 100 items or more should fall within this new policy. @nikki: @mahir256: @ainali: @kim: @VIGNERON: WDYT? So9q (talk) 12:11, 31 August 2024 (UTC)[reply]

@So9q: 100 items over what time span? But I agree there should be some numeric cutoff on item creation (or edits) over a short time period (a day, a week?) that at least triggers a requirement for bot approval. ArthurPSmith (talk) 00:32, 1 September 2024 (UTC)[reply]
QuickStatements sometimes returns the error message "Cannot automatically assign ID: As part of an anti-abuse measure, this action can only be carried out a limited number of times within a short period of time. You have exceeded this limit. Please try again in a few minutes."
M2k~dewiki (talk) 02:07, 1 September 2024 (UTC)[reply]
Time span does not really matter, intention does.
Let me give you an example: I recently imported fewer than 100 banks when I added Net-Zero Banking Alliance (Q129633684). I added them in one day using OpenRefine.
That's OK: it's a very limited scope, and we already had most of the banks. It can be discussed whether the new banks, which are perhaps not notable, should have been created or left out, but I did not have that discussion, as we have no policy, culture or venue for talking about imports before they are done. We need to change that.
Other examples:
  • Importing all papers in a university database or similar, totaling 1 million items over half a year using automated tools, is not OK without prior discussion, no matter whether QS or a bot account is used.
  • Importing thousands of books/monuments/whatever as part of a GLAM project over half a year is not OK without prior discussion.
  • Importing all the bridges in the Czech Republic (e.g. Q130213201), over whatever time span, would not be OK without prior discussion. @ŠJů:
  • Importing all the hiking paths of Sweden (e.g. Bohusleden (Q890989)) over several years would not be OK.
etc.
The intention to import many objects without prior community approval is what matters. The community is your boss: be bold when editing, but check with your boss before mass imports. I'm pretty sure most users would quickly get the gist of this policy. A good principle could be: if in doubt, ask first. So9q (talk) 05:16, 1 September 2024 (UTC)[reply]
@So9q: I'm not sure that the mention of individually created items for bridges belongs in this mass-import discussion. There is the Wikidata:Notability policy, and so far no one has questioned the creation of entries for physically existing objects that are registered, charted, and have (or may/should have) photos and/or categories in Wikimedia Commons. If a community has been working on covering and processing a topic for twenty years, it is probably not appropriate to suddenly start questioning it. I understand that what is on the agenda in one country may appear unnecessarily detailed in another. However, numbered roads, registered bridges or officially marked hiking paths are not a suitable example to question; their relevance is quite unquestionable.
The question of actual mass importation would be moot if the road administration (or another authority) published the database in an importable form. Such a discussion is usually led by the local community: for example, Czech named streets were all imported, but registered addresses and buildings were generally not (individual items can still be created as needed). Similarly, the import of authority records of the National Library, registers of legal entities, etc. is assessed by the local community; usually the import is limited by some criteria. It is advisable to coordinate and inspire such imports internationally; however, the decision is usually based on practical reasons, i.e. the needs of those who use the database. It is true that such discussions could be more transparent, not just separate discussions of some working group, and it would be appropriate to create a formal framework for presenting and documenting individual import projects. For example, creating a project page that contains the discussion, the principles of the given import, a contact for the given working group, etc.; the project page should then be linked from edit summaries. --ŠJů (talk) 06:03, 1 September 2024 (UTC)[reply]
Thanks for chipping in. I do not question the notability of the items themselves. The community on Telegram has voiced the opinion that this whole project has to consider what we want to include and what not, and what to give priority to.
Millions of our current items are in a rather sad state as it is. We might not have the manpower to keep quality at an acceptable level.
To give you one example, Wikidata currently does not know which Swedish banks are still in operation. Nobody has worked on the items in question (see Wikidata:WikiProject_Sweden/Banks), despite them being imported many years ago (some in 2014) from svwp.
There are many examples to draw from where we have only scratched the surface. @Nikki mentioned in Telegram that there are a ton of items with information in descriptions that is not reflected in statements.
A focus on improving what we have, rather than inflating the total number of items, is what the Telegram community wants.
To do that we need to discuss imports, whether already ongoing or not, and whether very notable or not that notable.
Indeed, increased steering and formality would be needed if we were to adopt an import policy on Wikidata. So9q (talk) 06:19, 1 September 2024 (UTC)[reply]
Just as a side note, with no implications for the discussion here: as I understood it, "the community on Telegram has voiced" is irrelevant. Policies are decided here on the wiki, not on Telegram. Or? --Egon Willighagen (talk) 07:58, 1 September 2024 (UTC)[reply]
It is correct that policies are created on-wiki. However, it is also fair to use such a discussion as a prompt to start one here, and it is transparent to explain that this is what happened. It won't really carry weight unless the same people also voice their opinions here, but there is also no reason to belittle it just because people talked somewhere else. Ainali (talk) 08:13, 1 September 2024 (UTC)[reply]
+1. I'll add that the Wikidata channel is still rather small compared to the total number of active Wikidata editors (1% or less is my guess). Also, participation in the chat is very uneven: a few very active editors/chat members contribute most of the messages (I'm probably one of them, BTW). So9q (talk) 08:40, 1 September 2024 (UTC)[reply]
Sorry, I did not want to imply that discussion cannot happen elsewhere. But we should not assume that people here know what was discussed on Telegram. Egon Willighagen (talk) 10:36, 1 September 2024 (UTC)[reply]
Terminology matters, and the original bot policies are probably no longer clear to the current generation of Wikidata editors. With tools like OpenRefine and QuickStatements, I have the impression it is no longer clear what is a "bot" and what is not. You can now easily create hundreds of items with either of these tools (and possibly others) in an editor-driven manner. I agree it is time to update the Wikidata policies around imports. One thing to make clear is the distinction between mass creation of items and mass import (the latter can also be mass importing annotations and external identifiers, or links between items, without creating items). -- Egon Willighagen (talk) 08:04, 1 September 2024 (UTC)[reply]
I totally agree. Since I joined around 2019 I have really struggled to understand what is okay and what is not when it comes to mass edits and mass imports. I have had a few bot requests declined. Interestingly, very few of my edits have ever been questioned. We should make it simple and straightforward for users to learn what is okay and what is not. So9q (talk) 08:43, 1 September 2024 (UTC)[reply]
I agree that we need an updated policy that is simple to understand. I also really like the idea of raising the quality of existing items. Therefore, I would like the policy to recommend a documented plan for weaving the imported data into the existing data in a meaningful way, or even to make an exception from preapproval when such a plan exists. I don't know exactly how it could be formulated, but creating inbound links and improving the data beyond the source should be behavior we want to see, whereas merely duplicating data on orphaned items is what we don't want to see. And obviously, these plans need to be completed before new imports can be made; gaming the system will, as usual, not be allowed. Ainali (talk) 08:49, 1 September 2024 (UTC)[reply]
@ainali: I really like your idea of "a documented plan to weave the imported data into the existing data in a meaningful way". This is very similar to the OSM policy.
They phrase it like so:
"Imports are planned and executed with more care and sensitivity than other edits, because poor imports have significant impacts on both existing data and local mapping communities." source
A similar phrasing for Wikidata might be:
"Imports are planned and executed with more care and sensitivity than other edits, because poor imports have significant impacts on existing data and could rapidly inflate the number of items beyond what the community is able or willing to maintain."
WDYT? So9q (talk) 08:38, 5 September 2024 (UTC)[reply]
In general, any proposal that adds bureaucracy and makes it harder for people to contribute should start with explaining what problem it wants to solve. This proposal contains no such analysis, and I do consider that problematic. If there's a rule, 10,000 items per year seems more reasonable to me than 100. ChristianKl 15:37, 2 September 2024 (UTC)[reply]
Thanks for pointing that out. I agree. That is the reason I raised the discussion here first instead of diving right into writing an RfC.
The community in Telegram seems to agree that a change is needed and has pointed to some problems. One of them, mentioned by @Nikki:, was that most of the manpower and time at WMDE for the last couple of years seems to have been spent on trying to avoid a catastrophic failure of the infrastructure rather than on improving the UI, etc. At the same time, a handful of users have imported mountains of half-baked data (poor-quality imports) and show little or no willingness to fix the issues pointed out by others in the community. So9q (talk) 08:49, 5 September 2024 (UTC)[reply]

There are many different discussions going on here on Wikidata. Anyone can open a discussion about anything if they feel the need. External discussions outside of Wikidata can evaluate or reflect on the Wikidata project, but should not be used to make decisions about Wikidata.

The scope of this discussion is a bit confusing. By mass import I mean a one-time machine conversion of an existing database into Wikidata. However, the examples given relate to items created manually and occasionally over a long period of time. In relation to that activity, they do not make sense. If 99 items of a certain type are made in a few years, everything is fine, and as soon as the hundredth item has to be made, we suddenly start treating the topic as "mass import" and start demanding a prior discussion? That makes absolutely no sense. For this we have the rules of notability, and they already apply to the first such item; they have no connection with "mass imports".

As I mentioned above, I would like each (really) mass import to have its own documentation project page, from which it would be clear who did the import, according to which policies, and whether someone is taking care of continuously updating the imported data. It is possible to appeal to mass importers to start applying such standards in their activities. It is also possible to mark existing items with flags that indicate which specific working group (subproject) takes care of maintaining and updating the item. --ŠJů (talk) 18:06, 1 September 2024 (UTC)[reply]

Maybe using the existing "bot requests" process is overkill for this (applying for a bot flag shouldn't be necessary if you are just doing QS or OpenRefine work), but it does seem like there should be either some sort of "mass import requests" community approval process or, as ŠJů suggests, a structural prerequisite (documentation on a WikiProject or something of that sort). And I do agree that if we are not talking about a time-limited threshold, then 100 is far too small. Maybe 10,000? ArthurPSmith (talk) 22:55, 1 September 2024 (UTC)[reply]
There are imports based on existing identifiers; these should be documented on property talk pages (e.g. a new mass import of newly created identifiers every month, usually using QS). The next big group is the import of existing geographic features (which can be photographed); these have coordinates, so they are visible on maps. Some of them are of interest to only a few people. Maybe document them in the relevant country WikiProject? JAn Dudík (talk) 15:49, 2 September 2024 (UTC)[reply]


My thoughts on this matter:
  • we indeed need a page (maybe a policy, maybe just a help page, a recommendation, a guideline, etc.) to document how to do good mass imports
  • mass import should be defined in more precise terms: is it only creation, or any edits? (they are different, but both could be problematic and should be documented)
  • 100 items is very low
    • it is only the 2nd of September and 3 people have already created more than 100 items! In August, 215 people created 100 or more items; the community can't process that much
    • I suggest at least 1,000, maybe 10,000 items (depending on whether we focus only on creations or on any edits)
  • having no time span is strange: is it even a mass import if someone creates one item every month for 100 months? Since most mass imports are done with tools, most happen within a short period; a time span of a week is probably best
  • quantity is a problem, but quality should also be considered, as should novelty (creating/editing items following a well-known model is not the same thing as creating a new model from scratch; the second needs more review)
  • could we act on Wikidata:Notability? Should mass imports be "more" notable, or at least should notability be checked more thoroughly?
    • the two previous points depend on references, which are often suboptimal right now (most imports are from one source only, whereas cross-checking multiple references should be encouraged where possible)
  • the bot policies (especially Wikidata:Requests for permissions/Bot) probably need an update/reform too
  • finally, while there is a general problem concerning a lot of people, it is mainly a few power users who are borderline abusing the resources of Wikidata; we should deal with the latter before burdening the former, as it would be easier and more effective (i.e. dealing with one 100,000-item import rather than with 1,000 imports of 100 items).
Cheers, VIGNERON (talk) 09:20, 2 September 2024 (UTC)[reply]
In my opinion, items for streets are very useful, because there are a lot of pictures with categories showing streets, and there are street directories. Streets often have their own names and are historically significant; buildings/cultural heritage monuments can be found in the respective street via street categories; and they are helpful for cross-referencing. So please keep items for streets. Triplec85 (talk) 10:26, 5 September 2024 (UTC)[reply]
As streets are very important for infrastructure and for the structuring of villages, towns and cities, yes, they are notable, especially if we consider the historical value of older streets or how often they are used (in the real world). Data objects for streets can be combined with many identifiers from organizations like OpenStreetView and others, and they are a good element for categorizing. It's better to have images categorized in Hauptstraße (Dortmund) (with a data object that offers quick facts) than only in Dortmund. Also, streets are essential for local administrations, which emphasizes their notability. And you can record in a structured way where the street names come from (and how often they are used), which streets they are connected with, etc. For me, I see many good reasons for having lists of streets in Commons, Wikidata, whatever; it gives a better overview, also for future developments, when streets are populated later or get cultural heritage monuments, or to track the renaming of streets... --PantheraLeo1359531 (talk) 10:37, 5 September 2024 (UTC)[reply]
Regarding the distribution/share of content by type (e.g. scholarly articles vs. streets/architectural structures), see this image:
Content on Wikidata by type
M2k~dewiki (talk) 10:41, 5 September 2024 (UTC)[reply]
Better to consider the number of potential items. ScienceOpen has most studies indexed and stands at 95 million items (getting to the same degree of completeness would unlock several use cases of Wikidata, like Scholia charts, albeit not substituting for everything the mentioned site can be used for, as abstract text content, altmetrics scores, etc. are not included here). 100 million isn't that large, and I think there are more streets than there are scholarly articles – having a number there would be nice.
-
I think of Wikidata's usefulness beyond merely linking Wikipedia articles in this way: what other widely used online databases exist, and can we do the same but better and more? Currently, Wikidata can't be used to fetch book metadata into your ebook reader or song metadata into your music player/library, can't be used for getting food metadata in food-tracking apps, can't tell you the often problematic ingredients of cosmetics/hygiene products, can't be used to routinely monitor or search new studies in a field, or do pretty much anything else that is actually useful to real people. So I'd start working on the coverage of such data first, before importing lots of data with unknown, questionable potential future use, or before manual item creation/editing. If we got areas covered that people actually use and need, then we could still improve areas where no use cases yet exist, or which at best only slightly improve on the proprietary, untransparent-algorithm Google Web search results (which don't even index files and categories on Commons). I'd be interested in how other people think about WD's current and future uses, but discussing that may be somewhat outside the scope of this discussion. Prototyperspective (talk) 13:35, 7 September 2024 (UTC)[reply]
I use streets a lot to qualify locations, especially in London - see Ambrose Godfrey (Q130211790) where the birth and death locations are from the ODNB. - PKM (talk) 23:16, 5 September 2024 (UTC)[reply]
Disagree on books and papers then – they need to be imported to enable all sorts of useful things which are otherwise not possible or misleading, such as the statistics of Scholia (e.g. research field charts, author timelines/publications, etc.).
I think papers are mostly a (nearly) all-or-nothing thing – before that point they aren't that useful, and I don't see much of a use case. Besides charts, one could query them in all sorts of interesting ways once they are fairly complete, and embed the results of queries (e.g. studies by an author, sortable by citations and other metrics, on the WP article about the person).
When fairly complete and unvandalized, they could also be analyzed, e.g. for AI-supported scientific discovery (there are studies on this), and be semantically linked and queried, and so on.
It's similar for books. I don't know how Wikidata could be useful in that space if it doesn't contain at least as many items with metadata as other websites. For example, one could then fetch metadata from Wikidata instead of from these sites. In contrast to studies, I currently don't see an actual use case for WD items for streets – they may be useful at some point, but I don't see why now or in the near future, or how. Prototyperspective (talk) 00:31, 6 September 2024 (UTC)[reply]

Hello, from my point of view, there would be some questions regarding such a policy, like:

AutoSuggestSitelink-Gadget

Who will connect them to existing objects (if any exist) or create new objects if they don't yet exist, and when (especially if there is a new artificial limit on creating such objects)? Will someone implement and operate a bot for all 300 Wikipedia language versions and all articles, all categories (including Commons categories), all templates, all navigation items, ... to connect sitelinks to existing objects or create new objects where none exist yet?

From my point of view, time and resources should be spent on improving processes and tools, and on helping, supporting and educating people, in order to improve data quality and completeness. For example, in my opinion the meta:AutosuggestSitelink gadget should be activated by default for all users on all language versions in the future.

Some questions and answers which came up over the last years (in order to help, educate and support users) can be found at

M2k~dewiki (talk) 19:24, 2 September 2024 (UTC)[reply]
For example, the functionality of
could also be implemented as a bot in the future by someone. M2k~dewiki (talk) 21:13, 2 September 2024 (UTC)[reply]
This wasn't written yet when the discussion started here, but here is a summary of the growth of the databases that this policy partly addresses: User:ASarabadani (WMF)/Growth of databases of Wikidata. There are also some relevant links on Wikidata:WikiProject Limits of Wikidata. As an extremely high-level summary: Wikidata is growing so quickly that we will hit various technical problems, and slowing down the growth (perhaps by prioritizing quality over quantity) is a way to find time to address some of them. So the problem is wider than just new item creations, but slowing those would certainly help. Ainali (talk) 08:09, 3 September 2024 (UTC)[reply]
This has also recently been discussed at
Possible solutions could be:
M2k~dewiki (talk) 08:18, 3 September 2024 (UTC)[reply]
Just a note that the split seems to have happened now, so some more time is bought.
Ainali (talk) 21:14, 3 September 2024 (UTC)[reply]
Please note that "Cannot automatically assign ID: As part of an anti-abuse measure, this action can only be carried out a limited number of times within a short period of time. You have exceeded this limit. Please try again in a few minutes." is not an error message or limit imposed by QuickStatements; it is a rate limit set by Wikibase, see phab:T272032. QuickStatements should be able to run a large batch (~10k commands) at a reasonable speed (one that does not cause infrastructure issues). If QuickStatements does not retry when the rate limit is hit, I consider it a bug; batches should be able to run unattended with an error-recovery mechanism. GZWDer (talk) 14:50, 4 September 2024 (UTC)[reply]
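For batch tooling, the unattended error recovery described above boils down to backing off and retrying when the MediaWiki API reports "ratelimited" or "maxlag" instead of aborting the whole batch. A minimal sketch only (not QuickStatements' actual code; the function name and retry parameters are illustrative, and login/token handling is omitted):

    import time
    import requests

    API = "https://www.wikidata.org/w/api.php"

    def post_with_retry(session, data, max_retries=8):
        """POST an edit to the MediaWiki API, backing off when the server
        reports 'maxlag' or 'ratelimited' instead of failing the batch."""
        delay = 5
        for _ in range(max_retries):
            resp = session.post(API, data={**data, "format": "json", "maxlag": 5})
            error = resp.json().get("error", {}).get("code")
            if error in ("maxlag", "ratelimited"):
                time.sleep(delay)              # wait, then retry the same command
                delay = min(delay * 2, 300)    # exponential backoff, capped at 5 minutes
                continue
            return resp.json()
        raise RuntimeError("giving up after repeated rate-limit/maxlag errors")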
Thanks for linking to this report. I see that the revisions table is very large. @Nikki linked to this bot that makes 25+ single edits to the same astronomical item before progressing to the next. This seems very problematic and, given the information on that page, this bot should be stopped immediately. So9q (talk) 09:12, 5 September 2024 (UTC)[reply]
For what reason? That he is doing his job? Matthiasb (talk) 00:53, 6 September 2024 (UTC)[reply]
No, because it is wasting resources when doing the job. The bot could have added them in one edit, and instead it added unnecessary rows to the revisions table, which is a problem. Ainali (talk) 06:00, 6 September 2024 (UTC)[reply]
Well, non-bot users, a.k.a. humans, cannot add those statements in one single edit, right? Does that mean human editors are by definition wasting resources and probably should not edit at all? Nope. But that's only one side of the coin. It might be a surprise: the limit of the number of rows in version histories is ∞. For items like San Francisco (Q62) the history grows faster, and for items like asteroids more slowly, because no other editors are editing them. If we talk about the source of a river, it cannot be edited in one single edit: we would have two edits for latitude and longitude, and one each for the altitude, a town nearby, the county, the state and the country, at least. Several more if editors forgot to include proper sourcing and the language of the source. On top of that, hundreds of millions of statements are still missing sources ("0 sources"). So where is the problem? The edit behaviour of the single editor, or did we forget to regulate or implement something back in 2015 or so that affects the edits of every single user? Matthiasb (talk) 22:31, 7 September 2024 (UTC)[reply]
No; to me, a waste is when resources could have been saved but were not. So we're not talking about manual editing here (which also accounts for a lot fewer edits in total). And while I agree that an item may in theory need almost limitless revisions, in practice we lack the engineering for that right now. So let's do something smarter than Move Fast and Break Things (Q18615468). Ainali (talk) 07:30, 8 September 2024 (UTC)[reply]

I would echo the points raised above by M2k~dewiki. My feeling is that when we actually think through the problem we are trying to solve in detail, there will be better solutions than placing arbitrary restrictions on upload counts. There are many very active editors who are responsible and actually spend much of their time cleaning up existing issues, or enriching and diversifying the data. Many GLAMs also share lots of data openly on Wikidata (some exclusively so), helping to grow the open knowledge ecosystem. To throttle this work goes against everything that makes Wikidata great! It also risks fossilising imbalances and biases we know exist in the data. Of course there are some folks who just dump masses of data into Wikidata without a thought for duplication or value to the wider dataset, and we do need a better way to deal with this. But I think that some automated checks of mass-upload data (1000s not 100s), looking for potential duplication, interconnectivity and other key indicators of quality, might be more effective at flagging problem edits and educating users, whilst preserving the fundamental principles of Wikidata as an open, collaborative data space. Jason.nlw (talk) 08:34, 3 September 2024 (UTC)[reply]

We definitely need more clarity in the guidelines, thanks for bringing that up! Maybe we can start with some very large upper boundary so we can at least agree on the principle and the enforcement tooling? I suggest we start with a hard limit of 10k new items per month for non-bot accounts, plus wording saying that creation of over 1,000 items per month should be preceded by a bot request if the items are created from the same source, and that any large-scale creation of items (e.g. 100+ items in a batch) should at least be discussed on-wiki, e.g. on the related WikiProject. Also, I think edits are more complicated than new item creations; author-disambiguator.toolforge.org/, for example, allows users to make 100k edits in a year semi-manually. For simplicity, it may be a good idea to focus only on item creation at this point. TiagoLubiana (talk) 21:00, 3 September 2024 (UTC)[reply]

User statistics can be found for example at:
Recent batches, editgroups, changes and creations can be found for example at:
Also see
M2k~dewiki (talk) 21:41, 3 September 2024 (UTC)[reply]
Including bots:
M2k~dewiki (talk) 21:45, 3 September 2024 (UTC)[reply]
@DaxServer, Sldst-bot, Danil Satria, LymaBot, Laboratoire LAMOP: for information. M2k~dewiki (talk) 13:23, 4 September 2024 (UTC)[reply]
@Kiwigirl3850, Romano1920, LucaDrBiondi, Fnielsen, Arpyia: for information. M2k~dewiki (talk) 13:24, 4 September 2024 (UTC)[reply]
@1033Forest, Brookschofield, Frettie, AdrianoRutz, Luca.favorido: for information. M2k~dewiki (talk) 13:24, 4 September 2024 (UTC)[reply]
@Andres Ollino, Stevenliuyi, Quesotiotyo, Vojtěch Dostál, Alicia Fagerving (WMSE): for information. M2k~dewiki (talk) 13:24, 4 September 2024 (UTC)[reply]
@Priiomega, Hkbulibdmss, Chabe01, Rdmpage, Aishik Rehman: for information. M2k~dewiki (talk) 13:24, 4 September 2024 (UTC)[reply]
@Cavernia, GZWDer, Germartin1, Denelson83, Epìdosis: for information. M2k~dewiki (talk) 13:24, 4 September 2024 (UTC)[reply]
@DrThneed, Daniel Mietchen, Matlin: for information. M2k~dewiki (talk) 13:24, 4 September 2024 (UTC)[reply]
Just a first quick thought (about method, not content): apart from pings (which are very useful indeed), I think that the best place to make decisions on such an important matter would be an RfC; IMHO an RfC should be opened as soon as possible and this discussion should be moved to its talk page, in order to elaborate there a full set of questions to then be clearly put to the community. Epìdosis 13:49, 4 September 2024 (UTC)[reply]
Also see
M2k~dewiki (talk) 16:55, 4 September 2024 (UTC)[reply]
I don't really understand what the proposal here is or what problem it is trying to solve, there seem to be quite a number of things discussed.
That said, I am absolutely opposed to any proposal that places arbitrary limits on individual editors or projects for new items or edits simply based on number rather than quality of edits. I think this will be detrimental because it incentivises working only with your own data rather than contributing to other people's projects (why would I help another editor clean up their data if it might mean I go over my quota so can't do the work that is more important to me?).
And in terms of my own WikiProjects, I could distribute the new item creation to other editors in those projects, sure, but to what end? The items still get created, but with less oversight from the most involved and knowledgeable editor and so likely with greater variability in data quality. How is that a good thing? DrThneed (talk) 23:50, 4 September 2024 (UTC)[reply]
  • For some reason I think that some/more/many of the participants in this discussion have a wrong understanding of how Wikidata works – not technically, but in how it relates to other WM projects such as Wikipedia. In fact we have the well-established rule that each Wikipedia article deserves an item. I don't know how many populated-place articles LSJ bot created or how many items we have which are populated places, but citing earlier discussions in the German Wikipedia years ago about how far Wikipedia can expand, I calculated that the number of populated places on Earth might exceed 10 million. So would creating 10 million WP articles on populated places constitute a mass upload on Wikidata, since every WP article in any language is to be linked to other language versions via Wikidata? (I also roughly calculated the possible number of geographic features on Earth, with nearly two million of them in the U.S. alone; I think there are up to 100 million on Earth in total. We consider all of them notable, so at some point we will have items on 100 million geographic features caused by interwiki.) Is this mass upload? Shall we look at cultural heritage? In the U.S., the United Kingdom, France and Germany together there exist about one million buildings which are culturally protected in one way or another. At this time some 136,080 of them, or of such buildings elsewhere in the world, have articles in the German WP, and yes, they are linked in Wikidata. Counting all other countries, some more millions will add to this.
  • When I started in the German WP, some 18 years ago, it had some 800,000 articles or so. At that time we had users who tried hard to shrink the German WP back to 500,000 articles. They failed. At some point before the end of this year the number of articles in the German WP will exceed 3,000,000, with the French WP reaching the same mark some four or five months later. Though many of those topics might already exist in one language version or more, many of the articles on the way to the three-million mark might not have a Wikidata item yet. Are these, say, 100,000 items a mass upload?
  • And consider a project I am involved with, GLAM activities around Caspar David Friedrich (Q104884): the Hamburg exhibition Caspar David Friedrich. Art for a New Age, Hamburger Kunsthalle 2023/24 (Q124569443), featured some 170 or so works by the artist, all of them considered notable in the WP sense, but we need all of them in Wikidata anyway for provenance research, for which WD is essential. So if along the way I create 50, 70 or 130 items and link them up with their image files on Commons and whatever individual WP language articles can be found, am I committing the crime of mass upload, even if the process takes weeks and weeks because the catalogues of different museums use different namings and identification can only be done visually?

Nope. When talking about the size of Wikidata, we must take it as given that by 2035 Wikidata will be at least ten times bigger than today. If we are talking about better data, I agree that this is an important goal, which means adding data based on open sources that do not rely on wikis but on original data from the source, e.g. statistical offices, and which are sourced in a proper way. (But when I called for sources some months ago, someone laughed and said Wikidata is not about sourcing data but about collecting it. Actually that user should have been banned for eternity plus another three days.) Restricting uploads won't work and might even prevent adding high-quality data. --Matthiasb (talk) 00:48, 6 September 2024 (UTC)[reply]

@Matthiasb In an ideal world, your points make a lot of sense. However, the technical infrastructure is struggling (plenty of links to those discussions above), and if we put ideals over practicalities, then we will bring Wikidata down before the developers manage to solve them. And then we will definitely not be ten times bigger in 2035. Hence, the thoughts about slowing the growth (hopefully only temporarily). If we need to prioritize, I would say that maintaining sitelinks goes above any other type of content creation and would be the last one to slow down. Ainali (talk) 06:08, 6 September 2024 (UTC)[reply]
No objections to the latter part. It was in the second half of the 2000s that MediaWiki was struggling with its own success, with users getting timeouts all the time. Yet Tim Starling told us something like "don't care about resources". Well, he said something slightly different, but I don't remember the actual wording. The core of his remarks was that we as a community should not get headaches over it; when it came to lacking resources, it would be his job and that of the server admins and technical admins to fix it. (But if it became necessary to act, then we should do what they say.) Matthiasb (talk) 11:31, 6 September 2024 (UTC)[reply]
You probably refer to these 2000s quotes: w:Wikipedia:Don't worry about performance. --Matěj Suchánek (talk) 17:29, 6 September 2024 (UTC)[reply]
Maybe we need the opposite for Wikidata?
If that mindset carries over to the use of automated tools (e.g. to create an item for every tree in OpenStreetMap, of which there are 26,100,742 as of 2024-09-07, and link them to other features in OSM/Wikidata), that would very quickly become totally unsustainable technically. Imagine every tree in OSM having a ton of statements like the scientific articles do. That quickly becomes a mountain of triples.
I'm not saying we could not do it, perhaps even start today, but what is the combined human and technical cost of doing it?
What other items do we have to avoid importing to avoid crashing the system?
What implications would this and other mass-imports have on the current backend?
Would WMF split the graph again? -> Tree subgraph?
How many subgraphs do we want to have? 0?
Can they easily (say in QLever in less than an hour) be combined again or is that non-trivial after a split?
Is a split a problem in itself or perhaps just impetus for anyone to build a better graph backend like QLever or fork Blazegraph and fix it?
We need to discuss how to proceed and perhaps vote on new policies to avoid conflict and fear and eventually perhaps a total failure of the community with most of the current active users leaving.
Community health is a thing, how are we doing right now? So9q (talk) 08:30, 7 September 2024 (UTC)[reply]
Yes, thank you @Matěj Suchánek! --Matthiasb (talk) 11:22, 7 September 2024 (UTC)[reply]
Do we have a community? Or, better asked, how many communities do we have? IMHO there are at least three different communities:
  • people formerly active on Wikipedia who came over when their activity as interwiki bot owners was no longer needed on Wikipedia; most of them likely have tens of thousands of edits each year.
  • Wikipedia users occasionally active on Wikidata; some might only fix issues, others might prepare further usage of Wikidata in their own Wikipedia. Most of them have from several hundred to a few thousand edits.
  • Users who don't fit into the former two groups but use Wikidata for some external project, far away from WMF. I mentioned above groups of museums worldwide for whom WD is an easily accessible database providing the infrastructure needed in provenance research. I don't think this group is big, and they might edit selected items; probably hundreds to a few thousand edits. This group might or might not collaborate with the former.
Some months back I saw a visualization of how WikiProjects in the English Wikipedia create sub-communities, some of them overlapping, others not overlapping at all; about 50 of them are of notable size. Maybe within the three classes of communities I broke down above there also exists a set of sub-communities, with more or less interaction. I can't comment much on your other questions. I don't know OSM beyond looking at the map. I have no clue how Wikibase works.
Peeking over the horizon, we have been seeing performance issues at Commons for some time now. People seem to be starting to turn away from Commons because, for example, they have taken hundreds of photographs at an event but batch upload doesn't work. Wikinews editors are waiting for uploads that do not come. Well, they won't wait, of course; they won't write articles on those events. So the Commons issue also affects Wikinews. By the way, restrictions drive away users as well. That's the reason why, or how, German Wikiquote killed itself several months ago.
As I said, I don't know what effects the measures you mentioned will have on Wikidata users, on users of other WMF projects, and on the "external" user community I mentioned above. Matthiasb (talk) 11:58, 7 September 2024 (UTC)[reply]
And just another thought: if we want to have better data, we should prohibit uploading data based only on some Wikipedia, and maybe even remove statements sourced with WP only, after some transition period, say until the end of 2026. We don't need to import population data for U.S. settlements, for example, from any Wikipedia if the U.S. Census Bureau offers ZIP files for every state containing all that data. (However, a statement that is a URL does not need a source, as it is its own source.) We should also enforce that users add the language of their sources; many users neglect it. (And I see a misdirected mentality that "English" as a source language is not needed at all, but Wikidata isn't an English-language database, is it?) Matthiasb (talk) 14:05, 7 September 2024 (UTC)[reply]
I totally agree with @Ainali. We need a healthy way for this community to operate and make decisions that take both human and technical limits into account. So9q (talk) 08:16, 7 September 2024 (UTC)[reply]
Rather than attempting to put various sorts of caps on data imports, I would put the priority on setting up a process to review whether existing parts of Wikidata really ought to be there or whether they should rather be hosted on separate Wikibases. If the community decides that they should not be in Wikidata, they'd be deleted after a suitable transition period. For instance, say I manage to create 1 million items about individual electrical poles (spreading the import over a long period of time, done by many accounts belonging to my fellow electrical-pole enthusiasts). At some point, the community needs to wake up and make a decision about whether this really ought to be in Wikidata (probably not). It's not something you can easily deal with in WD:RFD, because it would be about many items created by many different people over a long time, so I would say there should be a different process for such deletions. At the end of such a process, Wikidata's official inclusion guidelines would be updated to mention that the community has decided that electrical poles don't belong in Wikidata (except if they have sitelinks or match other criteria making them exceptional).
The reason why I'd put focus on such a process is that whatever caps you put on imports, people will find ways to work around them and import big datasets. If we work with the assumption that once it's in Wikidata, it deserves to stay there for eternity, it's a really big commitment to make as a community.
To me, the first candidate for such a process would be scholarly articles, because I'm of the opinion that they should rather be in a separate Wikibase. This would let us avoid the query service split. But I acknowledge that I'm not active on Wikidata anymore and may be out of touch on such topics (I stopped being active precisely because of this disagreement over scholarly articles) − Pintoch (talk) 11:44, 8 September 2024 (UTC)[reply]
@Pintoch Does a regular deletion of items really reduce the size of the database? Admins can still view all revisions, suggesting that this procedure would not mitigate the problem. Ainali (talk) 12:22, 8 September 2024 (UTC)[reply]
According to
currently we have
  • 2.2 million items marked as deleted (but which can be "restored", i.e. made visible again for everyone, not only for admins)
  • 4.4 million items which are currently redirects
  • 11 million omitted Q-IDs, i.e. IDs which have never been assigned (therefore we have Q-IDs above 130 million, but only 112.3 million objects)
M2k~dewiki (talk) 12:31, 8 September 2024 (UTC)[reply]
From my point of view, deletion of items increases the size, since the history of deletions is also stored (who marked the object as deleted and when, additional comments, ...). M2k~dewiki (talk) 12:44, 8 September 2024 (UTC)[reply]
@Ainali: deleting entities removes their contents from the query service. If the community decided to delete the scholarly articles, the WDQS split wouldn't be needed anymore. It would also remove them from future dumps, for instance. The size of the SQL database which underpins MediaWiki is much less of a concern. − Pintoch (talk) 14:09, 8 September 2024 (UTC)[reply]
@Pintoch: Less, but not by much if I read this problem statement correctly. Ainali (talk) 18:40, 8 September 2024 (UTC)[reply]
@Ainali: my bad, I hadn't read this yet. Indeed, this is also concerning. I would say it should be possible to remove all revisions from a big subset of items (such as scholarly articles) if they were to be deleted in a coordinated way. That should surely help quite a bit. − Pintoch (talk) 18:43, 8 September 2024 (UTC)[reply]
I agree, that would help. A separate Wikibase with scholarly papers would probably not take a lot of resources or time to set up.
If anyone wants to go that route, I recommend against choosing federated properties.
Also, authors and similar information needed besides the papers themselves should be kept in Wikidata.
Such a Wikibase could be set up in less than a month. The import would take longer, though, because Wikibase is rather slow (think 1 item per second, so 46 million seconds ≈ 532 days). So9q (talk) 09:45, 11 September 2024 (UTC)[reply]
OpenAlex has about 250 million papers indexed; you do the math on how long an import of them all would take 😉 So9q (talk) 09:47, 11 September 2024 (UTC)[reply]
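For scale, at the one-item-per-second rate assumed above, the arithmetic works out roughly as follows (a back-of-envelope sketch only; real throughput varies):

    SECONDS_PER_DAY = 86_400
    rate = 1  # items per second, the assumption above for a stock Wikibase

    for label, items in [("scholarly articles on Wikidata", 46_000_000),
                         ("works indexed by OpenAlex", 250_000_000)]:
        days = items / rate / SECONDS_PER_DAY
        print(f"{label}: ~{days:,.0f} days (~{days / 365:.1f} years)")

    # scholarly articles on Wikidata: ~532 days (~1.5 years)
    # works indexed by OpenAlex: ~2,894 days (~7.9 years)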
If the import were done from an SQL dump of Wikidata, the loading would be much faster, perhaps less than a month.
But AFAIK access to the underlying MariaDB is not normally available to users of wikibase.cloud.
Also, I'm not sure we have an SQL dump of the Wikidata tables available anywhere currently. So9q (talk) 09:53, 11 September 2024 (UTC)[reply]
I don't think it would be good to transfer them over the Internet instead of copying them physically. That could be done by cloning the hard drives, for example, and what I proposed here for Wikimedia Commons media could also be made possible for Wikidata. So it would take a few days if done right. One can e.g. use hash sums to verify things are indeed identical, but if it's done via HDD mirroring then they should be anyway... A problem there would be that the copy would at first contain lots of other Wikidata items, which would have to be deleted afterwards if they can't be cloned separately (which they probably could be if those scholarly items are on separate storage disks). Prototyperspective (talk) 10:43, 11 September 2024 (UTC)[reply]

IMHO, of course we can stop mass imports or even mass editing if there is an imminent risk of failure of the database/systems. Anyway, I am sure that the tech team can temporarily lock the database or increase maxlag, etc., in such an urgent situation if the community is slow in taking decisions. But I don't think that the things I am reading in this thread are long-term solutions (removing items, removing revisions, increasing notability standards, etc.). I think Wikidata should include all scholarly articles, all books, all biographies and even all stars in the galaxy: everything described in reputable sources. I think that Wikidata (and Wikipedia), as a high-quality dataset curated by humans and bots, is used more and more in AI products. Wikidata is a wonderful source for world modelling. Why don't we think about some kind of partnership/collaboration/honest synergy with those AI companies/research groups to develop a long-term solution and keep and increase the growth of Wikidata instead of reducing it? Thanks. Emijrp (talk) 14:54, 9 September 2024 (UTC)[reply]

Yes, agreed. Everyone is rushing to create insanely big AI models. Why can't Wikimedia projects have a knowledge base about everything? Don't be conservative, or it will be like Abstract Wikipedia, made obsolete by language models even before the product launch! And we must be ready for the next emergency, like the COVID-19 pandemic, in which lots of items must be added and accessed by everyone. And splitting WDQS into two means our data capacity is potentially doubled, nice! Midleading (talk) 17:30, 9 September 2024 (UTC)[reply]
Interesting view that AW is made obsolete by AI models. The creators seem to think otherwise, i.e. that it's a tool for people whose marginalized languages are not described by mountains of text anywhere to gain access to the knowledge currently in Wikipedia.
It's also a way to write Wikipedia articles in one place, in abstract form, and then generate the corresponding language representations. As such it is very different from, and might be complementary to, GenAI. Google describes reaching support for 1,000 languages as a challenge because of the lack of readily available digital resources for small languages and dialects (even before reaching support for 300).
I predict it will get increasingly harder as you progress toward supporting the long tail of smaller/minority languages. So9q (talk) 08:48, 10 September 2024 (UTC)[reply]
I agree with Midleading, except that I don't think Wikidata is currently useful in emergencies. Do you have anything backing up "it's a tool for people whose marginalized languages are not described by mountains of text anywhere"? That would be new to me. "It's also a way to write Wikipedia articles in one place in abstract form" doesn't seem to be what AW is about, and it is neither scalable nor needed: machine translation of English (and a few other languages of the largest WPs) is good enough at this point that this isn't needed, and if you're interested in making what you describe real, please see my proposal here (AW is also mentioned there). Also, as far as I can see, nothing supports the terminology of "marginalized languages" rather than simply various languages having little training data available. I think the importance of a language roughly matches how many people speak it, and Wikimedia is not yet well supporting the largest languages, so I don't see why a focus on small languages in particular would be due... and these could also be machine-translated into once things progress further and models and the proposed post-machine-translation system improve. I think AW is like a query tool, but unlike contemporary AI models it is reliable, deterministic, transparent, and open/collaborative, so it's useful; but I think it is not nearly as useful as having English Wikipedia available in the top 50-500 languages (next to their language Wikipedias, not as a replacement for them). Prototyperspective (talk) 16:46, 10 September 2024 (UTC)[reply]
Who said it will be a replacement? I have followed AW quite closely and have never heard anyone suggesting it. Please cite your sources. Ainali (talk) 06:58, 11 September 2024 (UTC)[reply]
There I was talking about the project I'm proposing (hopefully just co-proposing); I see how, especially with the italics, it could be misunderstood. AW is not about making English Wikipedia/the highest-quality article available in the top 50-500 languages, so it's not what I was referring to. Prototyperspective (talk) 10:47, 11 September 2024 (UTC)[reply]
Ah, I see. I am also curious what makes you think that AW is not about writing Wikipedia articles in an abstract form? As far as I have understood it, that is the only thing it is about (and why we first need to build WikiFunctions). Ainali (talk) 14:15, 11 September 2024 (UTC)[reply]
Why would it be? Please look into Wikifunctions: these are functions like "exponent of" or "population of", etc. You could watch the video on the right for an intro. The closest it would get is having short pages that say things like "London (/ˈlʌndən/ LUN-dən) is the capital and largest city of both England and the United Kingdom, with a population of 8,866,180 in 2022" (and maybe a few other parts of the WP lead plus the infobox), which is quite a lot less than the >6,500 words of en:London. I had this confusion about Abstract Wikipedia as well and thought that's what it would be about, and it could be that many supporters of it thought so too... It's not a feasible approach to achieving that, and I think it's also not its project goal. Even if the project has a language-independent way of writing simple Wikipedia-like short lead sections of articles, that doesn't mean there will be >6 million short articles, since all of them would need to be written anew, and it can't come close to the 6.5k words of the ENWP article, which is the most up-to-date and highest-quality one for reasons that include that London is a city in an English-speaking country. Wikifunctions is about enabling queries that use functions like "what is the distance between cities x and y". The existence of this project should not stopgap/delay/outsource large innovation and knowledge proliferation that is possible right now. Prototyperspective (talk) 15:27, 11 September 2024 (UTC)[reply]
As far as I have understood it, WikiFunctions is not the same as Abstract Wikipedia. It is a tool to build it, but it might then be built somewhere else: perhaps on Wikidata in connection with the items, where the sitelinks are, or somewhere else completely. So the mere fact that you cannot do it on WikiFunctions now does not mean it will not be done at all in the future. Also, we don't need to write each abstract article individually. We can start with larger classes and then refine as necessary. For example, first we write a generic article for all humans. It will have some details, but lack most. Then that can be refined for politicians or actors, and so on, depending on where people have interest, down to individual detail. I might have gotten it all wrong, but that's my picture after interviewing Denny (audio in the file). Ainali (talk) 15:38, 11 September 2024 (UTC)[reply]
I find claiming an open wiki project has become obsolete because of (proprietary) LLMs very dangerous. And promoting wiki projects as the place-to-go during an emergency, too. --Matěj Suchánek (talk) 16:23, 11 September 2024 (UTC)[reply]

Dating for painting at Q124551600

I have a painting that is dated by the museum as "1793 (1794?)", so it seems it was likely made in 1793 but there is a small chance that it was only made in 1794. When I enter both dates I get an error report. How do I fix that? How do I give one of them the preferred rank? I can't find a fitting field. Carl Ha (talk) 06:42, 1 September 2024 (UTC)[reply]

Mark the one with the highest chance as "preferred"? And add a 'reason' qualifier to indicate that the preference is based on the higher likelihood? The "deprecated" qualifier (reason for deprecated rank (P2241)) for statements has hundreds of reasons (there is list of Wikidata reasons for deprecation (Q52105174), but I am not sure it is complete; I think my SPARQL query earlier this week showed many more). Similarly, there is reason for preferred rank (P7452), and maybe most probable value (Q98344233) is appropriate here. Egon Willighagen (talk) 08:09, 1 September 2024 (UTC)[reply]
How do I mark it as preferred? Which qualifier do I use? Carl Ha (talk) 08:11, 1 September 2024 (UTC)[reply]
Ranking is explained here: https://www.wikidata.org/wiki/Help:Ranking
I would suggest the qualifier property reason for preferred rank (P7452) with the value most probable value (Q98344233). Egon Willighagen (talk) 10:42, 1 September 2024 (UTC)[reply]
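For anyone who prefers to script this instead of clicking through the UI, here is a minimal pywikibot sketch of what Egon describes. It assumes the two datings are stored as inception (P571) statements on Q124551600 and that you pick the right statement by hand; treat it as an illustration, not a tested recipe.

    import pywikibot

    site = pywikibot.Site("wikidata", "wikidata")
    repo = site.data_repository()
    item = pywikibot.ItemPage(repo, "Q124551600")
    item.get()

    # Assumption: the datings are inception (P571) statements; pick the 1793 one here.
    claim = item.claims["P571"][0]

    # Qualify why this value is preferred ...
    qualifier = pywikibot.Claim(repo, "P7452")                   # reason for preferred rank
    qualifier.setTarget(pywikibot.ItemPage(repo, "Q98344233"))   # most probable value
    claim.addQualifier(qualifier)

    # ... and raise its rank so data users get the 1793 value by default.
    claim.changeRank("preferred")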
Thank you! Carl Ha (talk) 11:12, 1 September 2024 (UTC)[reply]
What should we do if we have a work where there is no consensus among art historians on what the "preferred" dating is? The dating of Q570188 is disputed, but Wikidata wants me to prefer one statement. Carl Ha (talk) 08:38, 2 September 2024 (UTC)[reply]
@Carl Ha I don't think Wikidata constraints are written in stone and the real world sometimes brings challenges that no constraint can predict. In this case, in my view, you can disregard the exclamation marks, just leave it as it is for now. Wikidata constraints are here to serve us, not the other way round. Vojtěch Dostál (talk) 06:38, 5 September 2024 (UTC)[reply]

Could we include things that have P31 icon (Q132137), as this is a type of painting? I don't know how to do that technically in the wiki code. Carl Ha (talk) 08:36, 1 September 2024 (UTC)[reply]

It now works, I had a typo. Carl Ha (talk) 18:35, 5 September 2024 (UTC)[reply]
I tried and now it seems that it just includes all elements including sculptures etc.

Would it be possible to have the "inventory number" column show just the inventory numbers connected with the Art Culture Museum Petrograd, and not the ones connected with other institutions (in this case always the Russian Museum) that later owned the paintings? Because right now the table cannot be sorted by the inventory number of the Art Culture Museum. Thanks! Carl Ha (talk) 09:40, 1 September 2024 (UTC)[reply]

I don't know if I am understanding your question, but e.g. St. George (II) (Q3947216) has both museums, Russian Museum (Q211043) and Art Culture Museum Petrograd (Q4796761), with the values ЖБ-1698 and 353 respectively as inventory number (P217), with the museums as qualifiers. And we have collection (P195) with both museums, starting years for both, and their respective P217. Matthiasb (talk) 18:17, 7 September 2024 (UTC)[reply]
Yes, I know, but how can I change the table on this page so that it only shows the inventory numbers connected with the Art Culture Museum? Is that possible? Carl Ha (talk) 18:20, 7 September 2024 (UTC)[reply]
Why would you want to do so? As I understood it, the painting was first in Q4796761 and, after that museum was dissolved, was brought to Q211043, where it got a new inventory number. So before 1926 it was 353, later ЖБ-1698. Assuming this information is correct, everything looks fine to me. Well, I don't know how to write queries, but I guess you must restrict the query to exclude inventory numbers valid before 1926, or to include only those, if you need it the other way around. Matthiasb (talk) 21:52, 7 September 2024 (UTC)[reply]
I want to sort the table by the inventory numbers of the Art Culture Museum, which is currently not possible. My question is exactly how to change the query to do this. Carl Ha (talk) 21:57, 7 September 2024 (UTC)[reply]
Hello, help regarding queries can be found, for example, at
M2k~dewiki (talk) 10:14, 11 September 2024 (UTC)[reply]
M2k~dewiki (talk) 10:16, 11 September 2024 (UTC)[reply]
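For what it's worth, a rough, untested sketch of the kind of restriction that answers the question above: keep only the inventory number (P217) statements that carry a collection (P195) = Art Culture Museum Petrograd (Q4796761) qualifier. If the table on that page is generated from a Listeria-style SPARQL query, the same p:/ps:/pq: pattern can be dropped into that query.

import requests

# Sketch: fetch only the inventory numbers assigned by the Art Culture Museum
# Petrograd (Q4796761), i.e. P217 statements qualified with P195 = Q4796761,
# so the results can be sorted by that museum's numbers alone.
QUERY = """
SELECT ?item ?itemLabel ?inventoryNumber WHERE {
  ?item wdt:P195 wd:Q4796761 .              # items that were in that collection
  ?item p:P217 ?invStatement .
  ?invStatement ps:P217 ?inventoryNumber ;
                pq:P195 wd:Q4796761 .       # keep only numbers qualified with that museum
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en,ru". }
}
ORDER BY ?inventoryNumber
"""

response = requests.get(
    "https://query.wikidata.org/sparql",
    params={"query": QUERY, "format": "json"},
    headers={"User-Agent": "inventory-number-sketch/0.1 (project chat example)"},
)
for row in response.json()["results"]["bindings"]:
    print(row["inventoryNumber"]["value"], row["itemLabel"]["value"])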

WDQS infrastructure

I am curious about the infrastructure currently used for running WDQS and its costs. Where can I find this information? Zblace (talk) 18:05, 1 September 2024 (UTC)[reply]

Try here, this page has a summary: https://diff.wikimedia.org/2023/02/07/wikimedia-enterprise-financial-report-product-update/ Baratiiman (talk) 05:37, 2 September 2024 (UTC)[reply]
That has nothing to do with the Query Service Zblace is asking about. LydiaPintscher (talk) 07:40, 2 September 2024 (UTC)[reply]
I'm also interested in that information. It would be interesting to see expenditures over time and number of machines/VMs. So9q (talk) 08:13, 7 September 2024 (UTC)[reply]
@So9q exactly. I am heavily involved in a few projects (some super fragile, like Wikispore) and I am curious how much is invested in different components, and why we do not have more (as in options) and better (as in modern) infrastructure now (as in recent years). I understand that running old software until a full substitute is in place is a must, but I do not understand the accumulated technical debt while having such a huge financial reserve. Zblace (talk) 06:38, 8 September 2024 (UTC)[reply]
The Wikidata Query Service (WDQS; the part that lets you do queries) should be load-balanced across 18 bare-metal servers, if Grafana is any indication.
The problems Wikidata has can't be fixed by throwing more servers at the problem. The graph is already unwieldy in size, and that's the core issue. Graphs don't scale all that well when you get to Wikidata's size. There are other ways to store and process data, however; maybe in 20 years Wikidata will run on a document-oriented database. Maxlath should have scripts to filter and import Wikidata into CouchDB, IIRC, for those who would like to experiment. Infrastruktur (talk) 20:35, 8 September 2024 (UTC)[reply]
also wikitech:Wikidata Query Service#Hardware is related —MisterSynergy (talk) 20:51, 8 September 2024 (UTC)[reply]
Here is the repo with the WD->CouchDB code by @Maxlath. It seems to have been archived years ago, though; can anyone confirm that it is working, and document the benefits, if any, of storing the information in CouchDB? So9q (talk) 13:05, 9 September 2024 (UTC)[reply]
I asked GPT-3.5 and it output the following:
Storing Wikidata's JSON dump in CouchDB can offer several benefits, especially when dealing with the challenges posed by large graph databases like Blazegraph. Here are some potential advantages:
  1. Scalability: CouchDB is designed to handle large amounts of data and can scale horizontally. This means you can add more nodes to your CouchDB cluster to accommodate growing data sizes without significant performance degradation.
  2. Document-Oriented Storage: CouchDB is a NoSQL database that stores data in a document-oriented format (JSON). This aligns well with the JSON dump of Wikidata, allowing for easy ingestion and retrieval of data without the need for complex transformations.
  3. Flexible Schema: CouchDB's schema-less design allows for flexibility in data structure. As Wikidata evolves, you can easily accommodate changes in the data model without needing to perform extensive migrations.
  4. Replication and Availability: CouchDB offers built-in replication features, which can enhance data availability and fault tolerance. This is particularly useful for distributed applications or when you need to maintain multiple copies of the data across different locations.
  5. RESTful API: CouchDB provides a RESTful HTTP API, making it easy to interact with the database using standard web protocols. This can simplify integration with web applications and services.
  6. MapReduce for Queries: CouchDB supports MapReduce for querying data, which can be beneficial for aggregating and processing large datasets. You can create views to efficiently retrieve specific subsets of data.
  7. Versioning and Conflict Resolution: CouchDB has built-in support for versioning and conflict resolution, which can be advantageous when dealing with collaborative data updates or when multiple sources are contributing to the dataset.
  8. Ease of Use: For developers familiar with JSON and RESTful APIs, CouchDB can be easier to work with compared to more complex graph databases. This can speed up development and reduce the learning curve.
  9. Offline Capabilities: CouchDB can be used in offline applications, allowing for local data storage and synchronization when connectivity is restored. This can be useful for mobile or distributed applications.
  10. Community and Ecosystem: CouchDB has a strong community and a variety of tools and libraries that can facilitate development and integration with other systems.
I then asked for an example of how to query the Wikidata CouchDB for all documents with label = Sweden. Here is the response:
As is evident from the response, having an up-to-date CouchDB instance with all the data from Wikidata would be very valuable, as one could query and manage it more reliably than with only the dump and Blazegraph.
I suggest we ask the WMF to provide a hosted, regularly updated CouchDB copy of Wikidata. I would be happy to have it updated weekly, like QLever and the dumps.
So9q (talk) 13:16, 9 September 2024 (UTC)[reply]
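For illustration only (this is not the GPT response itself): a minimal sketch of such a lookup against a hypothetical local CouchDB mirror called "wikidata", assuming the entity documents keep the Wikidata JSON structure as-is. It uses CouchDB's Mango /_find endpoint; a JSON index on labels.en.value would be needed to make it fast.

import requests

# Sketch: find entities whose English label is "Sweden" in an assumed local
# CouchDB mirror of the Wikidata entity JSON. Host, database name and
# credentials are placeholders.
COUCHDB = "http://localhost:5984/wikidata"

query = {
    "selector": {"labels.en.value": "Sweden"},
    "fields": ["_id", "labels.en.value", "descriptions.en.value"],
    "limit": 25,
}
response = requests.post(
    f"{COUCHDB}/_find",
    json=query,
    auth=("admin", "password"),  # placeholder credentials
)
for doc in response.json()["docs"]:
    description = doc.get("descriptions", {}).get("en", {}).get("value", "")
    print(doc["_id"], description)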
I was surprised GPT was able to answer that as well as it did. The upside is the horizontal scalability. One major drawback of hosting on a document-oriented database is that you lose the advanced query possibilities you have with SPARQL, something so nice that I'm sure people will be unwilling to let it go until they have to. The data will also lose its graph nature; I have no idea what the implications of that will be, but presumably it will impact the areas that graphs are good for. When you query with MapReduce you only get one aggregation step, so it is very limited compared to what we are used to. Much more labor involved in processing the data into a usable format, in other words. Things like following property chains will probably be impossible. No graph traversals. I don't think you can even do joins. To lose all that will be painful.
Modest federation and a new query engine should buy us plenty of time; it will probably be a very long time until Wikidata is forced to switch to a different model. A document-oriented database instance of Wikidata could be interesting as a research project, however, whether it runs CouchDB, Hadoop or something else. Infrastruktur (talk) 16:56, 9 September 2024 (UTC)[reply]
In inventaire.io (Q32193244), we use CouchDB as a primary database, where Wikidata uses MariaDB. Like Wikidata, we then use Elasticsearch as a secondary database to search entities (such as the "label=Sweden" query above). CouchDB doesn't solve the problem of the graph queries bottleneck, and I'm very much looking forward to what Wikimedia will come up with to address the issue. The one thing I see where CouchDB could be useful to Wikidata is indeed to provide a way to mirror the entities database, the same way the npm (Q7067518) registry can be publicly replicated.
As for https://github.com/maxlath/import-wikidata-dump-to-couchdb, it was a naive implementation; if I wanted to do that today, I would do it differently and make sure to use CouchDB's bulk mode (a rough sketch of this pipeline is included after the list):
- Get a wikidata json dump
- Optionally, filter to get the desired subset. In any case, turn the dump into valid NDJSON (drop the first and last lines and the comma at the end of each line).
- Pass each entity through a function to move the "id" attribute to "_id", using https://github.com/maxlath/ndjson-apply, to match CouchDB requirements.
- Bulk upload the result to CouchDB using https://github.com/maxlath/couchdb-bulk2 Maxlath (talk) 17:58, 9 September 2024 (UTC)[reply]
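A rough sketch of that pipeline as a single script, under the assumptions that the dump is already uncompressed, the target CouchDB runs locally, and the database is called "wikidata" (paths, batch size and the missing error handling are all illustrative):

import json
import requests

# Sketch of the dump -> CouchDB pipeline described above: stream the Wikidata
# JSON dump line by line, drop the array wrapper and trailing commas, rename
# "id" to "_id" as CouchDB requires, and upload in batches via _bulk_docs.
DUMP_PATH = "latest-all.json"               # placeholder path to the uncompressed dump
COUCHDB = "http://localhost:5984/wikidata"  # placeholder CouchDB database
BATCH_SIZE = 1000

def entities(path):
    with open(path, encoding="utf-8") as dump:
        for line in dump:
            line = line.strip().rstrip(",")  # NDJSON-ify: drop the trailing comma
            if line in ("[", "]", ""):       # skip the opening/closing brackets
                continue
            entity = json.loads(line)
            entity["_id"] = entity.pop("id")
            yield entity

def upload(batch):
    requests.post(f"{COUCHDB}/_bulk_docs", json={"docs": batch})

batch = []
for entity in entities(DUMP_PATH):
    batch.append(entity)
    if len(batch) >= BATCH_SIZE:
        upload(batch)
        batch = []
if batch:
    upload(batch)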
Speaking of mirroring the entities database: I see CouchDB as a far superior alternative to the Linked Data Fragments [1] instance of Wikidata. What LDF promises and what it delivers are two different things. Downloading a set of triples based only on a single triple pattern is close to useless, and it is also a bandwidth hog. If we just want to download a subset of Wikidata, then CouchDB will allow us to specify that subset more precisely, and might be a welcome addition to doing the same with SPARQL CONSTRUCT or SELECT queries. Syncing a whole copy of Wikidata will also save tons of bandwidth compared to downloading the compressed dump file every week. Infrastruktur (talk) 16:54, 11 September 2024 (UTC)[reply]

Conflation

These need help: Charles-Louis-Achille Lucas (Q19695615) Wolf Laufer (Q107059238) Fakhr al-Dīn Ṭurayḥī (Q5942448) RAN (talk) 08:54, 4 September 2024 (UTC)[reply]

The date of death of Q19695615 has now been changed to 1905[2]; the source says 20th September - is there a reason it should be 19th? I reverted the recent additions to Q107059238 - I would have moved them to a new item, but one of the references for a 1601 date of death is claimed to have been published in 1600, and the links don't work for me ("The handle you requested -- 21.12147/id/48db8bef-31cf-4017-9290-305f56c518e9 -- cannot be found"). Q5942448 just had an incorrect date (1474 should have been 1674 - I removed it and merged with an item that already had 1674). Peter James (talk) 13:07, 4 September 2024 (UTC)[reply]
Regarding Charles-Louis-Achille Lucas (Q19695615), the death certificate has been established on September 20, but the death happened the day before, on September 19. Ayack (talk) 14:46, 4 September 2024 (UTC)[reply]
We have that happen with obituaries all the time, people add the date of the obituary rather than the date of death. --RAN (talk) 19:17, 4 September 2024 (UTC)[reply]

Property for paused, interrupted etc.

Trying to model "Between 1938 and 1941 it was reunited with Lower Silesia as the Province of Silesia" in Upper Silesia Province (Q704495). Is there any qualifier that says "not from ... to ..." or something? --Flominator (talk) 11:50, 4 September 2024 (UTC)[reply]

Wikidata Query Service graph split to enter its transition period

Hi all!

As part of the WDQS Graph Split project, we have new SPARQL endpoints available for serving the “main” (https://query-main.wikidata.org/) and “scholarly” (https://query-scholarly.wikidata.org/) subgraphs of Wikidata.

As you might be aware we are addressing the Wikidata Query Service stability and scaling issues. We have been working on several projects to address these issues. This announcement is about one of them, the WDQS Graph Split. This change will have an impact on certain uses of the Wikidata Query Service.

We are now entering a transition period until the end of March 2025. The three SPARQL endpoints will remain in place until the end of the transition. At the end of the transition, https://query.wikidata.org/ will only serve the main Wikidata subgraph (without scholarly articles). The query-main and query-scholarly endpoints will continue to be available after the transition.

If you want to know more about this change, please refer to the talk page on Wikidata.

Thanks for your attention! Sannita (WMF) (talk) 13:41, 4 September 2024 (UTC)[reply]
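For readers wondering what this means for queries that touch both subgraphs after the transition: they will need SPARQL federation between the endpoints. A minimal sketch follows; the /sparql paths and the exact federation rules are assumptions based on how query.wikidata.org behaves today, so see the project pages for the authoritative details.

import requests

# Sketch: ask the main endpoint for data about an author and federate out to
# the scholarly endpoint for the articles, which will no longer live in the
# main graph after the split.
QUERY = """
SELECT ?article WHERE {
  VALUES ?author { wd:Q1035 }                          # Charles Darwin, in the main graph
  SERVICE <https://query-scholarly.wikidata.org/sparql> {
    ?article wdt:P50 ?author .                         # articles in the scholarly graph
  }
}
LIMIT 10
"""

response = requests.get(
    "https://query-main.wikidata.org/sparql",
    params={"query": QUERY, "format": "json"},
    headers={"User-Agent": "graph-split-federation-sketch/0.1"},
)
for row in response.json()["results"]["bindings"]:
    print(row["article"]["value"])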

I would very much like to avoid a graph split. I have not seen a vote or anything community-related in response to the WMF's idea of splitting the graph. This is not a good sign.
It seems the WMF has run out of patience waiting for this community to try to mitigate the problem (e.g. by deleting the part of the scholarly graph not used by any other Wikimedia project) and thus free up resources for items that the community really cares about and that are used by other Wikimedia projects.
This is interesting. I view it as the WMF technical management team, in the absence of a timely response and reaction from the Wikidata community itself, deciding how to handle the issues that our lack of e.g. a mass-import policy has created.
This sets a dangerous precedent for more WMF governance in the future, which might impact the project severely negatively.
I urge therefore the community to:
  • address the issue with the enormous revision table (e.g. by suggesting to WMF to merge or purge the revision log for entries related to bots, so that e.g. 20 edits in a row from a bot on the same date get squashed into 1 edit in the log)
  • immediately stop all bots currently importing items no matter the frequency until a mass-import policy is in place.
  • immediately stop all bots making repetitious edits to millions of items which inflate the revision table (e.g. User:LiMrBot)
  • immediately limit all users to importing x items a week/month until a mass-import policy is in place no matter what tool they use.
  • put up a banner advising users of the changes and encourage them to help finding solutions and discuss appropriate policies and changes to the project.
  • take relevant community steps to ensure that the project can keep growing in a healthy and reliable way both technically and socially.
  • assign a community liaison who can help communicate with WMF and try to avoid the graph split becoming a reality.
WDYT? So9q (talk) 09:36, 5 September 2024 (UTC)[reply]
Also see
M2k~dewiki (talk) 09:42, 5 September 2024 (UTC)[reply]
Also see
M2k~dewiki (talk) 09:57, 5 September 2024 (UTC)[reply]
Regarding the distribution / portion of content by type (e.g. scholarly articles vs. streets / architectural structures), see this image:
Content on Wikidata by type
M2k~dewiki (talk) 11:46, 5 September 2024 (UTC)[reply]

Just to say that the problems with the Wikidata Query Service backend have been discussed since July 2021, that the split of the graph was introduced as a possibility in October 2023, and that we communicated about it periodically (maybe not to the best of our possibilities, for which I am willing to take the blame, but we've kept our communication open with the most affected users during the whole time).

This is not a precedent for WMF telling the community what to do; the community is very much within its rights to make all the decisions it wants, but we need to find a solution anyway to a potential failure of the Wikidata Query Service that we started analysing in mid-2021. I want to stress that the graph split is a technical patch to a very specific problem, and that in no way are WMF or WMDE interested in governing the community. Sannita (WMF) (talk) 13:02, 5 September 2024 (UTC)[reply]

I understand, thanks for the links. I have no problem with WMF or any of the employees. I see a bunch of people really trying hard to keep this project from failing catastrophically and I'm really thankful that we still have freedom as a community to decide what is best for the community even when we seem to be on a reckless path right now.
What I'm trying to highlight is the lack of discussion about the growth issue and about how to steer the community to grow by quality instead of quantity overall. I'm also missing a discussion of, and information for, e.g. bot operators about the technical limitations we have because of hardware and software, and a governance that ensures that our bots do not break the system.
A perhaps horrifying example is the bot I linked above, which makes 25+ edits in a row to the same item, potentially for millions of items.
In that specific case we failed:
  • to inspect and discuss the operations of the bot before approval.
  • failed as a community to clearly define the limits for Wikidata so we can make good decisions about whether a certain implementation of a bot is desired (in this case make all the changes locally to the item, then upload = 1 revision).
Failings related to responsible/"healthy" growth:
  • we have failed as a community to ask WMF for input on strategies when it comes to limiting growth.
  • we have failed as a community to have discussions with votes on what to prioritize when WMF is telling us we cannot "import everything" without breaking the infrastructure.
  • we have failed as a community to implement new or update existing policies to govern the growth and quality of the project in a way that the community can collectively agree effectively manages the issues the WMF has been trying to tell us about for years.
We really have a lot of community work to do to keep Wikidata sound and healthy! So9q (talk) 17:21, 5 September 2024 (UTC)[reply]
@So9q I see your points, and I agree with them. So much so, that I'm writing this message with my volunteer account on purpose and not my work one, to further stress that we need as a community to address these points. For what it's worth, I'm available (again, as a volunteer) to discuss these points further. I know for a fact that we'll have people who can provide us with significant knowledge in both WMF and WMDE, to take an informed decision. Sannita - not just another it.wiki sysop 17:53, 5 September 2024 (UTC)[reply]
See this. There is yet another reason/thing to take into consideration: most Wikipedia language versions have refrained from using Wikidata in infoboxes or articles. Doubt about Wikidata was one reason. Now, as WP communities see that WD works and the data can be used, they will use WD more and more. One example: we started to use Wikidata within the infobox for American settlements for several items, e.g. time zone, FIPS and GNIS, inhabitants, area and several more. We might add telephone area code and ZIP code in the near future. Some of these still only cross-check whether specific data in Wikidata and Wikipedia are the same, but they might switch to Wikidata-only over time. All the language versions will make greater use of Wikidata in the future. If WMF tells us we're breaking the infrastructure, they didn't do their job or did it wrong. Matthiasb (talk) 01:14, 6 September 2024 (UTC)[reply]

Brazilian Superior Electoral Court database

Hello everyone!

At Wikimedia Commons, I made a proposal for batch uploading all the candidate portraits from Brazilian elections (2004-2024). The user @Pfcab: has uploaded a big chunk, and while talking with @DaxServer:, he noticed that since divulgacandcontas.tse.jus.br has so much biographical data (example), it could be a relevant crosswiki project.

Would this be possible? Is there any bot that could - firstly - look for missing names in Wikidata (while completing all the rest), export the missing Superior Electoral Court biographical data, and add the respective images found in Category:Files from Portal de Dados Abertos do TSE?

Thanks, Erick Soares3 (talk) 16:37, 4 September 2024 (UTC)[reply]

symmetric for "subclass of (P279)" ?

Hi, why is there no symmetric "has subclass" property for "subclass of (P279)"? I contribute on social science concepts, and it is honestly complicated to build concept hierarchies when you are not able to see the subclass items from the superclass item's page. Making a property proposal is a bit beyond my skills and interests; is anyone interested in looking into the question?

Thanks Jeanne Noiraud (talk) 17:37, 4 September 2024 (UTC)[reply]

We're unlikely to ever make an inverse property for subclass of because there are items with hundreds of subclasses.
If you're using the basic web search you can search with haswbstatement. To see all subclasses of activity (Q1914636) you would search haswbstatement:p279=Q1914636. Or you could use the Wikidata class browser (Q29982490) tool at https://bambots.brucemyers.com/WikidataClasses.php . And finally you could use Wikidata Query Service at https://query.wikidata.org/. William Graham (talk) 18:26, 4 September 2024 (UTC)[reply]
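To make the last option concrete, a minimal sketch of the query-service route for the same example (direct subclasses of activity (Q1914636)):

import requests

# Sketch: list the direct subclasses of activity (Q1914636), i.e. the inverse
# view that a "has subclass" property would provide.
QUERY = """
SELECT ?subclass ?subclassLabel WHERE {
  ?subclass wdt:P279 wd:Q1914636 .
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
"""

response = requests.get(
    "https://query.wikidata.org/sparql",
    params={"query": QUERY, "format": "json"},
    headers={"User-Agent": "subclass-sketch/0.1 (project chat example)"},
)
for row in response.json()["results"]["bindings"]:
    print(row["subclass"]["value"], row["subclassLabel"]["value"])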
Thanks, the class browser is helpful ! The others are a bit too technical for me. Jeanne Noiraud (talk) 13:13, 6 September 2024 (UTC)[reply]
Relateditems gadget is useful for a quick look of subclasses. Though sometimes it gets overcrowded if there's too many statements about an item. You can enable it here. Samoasambia 19:01, 4 September 2024 (UTC)[reply]
Thanks, useful tool indeed ! Jeanne Noiraud (talk) 13:13, 6 September 2024 (UTC)[reply]

Islamic dates versus Christian dates

See: Ibrahim Abu-Dayyeh (Q63122057) where both dates are included. Do we include both or just delete the Islamic one? It triggers an error message. RAN (talk) 19:12, 5 September 2024 (UTC)[reply]

@Richard Arthur Norton (1958- ): it's a hard question. We don't have a clear and easy way to indicate Islamic dates (which is a big problem in itself); right now the Islamic dates are stored as Julian or Gregorian dates, which is wrong, so they should probably (and sadly) be removed. Cheers, VIGNERON (talk) 12:30, 6 September 2024 (UTC)[reply]
I deleted the Islamic date; it was interpreted as a standard AD date and was triggering an error message. --RAN (talk) 00:25, 7 September 2024 (UTC)[reply]

Question about P2484

Dear all,

Please see this question about property P2484: Property_talk:P2484#Multiple_NCES_IDs_are_possible

Thank you WhisperToMe (talk) 21:09, 5 September 2024 (UTC)[reply]

Merging items (duplicates)

Hello, could someone please merge the following items or explain to me how I can do this? I have tried it via Special:MergeItems, but it won't load for me (the gadget is already enabled in the preferences). The entries are duplicates.

Thanks Аныл Озташ (talk) 22:58, 5 September 2024 (UTC)[reply]

It looks like User:RVA2869 has taken care of most of these. On long-focus lens (Q11022034) vs telephoto lens (Q516461) - they each have many separate sitelinks, so a variety of languages seem to think they are distinct. For example en:Long-focus lens vs en:Telephoto lens which seems to clarify the distinction. ArthurPSmith (talk) 21:27, 9 September 2024 (UTC)[reply]

Merge?

These are the same supercomputer with performance measured at different times, perhaps with slight mods at each performance measurement, should they be merged? Ranger (Q72229332) Ranger (Q73278041) Ranger (Q72095008) Ranger (Q2130906) RAN (talk) 23:45, 5 September 2024 (UTC)[reply]

Yes. I used it Vicarage (talk) 04:35, 6 September 2024 (UTC)[reply]
Not entirely sure, but probably yes, at least for some of them (maybe not the last one?); and if not merged, these items should be more clearly differentiable. It needs someone who understands exactly what this is about. The first three were created by the bot TOP500_importer; maybe Amitie 10g can tell us more. Cheers, VIGNERON (talk) 12:16, 6 September 2024 (UTC)[reply]

How to add units to a thermal power plant?

I think I asked this question before but if so it was so long ago I have forgotten the answer.

In a thermal power plant, such as a coal-fired power plant, the number of units and their capacity in megawatts are very basic pieces of information. For example, https://globalenergymonitor.org/projects/global-coal-plant-tracker/methodology/ says its "database tracks individual coal plant units".

I am writing on Wikipedia about coal-fired power plants in Turkey and I pick up the infobox data automatically from Wikidata. At the moment I am editing https://en.wikipedia.org/wiki/Af%C5%9Fin-Elbistan_power_stations. Ideally I would like the infoboxes to show that the A plant has 3 operational units of 340 MW each and one mothballed unit of 335 MW all of which are subcritical and 2 proposed units each of 344 MW, and that the B plant has 4 units each of 360 MW all operational.

If that is too ambitious just the number of units would be a step forward, as shown in infobox params ps_units_operational, ps_units_planned etc. Is that possible? Chidgk1 (talk) 09:09, 6 September 2024 (UTC)[reply]

https://www.wikidata.org/wiki/Q85967587#P2109 ? Bouzinac💬✒️💛 15:29, 6 September 2024 (UTC)[reply]
And you might use this query to list power plant Wikidata items by MW power: https://w.wiki/B7U9 Bouzinac💬✒️💛 19:27, 6 September 2024 (UTC)[reply]
@Bouzinac Thanks for quick reply but I don’t quite understand - maybe my question was unclear? Chidgk1 (talk) 08:18, 7 September 2024 (UTC)[reply]
@Chidgk1: I think Bouzinac thought you were asking about "units" in the sense of the "MW" part of 340 MW, not "units" in the sense of components. It probably doesn't make sense to create separate Wikidata items for each "unit" of a power plant; rather, I think the property to use is has part(s) of the class (P2670) with a suitable item for a power plant unit (not sure there is one right now, so maybe that does need to be added) with the qualifier quantity (P1114) and any other qualifiers you think appropriate. ArthurPSmith (talk) 21:35, 9 September 2024 (UTC)[reply]
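A minimal sketch of how an infobox-side script could read that modelling back, assuming the plant item uses has part(s) of the class (P2670) with a quantity (P1114) qualifier as suggested (Q85967587 is the item linked earlier in this thread; whether such statements exist on it yet is an assumption):

import requests

# Sketch: read "number of units" statements modelled as has part(s) of the
# class (P2670) with a quantity (P1114) qualifier from a power plant item.
QUERY = """
SELECT ?unitClassLabel ?count WHERE {
  wd:Q85967587 p:P2670 ?statement .
  ?statement ps:P2670 ?unitClass .
  OPTIONAL { ?statement pq:P1114 ?count . }
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
"""

response = requests.get(
    "https://query.wikidata.org/sparql",
    params={"query": QUERY, "format": "json"},
    headers={"User-Agent": "power-plant-units-sketch/0.1"},
)
for row in response.json()["results"]["bindings"]:
    print(row["unitClassLabel"]["value"], row.get("count", {}).get("value", "?"))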

Merging items (duplicates)

Can someone please merge these two pages, since they are about the same hospital. The current name of the hospital is AdventHealth Daytona Beach.

Florida Hospital Memorial Medical Center (Q30269896) to Florida Hospital Memorial Medical Center (Q130213551)

Catfurball (talk) 20:25, 6 September 2024 (UTC)[reply]

✓ Done Ymblanter (talk) 19:01, 7 September 2024 (UTC)[reply]

How best to model a long development project for a property

For Harbor Steps (Q130246591), [3] page 7 gives a good table summarizing the process of assembling the land, planning, and construction, especially the dates associated with three different phases of work. I imagine this can be appropriately modeled using existing properties, but I do not know how; it's not the sort of thing I've ever seen modeled here. - Jmabel (talk) 20:33, 6 September 2024 (UTC)[reply]

KUOW

KUOW (Q6339681) is just a separately licensed transmitter for KUOW-FM (Q6339679). No programming content of its own, really just a repeater. I suppose it still merits a separate item, but I suspect the two items should somehow be related to one another, which they seem not to be currently. - Jmabel (talk) 05:42, 3 August 2024 (UTC)[reply]

(Reviving the above from archive, because no one made any suggestions on how to do this. - Jmabel (talk) 20:36, 6 September 2024 (UTC))[reply]

Would Property:P527 work for this case? Ymblanter (talk) 19:03, 7 September 2024 (UTC)[reply]

Q107019458 and Q924673 merge?

Are The New York Herald (Q107019458) and New York Herald (Q924673) the same, or separate incarnations? RAN (talk) 00:24, 7 September 2024 (UTC)[reply]

The different identifiers are because the New York Herald was combined with the New York Sun to form The Sun and the New York herald (Q107019629) from February to September 1920 (https://www.loc.gov/item/sn83030273). The New York Herald (Q107019458) is from October 1920, when it became a separate newspaper again, to 1924. I don't know if the P1144/P4898 identifiers and 8-month gap are enough for separate items. Peter James (talk) 11:03, 7 September 2024 (UTC)[reply]
I added the dates to help distinguish the two. --RAN (talk) 04:30, 9 September 2024 (UTC)[reply]

How to find specific properties?

Hello, I am looking for some specific properties or a way to enter certain technical data in items. This includes, for example:

  • resolution (e.g. 6,000 x 4,000 px or 24 megapixels) or burst mode speed (e.g. 10 shots per second) for digital cameras
  • angle of view, maximum magnification, number of aperture blades and number of lens groups and elements for camera lenses
  • nominal impedance and signal-to-noise ratio for microphones

Unfortunately, I have not found any suitable properties, and I do not know whether there is a way to search for these other than by name (as there are no categories for items that could be searched - are there perhaps other ways?).

Thanks --Аныл Озташ (talk) 15:02, 8 September 2024 (UTC)[reply]

Besides looking for the properties directly, the other way is to look at existing items to see what properties those items use. Ideally, there would be a model item (P5869) for digital cameras/camera lenses/microphones where you can see what properties get used to model those. ChristianKl 15:27, 8 September 2024 (UTC)[reply]

Hello, I created an item for Die Ikonographie Palästinas/Israels und der Alte Orient (Q130261727), which is a series of books, or more accurately 4 volumes of a book, each containing a catalogue of ancient Near Eastern art from a different period. I wonder what the right instance of (P31) is for this item. From comparison with other similar series, I think the possibilities are: publication (Q732577), series (Q3511132), editorial collection (Q20655472), collection of sources (Q2122677), catalogue (Q2352616), collection (Q2668072), book series (Q277759) and research project (Q1298668). They all fit, but it seems too much to put all of them. How do I know which are the right items to choose here? פעמי-עליון (talk) 17:41, 8 September 2024 (UTC)[reply]

I guess it depends. If you create items for each of the 4 volumes, then I would suggest book series (Q277759). That only makes sense if they are needed as separate items. If there will be no items for each volume, I would suggest publication (Q732577). catalogue (Q2352616) could be added additionally if there are numbered items in the book.
collection of sources (Q2122677), editorial collection (Q20655472) and research project (Q1298668) don't seem to be ontologically correct. Carl Ha (talk) 20:43, 8 September 2024 (UTC)[reply]
Thank you! פעמי-עליון (talk) 16:26, 11 September 2024 (UTC)[reply]

Deletion request for Q95440281

The painting is not at all in the style of the credited author. It even looks as if the person who uploaded it to Commons may have painted it themselves. Carl Ha (talk) 20:34, 8 September 2024 (UTC)[reply]

@Carl Ha: an author can have several very different styles (compare Picasso's first and last paintings for an obvious case), and here the style is not that different (and the theme is the same). But indeed Three master on vivid Sea (Q95440281) has no references, which is bad. @Bukk, Bybbisch94: who could probably provide reliable sources. Cheers, VIGNERON (talk) 14:09, 9 September 2024 (UTC)[reply]
Yes, but that painting is of very low quality. There is no way an academically trained artist of the 19th century painted something like that. There are just too many basic flaws that somebody with professional training wouldn't make. 2A02:8109:B68C:B400:AC37:147E:C9F3:A2E5 20:35, 9 September 2024 (UTC)[reply]
Again, there could be a lot of explanations; it could be a preparatory work, a study, etc. Sadly, without references there is no way to know... Cheers, VIGNERON (talk) 09:43, 10 September 2024 (UTC)[reply]

Creating double entries

@Beleg Tâl: We have Death of an Old Pilot (Q130262449) and Joseph Henderson (1826-1890) obituary (Q114632701). I think the duplication is not needed. It seems that the rules at Wikisource are that if you add a hyperlink to the text, you have created a new "annotated" version that now requires a separate entry at both Wikidata and Wikisource. Wikisource has its own rules, but perhaps we can stop the unnecessary duplication on this end and merge the two. A quick ruling will decide if more are going to be made. I do not see any utility at all in duplicating every obituary because Wikisource insists on keeping two copies. RAN (talk) 04:28, 9 September 2024 (UTC)[reply]

As a cs.wikisource user, I would say that's one news article. Is another edition known (in another newspaper)? If not, these should be merged. JAn Dudík (talk) 12:27, 9 September 2024 (UTC)[reply]
Have a look at s:en:Wikisource:Wikidata#Works.
  • "Works" and "Editions" are not the same thing, and are modelled separately on WD. The obituary itself is a "work"; the copy of it in the October 8, 1890 edition of the New York Evening Post is an "edition".
  • If you only want to create one item on Wikidata, I recommend that you create the "edition" item (instance of (P31) of version, edition or translation (Q3331189)). If you only create the "work" item, you should not link it to enWS unless it is to a Versions page or a Translations page.
  • The version of the obituary that you annotated with wikilinks, can probably be considered the same "edition" as the clean version. This is kind of a grey area and might need to be discussed at s:en:WS:S. You will note that the annotated version is NOT currently linked to Wikidata, and I personally would leave it that way.
Beleg Tâl (talk) 14:00, 9 September 2024 (UTC)[reply]
I agree with Beleg Tâl. It's maybe a bit ridiculous for a mono-edition work, but that's the way we have done it on Wikidata for ages, see Wikidata:WikiProject_Books. Also, some data are duplicated on both items and should be only in one. Cheers, VIGNERON (talk) 14:12, 9 September 2024 (UTC)[reply]
  • The only annotation is linking some of the people to their Wikidata entries. Imagine if I clipped the obit from all three editions of that day's newspaper and created a Wikidata entry for each. The text has not changed. We have multiple copies of engravings from different museums because they have different wear and tear. --RAN (talk) 18:30, 10 September 2024 (UTC)[reply]

Empty page

Why is Special:MostInterwikis empty? It would be very useful to me to know which pages have the most interwikis... 151.95.216.228 07:55, 9 September 2024 (UTC)[reply]

Presumably because there are no other language versions of Wikidata; as the message at en:Special:MostInterwikis points out, the special page really counts interlanguage links, not (other) interwiki links, so (IMHO) it’s arguably misnamed. Lucas Werkmeister (WMDE) (talk) 10:44, 9 September 2024 (UTC)[reply]
If you want to get the items with the most sitelinks, you can do that with a query. Lucas Werkmeister (WMDE) (talk) 13:58, 9 September 2024 (UTC)[reply]
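For example, a minimal sketch of such a query (the sitelink-count cutoff is only there to keep it within the query timeout and can be adjusted):

import requests

# Sketch: the 20 items with the most sitelinks, roughly what
# Special:MostInterwikis shows on a content wiki.
QUERY = """
SELECT ?item ?itemLabel ?sitelinks WHERE {
  ?item wikibase:sitelinks ?sitelinks .
  FILTER(?sitelinks > 200)                  # arbitrary cutoff to avoid timeouts
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
ORDER BY DESC(?sitelinks)
LIMIT 20
"""

response = requests.get(
    "https://query.wikidata.org/sparql",
    params={"query": QUERY, "format": "json"},
    headers={"User-Agent": "sitelinks-sketch/0.1 (project chat example)"},
)
for row in response.json()["results"]["bindings"]:
    print(row["sitelinks"]["value"], row["itemLabel"]["value"])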
Most special pages are not designed for Wikidata (phab:T245818).
As for a replacement, Special:PagesWithProp works, too. --Matěj Suchánek (talk) 15:52, 9 September 2024 (UTC)[reply]

Statement for Property "Forced Labor"

On the new WD item Q130221049 (Dressmakers of Auschwitz), the top statement instance of (P31) could do with the qualifier Q705818 -- Deborahjay (talk) 07:55, 9 September 2024 (UTC)[reply]

Adding Para-swimming classification to a para-swimmer's WD item

Detailed this difficulty on the Talk:Q129813554 page. -- Deborahjay (talk) 08:00, 9 September 2024 (UTC)[reply]

Eldest child

How do we represent eldest child and/or a certain order of the child in the lineage? DaxServer (talk) 12:59, 9 September 2024 (UTC)[reply]

@DaxServer: just add the date of birth (P569) on each child item; then you can easily retrieve the eldest one. Cheers, VIGNERON (talk) 13:48, 9 September 2024 (UTC)[reply]
@VIGNERON The date of birth (P569) is not known, only that he's the eldest one among the six sons DaxServer (talk) 14:29, 9 September 2024 (UTC)[reply]
@DaxServer: ah, I see. Then you could use series ordinal (P1545) = 1, 2, 3 etc. as a qualifier, like on Q7810#q7810$2B9F1E2C-7583-4DD8-8CEB-F367FC0641E1 for instance. Cdlt, VIGNERON (talk) 15:01, 9 September 2024 (UTC)[reply]
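A minimal sketch of how that ordering can be read back later, under the assumption that the series ordinal (P1545) qualifiers sit on the father's child (P40) statements (the item ID is only a stand-in; substitute the relevant one):

import requests

# Sketch: list children ordered by the series ordinal (P1545) qualifier on the
# child (P40) statements. wd:Q7810 is a stand-in item here.
QUERY = """
SELECT ?childLabel ?ordinal WHERE {
  wd:Q7810 p:P40 ?statement .
  ?statement ps:P40 ?child .
  OPTIONAL { ?statement pq:P1545 ?ordinal . }
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
ORDER BY xsd:integer(?ordinal)
"""

response = requests.get(
    "https://query.wikidata.org/sparql",
    params={"query": QUERY, "format": "json"},
    headers={"User-Agent": "series-ordinal-sketch/0.1"},
)
for row in response.json()["results"]["bindings"]:
    print(row.get("ordinal", {}).get("value", "?"), row["childLabel"]["value"])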

Wikidata weekly summary #644

Issues with rules for WDQS graph split

In case no-one is watching that page, please note that I raised some potential issues at Wikidata talk:SPARQL query service/WDQS graph split/Rules. Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 15:57, 10 September 2024 (UTC)[reply]

How to reference an old book

I want to use c:Category:國立北京大學畢業同學錄 as a reference for a statement. How do I do that? RZuo (talk) 06:16, 11 September 2024 (UTC)[reply]

You should create an item for that book. GZWDer (talk) 11:24, 11 September 2024 (UTC)[reply]

Dump mirrors wanted

WMF is looking for more mirror hosts of the dumps. They also rate limit downloads because the demand is high and few mirrors exist. See https://dumps.wikimedia.org/ So9q (talk) 10:13, 11 September 2024 (UTC)[reply]

New WikiProject CouchDb

I launched a new project Wikidata:WikiProject_Couchdb. Feel free to join and help explore CouchDb as a new type of scalable backend for Wikidata. So9q (talk) 17:10, 11 September 2024 (UTC)[reply]
