Hi Magnus, could you take a look at this and see if there is an easy fix for Cradle? Thanks!
https://github.com/magnusmanske/cradle/issues/15
Previous discussion was archived at User talk:Magnus Manske/Archive 9 on 2024-01-01.
Are you sure about this? The BioMed Central journal ID (P13124) identifiers appear to be alphanumeric, whereas the catalog IDs are numeric.
I made a new catalog with a new scrape, linked it to the property, synced from Wikidata, and set the remaining ones from the old catalog where available, preserving user and timestamp.
Not sure why it's only 300 and not 328 though.
For example: World Allergy Organization Journal is no longer published by BMC. The journal is continuing in cooperation with a new publisher, Elsevier.
Hi, could you please remove the autoscrape settings from these catalogues? I've been monitoring them for a long time; their autoscrapers stopped working long ago, but people keep restarting the autoscrape jobs, which then hang in the queue for quite a long time without any results.
There is also still the problem I mentioned in this post: I launched autoscrape start through the API for a set of catalogues, and the ones where no autoscraper exists hang in the queue with constant restarts via the schedule. The catalogues are:
This probably needs to be handled by returning a response and stopping the task when autoscrape is requested but cannot be executed.
(I also tried running a "pause" task in https://mix-n-match.toolforge.org/#/jobs/5923 , which was also saved in the task list. This probably needs handling too, to avoid potential vandalism.)
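The guard suggested above might look something like the following sketch. All names here (Job, has_scraper, queue_autoscrape) are illustrative assumptions for the sake of the example, not the actual Mix'n'match code:

```python
# Sketch: before (re)queueing an autoscrape job, check that the catalogue
# actually has a scraper configured, and fail the job with a message instead
# of letting it hang in the queue with constant restarts.
from dataclasses import dataclass


@dataclass
class Job:
    catalog_id: int
    action: str
    status: str = "TODO"
    note: str = ""


def has_scraper(catalog_id: int, scraper_configs: set[int]) -> bool:
    """True if an autoscrape configuration exists for this catalogue."""
    return catalog_id in scraper_configs


def queue_autoscrape(job: Job, scraper_configs: set[int]) -> Job:
    """Reject autoscrape jobs for catalogues without a scraper config."""
    if job.action == "autoscrape" and not has_scraper(job.catalog_id, scraper_configs):
        job.status = "FAILED"
        job.note = "No autoscrape configuration for this catalogue"
    else:
        job.status = "RUNNING"
    return job
```

With a check like this, a scheduled autoscrape for a catalogue that has no scraper would fail immediately with an explanatory note rather than occupying the queue.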
Done, and thanks for the list. I made a new job status "BLOCKED" that cannot be started from the web interface.
Thanks for the quick resolution. Btw, I also saw a PAUSE status in the queue a few days ago. I think it would be useful to expose it in the web interface (or an ABORT button) so users can stop at least their own tasks, to avoid misclicks or unnecessary schedules, as in the case of the music genres above.
Hi. Could you please also block autoscrape on these catalogues? Most of them have either changed their layout and no longer work, or have been stuck for over a year with no new results.
UPD: Oops, it appears I've reported them all above. They've probably been unblocked and are back in the queue?
And by the way, is it possible to remove the "automatch by search" task from the regular repetition here https://mix-n-match.toolforge.org/#/catalog/3789 and https://mix-n-match.toolforge.org/#/jobs/5195 ? I hit "purge automatches" every quarter, since without types the syncing is more harmful (due to clogging) than helpful. It would probably be useful to have a button to remove regular tasks from the schedule in such cases (at least for those with admin rights).
Hi, I can't see any problem with the https://mix-n-match.toolforge.org/#/catalog/1011 WNS number catalog. The URL and IDs seem to be the same. The website is updated periodically, most recently on 9 October 2024. The site now has 116,680 postage stamps registered, but the catalog contains only 90,241. What can the problem be? Thank you.
Hi, Gerwoman. Autoscrape doesn't work in this catalogue. Given that the catalogue is relatively small, autoscraping should take no more than half an hour, in my experience. But in this catalogue, autoscrape regularly lands in the queue and hangs there for days or weeks without adding anything new, and has done so for years. This can be verified as follows: the latest Mix'n'match entry ID in this catalogue is https://mix-n-match.toolforge.org/#/entry/91669282, while the latest entries in recent catalogues have IDs around 172699175. #91m vs. #172m means that none of the autoscrape jobs over the last 2-3 years has added a single new entry to this catalogue. Thus it does not work and constantly hangs in the queue.
If you know that IDs can still be autocollected from the site, you just need to reconfigure the autoscraper from scratch by entering 1011 as the Catalog ID here https://mix-n-match.toolforge.org/#/scraper/new and configuring everything as you did the first time. If you can't reconfigure it, it's better to just disable the autoscraper.
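The ID-comparison check described above can be sketched as a small helper. The argument relies on Mix'n'match entry IDs being assigned sequentially; the ratio threshold below is purely an illustrative assumption, not an actual Mix'n'match heuristic:

```python
# Heuristic from the discussion above: if a catalogue's newest entry ID is far
# below the newest ID assigned anywhere in Mix'n'match, its autoscraper has
# not added entries in a long time.

def autoscrape_looks_stale(newest_catalog_entry_id: int,
                           newest_global_entry_id: int,
                           ratio_threshold: float = 0.9) -> bool:
    """Return True if the catalogue's newest entry is much older than the
    newest entry created anywhere (entry IDs increase monotonically)."""
    return newest_catalog_entry_id < ratio_threshold * newest_global_entry_id


# The IDs quoted above: catalogue 1011's newest entry vs. a recent global ID.
print(autoscrape_looks_stale(91669282, 172699175))  # → True
```

With those numbers the check flags the catalogue as stale, matching the conclusion drawn above; a catalogue whose newest entry is close to the global maximum would not be flagged.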
Thank you Solidest, but I can't remember how I configured it the first time. I don't have access to the URL or regex...
Yeah, it worked, the catalogue entries are now at 116k. Thanks! I've crossed that catalogue off the list.
https://www.wikidata.org/w/index.php?title=Q55862034&action=history
The users that made the duplicate visible via DDB/GND IDs:
have been blocked.
https://www.wikidata.org/w/index.php?title=Q113446597&action=history
Magister should probably not be in the name field, cf. GND - not doing it that way
https://www.wikidata.org/w/index.php?title=Q108811951&action=history
The users that made the duplicate visible via DDB/GND IDs:
have been blocked.
https://www.wikidata.org/w/index.php?title=Q119999189&action=history
The users that made the duplicate visible via DDB/GND IDs:
have been blocked.
Sir,
Your bot just generated Q130738075 "P. australis" on 30 Oct 2024
I think in response to the Wikispecies page which I had made a few days ago [the bot's Wikidata item linked to that]:
https://species.wikimedia.org/wiki/Phocyx_australis
However, a Wikidata item already existed: Q130650095 Phocyx australis.
I had edited that one further, but it seems I forgot to add the link to the Wikispecies page.
I'm curious why your bot failed to see the existing Wikidata item and created a new one?
I'm also curious why the bot did not at least add the genus name. Such a minimal name makes it nearly impossible to find or associate the item with the correct other information - there are likely tens of thousands of taxa abbreviated "P. australis"!
Can you fix the tool to include the ISNI on items about humans? The ISNI quality for items about organizations may not be sufficient, but for humans it should be.