
Manual talk:Job queue

The following discussion has been transferred from Meta-Wiki.
Any user names refer to users of that site, who are not necessarily users of MediaWiki.org (even if they share the same username).

Meaning of the numbers


Is 6,999 a long or a short job queue? Are there rough estimates of how long it takes to work through a job queue of length X? I guess there is no perfect answer, but some examples would already help (like what the maximal length was and how long it took to work through that). Kusma 21:40, 4 June 2006 (UTC)Reply

By default, one page request to the wiki will take one item out of the queue. So it'll take 6,999 page requests to empty it; how long that takes depends on how many visitors the wiki has. On the English Wikipedia, this will be a few minutes, I guess. -- Duesentrieb 23:51, 4 June 2006 (UTC)Reply
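For reference, the per-request rate is controlled by $wgJobRunRate in LocalSettings.php; a minimal sketch (values are examples, not recommendations):

```php
<?php
// LocalSettings.php (fragment): run up to 5 jobs per page request
// instead of the default of 1, so the queue drains faster.
$wgJobRunRate = 5;

// On a busy wiki you can instead run a job on only ~10% of requests:
// $wgJobRunRate = 0.1;
```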

Is the queue common to all WMF projects, or does each have a queue of its own? \Mike(z) 16:59, 1 November 2007 (UTC)Reply

One job per request?


This seems not to be true with version 1.6.7. I have a private wiki and I have tested the job queue by accessing my Special:Statistics page and watching the queue. It seems to me that every request runs several sub-requests, maybe while the CSS is being loaded (MonoBook.php):

 @import "/w/index.php?title=MediaWiki:Common.css&action=raw&ctype=text/css&smaxage=2678400";

I had to use something like $wgJobRunRate = 0.1; to have only one job run each time I pressed F5 to reload.

--jsimlo 13:48, 21 July 2006 (UTC)Reply

How to empty job queue?


On the Thai Wikipedia, the job queue is now at about 14,000 and never decreases. Does anyone know how to empty the job queue? Please see w:th:Special:Statistics. --Manop 21:53, 7 August 2006 (UTC)Reply

Run maintenance/runJobs.php --68.142.14.71 14:57, 31 August 2006 (UTC)Reply
It doesn't work for me; neither http://th.wikipedia.org/w/runJobs.php, http://th.wikipedia.org/w/maintenance/runJobs.php, nor http://th.wikipedia.org/maintenance/runJobs.php works. --Manop 21:20, 1 September 2006 (UTC)Reply
Because you are not (and probably never will be :) allowed to do it. Running such scripts is reserved for the system administrators only. --jsimlo 21:31, 1 September 2006 (UTC)Reply
Thank you, the queue is empty now --Manop 17:08, 6 September 2006 (UTC)Reply
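For reference, runJobs.php is a command-line maintenance script and cannot be invoked through a URL; a sketch of running it on the server (the path is an example):

```shell
# Run from the wiki's installation directory on the server itself;
# this requires shell access, not just web access.
cd /var/www/wiki
php maintenance/runJobs.php
```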

Changing noinclude text on a template


On Commons we have some very very highly used templates like commons:template:GFDL.

I just added some interwiki links to the noinclude section and then the job queue is over a million. :o

I would have thought changes only inside a noinclude section shouldn't really add jobs to the job queue? Or is that a special case not worth implementing separately?

--pfctdayelise 06:16, 25 October 2006 (UTC)Reply

I think this sort of thing is starting to be a real concern. At the time of writing, the job queue is about eight hundred thousand, and has been for a while. I have no idea what in particular was responsible (not me, I promise!), but one suspects this is more likely due to a small number of edits to very-high-use templates than to a huge number of separate edits. Perhaps some sort of priority queue would also be better, so that the larger jobs go to the back and swamp small ones less. If it's heuristically possible to guess which are more likely to have a significant effect, that would be handy too. Alai 05:15, 11 November 2006 (UTC)Reply

I have a related question. Say in the noinclude part of template A there is a call to template B. Now, if I change template B, will all pages including A be added to the job queue? --Paul Pogonyshev 21:07, 6 December 2006 (UTC)Reply


A record?


I was going to ask if 146,716 was the longest the job queue had ever been, as I've not seen it above 35,000 ish before. Then it leapt up to 315,451 - is there something odd going on? To almost quote Withnail (or I): "I demand to have more graphs!" 193.134.170.35 14:19, 13 December 2006 (UTC)Reply

That's quite impressive/alarming, all right. It seemed to be regularly in six figures for a while, but aside from alarming graphs, what would be interesting to see is some analysis of why the job queue is as high as it is on such occasions. Some sort of per-edit cost attribution, ideally, especially if it's something preventable, such as people making "documentation tweaks" to the transcluded code of high-use templates, and other silliness. Alai 05:53, 14 December 2006 (UTC)Reply
Usually, because someone edits a template included in half the wiki, or something similar. I've seen the en.wikipedia job queue rise to 900,000 items on occasion, usually as the result of several users editing high-profile templates at the same time. Titoxd(?!?) 05:10, 16 January 2007 (UTC)Reply
Again, it's hit 141,290. Why are there no records for this? Graphs really would be fantastic, there must be some way to do this? 129.215.149.97 11:12, 25 February 2007 (UTC)Reply
0.9M is... impressive. I understand that's the likely cause, but unless we can find out which templates are being edited to cause this, it doesn't really help us, does it? (If we knew which, we might be able to, say, upgrade their protection, dope-slap the people editing them unduly, restructure them to not be such severe "single points of failure", etc, etc.) Alai 04:44, 14 March 2007 (UTC)Reply

It was about 700,000 last night, and has been over 300,000 all day today. That's either a lot of template edits, or some templates that are absurdly heavily used. If there are any devs hanging around here, could they comment on the feasibility of adding a "job queue by top ten templates" breakdown, or something along those lines? Alai 21:13, 18 March 2007 (UTC)Reply

To reduce these "single points of failure", the wiki I usually work on uses the code {{SUBST:<choose><option>{{High-Risk Template1}}</option><option>{{High-Risk Template2}}</option><option>{{High-Risk Template3}}</option>....</choose> We have around 30 of these "High-Risk TemplateN" statements. This way, we separate the load into 30 identical templates. We edit one, and wait for the job queue to finish it. Then we do the next one, then the next, then the next, etc. This way, we can give the server breaks to do important edits, and there's always the ability to stop halfway if we need to. I think the <choose> needs an extra extension, but it's available by default on Wikia, where my usual wiki is. Thought this might be an interesting solution to share. Timeroot 00:47, 27 January 2009 (UTC)Reply

Over 2000k


Right now it's over 2000k, and there's no sign it's decreasing. This is amazing because it's more than the number of articles (I know the job queue includes user pages, etc., but still ... ). Some sort of breakdown of how these jobs were initiated would be great. CMummert 16:19, 4 April 2007 (UTC)Reply

See also Wikipedia:Village pump (technical)#Job queue. - Jc37 23:14, 4 April 2007 (UTC)Reply
You mean Wikipedia:Village pump (technical)/Archive#Job_queue, revision 124277233 --193.11.177.69 23:25, 19 September 2007 (UTC)Reply
As for how it can be larger than the number of articles, perhaps some of the transclusions are to some of the same pages. For example, in the discussion I linked to above, there are some user pages which have (due to template transclusion) the categories Wikipedian programmer, User bas, User bas-1. If I remove the parent cats from the template, then two separate actions will be listed in the queue (I presume), just for that page alone. I am, of course, guessing : ) - Jc37 23:14, 4 April 2007 (UTC)Reply
Currently 2,302,658 ... seems awful big. -- ProveIt 23:52, 4 April 2007 (UTC)Reply
How long does it take for the system to work through this sort of job queue? My bot's starting to report high rates of outdated category information. --67.185.172.158 02:15, 7 April 2007 (UTC)Reply
It can take forever - if you have no traffic on your wiki. I.e. it depends on the amount of traffic you have on your wiki. --Sebastian.Dietrich 08:12, 31 January 2008 (UTC)Reply

I'm going for a new record. Right now I've got it up to 3.7M. I've heard reports that some wikis have hit 4M already! (Commons and en.wp) Hint: lots and lots of edits to your most-used templates in a very short period of time is the best way to do it. Another idea is to edit a bunch of templates, then use Special:Nuke to revert them all, and repeat as needed. 75.4.160.23 10:57, 17 March 2008 (UTC)Reply

Let me see if I have this right


It's good to get your facts straight before calling design into question. So let me see if I understand this system:

  • Edit a template and save it and the system puts all the necessary page updates (Pages which transclude the template) into a job queue.
  • MediaWiki's job queue empties based upon surfing activity. "By default, each time a request runs, one job is taken from the job queue and executed."

This means that the more pages viewed by visitors, the more jobs are taken from the queue. One view, one job! So MediaWiki adds load during high-load periods!

Oh, we can tune it down with that exciting variable, but that is the way it's designed, right? It depends on surfers visiting pages for a job to be taken off the shelf in the dusty cupboard called a queue.

Now, if I'm wrong then my comments are also wrong. Take that as a given:

  • Any sane designer would then run a cron job that looked at load factors and emptied the queue fast in low load times and slowed down in high load times.
Since cron jobs don't exist by default and can't really be set up by a PHP process running as a web server user, they aren't used by default -- a run-as-you-go method is used as a least-common-denominator, works-everywhere default. If you have the ability to set up cron jobs on your server, you can in fact set up runJobs.php as a cron job. --brion 17:25, 22 December 2008 (UTC)Reply
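Brion's cron suggestion might look like this (a hedged sketch; the path and schedule are examples):

```shell
# Example crontab entry: process up to 1000 jobs every 15 minutes,
# instead of piggybacking job execution on page requests.
*/15 * * * * php /var/www/wiki/maintenance/runJobs.php --maxjobs 1000 > /dev/null 2>&1
```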
  • No visitors = no queue emptying, but that actually does not matter at all, because they don't need to see anything if they aren't there.

I have several embryo wikis using this software, solely because of the user interface. Last night I added approximately 3,000 items to the job queue as an experiment. I have a template that is on 50% of my pages, and I edited it. I wanted to see what happened, because I did not believe the way the queue worked.

My server was lazing about drinking piña coladas and generally getting a tan. There were minimal wiki visitors (I deliberately chose my lowest-load period), and it didn't even break a sweat; it just asked for more sun cream and a refill of its glass. It should have taken the low-load opportunity to empty the queue like a rocket!

Oh no. 18 hours!

So, where do I raise this?

No point going to Bugzilla, but this is the most unusual, cockamamie (is that how you spell it) piece of alleged design I've seen since I started as a programmer in Algol in 1975!

The justification will be a pseudo justification of "We need this to run on servers where the admin has no access to create cron jobs" or "But it doesn't just run in a *IX environment", but that just isn't an answer, it's a justification for a massive kluge.

Now, let's just suppose I'm wrong. Then I apologise for my current comments and ask "Why is it so slow? Why doesn't it take advantage of low load periods?"

And if I am right? Guys, some designer needs to fall on his sword.

Ah yes. I don't have an account here, sorry. 91.84.170.194 16:40, 29 November 2007 (UTC)Reply

Seems as if you are right. I changed a template yesterday evening --> job queue length was ~3,000. And after ~9 hours (with no traffic and the server just idling around), still ~3,000.
Job queues hold jobs, and jobs are asynchronous. Jobs should therefore always be executed in a background task. One could always assign this task a higher priority when the queue gets too long...
--Sebastian.Dietrich 08:10, 31 January 2008 (UTC)Reply
The issue is not that the jobs are held in a queue. That is good design. The issue is the idiotic design that says "empty the queue fastest when the site is at its busiest!". This is imbecilic design. In a commercial operation I would be retraining the idiot who designed it. 82.152.248.89 11:50, 2 April 2008 (UTC)Reply

Designers


Sane designers have better things to do than implement a load-aware scheduling system that requires no installation effort and works in safe mode PHP. Use cron, fool. -- Tim Starling 05:26, 21 December 2008 (UTC)Reply

I can't make up my mind if that is irony or rudeness. So I'll assume you are from the USA. That means it's rudeness. And pretty much shows you failed to understand the point being made. Team America Feck yeah! Timtrent 22:28, 26 June 2010 (UTC)Reply

Semi-protection


Could somebody semi-protect this page, because there's often IP vandalism here? -Fujnky 05:41, 18 June 2008 (UTC)Reply

Spanish


Hi. I've tried to reach this page from the Spanish version of Wikipedia and all I got was a redirect page to this one, where a message tells me not to edit it. It seems something went wrong while translating these contents into Spanish. Is it possible for me to translate it, show it to you, and later copy it into the corresponding Spanish page? Thanks. --Dalton2 11:20, 4 July 2008 (UTC)Reply

Hebrew


Do you want a translation of this page into Hebrew? I will be more than happy to help. Contact me at my talk page. 82.81.44.31 19:29, 7 December 2008 (UTC)Reply

Who kills runJobs.php process?


The length of the queue on my wiki is about 2,000-3,000. I tried to run runJobs.php, but after a couple of seconds the process finishes and writes "Killed" to the console. The task removed only about a hundred items from the queue.

What is it, and how can I remove all jobs from the queue? --Dnikitin 02:28, 5 March 2009 (UTC)Reply
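The "Killed" message typically comes from the kernel's OOM killer or a memory limit terminating a long-running process; one hedged workaround is to run the queue in short batches so each process stays small (paths and numbers are examples):

```shell
# Process the queue 100 jobs at a time; if one batch is killed,
# the loop simply starts a fresh process and resumes.
cd /var/www/wiki
while [ "$(php maintenance/showJobs.php)" -gt 0 ]; do
    php maintenance/runJobs.php --maxjobs 100 --memory-limit 256M
done
```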

showJobs.php and Special:Statistics


Sometimes showJobs.php and Special:Statistics return different values. showJobs.php returns 0, but Special:Statistics shows 7 or 8 jobs in the queue. Is this a feature? If not, how can I fix it? --Dnikitin 15:11, 6 March 2009 (UTC)Reply

Me too: showJobs = 0, Special:Statistics = 21. Why are they different? Benzwu 23:58, 18 April 2011 (UTC)Reply
Me three, can someone please clear this up?

Running runJobs.php concurrently


I imported content into my local development version of MW. I then ran maintenance/refreshLinks.php. Now I have about 250,000 jobs in the job queue. I ran runJobs.php, but noticed it only processed about 1,000 jobs/hour. This means it would take about 250 hours (over 10 days) to clear the job queue.

Is it possible to run runJobs.php concurrently? That is, start up, say, 10 instances of runJobs.php. Or is there no concurrency control on the job queue, meaning doing this could clobber the job queue state? Dnessett 20:29, 27 November 2009 (UTC)Reply

Answering my own question, it appears there is concurrency control in removing jobs from the job queue (I had to look at the code). So, it should be possible to run multiple instances of runJobs.php. Dnessett 18:10, 11 December 2009 (UTC)Reply
From the runJobs.php script: procs: Number of processes to use. The syntax is: php maintenance/runJobs.php --conf LocalSettings.php --procs=2 (for example, to run 2 simultaneous processes). But also, if you run it in two different shells (as I did before I discovered this option, like 5 minutes ago XD), the script does what you expect and processes the jobs while skipping the ones that are running or have already run; though I think that --procs is much nicer :) --194.140.58.44 14:16, 12 May 2010 (UTC)Reply

User ID used when running jobs


What user id does the script running the jobs use?

Will the changes be logged?

This might be specific to SMW, but every time a page is moved, an SMW extension to the Job class logs a change in the page's change log and triggers watchlist emails. I've seen it use user IDs of people who did not touch the page at all, and also use "anonymous user".

This is the errant SMW class: SMW_UpdateLinksAfterMoveJob

Weird... --magalabastro

Jobqueue stuck in loop


Hi. It seems my job queue is stuck in a loop. Due to earlier performance issues we reduced the job rate to 0.1 and run a cron job at night with maxJobs = 10000. However, when I look into the logfile I find the same articles, in exactly the same order, repeating every 6 minutes. After 10000 jobs it stops (although there are only about a thousand unique jobs), but the number of jobs in the queue never really goes down anymore. How would I proceed to find the problem or (even better) a solution? Would a corrupt database be a possible cause? Would running update.php be a possible solution in that case? I'm rather hopeless :( Thanks in advance for any help. MW 1.20.6 SMW 1.8.0.1 --Simon Fecke (talk) 10:51, 8 November 2013 (UTC)Reply


Reason:

I'm using a script to check whether or not a sub page with a special title exists.

Say the pages are: main/status reports/2013-11-15

If the sub page main/status reports/2013-11-15 doesn't exist, the main page is put into a category (status report:overdue), and it is supposed to be removed from that category as soon as the sub page is created. Right now, the problem is that I first have to do a null edit to the template used before I can see the page in the category list.

my code: {{#ifexist: {{SUBJECTPAGENAME}}/Status Report/{{#time: Y-m-d |{{#expr:{{#var:d}}}}.{{#var:m}}.{{#var:y}}}} |<!-- -->[[{{SUBJECTPAGENAME}}/Status Report/{{#time: Y-m-d | {{#expr:{{#var:d}}}}.{{#var:m}}.{{#var:y}}}} | {{#time: Y-m-d | {{#expr:{{#var:d}}}}.{{#var:m}}.{{#var:y}}}}]] | <!-- -->[{{fullurl:{{SUBJECTPAGENAME}}/Statusbericht/{{#time: Y-m-d | {{#expr:{{#var:d}}}}.{{#var:m}}.{{#var:y}}}}|action=edit&preload=Template:Status Report}} Create Status Report]<!-- --><includeonly>{{#ifexpr: {{LOCALTIMESTAMP}} > {{#time: YmdHis | {{#expr:{{#var:d}}+1}}.{{#var:m}}.{{#var:y}}}}| [[Category:Status Report:Overdue]] |}}</includeonly><!-- -->}}

Basically, it would be enough to refresh the links to/from the main page. Any suggestions?

--Valentin
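One way to force the link refresh without a manual null edit might be the purge API with forcelinkupdate (a hedged sketch; host and page title are examples):

```shell
# Ask MediaWiki to re-parse the page and update its link tables and
# categories, equivalent to a null edit. Host and title are examples.
curl -X POST "https://wiki.example.org/w/api.php" \
     --data "action=purge&titles=Main/status_reports&forcelinkupdate=1&format=json"
```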

Category:Pages with script errors


When a page (or many pages) includes a template that invokes a Lua module, and the module has an error, the page (or pages) is categorized in "Pages with script errors". However when the error is fixed in the module the page remains in the category (until the page itself is edited). Is there a way to fix this behaviour? --Rotpunkt (talk) 19:25, 4 March 2014 (UTC)Reply
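A possible workaround (an assumption, not a confirmed fix): force the affected pages to be re-parsed so the tracking category is recalculated, e.g. with the refreshLinks maintenance script (path is an example):

```shell
# Re-parse pages and rebuild their link/category records wiki-wide.
# This can be slow on large wikis.
php /var/www/wiki/maintenance/refreshLinks.php
```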

How to avoid multiple runJobs.php from cron


Turns out Special:Code/MediaWiki/81519#c13938 has never found an answer and is still current. I can't find one in docs nor above, either. --Nemo 22:17, 19 September 2014 (UTC)Reply

Special:RunJobs over HTTPS


I am using MW 1.25.1. I see that the per-request job queue trigger is a POST to http://<host>/Special:RunJobs. Our wiki runs under HTTPS with a named virtual host, but the POST goes to http and the local hostname.

Looking at includes/MediaWiki.php for RunJobs (about line 642), I see that there is a call to fsockopen() with the name of the host. There are a couple of problems here.

  1. $info['host'] is the name of the host, not the name of the wiki from $wgServer. On servers that use virtual hosting, the hostname and $wgServer may be different, so fsockopen() may result in a request that routes to the wrong virtual server.
  2. The call to fsockopen() does not account for $info['scheme'], which may be https.

Rather than attempt to fix this right now, I am electing to set $wgJobRunRate to 0 and set a cron job for php maintenance/runJobs.php --maxjobs NNN and figure out the balance between frequency and maxjobs to provide rapid job execution while keeping performance high.

I believe the longer-term solution is to rework the call to fsockopen() to either use a different method, or at least to use the info from $wgServer more closely.--Chiefgeek157 (talk) 17:27, 23 June 2015 (UTC)Reply

An update with more testing. I tried setting $wgRunJobsAsync = false; to force in-process execution like older versions of MediaWiki. This did not work: the job queue was not processed during requests. So I currently have to use the cron job with --maxjobs 100 from my previous post, plus a nightly job without maxjobs to clear any remaining queue.
I would really like to figure out how to get the asynchronous invocation to work correctly in MediaWiki.php for a server with $wgServer set to https://mydnsalias.example.com. --Chiefgeek157 (talk) 19:35, 23 June 2015 (UTC)Reply
Setting a debug log may help you diagnose the problem for $wgRunJobsAsync = false;. The problem with $wgServer should be reported on phabricator. --Ciencia Al Poder (talk) 20:48, 13 July 2015 (UTC)Reply
Created a Phabricator task: https://phabricator.wikimedia.org/T107290. --Chiefgeek157 (talk) 15:08, 29 July 2015 (UTC)Reply
This issue might also be related if you redirect HTTP to HTTPS: https://phabricator.wikimedia.org/T68485 --84.119.132.202 07:38, 6 November 2015 (UTC)Reply
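The cron-based workaround described in this thread might be sketched like this (schedules and paths are examples; the $wgJobRunRate = 0; setting goes in LocalSettings.php):

```shell
# crontab: frequent small batches, plus a nightly unrestricted run
# to drain whatever remains in the queue.
*/5 * * * * php /var/www/wiki/maintenance/runJobs.php --maxjobs 100 > /dev/null 2>&1
0 3 * * * php /var/www/wiki/maintenance/runJobs.php > /dev/null 2>&1
```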

Giant Job table


Hello. My job table contains (don't laugh) 13M records. Well, I forgot to run runJobs for a (long) while. Now even runJobs would take days to empty it. I wonder if I can empty the table from the database, or if doing so may corrupt it. My problem is that it has filled the disk and I cannot administer the server very well ... Thank you. --Gborgonovo (talk) 10:50, 24 February 2016 (UTC)Reply

There is probably a very large number of duplicate jobs there. Executing the job queue would catch them, but it still doesn't make sense to trigger things like email notifications for changes made a year ago, and the like... You can empty the table and then run Manual:RefreshLinks.php, since most jobs would be of that kind. --Ciencia Al Poder (talk) 10:28, 25 February 2016 (UTC)Reply
Oh, that is dangerous advice. While it may be true that in a default MediaWiki installation most of the entries are cache invalidations and/or updates to the link tables, there may also be other things in the queue, including page content changes. Removing those items effectively prevents the changes, which the user has already made, from actually happening inside the database. In other words: you would effectively lose (future) revisions. So without knowing what is in this table, I would not recommend just killing it. --87.123.45.48 21:14, 23 August 2016 (UTC)Reply
You would never lose revisions from the database just from clearing out the job queue. The danger is that things like backlinks and categories could be out-of-sync, requiring that various maintenance scripts be run to update everything (which would, in turn, fill the job queue up again). Robin Hood  (talk) 22:32, 23 August 2016 (UTC)Reply
It is not so simple. The refreshLinks.php script, which you mean, can update the links tables again, but that is by far not all there is to it. E.g. Extension:ReplaceText stores its planned replacements inside the job queue to get them executed at a later point. If you remove these entries, the corresponding content changes, although commissioned through the wiki interface, will never be made, effectively making you lose the (future) revisions, as these never get created. --87.123.27.202 09:59, 24 August 2016 (UTC)Reply
If an extension uses the job queue in an unusual way, that's its business. I would hope that it would warn server admins that it does that, though. In Core MediaWiki, the only things I can think of where you might lose data irrecoverably are private e-mails and chunked uploading. I'm not saying I recommend the procedure, not by a long shot, but there should be very little completely irrecoverable data in the queue in most common scenarios. Robin Hood  (talk) 16:46, 24 August 2016 (UTC)Reply
Actually, the job table is not made to be purged. It is not one of the tables which usually hold only temporary data, where it is not a problem if you lose it. You say it yourself: deleting things from the job table can make you lose data. Preventing this, however, is possible, and in fact simple: the job_cmd field inside the job table contains, for each row, a key to the corresponding job type. With this key it is possible to determine what type of job each row holds. Based on the different values in the job_cmd column, you can DELETE only those rows which are e.g. updating the links tables. This should make the very most rows go away already, without losing anything else. --87.123.27.202 19:15, 24 August 2016 (UTC)Reply
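The selective deletion described above might look like this (a hedged sketch; it assumes the default table name with no prefix, and the core job types refreshLinks and htmlCacheUpdate):

```sql
-- First inspect which job types are queued:
SELECT job_cmd, COUNT(*) FROM job GROUP BY job_cmd;

-- Then delete only the link/cache update jobs:
DELETE FROM job WHERE job_cmd IN ('refreshLinks', 'htmlCacheUpdate');
```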
Agreed, that would be the better way to do it. Ideally, the job queue would never contain duplicate jobs in the first place (which is mentioned on the job queue redesign page), but I gather that's a design choice, so unless someone comes up with a better approach, that's not going to change. Robin Hood  (talk) 00:55, 25 August 2016 (UTC)Reply

How to clean queued abandoned jobs?


The output of showJobs.php: refreshLinks: 0 queued; 397 claimed (0 active, 397 abandoned); 0 delayed

In the API:

    {
        "batchcomplete": "",
        "query": {
            "statistics": {
                "pages": 9101,
                "articles": 1840,
                "edits": 33503,
                "images": 3871,
                "users": 3647,
                "activeusers": 55,
                "admins": 18,
                "jobs": 397
            }
        }
    }

How can I empty the queued jobs, despite their abandoned status?
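If the wiki's MediaWiki version ships maintenance/manageJobs.php (an assumption; check your maintenance/ directory), queued jobs of one type can be removed with it:

```shell
# Delete all queued/abandoned jobs of the refreshLinks type.
php /var/www/wiki/maintenance/manageJobs.php --type refreshLinks --action delete
```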

TypeError from line 47 of includes/jobqueue/jobs/RefreshLinksJob.php


I consistently get errors in one of my wikis, "TypeError from line 47 of includes/jobqueue/jobs/RefreshLinksJob.php", running MW 1.31.0

I managed to work around it by changing ./includes/jobqueue/Job.php

 $job = new $handler( $title, $params );

to

 $job = new $handler( $title, array( $params ) );

hth - Arent (talk) 13:58, 7 August 2018 (UTC)Reply

You should report this as a bug. The fix doesn't look right, however, and may cause problems with all other jobs. --Ciencia Al Poder (talk) 20:01, 7 August 2018 (UTC)Reply
T201541 - Arent (talk) 19:37, 8 August 2018 (UTC)Reply

RefreshLinksJob TypeError


runJobs fails with the following error (in MW 1.31):

  # php maintenance/runJobs.php
  ...
  [b0221408193b822913c1bda6] [no req]   TypeError from line 136 of /var/www/html/includes/jobqueue/jobs/RefreshLinksJob.php: Argument 1 passed to RefreshLinksJob::runForTitle() must be an instance of Title, null given, called in /var/www/html/includes/jobqueue/jobs/RefreshLinksJob.php on line 122
  Backtrace:
  #0 /var/www/html/includes/jobqueue/jobs/RefreshLinksJob.php(122): RefreshLinksJob->runForTitle(NULL)
  #1 /var/www/html/includes/jobqueue/JobRunner.php(296): RefreshLinksJob->run()
  #2 /var/www/html/includes/jobqueue/JobRunner.php(193): JobRunner->executeJob(RefreshLinksJob, Wikimedia\Rdbms\LBFactorySimple, BufferingStatsdDataFactory, integer)
  #3 /var/www/html/maintenance/runJobs.php(89): JobRunner->run(array)
  #4 /var/www/html/maintenance/doMaintenance.php(94): RunJobs->execute()
  #5 /var/www/html/maintenance/runJobs.php(122): require_once(string)
  #6 {main}

Any clue why this error occurs and how to get rid of it? — Preceding unsigned comment added by S0ring (talkcontribs)

What extensions have you installed? This looks like one of the extensions is causing a problem (it may be incompatible with 1.31). --Ciencia Al Poder (talk) 12:41, 26 September 2019 (UTC)Reply
Here is the list of the installed extensions: — Preceding unsigned comment added by S0ring (talkcontribs)
 Interwiki	3.1 20160307
 MassEditRegex	8.3.0 (61d3d16) 20:53, 17. Dez. 2018
 Replace Text	1.4.1 (a027ec9) 17:05, 16. Mai 2018
 VisualEditor	0.1.0 (6854ea0) 00:33, 6. Nov. 2018	
 DisplayTitle	2.0.0 (b216925) 05:19, 14. Apr. 2018
 Wiki Category Tag Cloud	1.3.3 (8bc4678) 23:14, 10. Mär. 2018
 BreadCrumbs	0.5.0 (4609bf0) 00:10, 14. Apr. 2018
 DynamicSidebar	1.1 (41f9fcc) 02:43, 14. Apr. 2018
 Lockdown	- (381da77) 18:35, 1. Dez. 2018
 PluggableAuth	5.7 (a69f626) 05:03, 7. Feb. 2019
 SimpleSAMLphp	4.5
 TitleKey	1.0 (a0948c5) 08:34, 10. Mär. 2018
I don't know what could be causing the problem. Maybe MassEditRegex or Replace Text, if the error happens when you use them, but that's difficult to track down because the error happens when the job queue is run and not when jobs are inserted by the extension... If you find consistent steps to reproduce the problem (usually editing a page and then running the job queue), you can try disabling all extensions and repeat the process, enabling them one by one until you find the culprit. --Ciencia Al Poder (talk) 09:28, 27 September 2019 (UTC)Reply