User Details
- User Since: Aug 27 2018, 1:02 PM (323 w, 6 d)
- Roles: Disabled
- LDAP User: Banyek
- MediaWiki User: Unknown
Jan 10 2019
Thank you <3
Jan 9 2019
As the tables which blocked using the view management tool were cleaned up (T210693), I think we can do the view updates normally.
I cleaned up the tables, so I am closing the ticket.
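For reference, the view refresh itself is done with the maintain-views tool on the replica host; a rough sketch of such a run (the script path and flags below are my assumptions, not copied from this ticket):

# hedged sketch: regenerate the wiki replica views for one database on a labsdb host
# (the path, flags and the example database are assumptions about the usual invocation)
sudo /usr/local/sbin/maintain-views --databases enwiki --debug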
Jan 8 2019
It seems I can clean up the materialized view tables which prevent the tool from working correctly (see T210693), so I guess this ticket will be solved right after that. Thanks for your patience.
Ok, then we all agree; I'll drop those tables tomorrow morning.
@Bstorm so, what do you think, should I drop these?
No, I didn't
On db1062 (s7 master) every database is done except eswiki; I have to retry that later (Lock wait timeout exceeded).
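One way to retry would be to raise the lock wait timeouts for just that one session before re-running the statement; roughly like this (the timeout values and connection details are placeholders, not what was actually used):

# hedged sketch of the eswiki retry on the s7 master; values are placeholders
mysql -h db1062 eswiki -e "
  SET SESSION lock_wait_timeout = 300;          -- metadata locks
  SET SESSION innodb_lock_wait_timeout = 300;   -- row locks
  -- re-run the failing statement for eswiki here, in the same session
"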
Jan 7 2019
The comparison finished, and the data is OK.
Jan 4 2019
On cumin2001 I have a comparison script running inside a screen session in /home/banyek.
The script used is the following:
On labsdb1010 this would be the quickest (with the host depooled).
#!/bin/bash
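# the rest of the original script is not included in this copy; the lines below are
# only a hypothetical sketch of such a table-comparison loop (the hosts, the database
# and the checksum approach are assumptions, not taken from the real script)
SRC_HOST="source-replica.example"   # placeholder for the host compared against
DST_HOST="labsdb1010"               # the depooled host mentioned above
DB="enwiki"                         # placeholder database
for table in $(mysql -h "$SRC_HOST" -BN -e "SHOW TABLES" "$DB"); do
    src=$(mysql -h "$SRC_HOST" -BN -e "CHECKSUM TABLE $table" "$DB" | awk '{print $2}')
    dst=$(mysql -h "$DST_HOST" -BN -e "CHECKSUM TABLE $table" "$DB" | awk '{print $2}')
    [ "$src" = "$dst" ] || echo "MISMATCH: $table ($src vs $dst)"
done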
+1 on removing these tables as mentioned in T210693 too.
@Bstorm Yeah, that is the case, I just wanted to ping you about it in the ticket :)
So, I think I'll drop the materialized views (as their data is already outdated anyway) instead of modifying the script.
Do you agree?
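For clarity, the drop itself would be nothing more than this per table; a sketch only, the schema and table name below are examples rather than the exact list:

# hedged sketch: drop one of the materialized-view tables on the wiki replica host
# (schema and table name are examples; the real list comes from T210693)
mysql -h labsdb1010 -e "DROP TABLE IF EXISTS enwiki_p.comment_mat;"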
Hm, then I'll close the ticket, thanks @Marostegui
Jan 3 2019
I'll start with this in the morning
The problem is probably caused by comment_mat being a table rather than a view; I'll check the maintain-views script to understand how it works.
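A quick way to confirm that would be to ask information_schema what kind of object it is; a sketch only (I'm not claiming this is how the script itself checks):

# hedged sketch: check whether comment_mat is a base table or a view
mysql -h labsdb1010 -e "
  SELECT table_schema, table_name, table_type
  FROM information_schema.tables
  WHERE table_name = 'comment_mat';"
# 'BASE TABLE' instead of 'VIEW' here would explain why maintain-views trips over it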
Ok, I guess I understand the problem now, I'll check the scripts
@Andrew as far as I know we have to drop the views manually, as dropping the underlying tables doesn't clean them up. What shall I investigate exactly?
MariaDB started without any problem, and replication has resumed.
After the hard reset, I didn't find anything in the logs:
/var/log/syslog
Jan 3 07:35:01 es2019 CRON[16225]: (root) CMD (command -v debian-sa1 > /dev/null && debian-sa1 1 1)
Jan 3 07:35:01 es2019 CRON[16226]: (prometheus) CMD (/usr/local/bin/prometheus-puppet-agent-stats --outfile /var/lib/prometheus/node.d/puppet_agent.prom)
[... run of NUL bytes in syslog where the host went down ...]
Jan 3 09:07:53 es2019 systemd-modules-load[1062]: Inserted module 'nf_conntrack'
Jan 3 09:07:53 es2019 systemd-modules-load[1062]: Inserted module 'ipmi_devintf'
Jan 3 09:07:53 es2019 systemd-sysctl[1081]: Couldn't write '65' to 'net/netfilter/nf_conntrack_tcp_timeout_time_wait', ignoring: No such file or directory
According to https://wikitech.wikimedia.org/wiki/Platform-specific_documentation/Dell_PowerEdge_RN30 I reset the host with racadm serveraction hardreset; the console is available now.
I triaged this as 'high' rather than 'unbreak now', because the host wasn't in service.
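For the record, that reset goes through the management interface; roughly like this (the mgmt hostname below just follows the usual naming pattern and is an assumption, not copied from the console log):

# hedged sketch: hard reset via the DRAC management interface
ssh [email protected]   # assumption: standard mgmt hostname pattern
racadm serveraction hardreset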
Jan 2 2019
That makes sense; then I'll add a +1 and an LGTM to the patch, but I wouldn't mind if you asked for a second opinion from the more experienced wiki DBAs, because there might be something I am not aware of.
@bmansurov looking at the memory usage on the m2 hosts this could be fine, but I am not sure it is a good idea to increase it to 40x the original size.
I mean, from the numbers this shouldn't cause problems, but I'd like to ask @Marostegui or @jcrespo why the original number was set to 10.
awesome!
Dec 26 2018
Maybe we should reimport the table in January?
Dec 21 2018
Dec 20 2018
Dec 19 2018
[email protected][ptwiki]> show tables like 'flagged%';
+-----------------------------+
| Tables_in_ptwiki (flagged%) |
+-----------------------------+
| flaggedimages               |
| flaggedpage_config          |
| flaggedpage_pending         |
| flaggedpages                |
| flaggedrevs                 |
| flaggedrevs_promote         |
| flaggedrevs_statistics      |
| flaggedrevs_tracking        |
| flaggedtemplates            |
+-----------------------------+
9 rows in set (0.03 sec)
[email protected][ptwiki]> SET SESSION sql_log_bin=0;
Query OK, 0 rows affected (0.03 sec)
I've created backups of the tables before the drop:
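The dump commands themselves aren't pasted here; creating such a backup would look roughly like this (the output path and option choices are illustrative, not the exact commands used; the table list is the one shown above):

# hedged sketch: dump the flagged* tables from ptwiki before dropping them
mysqldump --single-transaction ptwiki \
    flaggedimages flaggedpage_config flaggedpage_pending flaggedpages \
    flaggedrevs flaggedrevs_promote flaggedrevs_statistics \
    flaggedrevs_tracking flaggedtemplates \
    | gzip > /srv/backups/ptwiki_flaggedrevs_$(date +%F).sql.gz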
Dec 14 2018
The progress URL for the rename is: https://meta.wikimedia.org/wiki/Special:GlobalRenameProgress/Teseo
We haven't talked about this so far, but don't these views need proper indexes?
@1997kB I'd say we should stick to the date on the calendar event
Aye, I am here, noted
Dec 12 2018
Tables were renamed on db1122 for proof:
I'll first rename the tables on db1122, and if nothing breaks this week, I'll do the drops.
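The rename-first step keeps everything reversible; roughly like this per table (the prefix, the example table and database, and the sql_log_bin handling are illustrations only):

# hedged sketch: park a table under a prefixed name on db1122 instead of dropping it right away
DB="ptwiki"   # placeholder database
mysql -h db1122 "$DB" -e "
  SET SESSION sql_log_bin = 0;
  RENAME TABLE flaggedrevs_tracking TO zzz_drop_flaggedrevs_tracking;"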
The sync finished, thank you @Cmjohnson
Virtual Drive: 0 (Target Id: 0)
RAID Level: Primary-1, Secondary-0, RAID Level Qualifier-0
State: Optimal
Number Of Drives per span: 2
Number of Spans: 6
Current Cache Policy: WriteBack, ReadAheadNone, Direct, No Write Cache if Bad BBU
The materialized view generation completed.
The total size of the materialized views is ~150 GB altogether.
root@labsdb1010:~# find /srv -iname comment_mat.ibd -ls | awk '{size_in_g += $7} END {print "Total size: " size_in_g/1024/1024/1024}'
Total size: 149.266
The total time the view generation took is ~18 hours. (Note: this run excluded enwiki_p.comments_mat, as it was created earlier and took ~5 hrs.)
root@labsdb1010:~# cat create_mat.log | egrep "Starting|Completed"
Tue Dec 11 15:07:11 UTC 2018 - Starting
Wed Dec 12 04:18:08 UTC 2018 - Completed
Dec 11 2018
ptwikipedia lives in the s2 section; the following hosts need to be done:
root@db1063:~# megacli -PDList -aall | egrep -i "Slot|Firmw"
Slot Number: 0
Firmware state: Rebuild
Device Firmware Level: 0008
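The rebuild progress itself can be followed with megacli as well; something like this (the enclosure:slot value is a placeholder, not necessarily the one for this disk):

# hedged sketch: watch the rebuild progress of the replaced disk
megacli -PDRbld -ShowProg -PhysDrv [32:0] -aAll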
awesome, thanks!