Wikipedia:Requests for adminship/RedirectCleanupBot
- The following discussion is preserved as an archive of a successful request for adminship. Please do not modify it.
- This is a request for a fully automated adminbot.
Final (168/15/7); Originally scheduled 20:55, 11 October 2007 (UTC). Nomination successful. --Deskana (talk) 20:55, 11 October 2007 (UTC)[reply]
RedirectCleanupBot (talk · contribs) - This is a very different RfA from the type I am used to writing nominations for. In fact in many ways the title is wrong - I am not proposing that a new administrator be created, but that a Bot account is given a +sysop flag. It is incapable of the judgment we require an administrator to show. I will however outline why giving it the ability to use a sysop tool will be beneficial to the encyclopedia.
- The task
Special:BrokenRedirects lists pages that are redirects to deleted or non-existent pages. They meet the speedy deletion criterion CSD R1. When reviewing that list, the only human action necessary is to ensure that each page does not contain useful history; otherwise it is deleted. The page is updated every two or three days and I delete well over 100 redirects each time. It occurs to me, however, that this trivial task could be done just as well automatically.
Of course, a Bot cannot discern useful page history, so this Bot will only delete pages that have no history.
It will work from Special:BrokenRedirects, and if an entry there:
- Is a redirect to a deleted or non-existent page
- Has only one entry in its history
it will delete that redirect.
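The rule above is simple enough to sketch in a few lines. The fragment below is an illustrative model only - the actual bot is the Perl script rdbot.pl linked under "Technical details" - and the data layout, helper function, and page titles are assumptions made for the sketch, not the bot's real code.

```python
# Illustrative model of the bot's deletion rule; the real bot is the Perl
# script rdbot.pl, and this data layout is an assumption for the sketch.

def should_delete(entry):
    """Return True if a Special:BrokenRedirects entry meets both criteria."""
    # Criterion 1: a redirect whose target page is deleted or non-existent.
    broken = entry["is_redirect"] and not entry["target_exists"]
    # Criterion 2: exactly one entry in the page history, so no potentially
    # useful history can be lost.
    single_edit = entry["revision_count"] == 1
    return broken and single_edit

# Example entries (hypothetical page titles):
entries = [
    {"title": "Old draft", "is_redirect": True, "target_exists": False, "revision_count": 1},
    {"title": "Edited",    "is_redirect": True, "target_exists": False, "revision_count": 3},
    {"title": "Recreated", "is_redirect": True, "target_exists": True,  "revision_count": 1},
]
to_delete = [e["title"] for e in entries if should_delete(e)]
# Only "Old draft" satisfies both criteria.
```

Every safeguard discussed later in this RfA (protected pages, tagged redirects, recreated targets) reduces to one of these two checks failing.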
- Technical details
- BRFA: Wikipedia:Bots/Requests for approval/RedirectCleanupBot
- Operator: WJBscribe
- Code written by: Eagle 101
- Source code: <http://tools.wikimedia.de/~eagle/rdbot.pl>
- Test output: User:Eagle 101/RedirectCleanupBot
- Conclusions
This task is largely uncontroversial other than the fact that the Bot requires the ability to delete pages. If the Bot deletes a page other than a broken redirect with one revision, it will be blocked on sight. Blocked accounts cannot delete pages. This Bot will not be taking on any future tasks - this will be its sole function and its scope will not be expanded. Bots are designed to perform repetitive tasks that can be performed just as well automatically as manually. I believe this is such a task. WjBscribe 20:52, 4 October 2007 (UTC)[reply]
Questions about the Bot
Ask any questions below:
Questions from AA
- 1. What will be in the edit summary?
- A: It is in the source code, line 41; currently it is BOT: Deleting dead redirect per CSD R1. Changing this is not that hard to do. Suggestions are welcome :). —— Eagle101Need help? 21:30, 4 October 2007 (UTC)[reply]
- What about including the target in there (e.g. BOT: Deleting dead redirect to @@@@ per CSD R1)?
- That has been suggested, and I will include it shortly. I must get some food now, but I'll have code updates within 8 hours or so. As it is open source, if someone wants to give me a unix patch file they are more than welcome to e-mail that to me ;) —— Eagle101Need help? 21:44, 4 October 2007 (UTC)[reply]
- patch sent. --uǝʌǝsʎʇɹnoɟʇs 23:12, 4 October 2007 (UTC)[reply]
- Oy! the wonders of open source! I'll have a look and put it in. —— Eagle101Need help? 00:21, 5 October 2007 (UTC)[reply]
Could you make it include the target of the redirect it's deleting in the summary? That way if a redirect ever needed to be recreated it could easily be done by looking at the bot logs or page logs. Just a suggestion, possibly non-useful. I've already added my support. --JayHenry 03:06, 5 October 2007 (UTC)[reply]
- Oh wait, nevermind, that's what was meant by the @@@@ above. I am teh illiterate. You guys are one step ahead! --JayHenry 03:08, 5 October 2007 (UTC)[reply]
- 2. Are there instances when a broken redirect should not be deleted from user/user talk space?
- A: I can't immediately think of any. If there are some we can restrict the Bot further. In my experience such redirects arise either (1) where an article was drafted in userspace, moved to the mainspace and subsequently deleted or (2) where someone is renamed and deletes unwanted subpages, forgetting the redirects left over from the rename. It seems appropriate to delete these redirects. As with other namespaces, the Bot won't delete if the page has more than 1 edit in the history. WjBscribe 21:30, 4 October 2007 (UTC)[reply]
- True, but if I have myself renamed and later decide to leave Wikipedia and delete my user page (not because of m:RTV reasons, but just because I left), it can actually be useful to have a redirect from User:Melsaran to User:Mynewname so that people who click a link from Melsaran's signature know Melsaran's current name and how to access his talk page/contribs. I think it'd be a good idea to disable deleting redirects from userpages (not subpages) to other userpages. Melsaran (talk) 12:45, 5 October 2007 (UTC)[reply]
- I'm not sure I've entirely followed you here. In this scenario you were renamed before you left the project? It seems to me that redirects are trivial to recreate if the user wants them (or if a third party thinks one would be useful and the user has no objection). I'm not sure we should keep redlinked redirects from old usernames of people who have left indefinitely in case they return. Certainly we are not doing so at the moment - admins are deleting such redirects... WjBscribe 12:51, 5 October 2007 (UTC)[reply]
- I used to do broken redirects and I would have deleted that. If it is a broken redirect with no history it would have been deleted by me, a human, no questions asked, regardless of namespace. No one ever seemed to mind. - cohesion 19:00, 6 October 2007 (UTC)[reply]
Questions from F Mita
- 3. Will the bot also delete the redirect page's talk page?
- A: Yes, as the talk page will also show up in special:BrokenRedirects. —— Eagle101Need help? 22:46, 4 October 2007 (UTC)[reply]
- A: But only if the talkpage is also a redlink redirect with no page history (which they usually are if there's a talkpage at all). WjBscribe 22:51, 4 October 2007 (UTC)[reply]
- OK, so if it is not a redlink redirect, or has a page history it will be kept. My understanding is that talk pages are usually deleted if the main page is deleted, but I am not sure if this would be considered a problem. F Mita 23:07, 4 October 2007 (UTC)[reply]
- 4. Should the presence of a talk page perhaps be a non-delete criteria for the bot?
- A: Should it be? I can't think of any reason why it should be. If you can think of a reason why, please let me know, and we shall code it in. —— Eagle101Need help? 22:46, 4 October 2007 (UTC)[reply]
- The only situation that occurred to me was that there was some discussion on the talk page that might impact whether the redirect should be deleted. In other words there could be a discussion that might make the deletion controversial (for want of a better word) and which should therefore be reviewed by a human. If there are not many instances of talk pages that aren't redirects, then maybe it would be better to err on the side of caution when there is one. Just a thought. F Mita 23:07, 4 October 2007 (UTC)[reply]
- From experience, those talkpages are usually a Wikiproject tag or {{Talkheader}}. I still can't think of a talkpage discussion that would make the redirect controversial and need to be kept where the page has only one edit in its history and points to a deleted page. I don't have a problem with narrowing the Bot's scope to avoid potential problems, but I haven't yet imagined even one scenario where the presence of a talkpage would be a relevant (let alone determining) factor... WjBscribe 00:36, 5 October 2007 (UTC)[reply]
- 5. Are there any tags that could be placed on the redirect page that should be a non-delete criteria?
- A: No, and frankly it's not needed, as a second edit to a redirect page will cause the bot to skip it. (The bot only deletes pages with one edit, so any edit to add a tag, or to point it at a new target, would cause the bot to skip.) —— Eagle101Need help? 22:46, 4 October 2007 (UTC)[reply]
- A: Not that I can think of - usually it's categories (like {{R from alternative spelling}}) but those shouldn't affect the need to delete. If a template is added after the redirect is created, it will have more than 1 edit in the history and not be deleted anyway. WjBscribe 22:51, 4 October 2007 (UTC)[reply]
- 6. Does the bot check that the redirect target page has not been created since the redirect page was listed on Special:BrokenRedirects? (I am assuming here that Special:BrokenRedirects is not recreated on the fly each time it is viewed)
- A: I will check the regexes that I use in a few hours. I am not ignoring this question, and yes, the bot should ignore any redirects that get re-created. —— Eagle101Need help? 22:46, 4 October 2007 (UTC)[reply]
- OK. Just to be clear, I didn't mean where the actual redirect page had been recreated (I guess that would be covered already because there would be more than one edit), rather where the target page (i.e. the page that the redirect points to) had been recreated since the redirect page had been added to the list. F Mita 23:07, 4 October 2007 (UTC)[reply]
- To add - I expressed this to Eagle earlier, but was not too concerned about it, but since we are loading the actual redirect page now, it should be easy to load the target as well - it's up to eagle as to efficiency, however. --uǝʌǝsʎʇɹnoɟʇs 23:16, 4 October 2007 (UTC)[reply]
- I do think that this should be a definite requirement for the bot, as it should not be deleting pages that actually redirect to a page. This situation would probably happen very infrequently, but should be catered for (no doubt a human admin would spot the fact that the target page did actually exist). F Mita 23:34, 4 October 2007 (UTC)[reply]
- It will be able to tell if a target exists by seeing if the target is a blue link on Special:BrokenRedirects. I have not yet had a chance to test this, but before it is live, I will be sure that anything that points to an existing page is not deleted. In addition we will include a pointer in the deletion summary of where the redirect pointed. —— Eagle101Need help? 00:15, 5 October 2007 (UTC)[reply]
- 7. What proportion of pages is estimated to fall under the "single edit" criterion?
- A: You are free to see for yourself in the test results at User:Eagle_101/RedirectCleanupBot. Anything with a -- 1 following the name would have been deleted had the bot actually had the delete function enabled. —— Eagle101Need help? 22:46, 4 October 2007 (UTC)[reply]
- A: Proportion of pages is probably the wrong way to think of this - they mostly result from admins not looking for redirects when they delete pages, and so are being created all the time. They're also deleted all the time. I would estimate that about 150 broken redirects are listed on Special:BrokenRedirects every few days and that 80-90% of those contain only one edit. WjBscribe 22:51, 4 October 2007 (UTC)[reply]
- Question from Ronnotel
- 8. What controls will prevent the bot from being compromised and allowing a non-admin to gain admin powers? Ronnotel 22:40, 4 October 2007 (UTC)[reply]
- A: A good password, which I hope the bot operator has implemented. Besides that, nothing. A bot is just like any other account, and logs in via the same methods any other normal user can use to log in. If you trust the admin that is running the bot (WJBscribe) not to hand out the password to the account, then there should be no problems. —— Eagle101Need help? 22:48, 4 October 2007 (UTC)[reply]
- A:I tend to go a little overboard on passwords. The password for the Bot is 20+ characters - including letters, numbers and characters that are neither. WjBscribe 22:53, 4 October 2007 (UTC)[reply]
- Access to the account isn't what I'm worried about. How will you know that the code itself hasn't been compromised? Is there some sort of code migration mechanism that requires admin rights to place code into production? I would think this would be a requirement before we can consider granting a bot admin rights. Ronnotel 23:03, 4 October 2007 (UTC)[reply]
- Well, unless someone manages to break my ssh private/public key combo to toolserver, I think this is highly doubtful. As long as WJBscribe always gets the code from my toolserver account there should not be any problems at all. —— Eagle101Need help? 23:09, 4 October 2007 (UTC)[reply]
- Question from CO
- 9. Why is WJBscribe running it, seeing as he has never run a bot before? A bot is one thing to run, but one with the sysop bit? Is WJBscribe knowledgeable enough about bots and programming to run it?
- He should be fine - all it is, is telling it to run... it's not rocket science ;). It's about as much work as clicking on an icon on your screen. I don't have the time to run and monitor a bot like this, which is why I'm letting someone knowledgeable about the task operate it. I will be on hand to fix any bugs, and anyone else is free to submit unix patches to me. —— Eagle101Need help? 23:42, 4 October 2007 (UTC)[reply]
- The idea was to combine knowledge of the task with someone with good coding experience. I will be overseeing the Bot as someone who does a lot of redirect related work (both speedies and closing most WP:RFDs), so I can spot if things were going wrong or if the task needed further restriction. It was I who investigated the possibility of this Bot in the first place. If anything beyond my technical abilities occurs, Eagle says he'll hold my hand, but he doesn't want the day-to-day supervision of the Bot (which I'm happy to do). Also there may be newbie questions to field (someone may assume the Bot deleted their article rather than a redirect if it was moved after they created it), which I am happy to do... WjBscribe 23:54, 4 October 2007 (UTC)[reply]
- Question from dgies
- 10. I'm uncomfortable with the idea of a bot having the +sysop flag. Is there some reason why this bot can't simply scan for these pages and send you an email with links? I realize it's slightly more work but presuming you can trust the email (cryptographic signing by your bot?) then you should be able to tear through the list pretty quickly. —dgiestc 23:43, 4 October 2007 (UTC)[reply]
- The point of the bot is to do it automatically. The end result is the same whether someone clicks links from an e-mail or the bot just does it. Which one is easier, and in some ways safer? (Do realize that a bot will never accidentally delete something. Bots are by their very nature more accurate than we humans are.) —— Eagle101Need help? 23:47, 4 October 2007 (UTC)[reply]
- Yes I realize computers don't make mistakes, but programmers do. A human is never vulnerable to escape characters, to name one problem. —dgiestc 23:50, 4 October 2007 (UTC)[reply]
- Yes, but once the code is right, it never makes a mistake - or if it does make a mistake, it makes the same mistake predictably. That's why we debug code :) —— Eagle101Need help? 00:13, 5 October 2007 (UTC)[reply]
- Additionally, remember that many programmers have reviewed and tested the bot without any complaints, any issue would have to be very minor. --uǝʌǝsʎʇɹnoɟʇs 00:26, 5 October 2007 (UTC)[reply]
- Questions from SQL(Query Me!)
- 11. I do this a lot myself, by hand, and, bot assisted. Something I come across frequently, is user or user talk pages that are protected, and, redirected to the userpage (which, does not always exist), due to abuse from a blocked user. It is not generally a good idea to delete these. How would this bot handle this situation?
- It's not all that hard - in fact it's already done :). Anything that is protected will by its very nature have more than 1 event in its history, preventing the bot from deleting it at all. (The protection shows up in the history log.) —— Eagle101Need help? 23:50, 4 October 2007 (UTC)[reply]
- I hadn't noticed the 1 edit thing, that pretty much covers most of my concerns. SQL(Query Me!) 23:53, 4 October 2007 (UTC)[reply]
- 12. It is not always appropriate to nuke broken redirects. Sometimes it's bad pagemoves, sometimes it's vandalism, sometimes it's other things. How would this bot account for these variables?
- In practice the only time these shouldn't be deleted is when they have good history. The Bot will only delete redirects with no history, so nothing useful could be lost. Worst case scenario, a redirect is easily recreated - its not like losing an article. But I don't envisage that being necessary given the Bot's limited scope. WjBscribe 23:56, 4 October 2007 (UTC)[reply]
- Yep, not a big deal, and the 1-edit-in-history safeguard prevents this sort of issue... I meant to strike this question, as the answer was right there... Thanks! SQL(Query Me!) 23:59, 4 October 2007 (UTC)[reply]
- Hmm, I'm not so sure... I have created redirects; they have no history. Now the page being pointed to gets vandal-moved. Then what? The redirect is lost. Is it possible for the bot to check whether the page itself has moved, and if so, simply change the redirect? — Edokter • Talk • 21:21, 5 October 2007 (UTC)[reply]
- Bear in mind that this Bot won't delete redirects to redirects (double redirects) which are what result from vandal page moves. It's only if the target page is actually deleted that the redirect is redlinked and meets the Bot's criteria. For example if you redirect Duke of buckingham to Duke of Buckingham and a vandal moves the latter to Who ate all the massive pies, the page Duke of Buckingham still exists (though now as a redirect) so the redirect from Duke of buckingham would still be a bluelink and wouldn't be deleted by the Bot. WjBscribe 21:42, 5 October 2007 (UTC)[reply]
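The distinction WjBscribe draws here can be sketched as follows. This is an illustrative Python model, not the bot's code; the wiki is assumed to be a mapping from page title to redirect target (None for an ordinary content page), with absent titles meaning the page does not exist.

```python
def classify_redirect(wiki, title):
    """Classify a redirect page per the Duke of Buckingham example above."""
    target = wiki[title]
    if target not in wiki:
        return "broken"   # red-linked target: eligible for this bot (if 1 edit)
    if wiki[target] is not None:
        return "double"   # target is itself a redirect: left to the
                          # double-redirect bot, never deleted by this one
    return "working"      # target is a content page: nothing to do

# The vandal page-move scenario described above:
wiki = {
    "Duke of buckingham": "Duke of Buckingham",            # redirect
    "Duke of Buckingham": "Who ate all the massive pies",  # moved, now a redirect
    "Who ate all the massive pies": None,                  # content page
}
# "Duke of buckingham" still points at an existing page, so it is a double
# redirect and this bot skips it.
```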
- Thanks for clearing that up. — Edokter • Talk • 21:48, 5 October 2007 (UTC)[reply]
- Question from Mercury
- 13. Can the bot be coded to check for talk page messages, and shutdown once a message is posted?
- A: The bot will stop running if it is blocked. Stopping on talk page messages can often cause issues, as it's too easy to disable a useful bot. It's up to the operators, but speaking as a BAG member, not many bots do this. --uǝʌǝsʎʇɹnoɟʇs 00:26, 5 October 2007 (UTC)[reply]
- A: No doubt it could be - but I worry about it being interrupted too often. When I delete redirects, newbies sometimes come to my page asking why I deleted their article. Usually it's because the article was moved before it was speedied or AfDed, and the creator expected to find it where they created it and finds my name in the deletion log. I expect the Bot will get the same kind of inquiries that will need to be answered, or forwarded to the admin that deleted the actual content. Those could be directed to my talkpage, but I suspect (judging from other Bots) some will still post to its talkpage. So my instinct is not for it to stop when it gets a talkpage message... WjBscribe 00:28, 5 October 2007 (UTC)[reply]
- A: If people want it I can implement it... but not many bots run with that on as far as I know. It's just as easy to post to WP:ANI and get an admin to block it, which the admin should do instantly. We can always restart the bot ;). —— Eagle101Need help? 00:37, 5 October 2007 (UTC)[reply]
- A2: After thinking some... there will be a page at User:RedirectCleanupBot/Shutoff. I would hope that this would not be needed, but it's there anyway. I'm coding the changes in as we speak. Please do note that page is semi-protected. —— Eagle101Need help? 01:12, 5 October 2007 (UTC)[reply]
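A shutoff check like the one Eagle 101 describes might look like the sketch below. Only the page name User:RedirectCleanupBot/Shutoff comes from the discussion; the fetch callable and the run/stop token convention are assumptions made for illustration, not the bot's documented behaviour.

```python
SHUTOFF_PAGE = "User:RedirectCleanupBot/Shutoff"

def may_run(fetch_page):
    """Consult the semi-protected shutoff page before each deletion.

    fetch_page is any callable that returns a page's wikitext. The
    run/stop token is an assumed convention for this sketch.
    """
    return fetch_page(SHUTOFF_PAGE).strip().lower() == "run"

# e.g. checked at the top of the deletion loop:
#   if not may_run(fetch_page):
#       sys.exit("halted via shutoff page")
```

Because the page is semi-protected, any autoconfirmed editor (not just admins) could halt the bot by editing it, which is the point of the safeguard.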
- Questions from EdJohnston
- 14. Is it possible that the decision process for the administrators who normally process Special:Brokenredirects is more subtle than what this bot would do? There is a remark in the edit history of the Talk page suggesting that an admin could take any of several repair actions, depending on what is found.
- A: Yes, however non-delete actions are in my experience confined to when the page has history. E.g. a vandal changes an article into a broken redirect - the Bot won't delete because it has more than 1 edit. The type of broken redirect being described that results from multiple page moves is a double redirect - another Bot fixes those, and this Bot won't delete those as there is a target. Note the editor's conclusion: "If there is no edit history and target page/history is empty, I add {{db|this page is a redirect to an empty page}}" (a tag for speedy deletion), which is what a non-admin would do - an admin would delete. That quoted sentence is exactly the set of circumstances under which the Bot will be deleting. Also, I confess that I don't check how recently the target was deleted (in case it was accidental and will be reverted) - the Bot will do that, so in this way it's going to be more subtle than me... WjBscribe 03:11, 5 October 2007 (UTC)[reply]
- 15. Could a new set of test results be generated, where we can still see the redirects? The ones listed in User:Eagle 101/RedirectCleanupBot have mostly been cleared out already as part of normal maintenance. Without being able to see the now-deleted redirect, it's hard to know if the bot diagnosed the redirect's history correctly.
- A: If one of the pages on there has been deleted, that's confirmation the bot was correct in its actions (or what it would have done). Next time Special:BrokenRedirects regenerates, either WJBscribe or I will give an updated list. —— Eagle101Need help? 02:54, 5 October 2007 (UTC)[reply]
- A: Yeah - it's a bit difficult to ask admins not to do cleanup work for the 7 days this RfA will run. The Bot can have a new run next time Special:BrokenRedirects updates, and obviously admins can check the deleted pages, but I can't see a way for more output to be generated now... WjBscribe 02:59, 5 October 2007 (UTC)[reply]
- 16. What is the rule that determines whether redirects show up as red or blue in Special:Brokenredirects? How are the strikethroughs made? (The page has no history tab).
- A: The page is a little odd and special pages never have histories. Redirects that are blue with an arrow towards another page that is red are broken redirects. When they are deleted, they usually turn red and are struck through. If instead of being deleted, the target is changed to a bluelink (or the page is replaced with content) the entry is also struck out, but remains blue. Some of those struck blue entries are interwiki redirects, which are also picked up by the special page. WjBscribe 02:59, 5 October 2007 (UTC)[reply]
- Question from Chaser
- 17. There's no edit or deletion rate in the BRFA. Is it possible to throttle the deletions to 2-3 per minute to allay the inevitable concern, however well founded, that the bot will go berserk?--chaser - t
- A: At the start, when it does its testing for WP:BAG, it will probably make one or two deletions per minute. When BAG approves it, it will probably increase speed to 4 deletions per minute. It won't be doing anything insane. —— Eagle101Need help? 04:36, 5 October 2007 (UTC)[reply]
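A throttle at the rates Eagle 101 mentions could be implemented along these lines. This is a sketch under stated assumptions - the discussion does not show how rdbot.pl actually paces its deletions - but the 2/minute trial and 4/minute approved figures come from the answer above.

```python
import time

class Throttle:
    """Cap the deletion rate (a sketch; the real bot's pacing
    mechanism is not shown in this discussion)."""

    def __init__(self, per_minute):
        self.interval = 60.0 / per_minute  # minimum seconds between deletions
        self.last = None

    def wait(self):
        # Sleep just long enough that consecutive deletions stay
        # under the agreed rate.
        now = time.monotonic()
        if self.last is not None:
            remaining = self.interval - (now - self.last)
            if remaining > 0:
                time.sleep(remaining)
        self.last = time.monotonic()

trial = Throttle(2)     # trial rate: at least 30 s between deletions
approved = Throttle(4)  # post-approval rate: at least 15 s between deletions
```

The deletion loop would call `wait()` before each delete, so even a malfunction could only do limited damage before an admin blocks the account.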
- Question from NE2
- 18. Does the bot perform any other cleanup tasks, such as removing links to the redirect?
- A: No it doesn't: (1) to keep the code simple and (2) because realistically, working out which redlinks should stay because there will one day be content, and which should go because the page should never be created, requires human judgment. Admins reviewing broken redirects don't tend to tidy up the links either - maybe we should, but it's more work, and often editors of the articles are better placed to make that judgment anyway. WjBscribe 07:04, 5 October 2007 (UTC)[reply]
- Thank you; I was making sure you understood that sometimes you shouldn't remove backlinks. --NE2 07:07, 5 October 2007 (UTC)[reply]
- A: If people really want detailed logs about things in the backlinks etc., so that they may remove them as they wish... the bot can generate such reports and post them within its userspace, but it will not do any more deleting than what is specified here. The bot may end up posting to its space a list of things which it did not know what to do with, etc. —— Eagle101Need help? 07:14, 5 October 2007 (UTC)[reply]
- A: No it doesn't: (1) to keep the code simple and (2) because realistically working out which redlinks should stay because there will one day be content, and which should go because it should never be created requires human judgment. Admins reviewing broken redirects don't tend to tidy up the links either - maybe we should but its more work, and often editors of the articles are better placed to make that judgment anyway. WjBscribe 07:04, 5 October 2007 (UTC)[reply]
- Question from Ronnotel
- 19. Will you consider signing the perl script with your private key and publishing your public key? If you have already done so, can you include that information on this page?
- A: Well... I consider this whole exercise pointless, but I'll look into it. I frankly find that publishing the code on toolserver under my account is foolproof enough. Unless someone has my private ssh key (good luck in breaking that), getting into that account is virtually impossible. —— Eagle101Need help? 13:43, 5 October 2007 (UTC)[reply]
- Questions from Uncle Uncle Uncle
- 20. Does there exist a process to undo all changes by the bot over a given time period, for cleanup of any problems that may occur?
- A: I'm not sure I understand the question, perhaps you could elaborate? WjBscribe 22:29, 5 October 2007 (UTC)[reply]
- I meant: What if the bot makes a large number of 'bad' actions over a period of time? Is there an 'easy' method to undo the actions, or would it require a human to manually perform a large number of new actions to fix the 'bad' actions? Uncle uncle uncle 22:38, 5 October 2007 (UTC)[reply]
- It would require a human to fix - but it really shouldn't be capable of bad actions, much less many of them given the limitations of its code. Because its approved task is so narrow - the target must be (a) a redirect to a deleted/non-existent page and (b) contain only one edit - any error will be very obvious and should therefore be quickly spotted and the Bot shut down. If everything the Bot deleted in a run needed to be undeleted (which seems very improbable), tabbed browsing would allow any admin to do that pretty quickly. WjBscribe 22:43, 5 October 2007 (UTC)[reply]
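The narrow criterion discussed above - the target must be a redirect to a deleted/non-existent page and contain only one edit - can be sketched as follows. This is a hypothetical Python illustration only (the actual bot is a Perl script; the function name and the dictionary fields are invented for the example):

```python
def is_safe_to_delete(page):
    """Return True only if the page meets both bot criteria for CSD R1:
    it is a redirect whose target does not exist, and its history
    contains exactly one revision (so no useful history can be lost)."""
    if not page["is_redirect"]:
        return False  # not a redirect at all
    if page["target_exists"]:
        return False  # redirect is not broken
    if page["revision_count"] != 1:
        return False  # more history: needs human review
    return True
```

Under this check, a one-edit broken redirect is deleted, while anything with additional history is left for a human admin.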
- 21. Will the bot be open to recall? [1]
- A: The idea had not occurred to me. It can be blocked by any admin if it malfunctions and will also have a semi-protected page that will allow non-admins to shut it down. I'm not sure being listed at CAT:AOR on top of that is necessary - after all, it isn't a person, just a script run on an account separate from my main one. But I'm willing to consider it if you can think of a good reason for it... WjBscribe 22:29, 5 October 2007 (UTC)[reply]
- Comment The bot approvals group will be able to revoke its bot authorization should a legitimate need arise. ((1 == 2) ? (('Stop') : ('Go')) 18:00, 6 October 2007 (UTC)[reply]
- 22. If the bot is successful as an administrator, do you plan to request that the bot be promoted to bureaucrat?
- A: LOL. No, absolutely not. This Bot will never be expanded beyond its present scope. If a Bot is needed to perform other admin tasks, a new Bot will be needed and will require separate approval. Similarly, if the Community ever wanted a crat Bot (not sure why they would, given crats mostly exercise discretion and Bots cannot), it would need to be a separate Bot and would require RfB. WjBscribe 22:29, 5 October 2007 (UTC)[reply]
- 23. Are modifications allowed to the Bot after it has been promoted to administrator? How much change is allowed before the Bot is no longer the same Bot that was given the administrative bit? Who makes that decision? (perhaps this might be better on the talk page) Uncle uncle uncle 22:56, 5 October 2007 (UTC)[reply]
- A: Yes, the bot will be modified to do its task better. If, for example, the code breaks because MediaWiki changes, I will change the code to make it function again. The bot may generate log data that would be saved to its userspace (to allow admins and others interested to monitor the bot). Otherwise, no expansion will be made to its task. The code may change, but it will always be open source under the GPL version 2. In short, the bot's scope will not increase or change; it's doing just the task listed at the top of this RFA, and nothing else, but the code may change if we run into any corner cases etc. —— Eagle101Need help? 23:07, 5 October 2007 (UTC)[reply]
- Question from Ariel
- 24. Regarding the page User:RedirectCleanupBot/Shutoff, do you think it is a good idea, and really necessary, to have a page that allows non-admins to shut off the bot? Considering the number of sleeper accounts out there, and established editors who are not always constructive, and add in the fact that most people will likely not have read this RfA, nor understand what the bot does (and won't bother to read the bot's page to find out), I'm not sure it is a good idea to have a page that allows any editor older than 4-5 days to turn the bot off. While I realize it is easy to simply start the bot up again, it seems this could be problematic. Most (all?) other bots require admin status to disable them, and it seems this one should be no different. ANI is there for anyone to request shutoff of the bot, and admin action could be taken within minutes of a report. I'll be interested in hearing any counter-arguments to this, but I see a potential for headaches. Ariel♥Gold 05:53, 7 October 2007 (UTC)[reply]
- A: I was reluctant to implement this feature for exactly the reasons you give, though I will say that non-admin Bot shutoff functions seem to be becoming increasingly common. I agreed to the implementation of the shutoff page only on the express proviso that if that page is repeatedly abused by sleepers, it will be fully protected (or even deleted and the relevant bit of Bot code removed). It is technically a window for abuse - but it requires a clever vandal to do it. In the interest of reassuring people, I've agreed to the feature but it would be withdrawn if it is abused. WjBscribe 06:07, 7 October 2007 (UTC)[reply]
- Outside view: My experience with OrphanBot and ImageRemovalBot is that having this sort of shutoff will produce one or two false alarms a year from people who don't understand the bot's purpose, and maybe one a year from someone being disruptive. --Carnildo 06:04, 7 October 2007 (UTC)[reply]
- Okay, thank you for the quick response, WJB, I really appreciate it. And I'd also like to thank Carnildo for providing insight into existing bots that have the same feature (I'd not seen a bot with a non-admin shut-off yet, so I was not sure if there were any). I'm relieved to know the page would be removed/fully protected should this become an issue, and I'm glad you'd already given this issue thought. That being said, I'm still working my way through this very long read here, but I think this bot is a great idea. I commend you, and Eagle for being so thorough and detailed in the bot's functions, and how you both address concerns quickly and insightfully. Now, back to reading! :o) Ariel♥Gold 10:51, 7 October 2007 (UTC)[reply]
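A non-admin shutoff page of the kind discussed above is commonly implemented by having the bot re-read the page before acting and refusing to run unless an agreed token is still present. A minimal hypothetical sketch in Python (the real bot is Perl, and its exact shutoff mechanism is not specified here; the token and the page-fetching callable are invented):

```python
RUN_TOKEN = "enable"  # assumed token the shutoff page must contain to allow runs

def may_run(fetch_page_text):
    """Check User:RedirectCleanupBot/Shutoff before acting.
    Any editor blanking or changing the page stops the bot; only
    restoring the token lets it resume."""
    try:
        text = fetch_page_text("User:RedirectCleanupBot/Shutoff")
    except Exception:
        return False  # fail safe: if the page cannot be read, do nothing
    return text.strip() == RUN_TOKEN
```

The fail-safe direction matters: any error or unexpected page content halts the bot rather than letting it continue.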
- Question from xaosflux
- 25. Will you pledge that this task will be the only administrative action taken by this account, without explicit community consensus given for additional tasks at WP:RFBOT and/or here at RFA? — xaosflux Talk 21:49, 7 October 2007 (UTC)[reply]
- A: I think I've answered this one a few times, but this page is growing so quickly it's getting hard to find the relevant bits of past discussions (see here, for example). This Bot will only perform the task requested here - that is, deleting broken redirects where the page history contains only one edit. It may also produce a log in its userspace (though in practice its deletion log should contain all necessary info). No other tasks (admin or otherwise) will be added to this Bot - further tasks would require a new Bot and a new BRFA and RFA. The only changes to the code will be to allow the task to be performed more effectively or to compensate for any MediaWiki changes that affect its operation, but that basic element (broken redirect, one edit only) will not change. One example I can think of is that if Special:BrokenRedirects were in future replaced by a better list of broken redirects, the Bot may be switched to work off that instead. WjBscribe 22:02, 7 October 2007 (UTC)[reply]
- A: Let me expand on that. No, this bot will not accept any more tasks. The task at the top of the RFA is the full extent of what this bot will do. I pledge my honor as a scout on that. The only thing it may do in addition to the stated task is some logging on its userpages. This is only to assist those monitoring the bot, and in no way will affect the greater encyclopedia. There will be no more tasks. Any additional bots needing an admin flag will need to go through RFA and allow the folks here to vet them. (This really was a good thing - people found problems in the source code and pointed them out to me, etc.) I personally recommend that future bots go through this as well. BRFA frankly does not get that many people. —— Eagle101Need help? 23:08, 7 October 2007 (UTC)[reply]
- Questions from Carcharoth
- 26. I may have missed something, but is it possible to grant temporary sysop rights to allow a trial period of editing, followed by BAG approval, followed by a new RfA? Ideally, to avoid wasting people's time, there should be a way to allow bots to have trial runs with admin tools. I don't fully trust test wikis or reviews of coding to iron out all the bugs, and would prefer to have the RfA conducted after a period of actual "onsite" editing by the bot. Carcharoth 00:03, 8 October 2007 (UTC)[reply]
- A: Yes, it's probably possible, but doing so would be rather pointless. I have already given everyone here an exact sample of what the bot will do. The example can be found at User:Eagle 101/RedirectCleanupBot. Anything with a -- 1 following it would have been deleted by the bot. (Note that most if not all of them are already deleted as of this post.) This is more or less as good as a trial - it's telling you exactly what it would have done. The RFA is mainly to gauge the community's opinion of the idea of a bot doing this task; the technical specifics of it will be left up to the following BRFA. Trust me, WP:BAG won't let a rampant bot go off. —— Eagle101Need help? 00:17, 8 October 2007 (UTC)[reply]
- Just to check - that list was generated by actually running the bot's code (minus the deletion stage) on the account, right? As opposed to running the bot's code on a list copied from Special:BrokenRedirects, or something. I notice that you added that list, rather than the proposed bot account. Is it possible to get the bot account itself to write a log of what it would have done, kind of like a dry run without the deletion tools - the proposed bot account could even say in the log "ERROR: Could not delete. No tools yet. <s>Stupid humans</s> Clever humans to be so cautious." :-) I'd support the bot if it showed a sense of humour... Anyway, the serious point is that I want to try and ascertain precisely how close to real, onsite conditions the experiment was run under. Also, have you deliberately tried to break the bot by throwing nasty problems at it? How did it cope with such tests? Carcharoth 00:55, 8 October 2007 (UTC)[reply]
- That was as real as it could get. Basically I ran the bot under a sock account of mine and had it write output to a file, which I then copy-pasted to the wiki. The test was a real live run, except with the stuff that deletes commented out. As I said, things with a -- 1 would have been deleted. (If you are really interested in how I did it: I simply commented out the delete stuff, then had it print to a file - the command was 'rdbot.pl > test.txt', where test.txt is the test output. About as real as you can get, sans deletions.) —— Eagle101Need help? 04:28, 8 October 2007 (UTC)[reply]
- That sounds fine. Thanks for providing the extra details. Carcharoth 09:54, 8 October 2007 (UTC)[reply]
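The testing approach described above - commenting out the deletion call and redirecting output to a file - can also be expressed as a permanent dry-run switch, so the tested code path and the live one never diverge. A hypothetical Python sketch (the delete callable and log format are invented; the real bot is a Perl script):

```python
def process(redirects, delete_page, dry_run=True):
    """Walk candidate redirect titles; either delete them or just log
    what would have been deleted, depending on dry_run."""
    log = []
    for title in redirects:
        if dry_run:
            # mirror the test-output convention: "-- 1" marks a would-be deletion
            log.append(f"WOULD DELETE: {title} -- 1")
        else:
            delete_page(title)
            log.append(f"DELETED: {title}")
    return log
```

Running with `dry_run=True` produces exactly the kind of log posted at User:Eagle 101/RedirectCleanupBot, while flipping one flag turns on live deletion with no other code changes.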
- 27. Is the bot GIGO-proof? By this, I mean does the bot rely on Special:BrokenRedirects to provide the right output, or does it perform its own checks to make sure that it hasn't been provided with a load of garbage? Similarly, are there any ways in which the process of inputting or reading the data could get corrupted, and are there any safety checks in place to catch things like that? I.e. does the bot check its own actions, or does it rely on humans to spot that it is going wrong? If it does rely on Special:BrokenRedirects, is the way that list is generated safe? I presume that anyone with the access levels to improperly affect Special:BrokenRedirects would be more likely to be engaged in far more destructive behaviour. Carcharoth 00:03, 8 October 2007 (UTC)[reply]
- A: It does rely on Special:BrokenRedirects as of now, but if an api.php extension is made, the code will use that instead. If the input does not exactly match the expected format, the bot will not run. (I will make doubly sure of this, and I'm sure BAG will as well.) —— Eagle101Need help? 00:17, 8 October 2007 (UTC)[reply]
- So in layman's terms it gets a list, but then it carries out its own checks as well (looks for #REDIRECT and looks in the edit history)? Does it look for categories? I've created lots of redirects with categories in a single edit, so the "no edit history" thing won't help here. Carcharoth 00:55, 8 October 2007 (UTC)[reply]
- I don't think categories should make a difference - redirects tagged {{R from alternative spelling}} or similar should be deleted if the target is deleted. Templates like {{R from merge}} will only be added to an article with existing content so won't be deleted by the Bot as the page will have more than one edit... WjBscribe 01:51, 8 October 2007 (UTC)[reply]
- Good points. The redirects I am concerned about are on my watchlist anyway, so I can always scan the list to see if they are still there. Carcharoth 09:54, 8 October 2007 (UTC)[reply]
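Refusing to run on malformed input, as promised above, usually means parsing the list with a strict pattern and aborting on the first line that fails to match. A hypothetical Python sketch of that guard; the "Source -> Target" line format here is an assumption for illustration, not the real markup of Special:BrokenRedirects:

```python
import re

# Assumed canonical line format: "SourceTitle -> TargetTitle"
LINE_RE = re.compile(r"^(?P<source>\S.*?) -> (?P<target>\S.*)$")

def parse_broken_redirects(lines):
    """Return (source, target) pairs, or raise ValueError on any line
    that does not match the expected format - garbage in, no run."""
    pairs = []
    for line in lines:
        m = LINE_RE.match(line)
        if not m:
            raise ValueError(f"unexpected input, refusing to run: {line!r}")
        pairs.append((m.group("source"), m.group("target")))
    return pairs
```

The point of raising rather than skipping is that any corruption of the source list halts the whole run instead of silently feeding bad titles to the deletion stage.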
- 28. Are we 100% sure that there are no creative ways for vandals to fool the bot into deleting redirects that are needed? I can think of a few possibilities, but I think they are so complicated that they would be discovered and fixed before the bot got round to dealing with any redirects. In any case, the "no edit history" thing allays most concerns. Carcharoth 00:05, 8 October 2007 (UTC)[reply]
- A: Pretty darned sure - I'd say 99.9% sure. Nothing is sure in this world, but there is no way to pass the bot invalid input that has been spotted so far. Any such flaw will be fixed ASAP if one is ever spotted. —— Eagle101Need help? 00:17, 8 October 2007 (UTC)[reply]
- Fair enough. Carcharoth 00:55, 8 October 2007 (UTC)[reply]
- 29. Timescale and rate of editing. If the bot was approved, how long would it take between the creation of a broken redirect and the bot deleting it? If it is not done fairly soon, then a vandal could "race the bot" and make edits to broken redirects merely to put the bot out of a job and force human admins to do the deletions (a fairly improbable if annoying scenario). If the bot deletes too quickly, it doesn't give humans time to clear up after themselves. Think what it is like with the signature bots. Some people would be happy to be lazy and leave the broken redirects for the bot to tidy up. Others might prefer to carefully review the broken redirects themselves and decide what is needed. Would the bot give them time to do this? Carcharoth 00:03, 8 October 2007 (UTC)[reply]
- A: This I believe has been mentioned somewhere - it will work off of Special:BrokenRedirects, approx 1-2 hours after it updates, though it may wait longer than that. The deletion rate will be no more than 4-6 per minute. In any case, the lag will be at least 1-2 hours, so any user making a redlinked redirect would notice immediately after they made the mistake and fix it long before the bot will nab it. —— Eagle101Need help? 00:17, 8 October 2007 (UTC)[reply]
- OK. Thanks for clarifying that. Carcharoth 00:55, 8 October 2007 (UTC)[reply]
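A rate cap of 4-6 deletions per minute, as described above, amounts to sleeping between actions. A minimal hypothetical sketch in Python (the real bot is Perl; the injectable sleep parameter is only there so the logic can be exercised without real waiting):

```python
import time

def throttled(actions, per_minute=5, sleep=time.sleep):
    """Perform callables at no more than `per_minute` per minute
    by sleeping a fixed interval between them."""
    interval = 60.0 / per_minute  # e.g. 12 seconds at 5 per minute
    for i, action in enumerate(actions):
        if i:
            sleep(interval)  # space out everything after the first action
        action()
```

The larger 1-2 hour lag is separate: it comes from waiting after Special:BrokenRedirects updates before starting a run at all, giving editors time to fix their own mistakes.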
- 30. Finally, what happens to redirects with no history, currently pointing at a redlink, that a human would realise should be redirected to a new destination, instead of being deleted? These are usually left after someone has forgotten to tidy up after deleting an article. Surely the bot shouldn't be deleting in such cases? Carcharoth 00:03, 8 October 2007 (UTC)[reply]
- A: I have done a lot of cleaning up broken redirects (I estimate at least 2000 deletions - with others being retargeted instead). I have never retargeted a redlink with only one edit to my knowledge - retargeting usually happens where the redirect has pointed at a number of targets, only the latest of which has been deleted. The overwhelming majority of one-edit redirects result from pagemoves or people creating redirects for alternative spellings/capitalisations of the target, which will have no life independent of the target. In the remaining minority of cases, if redirect X always pointed to article Y (now deleted), there was probably no pressing need for it to point to article Z. If someone later decides that a redirect to article Z at X would be useful, it can be created with very little effort. Clumsy retargeting to tangentially related topics is something that really needs to be avoided anyway - it seems to be the source of a lot of RfD discussions. Leaving those who know about a subject to decide what redirects would be useful is probably a more beneficial approach than admins doing cleanup trying to guess possible targets. WjBscribe 00:34, 8 October 2007 (UTC)[reply]
- Fair enough. I would tend to agree, but it is nice to have the reassurance of your experience. Just to niggle away at this point a little longer... Do you think the 2000 you have personally reviewed were a good spread of the various types of broken redirects? Were there any types that were very rare, or types so rare that you didn't come across them in the 2000 that you handled? Carcharoth 00:55, 8 October 2007 (UTC)[reply]
- Well, on the basis that when I clean up Special:BrokenRedirects I deal with every redirect listed, I think that's probably as good a random sample as I could get. Quite a few commentators here have said they also have experience of this task, and none has yet given an example of a redirect that the Bot would delete but should be kept. With the level of community involvement in this RfA, I hope experience of all "types" of redirects is present... WjBscribe 01:21, 8 October 2007 (UTC)[reply]
- That's convincing enough for me. Will you be intending to still scan a list daily/weekly? Or will you do less of that if the bot seems to be running with no problems? ie. Will you constantly look over its shoulder, or will you eventually leave it to the community to alert you of problems? Carcharoth 09:54, 8 October 2007 (UTC)[reply]
- At first I'll probably watch it a lot - curiosity apart from anything else. Later I guess I will probably watch it less, though I will probably have a look after most runs so I can deal with the redirects left behind (i.e. those with more than one edit). I would always keep a close eye on it if any changes to the code had been made recently... WjBscribe 10:09, 8 October 2007 (UTC)[reply]
- 31. Final question from me, I think. There have been cases in the past where Wikipedia has come to rely on a bot doing tasks, and then the bot operator leaves Wikipedia. In most cases, the slack is taken up again very quickly. In this case, it would be easy to get humans dealing with the broken redirects, but do you (a) have plans for what to do if either or both of you leave Wikipedia or go on an extended break? and (b) will you both pledge to (within reason) help in transferring, changing such bot-coding/bot-running roles if the situation arises? I presume that a new RfA would be needed if this happens, though some might dispute that (I suspect RfA is re-approving the people running the bot, while BAG is approving the bot itself). Anyway, what are the plans for future changes like this? Carcharoth 09:54, 8 October 2007 (UTC)[reply]
- A: If Eagle_101 leaves, several other Perl programmers have offered their help and I would be guided by BAG as to who was qualified to provide that assistance. If I leave, the code is simple and publicly available - so there's nothing I need to do as such. And yes, I accept that someone else taking over the Bot would need approval through whatever mechanism is then in place (presumably RfA if nothing changes). It should be a simpler process than this one though, as it would be only the operator that needed approval, rather than everything being considered at once. I would certainly agree to hand over access to this account (RedirectCleanupBot) if that is felt desirable - provided my replacement agreed to seek approval first (perhaps I could send the password to a bureaucrat, or to ArbCom to be passed on once a replacement is selected/approved?). WjBscribe 10:09, 8 October 2007 (UTC)[reply]
- Possibly an entirely new account, but as long as the issue has at least been considered. Carcharoth 10:16, 8 October 2007 (UTC)[reply]
- 32. Actually, one more! Would you both be prepared to retire the bot if Mediawiki software changes made it redundant (or less needed)? This is what actually happened with User:ProtectionBot, which was made redundant before its RfA finished, due to the introduction of cascading protection. Carcharoth 09:54, 8 October 2007 (UTC)[reply]
- A: If the Bot isn't needed, it would effectively self-retire as it would have nothing to do. As to less needed, I suppose that depends what you mean. If there are still one-edit broken redirects to be cleaned up, I guess the Bot may as well continue unless the number becomes very small (though as Wikipedia is expanding, that seems unlikely). But if the Bot stopped being useful, sure, it'd be time for it to retire and I would ask a steward to desysop it rather than leave it as an inactive admin account... WjBscribe 10:09, 8 October 2007 (UTC)[reply]
- Thanks for clearing that up. And many thanks to you and Eagle 101 for answering the questions I had. There turned out to be more than I thought at first! :-) Carcharoth 10:16, 8 October 2007 (UTC)[reply]
- A completely tangential question from Миша13
- 33. Why the hell are you going through this nonsense? Why not either a) run it on your own accounts like everyone else does or b) reform the policy first, so that such absurdities are not required? (Also compare with Freakofnurture's oppose - I won't go that far, but I do protest this as well.) If this passes (and it will), it sets a precedent for this silliness in the future.
- A: The precedent was set by Wikipedia:Requests for adminship/TawkerbotTorA and Wikipedia:Requests for adminship/ProtectionBot and no crat has been willing to give a Bot account +sysop without BRFA and RFA. If this passes, it will demonstrate that (a) the Community is not, as is often claimed, hostile to the idea of official admin Bots and (b) that it is entirely possible for someone to obtain official approval to run an admin Bot. This should confirm that there is indeed a mechanism (albeit a tortuous one) for obtaining such approval. I really don't think the policy was ever going to be reformed while there was a lingering belief that all admin Bots are bad. Having an admin Bot that can be pointed to so as to say, "Look, there is an officially approved and recognised admin Bot that is helping the project and not going crazy" should make it easier for future Bots, not harder. WjBscribe 20:53, 8 October 2007 (UTC)[reply]
- <gasp> It was a SMOKESCREEN! RUN!!! The AdminBots are coming!!! :-) One of them even has a million edits. We will feel so inferior! <ahem> "We now return you to your scheduled programming." Carcharoth 21:40, 8 October 2007 (UTC)[reply]
- I'M IN UR ENCYCLOPEDIAS, DELETING UR ARTICLES. Миша13 21:52, 8 October 2007 (UTC)[reply]
- Question from 1of3
- 34. If a deleted article, such as a biography with single-edit redirects from alternative spellings of the name, makes it past DRV, then are those alternative-spelling redirects gone with no easy way to find or recover them? 1of3 10:05, 10 October 2007 (UTC)[reply]
- A: Yes, but the same thing happens with humans doing this task - if there is a long enough gap between the deletion and DRV that an admin has cleaned up broken redirects, those redirects are deleted now (admins can't guess at the likelihood of the target being recreated in future). If the redirects to the article were useful, they will no doubt be recreated - either by an editor consciously recreating the redirects or by people creating them when they type those in as a search term and don't end up at the desired article. The Bot will not run for a few hours after the list is updated, so articles whose mistaken deletion is quickly reversed, or which are quickly taken to DRV, should keep their redirects (which is something that can't be guaranteed at present with human review). WjBscribe 10:14, 10 October 2007 (UTC)[reply]
General comments
Please keep criticism constructive and polite. Remain civil at all times.
Having reviewed the code for the bot, I can pretty confidently say that the bot is unlikely to misbehave and will perform as specified. My reservation, however, is that I'm not sure the scope of the bot's function is really one necessary for an adminbot. The task it is performing -- deleting approx. 300 pages per week -- is one that could be accomplished quite simply by a single admin, or small group of admins, in under an hour per week. What concerns me is that granting this bot adminship stands to set an as-yet-unestablished precedent that adminbots are generally accepted. This is a precedent I would be fine with having set, but would see it more appropriately established through the promotion of Curpsbot or similar currently active, though "illegitimate," adminbots. I also worry that the scope of this bot will be expanded to include other admin tasks following its promotion and with only the consent of the bot approvals group, a group that generally shares my view and readily approves most adminbot functions. My support for the promotion of this bot would be strictly on the condition that the bot's scope not be expanded to include any other tasks without bringing the bot back to RfA for reconfirmation by the community. Should the bot undertake any additional tasks, with or without the approval of the BAG, the bot should be stripped of its admin status and deactivated on sight. If the bot's operator and writer can agree to these terms, and if the BAG agrees not to authorize any further tasks for this bot without a further RfA for the explicit tasks, then the bot has my support. AmiDaniel (talk) 21:34, 4 October 2007 (UTC)[reply]
- I will say right now, this bot will not do any further tasks. I have always advocated that admin bots should have one task per approved account. That allows the community to make its own choices on a task-by-task basis in a more public forum than WP:BRFA. Any new task warrants a new bot account, and a new RFA. —— Eagle101Need help? 21:39, 4 October 2007 (UTC)[reply]
- This Bot will perform no other tasks. May a steward take both mine and the Bot's flag if I lie. As Eagle says, a new task means a new Bot and a new RfA. WjBscribe 21:49, 4 October 2007 (UTC)[reply]
- I would add that if the bot operator began using the admin account for anything outside the scope under which the administrator flag was granted, I would personally request a steward desysop the account, as well as block the account. So, I don't think there's any danger. That plus the fact that I trust WJBscribe implicitly, meaning that me thinking hypothetically about what I'd do is totally unnecessary :-) --Deskana (talk) 23:14, 4 October 2007 (UTC)[reply]
- Thank you all for your reassurance -- glad that we're clear on that point :) AmiDaniel (talk) 00:12, 5 October 2007 (UTC)[reply]
My concern, somewhat alleviated by the answer to Q.8, is the possibility of inappropriate rights promotion. Grabbing admin rights through poor code management is, quite literally, the oldest trick in the hacker's book. Provided WJBscribe and Eagle 101 understand that losing control of their bot's code will be no different than losing control of their admin account, with the sanctions that might entail, then I'm perfectly satisfied to support. Ronnotel 23:25, 4 October 2007 (UTC)[reply]
- I do want to make it clear that if I wanted to go rogue and run an unauthorized bot, I could do so just as easily under my main admin account as under any special "bot" account. It's technically the same thing; the bot account just separates WJBscribe's deletions from the bot's deletions. That trick you speak of applies to every one of our 1,200+ admins. —— Eagle101Need help? 23:44, 4 October 2007 (UTC)[reply]
- I just wanted to make sure that the chain of admin rights is unbroken. From what you're telling me, all involved in this project are already admins so this shouldn't be a problem. Thanks. Ronnotel 00:01, 5 October 2007 (UTC)[reply]
- Along the lines of this inquiry, I found one design decision with this bot rather strange. Typically, or at least in my opinion, it is very unsafe practice to store passwords inside the source code, especially when the source code that contains this password is on a (even quite secure) shared server. I would definitely recommend that Eagle revise the code to 1) prompt the operator for a password each time the bot is run, 2) receive the input of this password with echoing disabled, 3) log out at the end of the session, and 4) nullify the memory used to store this password immediately after the log-in has been completed successfully. (1) is obvious -- it ensures that the password is not stored in an actual file anywhere on the server, and so cannot be accessed even by server admins. (2) prevents shoulder-surfing and accidentally copying the password to somewhere it shouldn't be. (3) protects against session hijacking by ensuring that sessions only last for a brief amount of time. (4) guarantees that the password can't be retrieved from the toolserver's RAM or other shared memory (naturally, this can, theoretically, only be done by server admins, but it's possible nonetheless). I also might recommend that the bot operate off the secure server instead of the http server, which would make it nearly impossible to fish out the bot's password. This may be a bit of overkill, but I think (1) and (4) should be absolutely mandatory for any bot, regardless of whether it has an admin flag or not. AmiDaniel (talk) 00:11, 5 October 2007 (UTC)[reply]
- Number 1 at least will be implemented; I wrote the bot up in about 2 hours and did not think about that. The bot will most likely *not* be run on the toolserver, but on a computer resource of WJBscribe's. I will of course by logical extension nullify the memory. I'll look into doing the others. :) —— Eagle101Need help? 00:19, 5 October 2007 (UTC)[reply]
- Great to hear :) Thanks for the speedy response. AmiDaniel (talk) 00:23, 5 October 2007 (UTC)[reply]
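AmiDaniel's four hardening recommendations above can be sketched in a few lines. This is an illustrative Python sketch, not the bot's actual Perl code; login_interactively and session_login are hypothetical names, and the memory-scrubbing step is best-effort only.

```python
import getpass

def login_interactively(session_login):
    """Prompt for the bot password at run time instead of storing it in
    the source file (recommendations 1 and 2 above).

    session_login is a hypothetical callable that performs the wiki
    log-in with the given password and returns True on success."""
    # getpass reads from the terminal with echoing disabled, so the
    # password never appears on screen or in a file on the server.
    password = getpass.getpass("Bot password: ")
    try:
        return session_login(password)
    finally:
        # Best-effort scrubbing (recommendation 4): Python strings are
        # immutable, so this only drops the reference for collection; a
        # language with mutable buffers could zero the memory outright.
        del password
```

getpass is the standard-library way to read a password without echoing, covering recommendations (1) and (2); recommendation (3) would be handled by logging the session out when the run ends.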
I am worried about this bot for a couple of reasons. First, the bot can be compromised and block everyone (admins included) at a time when no stewards are available (i.e. late at night). Second, I am wondering where this TestWiki is so that we all can see what the bot actually does. Miranda 01:19, 5 October 2007 (UTC)[reply]
- Do you think it any more likely to be compromised than a human admin account? It would certainly be easier to tell if it had been, as unusual behaviour would be easily recognisable. Remember that any admin blocked by the Bot can unblock themselves. If you want to see what it does, look at User:Eagle 101/RedirectCleanupBot. The entries on that list with 1 by them are the ones the Bot would have deleted - all others would be left alone. WjBscribe 01:24, 5 October 2007 (UTC)[reply]
- The bot has no ability to block, zero, zip, nada, zilch, 0. It's not in its code. I offer up the code for your review at http://tools.wikimedia.de/~eagle/rdbot.pl; mind you it's slightly out of date, as I'm working on adding a few new safeguards/features, such as checking a page to make sure it is allowed to run. This page will be able to be modified by any registered user. The absolute worst that can happen here is the bot deletes a redirect it's not supposed to, which hopefully someone would report to me so that I could fix it ;). Please do scour the test results and let me know if something should not be deleted that is marked for deletion. —— Eagle101Need help? 01:29, 5 October 2007 (UTC)[reply]
- The testwikis in question are wiki.xyrael.net (eagle's official test) and st47.dyndns.org/wiki/index.php (my initial tests of the code and also some development work). They may be a little hard to follow, however. --uǝʌǝsʎʇɹnoɟʇs 01:41, 5 October 2007 (UTC)[reply]
- Thanks for your responses. Miranda 01:55, 5 October 2007 (UTC)[reply]
- Why can't we just have a feature in the MediaWiki software to eliminate R1s, each time they appear? — Nearly Headless Nick {C} 16:00, 5 October 2007 (UTC)[reply]
- Because MediaWiki is general purpose software and it does not make sense to implement this feature in the core software when it is just one Wiki that wants it. ((1 == 2) ? (('Stop') : ('Go')) 18:10, 6 October 2007 (UTC)[reply]
- Although, if the feature were implemented in MediaWiki, I doubt anyone would complain about it. The more human-like software is (the bot has an account? why, humans have accounts!), the more people are irrationally afraid of it. GracenotesT § 21:45, 6 October 2007 (UTC)[reply]
Coding comments
(I couldn't find anywhere else to put this, so I made a new section just here.)
There's a possible slight bug in the code, although it's unlikely to come up. If a broken redirect is deleted, then a new page containing no wikilinks is created at the same title, the page isn't edited between then and when the bot runs, and the broken redirect is still cached in Special:BrokenRedirects, then the bot will delete the new page due to the else block of the if($targetname) check. Why is that fallback there? (I'd prefer it if the bot only deleted redirects that contain at least one link; I'm not sure if it's possible to create a page which qualifies as being a redirect without it containing a link, but if it is, such pages should be sufficiently infrequent that the bot wouldn't be needed to delete/correct them, so I'd prefer it if the redirect were removed.) I'd also prefer it if $targetname was explicitly set to undef as it was created, and if the if() two lines down explicitly checked defined($targetname) (otherwise there will be weirdness in such cases as a broken redirect to 0, due to the way Perl treats boolean values, and this will also avoid some potential interpreter warnings about undefined variables; you are running this with warnings on, presumably?). --ais523 11:53, 5 October 2007 (UTC)
- Oh, and I'd like to see the new source code (after the changes you mention above and the ones I've just requested have been made) before I express an opinion on this RfA. (I have nothing against adminbots, but I don't like supporting RfAs without good evidence, usually in the form of contributions (but in this case in the form of source code), of how the new admin is going to behave.) --ais523 11:55, 5 October 2007 (UTC)
Actually, there's possibly a slightly worse problem along the same lines as the old one; if the first link in the new article is a redlink, then that will also cause the new page to be deleted. The bot absolutely needs to check that the page in question is, in fact, a redirect, before trying to delete it. (A case-insensitive check for #REDIRECT at the start of the article's content should rule out false positives completely (which is important for an adminbot) without producing too many false negatives (adminbots should err on the side of caution, and a false negative is no big deal because it will just lead to the adminbot not deleting a broken redirect when it could have done)). --ais523 12:01, 5 October 2007 (UTC)
- Ok, checking over this now. —— Eagle101Need help? 12:30, 5 October 2007 (UTC)[reply]
- Ok, first off I should thank you for actually checking the code :), that bit was introduced by ST47 in his patch above. I merely checked that patch to make sure it was not doing anything untoward, and added it to the code base as it was given to me by a BAG member. I appreciate you having found that rather than me having to debug it at a later point. That said, I have put up a new version of the code at the same location as before; the script now checks User:RedirectCleanupBot/Shutoff for the string {allow}, and if that is not there, it shuts off. Please do review the source code and if you have any further additions or changes, I would appreciate a unix diff patch, or you can just describe them ;). —— Eagle101Need help? 13:11, 5 October 2007 (UTC)[reply]
- There's a slight error in the uninitialisation of $targetname: "" is a blank string, and is therefore defined. You want to write my $targetname = undef; rather than my $targetname = ""; (sorry for not sending a patch; last time I tried, this happened). Likewise, the second check for defined (the one containing next REDIRECT) should just be a test for non-emptiness ('' ne $target->{'content'} is one way to do this, I think, but there are many others). One other error (this one won't cause the bot to do anything crazy, though, just causes a few false negatives): $pageTitle will come out HTML-encoded from the regex you use (I can't think of an easy way to avoid this), and as a result an attempt to check "casting directors society of canada" (to take an example from the current special page) will instead end up checking [["casting directors society of canada"]], which is a different page. (Hmm... the deletion log from the correct page shows up there, but it's definitely a different page... maybe this is a MediaWiki bug?) However, the sanity check that checks whether the page is a redirect and whether it is broken will catch this, meaning that this is a false negative; a fix for this would be nice but complicated, I think (unless Perl has an XML-unescape function somewhere in its libraries, which it probably does, for this purpose; if it does, use it to unescape $pageTitle just inside loop REDIRECT). --ais523 17:35, 5 October 2007 (UTC)
- Correct, I know about the quote problem, and frankly as long as the bot does not take that error and delete something else, I'm ok with it for now. It means it won't touch anything with a " in it. As for $targetname, I was having a minor problem debugging something and forgot to fix that; it would have shown up rather quickly in any serious debugging, but thanks for pointing it out. I'll make the test back to a test for non-emptiness (that is what it was before I tried to get cute >.>). Thanks again. —— Eagle101Need help? 17:44, 5 October 2007 (UTC)[reply]
- You should be able to fix the quote problem easily using the HTML::Entities module. —Ilmari Karonen (talk) 02:47, 6 October 2007 (UTC)[reply]
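For comparison with the HTML::Entities suggestion, here is a hypothetical Python sketch of the same fix: unescape the entity-encoded title scraped from Special:BrokenRedirects before looking the page up. The function name is an assumption for illustration, not from the bot's code.

```python
import html

def normalise_scraped_title(raw_title):
    """Titles scraped from the special page's HTML arrive entity-encoded
    (e.g. &quot; for a double quote), so a title containing quotes would
    otherwise be looked up under the wrong name. Unescaping restores the
    real title, turning ais523's false negatives back into pages the bot
    can actually process."""
    return html.unescape(raw_title)
```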
- I've emailed Eagle 101 with another coding flaw. I found a situation in which the bot displays the wrong number of edits - but the number displayed is 1) independent of the actual # of edits and 2) not 1. So long as the bot is testing for edits = 1, this flaw will only mean that some pages escape deletion by the bot. As such, I don't see it as a "no go" problem, but a cautionary problem. Details are not being posted here due to WP:BEANS. GRBerry 13:49, 10 October 2007 (UTC)[reply]
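Taken together, the safeguards discussed in this section (the case-insensitive #REDIRECT check ais523 asks for, the single-revision rule from the nomination, and erring toward false negatives) amount to a simple predicate. The following is a hypothetical Python sketch of that logic, not the bot's actual Perl code; the function and parameter names are illustrative assumptions.

```python
import re

# Case-insensitive check for "#REDIRECT" at the very start of the page
# text, as ais523 suggests: a false positive could delete a real page,
# while a false negative merely leaves one broken redirect for a human.
REDIRECT_RE = re.compile(r"\A\s*#REDIRECT", re.IGNORECASE)

def safe_to_delete(page_text, revision_count, target_exists):
    """Return True only when every criterion from the nomination holds:
    the page is actually a redirect, it has exactly one revision (so no
    useful history to preserve), and its target page is missing."""
    if revision_count != 1:
        return False   # history might be worth keeping
    if target_exists:
        return False   # not a broken redirect at all
    return bool(REDIRECT_RE.match(page_text))
```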
Discussion
- Can an admin, or in this case an adminbot, make deletions when blocked? If yes, does the bot have something to turn itself off if it is blocked? --Maxim(talk) (contributions) 00:04, 5 October 2007 (UTC)[reply]
- No. I've tested this on a TestWiki and admins accounts cannot delete anything while blocked. WjBscribe 00:08, 5 October 2007 (UTC)[reply]
- As an administrator, bot operator, and Perl programmer, I offer my assistance to WJBscribe if he needs it. ^demon[omg plz] 12:48, 5 October 2007 (UTC)[reply]
- Meh, this just seems to me like an attempt to loosen up the Wikipedia community to the idea of adminbots. And an unnecessary one at that (minus the fact that in the past such attempts have failed per RfA...). --Iamunknown 08:01, 10 October 2007 (UTC)[reply]
Support
- Per general support for adminbots. Is this a move towards sanity? Moreschi Talk 20:54, 4 October 2007 (UTC)[reply]
- As the programmer, I support my code. —— Eagle101Need help? 20:55, 4 October 2007 (UTC)[reply]
- Support as bot operator who has reviewed and tested the code - it works as expected, there are no security issues or potentially bad situations that could cause the bot to perform unexpected operations. --uǝʌǝsʎʇɹnoɟʇs 20:58, 4 October 2007 (UTC)[reply]
- Strong support This will be very useful to have, and of course I trust the owner and coder :) Majorly (talk) 20:59, 4 October 2007 (UTC)[reply]
- Strong support (note: Bot Approvals Group member) - code is public, it's hard to screw up, and no judgment whatsoever is involved on the bot's part. It's a no-brainer support -- Tawker 21:01, 4 October 2007 (UTC)[reply]
- Support - (many edit conflicts) The concept is good and I trust that the code is sound. I was initially concerned on understanding that this was to be an admin bot, and no less a deletion bot, but since it will only delete under such strict circumstances, I cannot imagine it being a problem. Nihiltres(t.l) 21:02, 4 October 2007 (UTC)[reply]
- Support ➪HiDrNick! 21:03, 4 October 2007 (UTC)[reply]
- Support — if it won't make decisions, why not? --Agüeybaná 21:06, 4 October 2007 (UTC)[reply]
- That is why not. Someone making sentient decisions can do a better job than a bot here. Dekimasuよ! 08:30, 6 October 2007 (UTC)[reply]
- Obvious support. A job that a bot can easily perform and a bot with thoroughly reviewed code. Will (aka Wimt) 21:07, 4 October 2007 (UTC)[reply]
- Support Job is uncontroversial and tedious, perfect application for a bot. I have read, and understood the source code, and it will function as described. ((1 == 2) ? (('Stop') : ('Go')) 21:08, 4 October 2007 (UTC)[reply]
- Support - code makes sense, and is a good opportunity for us to take the step of allowing admin bots, especially for menial chores such as this (where the human admin is effectively taking on the role of robot when deleting the redirects). Important point is that bot only deletes pages with one edit, so most applications will be to pseudonyms created by the author of a bio article since deleted under CSD G7. Effectively, most applications of this bot will simply extend the human speedy deletion procedure. Martinp23 21:13, 4 October 2007 (UTC)[reply]
- Definite support! We definitely need a bot like this. I'm pretty sure that anything coded by Eagle101 would handle just fine. :) *Cremepuff222* 21:14, 4 October 2007 (UTC)[reply]
- Support As someone who's done this tedious task, it will be a relief for a bot to take over! I'm happy that, with the restrictions specified, there is little chance of any problems. I trust the author's and code reviewers' judgement that it's bug free. → AA (talk) — 21:21, 4 October 2007 (UTC)[reply]
- Strong support, and I also recuse myself from closing this RFA. Andre (talk) 21:21, 4 October 2007 (UTC)[reply]
- Support - This looks to me like a bot task, and the bot will be operated by an admin. Od Mishehu 21:25, 4 October 2007 (UTC)[reply]
- Support as BAG member. Simple bot task, a matter of giving a +sysop flag to a +sysop user already. An incredibly simple task, no major requirements. Code is available which is a big plus. Matt/TheFearow (Talk) (Contribs) (Bot) 21:29, 4 October 2007 (UTC)[reply]
- I am in general support of this bot getting an admin bit, but I have to wonder if there are more pressing tasks that deserve admin bots. The reasoning behind the nom is that the task is fairly tedious and it is a "trivial task [that] could be done just as well automatically." That describes many admin tasks... (CSD backlogs, anyone?) This task does not seem so pressing, but if it can allow a kind-of back door allowance for more admin bots with solid code and trusted operators, why not? --Jeffrey O. Gustafson - Shazaam! - <*> 21:33, 4 October 2007 (UTC)[reply]
- Hmm, let's see. No answers to questions, no indication that it will communicate well with users, not enough prior experience in the area where it plans to use its admin powers, and an edit count of 0 is a bit low. Yes, I'm joking.
I've just reviewed the code. I can only see one way it would go wrong, and it's in an extremely improbable situation: an announcement is added to the Wikipedia interface, containing a bulleted list, and one bullet point begins with a link to a page that's only been edited once. The bot would then delete that announcement page, thinking it was a dead redirect. I can't see this happening in reality, and it wouldn't be a huge problem even if it did, since it would only affect one page and would be easily fixed. Human admins are much more likely to make much bigger mistakes.
In short, I trust this bot to do what it's meant to do, and I support. rspeer / ɹəədsɹ 21:34, 4 October 2007 (UTC)[reply]
- Support, meets all of my criteria in User:Gracenotes/Admin bot. Two not-so-essential requests, though: 1. Could the bot recognize the {{bots}} templates for exceptions? 2. Could the bot please include the target page in the edit summary? The former will allow for (as stated) exceptions, in case one is needed, and the latter can allow for human review, and help in situations where retargeting is more appropriate than deletion. Granted, these changes will make the code less pithy, but in my opinion they're worth it. GracenotesT § 21:35, 4 October 2007 (UTC)[reply]
- We can include the target page. —— Eagle101Need help? 21:37, 4 October 2007 (UTC)[reply]
- Support - the operator is an admin and the coder is an admin. The bot has already been approved by WP:BRFA. I've read the code and it seems fine, assuming the format of our history pages doesn't radically change. ~a (user • talk • contribs) 21:38, 4 October 2007 (UTC)[reply]
- The bot has not yet been approved, approval of the bot is pending the result of the RFA, at least that is how I understand it. —— Eagle101Need help? 21:41, 4 October 2007 (UTC)[reply]
- Do I trust WJBscribe? Absolutely. Do I trust Eagle 101? Absolutely. By extension, can I support this bot? Absolutely. Wizardman 21:53, 4 October 2007 (UTC)[reply]
- Support This is a beneficial and no-risk task, perfectly suited for a bot. —Ruud 22:00, 4 October 2007 (UTC)[reply]
- Support --while it may set precedent, its advantages overweigh its disadvantages. Danny 22:07, 4 October 2007 (UTC)[reply]
- Support. Would this be the first admin bot to pass an RfA? Anyways, I trust the owners and the users who have reviewed the code. RyanGerbil10(C-Town) 22:10, 4 October 2007 (UTC)[reply]
- Support based on assurance by WjBscribe and Eagle101 that the task scope will never be expanded. Upon further rumination, rather than oppose or remain neutral on this particular RfA, it makes much more sense for me to say that I will strongly oppose any future bot RfA where the operator does not explicitly make this same assurance. I’ll be striking out my neutral below in a second. --barneca (talk) 22:15, 4 October 2007 (UTC)[reply]
- Very Weak Support - I've always hated the idea of having a bot become an admin, but reassurance counts. I'll give it a go for now. --Hirohisat 紅葉 21:44, 4 October 2007 (UTC)[reply]
- I strongly support this nomination. It doesn't bother me that this nominee isn't a person. All bots I know about are extremely useful and needed. Acalamari 22:28, 4 October 2007 (UTC)[reply]
- Support this thoughtful and reasonable request.--chaser - t 22:42, 4 October 2007 (UTC)[reply]
- Support Shouldn't even need to go through this process. east.718 at 22:53, October 4, 2007
- Support. The code looks correct, and this looks like a useful task for a bot. --Carnildo 23:00, 4 October 2007 (UTC)[reply]
- Support even though the bot has unacceptably low template and portal talk counts. :-) - Philippe | Talk 23:18, 4 October 2007 (UTC)[reply]
- OH NOES! ADMIN BOTS WILL BLOCK US ALL!!!!! Err ... I mean ... strong support. Seriously, I've always thought opposition to adminbots was silly. I trust a bot more than I do a human. Humans can go rogue. Humans can push a POV. Bots just do what you tell them to. --B 23:30, 4 October 2007 (UTC)[reply]
- Support there is nothing wrong, very useful bot. Carlosguitar 23:34, 4 October 2007 (UTC)[reply]
- Support - trust the bot operator and the coder. Have overcome fear that bots will block us all. If the bot needs admin powers to accomplish this one tedious task, it seems reasonable to allow that. Operator and coder have allowed that it be immediately blocked if they/it make(s) a mistake, so I see no reason to disallow this experiment in cleanup. - Kathryn NicDhàna ♫♦♫ 23:36, 4 October 2007 (UTC)[reply]
- Support I just tried going through broken redirects myself, and even with Twinkle deletion to speed it up, it's still a boring, menial task. The faster we can clear such uncontroversial backlogs the better. Mr.Z-man 23:41, 4 October 2007 (UTC)[reply]
- Strongest possible support. Administrators shouldn't be stuck doing work an automated process can manage with a 0% error rate. We choose administrators here because they write content, or are good at dealing with people, or know their way around the more complicated sections of this website. To drag people away from doing much more meaningful work here is ludicrous. I've had a hell of a lot of experience with Eagle's coding with the old IRC Spotbot stuff and I've never, at any time, had any suspicions he doesn't know exactly what he's doing. I've also rustled up some half-baked utilities for Wikipedia myself, so I believe I'm qualified to repeat the "a program can only do what it's designed to do" idiom. Nick 23:45, 4 October 2007 (UTC)[reply]
- Support given the strong promises made to never expand the bot's approved task. --ElKevbo 23:53, 4 October 2007 (UTC)[reply]
- Support No reason not to, plus, the bot saves me the annoyance! :) SQL(Query Me!) 23:54, 4 October 2007 (UTC)[reply]
- I trust WJBscribe and Eagle 101 for the code so the bot should be fine. A bit irregular to trust a bot, but the source code has been reviewed by AmiDaniel so everything seems good. --DarkFalls talk 23:59, 4 October 2007 (UTC)[reply]
- Support. I have no problem supporting this endeavor provided my concern (noted above) about maintaining an unbroken chain of admin rights in the management of the code base is met. Ronnotel 00:03, 5 October 2007 (UTC)[reply]
- Support I can definitely see why this bot should have adminship. Captain panda 00:09, 5 October 2007 (UTC)[reply]
- Support. I have no problem with a tightly-restricted adminbot. I'm a little concerned about unanticipated corner cases, so I hope there will be a log created that everyone can check, before too many redirects are processed. Regarding Question 4, from F Mita, I argue that the Talk pages of dead redirects should be kept for human inspection. (There shouldn't be very many of them). I also support the two features requested by Gracenotes, above. EdJohnston 00:10, 5 October 2007 (UTC)[reply]
- To clarify re: Q.4 - they will be. They will only be deleted if the talkpage is also itself a dead redirect - otherwise the Bot will leave them alone. WjBscribe 00:14, 5 October 2007 (UTC)[reply]
- We already have a log. If you would like to check the bot's activities, please see User:Eagle 101/RedirectCleanupBot. Anything with a -- 1 next to it would have been deleted by the bot. —— Eagle101Need help? 00:30, 5 October 2007 (UTC)[reply]
- Technically, I think that is a copy of a log that you cut and pasted onto that page. A true log would have been created by User:RedirectCleanupBot. Carcharoth 09:35, 8 October 2007 (UTC)[reply]
- Technically the log is nothing but a copy of my computer's RAM or hard drive. It looks no different if I paste it in or if I program the bot to paste it in. :) —— Eagle101Need help? 18:00, 8 October 2007 (UTC)[reply]
- Support with the hope that over time we may see more tasks given to bots with the sysop bit turned on. ++Lar: t/c 00:12, 5 October 2007 (UTC)[reply]
- support As long as bots with admin tools are coded properly, they should not do anything like delete the main page as they are not self aware... yet --Hdt83 Chat 00:33, 5 October 2007 (UTC)[reply]
- support as per B.Pharaoh of the Wizards 00:38, 5 October 2007 (UTC)[reply]
- Don't tell Aphaia. – Steel 00:43, 5 October 2007 (UTC)[reply]
- Strong support. Daniel 00:56, 5 October 2007 (UTC)[reply]
- Support GizzaDiscuss © 00:57, 5 October 2007 (UTC)[reply]
- Support Okay. Jmlk17 01:03, 5 October 2007 (UTC)[reply]
- Support Bot will do a menial yet necessary task. Most concerns are addressed, and I'm highly confident in Will's and other reviewers' judgment about the bot. However, a bot is only as good as its curator, so I suggest constant supervision by both Eagle and other admins. - Mtmelendez (Talk|UB|Home) 01:40, 5 October 2007 (UTC)[reply]
- Support. Seems an appropriate task, with little downside should any unanticipated cases arise. Gimmetrow 01:53, 5 October 2007 (UTC)[reply]
- Support, as there's no way that (as it stands) this bot will do any harm with a delete button.—Ryūlóng (竜龍) 01:58, 5 October 2007 (UTC)[reply]
- Support Pile-on support from this admin ELIMINATORJR 02:04, 5 October 2007 (UTC)[reply]
- Support - My questions are well answered. My suspicions are proven false, since this bot does not have an admin flag. Miranda 02:05, 5 October 2007 (UTC)[reply]
- It's the distant future, the year 2000. The humans are dead. -- John Reaves 02:08, 5 October 2007 (UTC)[reply]
- Support When a task needs to be done, and that task requires a certain access level, it should be granted. This bot, like all bots, can and will be shut down if it ever begins rampaging; nothing to fear. - auburnpilot talk 02:14, 5 October 2007 (UTC)[reply]
- Support - reasonably limited scope (to prevent false positives being fucked up by the bot), operator is not a dimwit, apparently there will be a publicly usable shutoff function, and hey, if it goes berserk there's always undelete+block, no? MessedRocker (talk) 02:19, 5 October 2007 (UTC)[reply]
- Ok. My concerns have been addressed, and I trust the operator. I'll go along with this. Mercury 02:22, 5 October 2007 (UTC)[reply]
- Support We definitely need a bot like this. No major concerns here. --Siva1979Talk to me 02:48, 5 October 2007 (UTC)[reply]
- Support Per the commitment that the bot never adds any other tasks. --JayHenry 03:06, 5 October 2007 (UTC)[reply]
- Support - I doubt the bot will go rogue, and if it does, it can be fixed, right? ;) Also, I think the user is trusted enough to maintain this bot. Regards, Ανέκδοτο 03:08, 5 October 2007 (UTC)
- Support. Is this the first Adminbot? J-ſtanTalkContribs 03:22, 5 October 2007 (UTC)[reply]
- Yes, well... first official adminbot, there have been and probably are some bots running that are not publicly known. This will, if approved, be the first adminbot to use the functions with the full knowledge and will of the community... at least the portion that watches RFA. —— Eagle101Need help? 03:49, 5 October 2007 (UTC)[reply]
- Support. Looks good to me. I'm glad to see it'll have an emergency shutoff. FolicAcid 03:24, 5 October 2007 (UTC)[reply]
- Code looks fine to me. MER-C 03:57, 5 October 2007 (UTC)[reply]
- Support While this is the first I know of such things, I too have visions of rouge FrankenBots. However, all seems well. —Ignatzmicetalkcontribs 04:02, 5 October 2007 (UTC)[reply]
- Support Nothing controversial, so I'm fine with it. Nishkid64 (talk) 04:12, 5 October 2007 (UTC)[reply]
- Support - adequate assurances from the nomination and the management of two very well trusted editors earns my full support. Sephiroth BCR (Converse) 04:56, 5 October 2007 (UTC)[reply]
- Support The point that some opposers seem to miss is that these redirects are currently being deleted by hand by admins who, of course, have a brain but choose not to use it too much when doing these repetitive, beyond-boring deletions. Are they being lazy? No, they're being efficient. If anything, this bot may help in decreasing the mistakes made when deleting broken redirects. It would be trivial for the bot to be modified so that it tags broken redirects that have a history, so that they can be considered with more care by admins. Pascal.Tesson 05:24, 5 October 2007 (UTC)[reply]
- True, though in practice just looking through the remaining entries in Special:BrokenRedirects when the Bot has finished its run will be just as effective as having the Bot tag stuff. And it seemed like a good idea to keep this Bot's code as simple as possible... WjBscribe 05:28, 5 October 2007 (UTC)[reply]
- Support with complete trust and a slap in the face with a big wet trout to opposers :) --Benchat 05:33, 5 October 2007 (UTC)[reply]
- This is a surprise, I really didn't think I could ever support a bot with a flag. However, this is an absolutely non-controversial, menial task and I do on a personal level trust the coding knowledge of WJBScribe, AmiDaniel, and Eagle 101. So this is a rare trust support from myself. Keegantalk 05:49, 5 October 2007 (UTC)[reply]
- support. I don't program, so I haven't reviewed the code. But I trust WJBscribe and Eagle_101, as well as the others who have reviewed the code. This type of task is exactly what an AdminBot would be useful for. -- Flyguy649 talk contribs 06:12, 5 October 2007 (UTC)[reply]
- Support Will and Eagle have my full trust and support and I have no qualms about their proposal. Sarah 06:23, 5 October 2007 (UTC)[reply]
- Support. This is a perfectly sensible task for a bot to do, the source code is completely open, and the developers and nominator are all trusted members of the community. --krimpet⟲ 06:39, 5 October 2007 (UTC)[reply]
- Absolutely. ~ Riana ⁂ 06:44, 5 October 2007 (UTC)[reply]
- Support; the operator knows what he's doing. --NE2 07:09, 5 October 2007 (UTC)[reply]
- Support - no issues at all here. And sources too? Wow! - Alison ❤ 07:36, 5 October 2007 (UTC)[reply]
- Support, sounds like a good task for a bot. Kusma (talk) 08:09, 5 October 2007 (UTC)[reply]
- Support Good task for a bot and two trustworthy admins in charge. пﮟოьεԻ 57 08:30, 5 October 2007 (UTC)[reply]
- Weak support. I know absolutely nothing about programming or how bots work. However, as I understand it, the bot can be blocked if it does anything wrong, and its admin rights will then be revoked by a steward. I think we may as well give this a chance for now. As pointed out above, humans are more likely to go insane than bots are, and are much harder to stop. Besides, this may be a good precedent for the future; one day I would like to see a bureaucrat-bot closing RfAs, counting the votes and promoting accordingly, thus eliminating the need for a small élite group who have far too much power for their own good. WaltonOne 10:09, 5 October 2007 (UTC)[reply]
- Support one must think in technical terms. This bot is not becoming an admin. Not really. An administrator is one who manages and who helps out. This is a request for the +sysop flag, rather than a request to take on that responsibility. -- Anonymous DissidentTalk 10:23, 5 October 2007 (UTC)[reply]
- I have given this some more thought. I'm really unsure. I can visualise some problems that could happen. It's not that I don't believe in the script... I'm just not entirely comfortable with a bot being an admin. I have thought on this a lot. I'm just too unsure. -- Anonymous DissidentTalk 13:21, 11 October 2007 (UTC)[reply]
- Support per the code being publicly available (henceforth, this will be my criterion for any and all adminbot RFAs). Neil ム 10:51, 5 October 2007 (UTC)[reply]
- Support. This bot can accomplish a great deal and I see very little that could go wrong. If the bot malfunctions it can be quickly blocked. As others have pointed out, this isn't going to be an AI admin, just a bot with an extra flag to help it get its work done. Chaz Beckett 11:59, 5 October 2007 (UTC)[reply]
- Strongest Possible Support. I've said before that there are many CSD that could be automated, and this is one of them. CSD R1 is one of the simplest to determine in terms of delete/keep. If the redirect is red, nuke it. Human administrators delete many, many daily. Why shouldn't such a mundane task be put to a bot, which (when programmed, and I fully trust Eagle's abilities) can do it faster and without the tedium? To those who wish WJBscribe to be more bot-experienced, I must say that I do trust his judgement, and all bot operators were new at some point. Plus, let's assume it went crazy and deleted stuff it shouldn't... wouldn't it be easy enough to block the bot and restore the pages? Finally, as WJBscribe pointed out, admins cannot delete when they are blocked, so we have well over 1000 administrators who can stop it if it goes awry, not having to wait for bot operator intervention. ^demon[omg plz] 12:34, 5 October 2007 (UTC)[reply]
- Of course. What could go wrong with deleting redirects to nonexistent pages? Unlike with vandalism reversions or image taggings, the potential for errors here is extremely small. Whether a page is a redirect to a nonexistent page is an objective question. Automation is good when it's handled responsibly. It takes away the workload from our "real" admins, and gives them more time to do the more complicated deletions. Melsaran (talk) 12:50, 5 October 2007 (UTC)[reply]
- Support - As many others have said, this will give our admins more time to do less repetitive tasks, thereby helping the encyclopedia. Neranei (talk) 13:31, 5 October 2007 (UTC)[reply]
- Support we already tolerate several adminbots making deletions (some of them, tens of thousands) and of content a lot more controversial than redirects. So... here's an uncontroversial cleanup task, it would be silly to not have a bot doing it. Opposers should take a few admins to ArbCom if they feel so strongly about adminbots. --W.marsh 14:39, 5 October 2007 (UTC)[reply]
- Support - the bot's operator and programmer have my fullest confidence and trust, therefore I have no reason not to support their effort to implement this bot with the sysop bit. ɑʀкʏɑɴ 15:05, 5 October 2007 (UTC)[reply]
- Function, check. Transparency, check. Source code, check. Operator, check. Checklist complete, cleared for mop. - Mailer Diablo 15:13, 5 October 2007 (UTC)[reply]
- Support I have no objection to adminbots with trustworthy operators running on boring tasks like this. --Hut 8.5 15:39, 5 October 2007 (UTC)[reply]
- Support as long as this bot is operated by WjBscribe.--Duk 15:43, 5 October 2007 (UTC)[reply]
- Per betacommand. — Nearly Headless Nick {C} 15:58, 5 October 2007 (UTC)[reply]
- Betacommand has expressed that he "Very strong[ly] oppose[s]" this RfA. Did you mean to place your !vote in the oppose section? --ElKevbo 18:16, 5 October 2007 (UTC)[reply]
- No, Nick meant that blocking powers are best left to be handled by good ol' humans. - Mailer Diablo 03:47, 6 October 2007 (UTC)[reply]
- Support useful, and seems like it will have lots of eyes on it to prevent/troubleshoot problems. Rigadoun (talk) 16:01, 5 October 2007 (UTC)[reply]
- Support. I have inspected the code. It's very simple and thus easy to test. This provides confidence that the bot will work as planned. - Jehochman Talk 16:14, 5 October 2007 (UTC)[reply]
- Support simple ˉˉanetode╦╩ 18:59, 5 October 2007 (UTC)[reply]
- One of the reasons I don't generally like the idea of an admin bot is the number of judgement calls that the average administrator must make. This bot will be involving itself in none of those areas. EVula // talk // ☯ // 19:49, 5 October 2007 (UTC)[reply]
- Support I am all for careful, thoughtful automation of tedious or repetitive tasks. κaτaʟavenoTC 20:10, 5 October 2007 (UTC)[reply]
- Technical and moral support; I have reviewed the code, and it is sufficiently straightforward that any putative damage caused by an unseen bug would be minor, trivially detected and simple to undo. Morally, I have no opposition to an admin bot iff it has exactly one well-defined task, strict criteria for doing the task, and the operator has the bit himself (to avoid privilege escalation). This is currently the case. — Coren (talk) 20:13, 5 October 2007 (UTC)[reply]
- Support I'm convinced the bot will not delete valid redirects. In essence this is a mere technical measure that allows a bot to do the grunt work. However, I'll support only under the premise that the bot only be granted the sysop bit after being approved via WP:BRFA and is confined to the tasks it is approved for. — Edokter • Talk • 21:57, 5 October 2007 (UTC)[reply]
- Pending this RFA, the bot will go through BRFA, but we will have to give it the flag there in order to allow it to trial. You can see the test results presented in the RFA request, but the folks at WP:BRFA will want a trial. —— Eagle101Need help? 22:04, 5 October 2007 (UTC)[reply]
- Support A clear outline is given, I believe this bot can save admins time to tackle other backlogs. Phgao 22:02, 5 October 2007 (UTC)[reply]
- ViridaeTalk 23:36, 5 October 2007 (UTC)[reply]
- Support I have a Scooba (an iRobot brand robot mop) at home - it mops almost as well as I do, but with less work - Let a computer do the mopping here too. Adminship is no big deal - give the bot the mop. Note: I do not like all bots - I hate the Sinebot. Uncle uncle uncle 23:47, 5 October 2007 (UTC)[reply]
- — Dorf, was: AldeBaer 00:37, 6 October 2007 (UTC)[reply]
- Support. I believe this bot could help the backlog problem. I frankly find the opposes about blocking and edit history and comparisons to I, Robot very odd. bibliomaniac15 01:03, 6 October 2007 (UTC)[reply]
- Provided that any change to the source code is noted somewhere before it is implemented. And that any new task, no matter how trivial it is, requires community approval. -Amarkov moo! 01:38, 6 October 2007 (UTC)[reply]
- Please note the bot will not take on any additional tasks. I can't promise 100% with the code change thing... if there is a bug, I'm fixing it. I have already given my promise the task won't expand, and the bot op has also promised the same thing. I will attempt to note about code changes (probably on the bot's user page), but again, I'm fixing bugs if I see them, or they are reported. —Preceding unsigned comment added by Eagle 101 (talk • contribs) 02:33, 6 October 2007 (UTC)[reply]
- Support, as I trust the coders to be responsive to problems. I am also absolutely thrilled at the idea of a non-human performing this horribly boring task. Natalie 01:40, 6 October 2007 (UTC)[reply]
- Support Per "need for the tools". ;) WilyD 01:51, 6 October 2007 (UTC)[reply]
- Support. With the guarantee that the bot's scope will not be expanded, I see little danger.--ragesoss 05:01, 6 October 2007 (UTC)[reply]
- Support. Sounds like a good idea to me, and has good enough source that it shouldn't do any harm in the process. --CapitalR 10:42, 6 October 2007 (UTC)[reply]
- Adminship is both a technical and political position – if the computer program only participates in the technical aspect, and doesn't make decisions (the political aspect of adminship), then there is no reason not to grant it the capability to do its task. Maxim(talk) (contributions) 11:09, 6 October 2007 (UTC)[reply]
- Support See no problems. GDonato (talk) 14:11, 6 October 2007 (UTC)[reply]
- Oppose, not enough WP:AFD, not a single meaningful article edit, sorry bot. Support of course ;) -- lucasbfr talk 14:35, 6 October 2007 (UTC)[reply]
- Support- A bot is needed to help with the repetitive admin tasks. Wikidudeman (talk) 16:24, 6 October 2007 (UTC)[reply]
- Support- I see no problems with the bot. I almost would rather have bots doing administrative tasks than humans. --Sharkface217 16:34, 6 October 2007 (UTC)[reply]
- Support. I don't see any problems. — TKD::Talk 18:09, 6 October 2007 (UTC)[reply]
- Support Of course. This job is a pain, and completely simple. As to the people saying he doesn't have experience, he knows what the bot should be doing right? He knows how to stop the bot right? Well, there you go. - cohesion 19:10, 6 October 2007 (UTC)[reply]
- Support Looks this will do a valuable task. I somehow doubt that the bot will go rouge and kill us all. --Bfigura (talk) 19:55, 6 October 2007 (UTC)[reply]
- Support. If it goes rogue, call me to pitch in in the cleanup. Until then... :) -- Y not? 00:22, 7 October 2007 (UTC)[reply]
- I trust bots more than humans support —METS501 (talk) 00:46, 7 October 2007 (UTC)[reply]
- Support A bot can definitely handle a non-contentious administrative task such as this: Good judgment is not a prerequisite when your programmer has it. —Animum (talk) 02:08, 7 October 2007 (UTC)[reply]
- Strong support The bot has a stronger password than the majority of the human admins here. The bot has an unambiguous task, and is the responsibility of two well respected admins. There are the normal failsafes in effect for bots, plus some which are frankly unneeded, but add to the apparent security of the system. Other WMF wikis (commons, for instance) have adminbots with no problems. It's about time we automate some of the more menial tasks here. ~Kylu (u|t) 02:10, 7 October 2007 (UTC)[reply]
- Support. Doesn't look bad to me. VoL†ro/\/Force 02:16, 7 October 2007 (UTC)[reply]
- Support Tim Q. Wells 05:25, 7 October 2007 (UTC)[reply]
- Support I trust the bot operators, and the task is simple and straightforward. This is a technical issue, and we trust Wikipedia's own built-in software all the time; the fact that it's technically a bot doesn't make a big difference to me. Until we can make these things a part of MediaWiki, we'll need to use these kinds of setups. -- Ned Scott 05:52, 7 October 2007 (UTC)[reply]
- Support, but with the condition that Eagle must add these into the bot's positronic brain prior to activation. (Just kidding. Seriously, support. Both the coder and the operator can be trusted, and it seems they've even given the bot a specific name to limit any future easy "evolution." Besides, it'll be nice to prove that the sky won't fall...) ʝuѕтɛn 08:18, 7 October 2007 (UTC)[reply]
- Support - The bot is a tool, the task well defined, and I trust the operator & programmer. --Versageek 13:24, 7 October 2007 (UTC)[reply]
- Support The task is something that needs to be done, and the code looks harmless. Shadow1 (talk) 13:33, 7 October 2007 (UTC)[reply]
- Support Reedy Boy 14:25, 7 October 2007 (UTC)[reply]
- Support I used to slog through the broken redirects many months ago until I realized it was a total waste of time (especially because I was not, and still am not, an admin). It's refreshing that someone finally realized a bot could do this task. There is nothing controversial about it, and I hope it works well. Shalom (Hello • Peace) 14:28, 7 October 2007 (UTC)[reply]
- Support: Although per Freakofnurture, I agree that creating a new account for this shouldn't be necessary. Since WJB would likely catch more heat for running the bot as himself, I guess this is the best alternative. —Wknight94 (talk) 14:39, 7 October 2007 (UTC)[reply]
- Support; sensible. It's going to be deleting redirects, not launching ICBMs. No big deal. Antandrus (talk) 15:06, 7 October 2007 (UTC)[reply]
- Support Wikipedia would benefit with this bot having the deletion button. The Wikipedist 18:11, 7 October 2007 (UTC)[reply]
- User blocked by Moreschi as a sockpuppet of Connell66. Indenting support. Acalamari 23:59, 8 October 2007 (UTC)[reply]
- Support - Makes sense. Lara❤Love 19:06, 7 October 2007 (UTC)[reply]
- Support - Just a question to all the opposes: We already trust a monster of a program with database access, one which, due to its size, has a greater chance to mess up than does this bot. Why not trust this bot with sysop powers? -- Cobi(t|c|b|cn) 19:59, 7 October 2007 (UTC)[reply]
- Strongest possible support. The scaremongering regarding adminbots is misplaced. I have far more concerns about the judgment of some of our human administrators than I do about a 40-line piece of code that absolutely, positively, cannot do anything more than it's been programmed to do. There are some people who I would not want to run bots with administrative powers due to prior examples of poor judgment, but this does not apply to either WJB or Eagle, who have shown nothing but good judgment from day one. Get cracking, RCB. —bbatsell ¿? ✍ 20:42, 7 October 2007 (UTC)[reply]
- Support John254 20:53, 7 October 2007 (UTC)[reply]
- Support -- fairly uncontroversial housekeeping. If an editor wants to keep a broken redirect, they can always recreate a placeholder page and then change the content back to the redirect. So any controversial actions (I expect that there will only be a few) can be undone with minimal hassle. — xDanielx T/C 22:54, 7 October 2007 (UTC)[reply]
- Support having these admin logs stand on their own is fine for me, and this bot admin request is significantly different than past requests I've opposed for numerous reasons (e.g. tor bot (a blocking bot), protection bot (a page protection bot)). As this account has vowed to only perform this task, removal of admin privs wouldn't require a drawn-out arbcom case, as this is at its core a bot. The bot approvals group can yank bot approvals and indefblock if this started misbehaving, and any other admin is encouraged to block "malfunctioning" bots that are causing any disruption at any time. As for the "you have two accounts" arguments, bots whose operators are blocked/banned/put on probation are subject to the same restrictions, as they are simply extensions of their operators. — xaosflux Talk 23:17, 7 October 2007 (UTC)[reply]
- Support. The task is straightforward. As for trust, the bot owner is an admin, a position one gets after being demonstrably worthy of trust, so the next step after casting doubt on his rationale for wanting to sysop the bot is to demand an emergency desysopping of the dangerous bot owner. Since nobody seems to be doing that, I have to assume that this rationale is straw grasping. I am not qualified to check the code, but people who do seem to be qualified have stated that it does what it says on the label. - BanyanTree 00:49, 8 October 2007 (UTC)[reply]
- Support per Cobi. —Crazytales talk/desk 00:53, 8 October 2007 (UTC)[reply]
- Support Seems like a fair task for an adminbot to handle. Homestarmy 01:39, 8 October 2007 (UTC)[reply]
- Support I would like to see a robot be an admin someday, but I hope that it will not block me :) . NHRHS2010 Talk 03:17, 8 October 2007 (UTC)[reply]
- Support Totally uncontroversial task. Should the bot misbehave it can be blocked just like any other bot. Basic housekeeping, there is no need to make a fuss about this. EconomicsGuy Return the fire! 06:20, 8 October 2007 (UTC)[reply]
- ➔ REDVEЯS was here 09:19, 8 October 2007 (UTC)[reply]
- Support - after reading the proposal, the RfA and the answers provided to my questions above. Carcharoth 10:18, 8 October 2007 (UTC)[reply]
- Support Normally with an RfA, there's always the risk that an editor is not what they appear and that despite all research there's still an unknown problem with them. With an adminbot, it's possible to read the source code and know what the bot can and can't do, so there's less risk than with a normal adminship request. --ais523 13:32, 8 October 2007 (UTC)
- Support This is an admin task ideally suited for automation. --Ed (Edgar181) 15:05, 8 October 2007 (UTC)[reply]
- Support, per the thread below. Titoxd(?!? - cool stuff) 18:33, 8 October 2007 (UTC)[reply]
- Support, no convincing reasons not to. Angus McLellan (Talk) 18:50, 8 October 2007 (UTC)[reply]
- Support. It's long past time that adminbots take over tasks which don't require judgment. From reviewing the source and the operators answers to questions, I'm confident that we're not being led up the garden path. Mackensen (talk) 22:39, 8 October 2007 (UTC)[reply]
- Support - Normally takes me a bit to go through contribs, but didn't have to in this case, support as long as the bot only functions in doing what it is approved to do. Dureo 03:47, 9 October 2007 (UTC)[reply]
- Support - I've seen and read I, Robot. I am not afraid. Atropos 04:20, 9 October 2007 (UTC)[reply]
- Support. Maintenance tasks are non-controversial, and of adequate benefit for me to accept this admin-bot despite my general skepticism against automated admins. Sjakkalle (Check!) 13:33, 9 October 2007 (UTC)[reply]
- Support - no reason not to. It's very useful. And it doesn't have a mind of its own so it's definitely trustworthy, isn't it? Lradrama 14:04, 9 October 2007 (UTC)[reply]
- Support. I am satisfied that nothing untoward will happen and that the use of administrative tools by a bot is appropriate in this instance. enochlau (talk) 15:26, 9 October 2007 (UTC)[reply]
- Support, I trust WjBScribe. Task is non-controversial and farking tedious and if all else fails, we de-sysop the bot and we've learned a good lesson. Why waste the valuable time of a human admin for something as simple as clicking the delete button a hundred times? A bot can check for non-existent history and delete much faster than a human, and it saves our time. I'm for it. ♠PMC♠ 17:44, 9 October 2007 (UTC)[reply]
- Strong Support I'm all in favor of automating as much of the mechanical tasks as possible. Carlossuarez46 21:07, 9 October 2007 (UTC)[reply]
- Support, a trivial task, I have full confidence in admin running the bot. Tim Vickers 23:15, 9 October 2007 (UTC)[reply]
- Support no reason to oppose. -Lemonflash(O_o) 00:11, 10 October 2007 (UTC)[reply]
- Support. The task this bot will perform does not require judgment, thus a bot will do it well. --Mark (Mschel) 02:49, 10 October 2007 (UTC)[reply]
- Support, this concept is carefully thought through, well-executed, and a task ideally suited to a bot (repetitive, often necessary, and requiring of no human judgment whatsoever). The code looks good, and I trust both the technical skill and good judgment of the bot's creators, so no reason not to support. Seraphimblade Talk to me 03:46, 10 October 2007 (UTC)[reply]
- Support. Repetitive, boring tasks should be carried out by bots whenever possible; given that it has apparently been demonstrated that this bot will not misbehave, I see no reason why it shouldn't be given the ability to delete pages. And if it does misbehave, we can just block it and clean up after it - it's not different from a rogue admin, except that it'd not act out of malice, which actually is an advantage a bot has over a person. -- Schneelocke 12:03, 10 October 2007 (UTC)[reply]
- Support per above. All my serious concerns have been answered. Tally ho! Bearian 14:42, 10 October 2007 (UTC)[reply]
- Support - very satisfied with the As to the Qs. Not worried about a robot uprising although if they did I'm sure there would be some improvements. 1of3 23:21, 10 October 2007 (UTC)[reply]
- Support per my comments on the talk page. —Ilmari Karonen (talk) 03:20, 11 October 2007 (UTC)[reply]
- Support Having read all the questions and answers see no reason why this should not be implemented. Davewild 07:24, 11 October 2007 (UTC)[reply]
- Support – I have reviewed the bot's source code, I trust that it will operate properly as written, and I trust its operators. It's harmless and useful per Wikipedia:Bot policy. That's all that needs to be taken into account. That being said, I find that this is the wrong venue for this discussion. — madman bum and angel 14:40, 11 October 2007 (UTC)[reply]
- Support - seems like a good idea. Is there any way to restrict the admin/sysop setup to only one admin function at a time? Then we could easily ensure against mission creep. --Rocksanddirt 18:36, 11 October 2007 (UTC)[reply]
- Support It seems odd to me that we even need to discuss this here. --Gmaxwell 19:30, 11 October 2007 (UTC)[reply]
- Support The bot seems like it will serve a useful purpose, and the function seems very well thought through and narrowly defined. The odds of it screwing up or "going berserk" seem pretty slim. Sχeptomaniacχαιρετε 20:04, 11 October 2007 (UTC)[reply]
- Support The bot appears to be useful, has the emergency button on its user page, and has a public source code. FunPika 20:29, 11 October 2007 (UTC)[reply]
Oppose
- Strongest possible oppose. Bots should not have admin bits. Corvus cornix 20:54, 4 October 2007 (UTC)[reply]
- Why? Moreschi Talk 20:55, 4 October 2007 (UTC)[reply]
- Bots are not people. Bots are left to run on their own with no checks and balances. Bots have no edit histories. Corvus cornix 20:56, 4 October 2007 (UTC)[reply]
- Bots are not people - well, duh. Bots do have human operators, however, and we can certainly trust these operators. They serve as check and balance, and bot will only do what they tell it to. If it ballses up, we can block it and if necessary desysop it in around 30 seconds. Your paranoia is unjustified. Moreschi Talk 21:01, 4 October 2007 (UTC)[reply]
- I should let you know though, the source code is public, thats probably better then an edit history, its like being in the "admin's" mind. ;) —— Eagle101Need help? 20:58, 4 October 2007 (UTC)[reply]
- The code is public, and has been reviewed by - so far - three bot operators. Based on my review, I'm not saying that there is little chance the user will misuse the tools, I am saying that there is no chance whatsoever that we will have problems. --uǝʌǝsʎʇɹnoɟʇs 21:01, 4 October 2007 (UTC)[reply]
- Checks and balances are exactly what bots have. It will not do anything except what the code tells it to do. ((1 == 2) ? (('Stop') : ('Go')) 21:16, 4 October 2007 (UTC)[reply]
- Bots are nothing except checks and balances. That's how they work - they make simple checks, and simple decisions. If it's not simple, it lets a human decide. Matt/TheFearow (Talk) (Contribs) (Bot) 21:29, 4 October 2007 (UTC)[reply]
- I assume the comment above that bots have no edit history is incorrect. If that were true then I'd be more worried. EdJohnston 02:01, 5 October 2007 (UTC)[reply]
- What do you mean? Of course this bot has no history, in that it was just created, and was not yet approved, so has not begun to operate. --uǝʌǝsʎʇɹnoɟʇs 02:04, 5 October 2007 (UTC)[reply]
- I really wish RFA discussions were threaded without this support/oppose/neutral nonsense, but I digress... I'll oppose this request until Q13 is addressed. With sysop abilities, I really think it's a good idea to add a capability to stop the bot without having to be an admin. Mercury 00:37, 5 October 2007 (UTC)[reply]
- This? Daniel 00:56, 5 October 2007 (UTC)[reply]
- There will be a shutoff function, but it will not be the talk page. Talk pages tend to get messages that don't always result in the need to shut the bot off. (Newbies asking why a page was deleted, not realizing that the bot deleted the redirect etc.). These are not something that should shut the bot off. —— Eagle101Need help? 01:18, 5 October 2007 (UTC)[reply]
- Strong oppose A bot is not a human. Has anyone here ever seen I, Robot? Basically, that film sums up my point. Cheers,JetLover (Report a mistake) 03:50, 5 October 2007 (UTC)[reply]
- I don't suppose you've read the book I, Robot, the original Asimov? No? In that, the robots prove the protectors of mankind from self-inflicted destruction. No one does read the books these days, but if you haven't, do so. Our Frankenstein complexes are fundamentally irrational. Moreschi Talk 17:53, 5 October 2007 (UTC)[reply]
- In case you have not yet noticed, this whole wiki is nothing but a jumbled mess of code... what shall we do when it gains the capacity of humans? —— Eagle101Need help? 03:53, 5 October 2007 (UTC)[reply]
- My point is that it won't gain the capacity of humans. It's a robot. Cheers,JetLover (Report a mistake) 04:08, 5 October 2007 (UTC)[reply]
- So you're opposing it not for its usefulness, but just because it is an automated process? --DarkFalls talk 04:14, 5 October 2007 (UTC)[reply]
- No. Twinkle is an automated process; I'm opposing it because it doesn't have human judgment. Have you ever seen how many times MartinBot, ClueBot and BetacommandBot have screwed up? It could go berserk and delete good articles. Cheers,JetLover (Report a mistake) 04:19, 5 October 2007 (UTC)[reply]
- Even in the rare case that the bot goes berserk, admins can patch up the problems easily (it's article restoration...not difficult at all). Overall, I think it's fair to believe that the positives strongly outweigh the negatives. Nishkid64 (talk) 04:22, 5 October 2007 (UTC)[reply]
- But still, why take that chance? The bot could go nuts, sure it could be fixed, but why take the chance? Cheers,JetLover (Report a mistake) 04:25, 5 October 2007 (UTC)[reply]
- If you care to look at the bot's code or the nomination, the bot will only delete redirects, and only when the redirects have one edit in the history. Therefore, the bot cannot, and will not be able to, delete "good articles". And on another point, the bot cannot do anything out of the ordinary until its source code is changed. It all comes down to the trust you have in the nominators and bot operators, not the bot itself. --DarkFalls talk 04:28, 5 October 2007 (UTC)[reply]
- I really don't see how it could go nuts. It only has a few lines of code. I suppose I could alter it so it goes nuts - but why would I? I already have an admin account I'm trusted not to go nuts with. Really, human editors are much more likely to go nuts than Bots whose scripts are available for all to see. It's a bit like if you could read my mind. This Bot is nowhere near as complicated as MartinBot or ClueBot, which actually have to make judgment calls - weighing up factors using a scoring system based on their programmes. This Bot can't do that. It simply checks whether a page meets predetermined criteria and can only delete if it does, and it doesn't even go looking for redirects: it works from a list, Special:BrokenRedirects, which is generated independently... WjBscribe 04:32, 5 October 2007 (UTC)[reply]
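- The decision rule described above fits in a few lines. A minimal sketch in Python follows (the actual bot is roughly 40 lines of Perl, linked in the nomination; the field names here are illustrative and not taken from the real code):

```python
# Hypothetical sketch of the bot's only decision: delete a page if and only if
# it is a broken redirect with exactly one revision. Field names are illustrative.

def should_delete(page):
    """True only for a redirect whose target is gone and which has no history."""
    return (
        page["is_redirect"]              # entry came from a redirect page
        and not page["target_exists"]    # target was deleted or never existed
        and page["revision_count"] == 1  # no prior history worth preserving
    )

# Anything that fails any one check is left for a human admin to review.
```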
- JetLover, weigh the positives against the negatives. There is a minute chance that the bot will go screwy and delete maybe 10-20 articles, but from the other perspective, you can see that the bot can accurately delete thousands of redirects. Mathematically, I think trusting the admin bot is a safe bet. Nishkid64 (talk) 04:34, 5 October 2007 (UTC)[reply]
- C'mon, what harm will it do? It does not need cognitive thought to do what it is proposed to do. Mercury 04:38, 5 October 2007 (UTC)[reply]
- A minute chance? It's still a chance. 10-20 pages could be deleted...not good. Would you vote in an admin that might forget to take his medication and go crazy and delete 10-20 articles? I doubt that. Cheers,JetLover (Report a mistake) 04:40, 5 October 2007 (UTC)[reply]
- Nishkid, there is no chance it will delete any articles ;). The worst that can happen is the bot deletes a few redirects it should not... in which case they are reported to me and I fix the problem. But I highly doubt that will happen... this task is just that simple. —— Eagle101Need help? 04:43, 5 October 2007 (UTC)[reply]
- (ec) I disagree with Nishkid, there is no such chance. The Bot cannot delete articles because (1) articles are not listed at Special:BrokenRedirects, (2) articles are not redirects to deleted pages and (3) articles contain more than one revision. WjBscribe 04:45, 5 October 2007 (UTC)[reply]
- But the thing is, it could happen, why let it? Would you vote in a candidate who had a mental illness, and he could forget to take his medication and delete 10-20 redirects? No. Why do it with a bot? Cheers,JetLover (Report a mistake) 04:48, 5 October 2007 (UTC)[reply]
- It can't happen - the code does not allow for it. To take the unfortunate example of a mental illness, we cannot tell whether a person has one or not - you can tell whether this Bot does or doesn't because every line of code is available. It is not capable of doing anything not in that code. WjBscribe 04:57, 5 October 2007 (UTC)[reply]
- Perhaps, but I still, and will never, entrust the sysop tools with a machine. Cheers,JetLover (Report a mistake) 05:00, 5 October 2007 (UTC)[reply]
- Oh please, this bot is bound to be smarter than many of our current sysops. -- John Reaves 08:12, 5 October 2007 (UTC)[reply]
- Do you have any programming experience, JetLover? Nick 09:12, 5 October 2007 (UTC)[reply]
- Generalising that comment to be "Wikipedia generally" would probably achieve the same response. Daniel 11:21, 5 October 2007 (UTC)[reply]
- This bot is far less likely to screw up than any other admin - in fact, if you trust eagle and wjb, there is no chance whatsoever. This isn't some super-intelligent evolving robot that has the capacity to choose to make decisions, and it's simple enough that I am sure that there is no way it can 'screw up' - it simply is not possible unless the operator tells it to - and we've already trusted him with the flag. --uǝʌǝsʎʇɹnoɟʇs 10:32, 5 October 2007 (UTC)[reply]
- I realize you've got me...but I just can't support. I will not entrust the admin tools to a robot. I am sorry, but I just can't. Cheers,JetLover (Report a mistake) 21:30, 5 October 2007 (UTC)[reply]
- It's not like it matters anyway, because I doubt the bureaucrats are really going to place much value in an oppose based on something that happened in a movie. -- John Reaves 21:51, 5 October 2007 (UTC)[reply]
- The bureaucrats have no right to place more or less "value" in one vote compared to another. Everyone's opinion, given in good faith, counts equally. We are the community; we all have the best interests of Wikipedia at heart; and we have a collective right to decide who becomes an administrator. WaltonOne 11:54, 6 October 2007 (UTC)[reply]
- I beg to differ on this one. As it is not a vote but rather consensus-building, comments carry as much value as the reasoning behind them. And opposes based on irrational fear add no value to the discussion (JetLover was unable to provide reasoning beyond "it might screw up", which applies to humans as well and was easily refuted). Миша13 21:15, 8 October 2007 (UTC)[reply]
- On the contrary, JetLover said that he doesn't want to support the candidate because he doesn't trust the candidate's judgement (since it's functionally incapable of independent thought). That's one of the most venerable and well-established reasons on the books (as well as one that you've used yourself in the past). If we were talking about a meat user rather than a bot, would anyone be giving him a hard time about this? -Hit bull, win steak(Moo!) 21:40, 8 October 2007 (UTC)[reply]
- Strong Oppose. Not nearly enough editing experience [2], especially in project space. Someguy1221 06:36, 5 October 2007 (UTC)[reply]
- All the editing experience in the world couldn't help it... Please look above, it's a bot. --DarkFalls talk 06:40, 5 October 2007 (UTC)[reply]
- I'll do you one better! Here is the source a.k.a. the bot's brain. Enjoy... it can't do anything not specified there. —— Eagle101Need help? 06:53, 5 October 2007 (UTC)[reply]
- I'd understand if you didn't trust a bot with the bit or something, but did you actually look at the contributions you linked? Surely you would notice the bot has no contributions at all, let alone any in mainspace, and that this is a special case... --krimpet⟲ 08:00, 5 October 2007 (UTC)[reply]
- a.) You are insane. b.) Haha, good joke. Please choose one. -- John Reaves 08:07, 5 October 2007 (UTC)[reply]
- Oppose - I'm really sorry to do this, but I really don't think this is a good idea. I'm more than happy with us having admin bots - properly sourced bots could take away quite a few tedious admin tasks. However, for me to support a user running an admin bot, I would have to see a lot of previous experience running bots. I trust Eagle (and everyone else who understands the code) that it would work and do a good job; however, bots still have the potential to go wrong. WJB, I fully trust your judgement, but in areas where you have expertise (which is just about everywhere else). I just don't feel comfortable giving the most powerful bot to someone with very little bot experience. Ryan Postlethwaite 09:47, 5 October 2007 (UTC)[reply]
- I don't see a huge problem with that - if there is an issue, Eagle can just as easily fix the code and send it to WJB, and WJB is surely able to stop the bot in the meantime. --uǝʌǝsʎʇɹnoɟʇs 10:34, 5 October 2007 (UTC)[reply]
- Ryan, Eagle has a lot of programming experience and has run bots on Wikipedia before, as well as coding and running a number of IRC "RC" type bots. If you care to visit #wikipedia-en-image-uploads, you'll see one of his bots in action right now, I should think (I'm hoping I've got the channel correct). Nick 10:51, 5 October 2007 (UTC)[reply]
- It's not that I don't trust the bot per se; I trust Eagle's bot-writing abilities fully and am sure this is well-worked code. I just don't believe it should be operated by a rookie bot operator. If Eagle were running this, I would support, but he's not. For most other bots, a new bot owner would have my full support - I mean, we learn by experience - but not an admin bot, when there are plenty of other extremely qualified users to run it. Ryan Postlethwaite 11:10, 5 October 2007 (UTC)[reply]
- Ryan, with respect, the "most powerful" Bot argument is rubbish. It's 40 lines of code. It's probably the simplest Bot that is going to be running on Wikipedia. Yes, its final action will be to delete content - I have quite a lot of experience of deleting redirects (both speedily and after discussion) - but as far as its programming goes it is extremely simple. The specifics of the task are laid out and the code that allows it to be done is public. I have difficulty seeing what else you might want from us... WjBscribe 11:04, 5 October 2007 (UTC)[reply]
- It has more tools at its disposal than any other bot on the project, so it is more powerful, and I strongly disagree that the argument for that is rubbish. I trust the code, I trust your experience in deleting redirects; I just don't see any experience of you running a bot, which I believe is essential for someone running an admin bot. Ryan Postlethwaite 11:10, 5 October 2007 (UTC)[reply]
- How would whether or not I have operated a Bot before make a difference? Do you think I will use it at a greater speed than the agreed rate? Or modify its code so that it does something it's not approved to do? In either case this would be spotted pretty easily, and I suspect the Community would rightly have serious issues with that. I can't see how, by accident or design, I am any more likely to misuse this Bot simply because I have not previously operated one... WjBscribe 11:14, 5 October 2007 (UTC)[reply]
- It's about knowing how to use the bot: sorting out problems when it goes wrong, not always having to hunt down someone to fix it, avoiding accidental changes that cause problems. I know you wouldn't purposely abuse the bot - I know you too well to suspect any differently - I just think there's potential here for things to go wrong if the bot screws up or settings get changed. For me anyway, experience is key if someone is going to be allowed to run an admin bot. Ryan Postlethwaite 11:21, 5 October 2007 (UTC)[reply]
- From what I can tell from the above discussion, WJBscribe has no intention of changing the code himself or doing anything besides turning it on/off and entering the login password. While I'm not familiar enough with Perl to know for sure, there do not seem to be any settings to change that could affect the operation of the bot as designed. Mr.Z-man 17:02, 5 October 2007 (UTC)[reply]
- Very strong oppose This program has a Pre-alpha AI, cannot help in CAT:UNBLOCK, and it cannot block stupid vandals. βcommand 13:08, 5 October 2007 (UTC)[reply]
- I'm having a difficult time understanding your oppose reasons. The bot is programmed to do a specific and simple task, why would it need AI? Also, how is it relevant that the bot won't block or unblock anyone? Chaz Beckett 14:06, 5 October 2007 (UTC)[reply]
- Strong oppose / Illegal RFA. Current wikipedia rules forbid non-humans from being admins. See the top of the RFA page. "Requests for adminship (RfA) is the process by which the Wikipedia community decides who will become administrators" Note the "who". Who is a person. Also note the WP:administrator which states "Wikipedia practice is to grant administrator status to anyone who has been an active and regular Wikipedia contributor for at least a few months, is familiar with and respects Wikipedia policy, and who has gained the trust of the community." This bot has made no mainstream edits and has no brain so cannot respect WP policy. Furthermore, this bot has not answered any of the questions although some proxy has done the job for it. EIJ initials 16:16, 5 October 2007 (UTC)[reply]
- You registered an account just to Wikilawyer over this RfA? Why? Chaz Beckett 16:24, 5 October 2007 (UTC)[reply]
- User blocked indefinitely as a troll. - auburnpilot talk 16:36, 5 October 2007 (UTC)[reply]
- Oppose I'm just not crazy about deletions by bot. An admin should at least spend the time it takes to read the redirect to decide if maybe there's some other action to take...maybe the redirect should be retargeted, or maybe there should be an article there...or maybe in some cases the target does need an article after all (depending on why the target was deleted if that's why the redirect is broken)...whatever, a final pair of eyes is needed. I'm also worried about a general increase in acceptance of bot deletions for the same reasons, don't want to see this become a precedent for other requests of this kind. I don't want this to go through just because of a general enthusiasm for technology. Unless there's a compelling reason to keep the broken redirect backlog clear I don't see why we need this. And just a quick comment, I certainly trust the developer(s) and operators....it's not an issue of the bot going sideways or anything. RxS 16:37, 5 October 2007 (UTC)[reply]
- Oppose WJBscribe is already an administrator, and granting a person access to two accounts with administrator access is prohibited according to Wikipedia rules. The operator of the bot could run the bot without sysop tools and simply monitor its findings. He does not need two sets of sysop privileges, which is against Wikipedia policy. AS 001 21:55, 5 October 2007 (UTC)Note: User registered today and has very few edits. Miranda 22:00, 5 October 2007 (UTC)[reply]
- Running it without sysop tools wouldn't work. He can do that himself; it just takes time. It is prohibited for one user to have two accounts with sysop for their own use - bot accounts are exempt from that. AntiVMan 22:51, 5 October 2007 (UTC)[reply]
- Think about the reason why it is "prohibited by Wikipedia rules". Probably because admins' actions should be transparent and having two different admin accounts creates confusion. That is definitely not the case here (it is a bot, useful for automation). And yes, the operator could have the bot look for redirects and delete them himself, but that is an unnecessary waste of time when we can have a sysop-bot instead. Wikilawyering about the letter of Wikipedia policy isn't a very good tactic at RFA. Melsaran (talk) 14:20, 7 October 2007 (UTC)[reply]
- Oppose. I fail to see why this is considered necessary and it seems silly to give adminship to a computer program. Adminship is a position of trust in our community, intended to intelligently and appropriately carry out important tasks. Everyking 22:58, 5 October 2007 (UTC)[reply]
- 1) It is necessary because it takes away the workload from our admins and gives them more time to spend on tasks that need human attention 2) This bot will appropriately carry out important (maintenance) tasks; how would you need human attention and intelligence for something as simple as deleting broken redirects? Melsaran (talk) 14:20, 7 October 2007 (UTC)[reply]
- So how much time is being sucked up by this nom? How much combined time is being spent by all the many editors participating here? Everyking 11:45, 8 October 2007 (UTC)[reply]
- Oppose Nothing personal WJB, but Ryan Postlethwaite sums up my concerns perfectly. Carbon Monoxide 00:53, 6 October 2007 (UTC)[reply]
- Oppose along the same lines as RxS and AmiDaniel. I don't take issue in particular with giving a bot administrator tools. This, though, is frankly unnecessary. Many broken redirects could be profitably retargeted by us living beings (and those without a history could be the result of someone making a typo when actually trying to create a profitable redirect). It's fine if any particular editor doesn't want to take the time to examine the redirects - including the admins involved here, whom I respect - but it's not necessary to perform the task in a slipshod way. If a normal bot tags the whole list for speedy deletion, someone will examine the redirects. If it just generates a list and throws it up somewhere accessible, I'm even willing to do a large percentage of it myself. Isn't that a better solution? Dekimasuよ! 08:28, 6 October 2007 (UTC)[reply]
- I've been throwing up a list for a while now, if you're interested in working it, btw :) SQL(Query Me!) 08:39, 6 October 2007 (UTC)[reply]
- Hi Dekimasu, I'd like to try and address a couple of your concerns if you don't mind. I've been performing this cleanup task manually for a couple of months and have never retargeted a one-edit broken redirect. I suspect it's just that if one creates a redirect with a typo, the big red letters that result are just too obvious and it's quickly fixed. Broken redirects I find myself retargeting are those that have pointed to a series of targets, the most recent of which has been deleted, but the Bot won't touch those as by definition they'll have more than one edit. Tagging them for deletion is no improvement on just working from Special:BrokenRedirects - the deleting admin still has to make sure the tagger (human or Bot) was correct. I accept that you're willing to do a large percentage of the work (so am I); I just suspect our time would be better spent on tasks that couldn't be done by a Bot. I would not have proposed this Bot if I thought a human could do the job better - I suspect that the Bot will in fact be more efficient, as a bored human can be a little careless... WjBscribe 17:22, 6 October 2007 (UTC)[reply]
- As a member of the notoriously smarmy bot approvals group, I wholeheartedly approve this task and have total confidence in your ability to satisfactorily perform it in a fully automated fashion. At the same time, I oppose this RFA, and vociferously protest the community's expectation that you undergo it. There are literally dozens of tasks which, while requiring sysop privileges, can be safely automated, and it would be rather stupid to admin-promote a single-purpose account for each. Just do it. —freak(talk) 14:00, 6 October 2007 (UTC)[reply]
- So you oppose that he has to go through this RfA, and do so by making it harder to get through? If your objection is to the actual system as opposed to the candidate, perhaps this would go better on the talk page. ((1 == 2) ? (('Stop') : ('Go')) 14:30, 6 October 2007 (UTC)[reply]
- He already went through RFA. I thought I made it clear enough that I oppose the notion that a separate account should be created and promoted (by any process) for this task, though I completely support the task itself. —freak(talk) 14:32, 6 October 2007 (UTC)[reply]
- Ahh, well that makes more sense. Can't say I agree though. Thanks for clarifying. ((1 == 2) ? (('Stop') : ('Go')) 14:39, 6 October 2007 (UTC)[reply]
- Oppose (but willing to reconsider) - I started editing when I saw a dead link and created an article which others eventually started to edit. If the bot was in place, it would have cleaned up way too fast and caused at least one editor not to begin to contribute to WP and his knowledge forever lost from WP. Also admin powers are never (for practical purposes) taken away so this bot could evolve and we would have no recourse. WJBScribe hasn't explained why the bot cannot just use his sysop powers. He'll shut off the bot when he wants to edit or delete an article (he could request a user name change to WJB/RedirectClnUpBot) UTAFA 17:37, 6 October 2007 (UTC)[reply]
- Just to address the part about WJB running the bot from his account - admins are not supposed to run bots from their accounts (it probably happens but without anyone knowing about it), so WJBscribe must run it from a different account. The bot could not evolve, as WJBscribe would have to get community approval to do any further tasks with it and trust me, WJB is a very trusted user who would not use this bot for anything other than it has approval for. Ryan Postlethwaite 17:45, 6 October 2007 (UTC)[reply]
- Hi UTAFA. I think you may have misunderstood what this Bot will do, but I'm not sure. Are you saying you read an article that had a redlink in it and created the target of the redlink? Because the Bot isn't going to affect that at all - it's not going to remove redlinks (they're important for that very reason). It is going to delete redirects (see Duke of buckingham [3], which isn't an article but a page that moves people to Duke of Buckingham, where there actually is an article) if the target article is deleted. Redirects aid navigation. But it's only going to do that in very narrow circumstances - when the page it deletes has only 1 edit on it. Perhaps if you linked us to the page you are talking about, we could check what effect the Bot would have had on it, if any. WjBscribe 18:08, 6 October 2007 (UTC)[reply]
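- For readers unfamiliar with how redirects work: a one-edit redirect page of the kind the Bot targets typically contains nothing but a single line of standard MediaWiki wikitext, for example:

```
#REDIRECT [[Duke of Buckingham]]
```

If the target page were later deleted, a page like this would appear on Special:BrokenRedirects and, having only one revision, would qualify for speedy deletion under CSD R1.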
- To be fair, I don't think the attitude of "we shouldn't take away opportunities for people to make their first edit" holds much water. My first edit was to fix a typo; should we leave typos in place to entice new editors? EVula // talk // ☯ // 19:18, 6 October 2007 (UTC)[reply]
- Very Strong Oppose - Although I like the idea of this bot in essence, I have serious issues with a bot receiving administrator rights. Bots are not human operators; they're lines of programming, and taking the human element out of this leaves a huge potential for error and abuse. Rackabello 20:43, 6 October 2007 (UTC)[reply]
- The essence of this bot goes hand-in-hand with the bot being an admin; how can you like one but not the other? The bot is only going to concentrate on a single, narrowly defined area (and will only be pulling its list of pages to look at from Special:BrokenRedirects) and will be closely monitored by not only WJBscribe and Eagle101 (both of whom I trust implicitly) but probably a couple dozen or so other editors (some of whom, no doubt, are admins, and will block on sight the moment there is a screw-up, and can restore any article that is mistakenly deleted). There is an infinitesimally small potential for error. The human element is not needed to delete broken redirects. EVula // talk // ☯ // 21:28, 6 October 2007 (UTC)[reply]
- Oppose. I don't think that bots should have the authority to delete pages, even in a limited way as described in this nomination. -Hit bull, win steak(Moo!) 17:48, 8 October 2007 (UTC)[reply]
- Any particular reason you don't think bots should be able to delete pages? The bot can only delete what it's instructed to and many editors have already reviewed the publicly available source code. Any deletions outside of the described scope would certainly lead to a fix and/or shutdown of the bot. Chaz Beckett 18:11, 8 October 2007 (UTC)[reply]
- I worry about instruction creep (since other functions could be snuck through the door after this bot's admin approval is nothing but a footnote), and I dislike the idea of deletions being carried out without some degree of direct human oversight. When there's no oversight for a process, it becomes too easy to game the system, even if no possible exploits are immediately apparent. There's no reason that these jobs can't be done manually, as they have been up until now. -Hit bull, win steak(Moo!) 20:52, 8 October 2007 (UTC)[reply]
- By "instruction creep", I'm assuming you mean that you're worried the bot will perform duties other than its specifically described task. However, it's been explicitly stated several times in this RfA that this will not happen. Since it would be noticed very quickly if the bot was doing anything other than cleaning up redirects, I don't see this as a realistic scenario. Also, although the job could be done manually, why make humans perform a repetitive, mindless job that can easily be handled by a bot? Chaz Beckett 22:39, 8 October 2007 (UTC)[reply]
- I wasn't thinking of some kind of byzantine covert augmentation. By "instruction creep", I mean that in six months, there might be an unobtrusive little entry on WP:BOTREQ (or some equivalent) asking for consensus to add some other piece of supposedly noncontroversial functionality to the bot's portfolio, with its performance in this task as evidence that doing so would be No Big Deal. And then three months later, another one... all done in the public eye, sort of, but not with an appropriate degree of reflection and oversight by the community at large. -Hit bull, win steak(Moo!) 00:09, 9 October 2007 (UTC)[reply]
- No. It is clearly agreed upon that a new task that requires a reapproval by the BAG will require a new RFA. At that point, we restart the circus. Titoxd(?!? - cool stuff) 01:30, 9 October 2007 (UTC)[reply]
- Yeah, it's agreed upon now. Will it still be agreed upon in six months? I've seen too many human RFA nominees who said they'd never be involved with Admin Action X and then subsequently went back on their word for a Very Good Reason to place much weight on such reassurances. -Hit bull, win steak(Moo!) 13:25, 9 October 2007 (UTC)[reply]
- To this bot, there is no notion of authority, only one of ability (whether it can technically delete the page or not). So instead of asking whether or not RedirectCleanupBot should have authority to delete pages, maybe a better question would be whether or not WJBscribe should have the authority to run a bot that has the ability to delete pages. The issue, then, is the trustworthiness of both the bot operator and the bot's code. After examining both I'm sure you'll find no reason to oppose the operation of the bot. GracenotesT § 18:40, 8 October 2007 (UTC)[reply]
- The distinction is semantic, and functionally irrelevant as well. To use your preferred wording, I don't think that WJBscribe (or anyone else, for that matter) should "have the authority to run a bot that has the ability to delete pages", since I don't think that bots should have the ability to perform admin-level functions, including page deletion. This is a long-standing position of mine (see Wikipedia:Requests for adminship/TawkerbotTorA, for example), and I ask that you not try to browbeat me into changing it with loaded emotional appeals. I don't think that bots should have admin privileges. Period. -Hit bull, win steak(Moo!) 20:52, 8 October 2007 (UTC)[reply]
- Emotional appeal? Sorry you read it that way; it was meant to be a logical one. You've provided no rationale so far, so I assumed from your oppose statement (perhaps incorrectly) that you thought the notion of authority meant something to a bot. GracenotesT § 23:21, 8 October 2007 (UTC)[reply]
- Next time you want to make a logical appeal, don't use "WJBscribe's a good fellow, and I'm sure he'd never do anything that would hurt the project" as the pole at the center of your tent. I've already provided you with two rationales for opposing: 1) Giving a bot the necessary permissions to perform admin tasks opens up potential avenues for exploitation or inadvertent damage, since it's an automated system that lacks the ability to reflect on its actions as it performs them and 2) Once the bot has its bit, it could have additional (more controversial) tasks added to its programming through a public request, with no real recourse as desysopping is in practice extremely difficult to achieve. Here's a third: I also think that if the bot becomes the primary means of deleting these redirects, and then stops working (if the code breaks in some esoteric way or the operator leaves the project, something like that), then human admins may allow a backlog to build up (since they will continue to assume that the bot is taking care of things for some time after it ceases operating). Those are perfectly valid reasons, and if you don't agree with them, then I can't help that. I didn't hector you about your !vote; please extend me the same courtesy. -Hit bull, win steak(Moo!) 00:09, 9 October 2007 (UTC)[reply]
- This is your answer to #2 and this is your answer to #1. -- Cobi(t|c|b|cn) 08:12, 9 October 2007 (UTC)[reply]
- For #2, see my reply to Titoxd above. For #1, the Mediawiki example is actually a good counterexample as to why we shouldn't have new adminbots. How many times have vandals found a way to exploit quirks, by adding penis vandalism to templates and such? It works pretty well now, but there have been a lot of bumps along the way (and honestly, there will probably be a few more in the future). To be satisfied that this bot isn't exploitable in any way, I'd need to see it run for an extended period of time (at least a couple of months) on some kind of test wiki, with people actively trying to break it in one way or another (and even then, I'd probably have misgivings). -Hit bull, win steak(Moo!) 13:25, 9 October 2007 (UTC)[reply]
- Actually, it is a good thing to question anyone's opinion here if you seek clarification, or even disagree. This is a discussion, not a vote after all. If you don't want people offering critique on your opinions, this is not the best forums. ((1 == 2) ? (('Stop') : ('Go')) 03:55, 9 October 2007 (UTC)[reply]
- I'm fine with back-and-forth dialogue, but it seems like the only people who are being questioned in this RFA are the Neutrals and the Opposes (as is often the case at such things). At the time I write this, there have been thirteen comments in response to 150 Supports (by my quick thumbnail count), and more than that number in response to just the first two Opposes by themselves. I've stated my position on the matter up for discussion in a way that I feel is reasonably clear and unambiguous. I've given it a lot of thought, and I kind of resent the incessant but-but-but that's the inevitable response, because it implies that I'm some kind of idiot or infant. If you said it up above, and I disagree with you anyway, it doesn't mean that y'all need to say the exact same thing down here, only louder and more slowly. It just means that I disagree with you. -Hit bull, win steak(Moo!) 13:25, 9 October 2007 (UTC)[reply]
- Please see the supports... I have clarified and have responded to support comments as well. ;) —— Eagle101Need help? 17:50, 9 October 2007 (UTC)[reply]
- Very true (and your doing so is appreciated, believe me), but you're just about the only one. You've got almost as many replies in that area as the rest of the community combined. -Hit bull, win steak(Moo!) 18:06, 9 October 2007 (UTC)[reply]
- Any particular reason you don't think bots should be able to delete pages? The bot can only delete what it's instructed to and many editors have already reviewed the publicly available source code. Any deletions outside of the described scope would certainly lead to a fix and/or shutdown of the bot. Chaz Beckett 18:11, 8 October 2007 (UTC)[reply]
- Oppose - Per Ryan Postlethwaite, I'm sorry but it just doesn't feel right granting admin rights to a bot managed by a user with little experience running bots; if it were to malfunction we could end up with some real chaos, and that's not something I'm willing to risk just to get rid of a few monotonous tasks. - Caribbean~H.Q. 05:56, 10 October 2007 (UTC)[reply]
Neutral
Neutral pending reassurance (switched to support). As a non-botwise person, I have an irrational phobia of rampaging admin Franken-bots that I’m really trying to overcome. Do I understand right that, if bots start getting sysop bits, a two-step process will be required? First, it goes through WP:RFA, with a detailed summary of exactly what tasks it will perform. If that is successful, it goes through WP:BRFA, which will verify that it does what it claims to, and only what it claims to. If it passes both, a 'crat gives it a sysop flag and a bot flag. Every time a new task that requires sysop permission is thought up, either a new bot is nominated for RFA, or the old bot goes through RFA again. Right? We won’t be in a position where, as time goes by, new tasks for a previously-approved bot only go through BRFA?
The reason I mention this here, on this relatively uncontroversial task, is that this is obviously the thin end of the wedge. I don’t share Corvus cornix's complete rejection of admin bots, and I think it would be good in some cases for the community to approve bots with sysop permission. But I wouldn’t want to go down a slippery slope where soon, admin bots are given new tasks without the entire community (i.e. RFA) being able to approve.
If it is already specifically enshrined somewhere that any new admin-related task has to be approved by the entire community, rather than just at BRFA, I’d support (and please point me in the right general direction of where that policy is). If not, I’d oppose until such an assurance was made. --barneca (talk) 21:31, 4 October 2007 (UTC)[reply]
- If you read the statement, this bot will accept no further tasks. The code will only change to make the bot better at what it does. I will probably add in a mechanism by which the bot will confirm that the page that was redirected to was deleted over a day ago before deleting the redirect. —— Eagle101Need help? 21:35, 4 October 2007 (UTC)[reply]
- I read the statement, and certainly believe WJBscribe that the scope of this particular bot will never change. Again, I'm just concerned that if we're going to start giving sysop bits to a bot, that there should be some mechanism to ensure that it will always be like this, and I'd prefer that mechanism be in place first. If this were not the thin end of the wedge, I would support. But it is, hence my question. --barneca (talk) 21:42, 4 October 2007 (UTC)[reply]
- What mechanism would you like? It seems to me that if this Bot doesn't do what it says here, any of our 1000+ admins may block it. Admins are amazingly trigger happy in blocking Bots that do something they're not authorised to do (or do it too fast). Warnings are not given, the Bot operator is not consulted. Because the task is so simple, it will be very easy to show it is doing something wrong - whereas admin actions by humans are open to more interpretation :-) ... WjBscribe 21:46, 4 October 2007 (UTC)[reply]
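(The kill switch described here - a block by any admin immediately stops all deletions, since blocked accounts cannot delete pages - can be sketched as a pre-deletion guard. This is an illustrative Python sketch with hypothetical client and method names; it is not the bot's actual code, which is written in Perl.)

```python
class BotHalted(Exception):
    """Raised when the community has pulled the plug on the bot."""

def guarded_delete(title, api):
    # `api` is a hypothetical MediaWiki client object; the method names
    # below are illustrative assumptions, not the real bot's interface.
    if api.account_is_blocked():
        # A block by any admin is an absolute shutoff: a blocked account
        # cannot delete pages, so the bot aborts the entire run.
        raise BotHalted("bot account is blocked; aborting run")
    api.delete(title, reason="CSD R1: redirect to a deleted or non-existent page")
```

Because the check runs before every single deletion, a block takes effect mid-run rather than only at the next scheduled run.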
- All bots need approval by the bot approval group which understands how bots work. ((1 == 2) ? (('Stop') : ('Go')) 21:46, 4 October 2007 (UTC)[reply]
- No problem, please do remember that this is setting a precedent and a good method for approving admin bots. There is no policy on it yet, but policy can always follow the practice. As it is now, the practice is following what you consider to be the ideal case, and this is what I consider to be the ideal case. One task per admin bot account. Each new one goes under a new account, and is approved at RFA, rather then BRFA (which is not all that public as far as people that see those pages). —— Eagle101Need help? 21:48, 4 October 2007 (UTC)[reply]
- If that was policy, instead of good practice, my concerns would be addressed. Since policy can change overnight anyway, I’m not sure I actually need to see the policy written in stone first, although that’s what I would prefer. I’ll sleep on it. But I’d just feel better about this whole thing knowing that current consensus is that each new bot task requiring a sysop bit will trigger a new RFA (or some other, less-formal community-wide process that hasn’t been thought up yet), and that there wouldn’t be much support for a bold bot operator sometime in the future adding a task they considered uncontroversial. As for minor upgrades to the bot code, I’m perfectly happy to have BAG approve new version of the bot, and maybe not even that is necessary (not sure what current practice is) as long as there was no task creep. --barneca (talk) 22:02, 4 October 2007 (UTC)[reply]
- There will not be task creep, but I can be sure the code will be changed, but if it is, the changes will be open to all to see. —— Eagle101Need help? 22:09, 4 October 2007 (UTC)[reply]
Very Weak Support - I've always hated the idea of having a bot become an admin, but reassurance counts. I'll give it a go for now. --Hirohisat 紅葉 21:44, 4 October 2007 (UTC)[reply]
- Is this in the wrong section? --barneca (talk) 22:02, 4 October 2007 (UTC)[reply]
- Oops --Hirohisat 紅葉 22:22, 4 October 2007 (UTC)[reply]
- Neutral leaning towards support - I'm not an expert with bots and my little experience (aka run-ins) with bots have been rather . . . odd. I do have faith in the nominators, but I only hope this little Bot doesn't decide to go rogue... -WarthogDemon 22:20, 4 October 2007 (UTC)[reply]
- Yeah, I don't know much about bots and stuff, but I don't see the bot going nuts and blocking folks or deleting the main page... Or, um, killing people and stealing our spaceship. And there will be a shutoff function. So... --Jeffrey O. Gustafson - Shazaam! - <*> 01:36, 5 October 2007 (UTC)[reply]
- I'm paranoid too...so I can't support, basically as per Warthog Demon. Not that it'll matter. — Dihydrogen Monoxide (H2O) 01:13, 5 October 2007 (UTC)[reply]
- Neutral. I have absolutely no problem with the purpose of this bot and no wish to block its sysopping, but generally am opposed to bot admins. I have opposed other bot RFAs which have seemed more controversial, but this seems non-problematic. I decline to support based not on mistrust of the operators, opposition to the bot, or any other qualms about this particular RFA, but out of general paranoia about and mistrust of bot admins.--Grand Slam 7 | Talk 22:13, 5 October 2007 (UTC)[reply]
- I hope this bot proves you wrong then :) (by that I mean that they can be trusted if the operator is trusted) —— Eagle101Need help? 22:36, 5 October 2007 (UTC)[reply]
- Neutral - while I trust the operators and coders of this bot, I am still wary of automation. As the comments in the coding section indicate, even the best programmers may overlook something. While it's not big enough for me to oppose over, I'll consider this a trial run of the concept. --Haemo 02:31, 6 October 2007 (UTC)[reply]
Neutral, leaning towards support, and can be brainwashed into supporting. (switched to support) My concerns are more technical than anything. While I trust WJBscribe not to use the bot to screw over the wiki, he has no on-wiki experience with bots. If the MediaWiki framework that the bot uses changes, how fast can you modify the source code to fix the bot? Also, if you can't, but Eagle (whom I also trust implicitly) can, will Eagle have access to the computer running the bot? Who else has access to that computer? While the concept is sound, these are several things making me pause and not support. Titoxd(?!? - cool stuff) 23:50, 7 October 2007 (UTC)[reply]
- The task will run approx every 2-3 days, so if there are any problems he should be able to contact me in that time. I've been on practically every day. I do not have access to the computer it's running on, but I really don't need it (and it's not my bot account, so why should I have access to his password?). —— Eagle101Need help? 00:03, 8 October 2007 (UTC)[reply]
- Hmm. So, let's say that there is a change to Special:Brokenredirects and the way it is presented. So, after the bot run is done, you'll have three days to check it, but at the same time, the fixes wouldn't come live until ~72 hours after they are made, at the next run. If there are any problems, be it in the code or in applying a patch, a further fix would be visible in an extra 72 hours. I guess that what I'm saying is that I'd be more comfortable if you had a shell account on that computer, but that may not be feasible. Titoxd(?!? - cool stuff) 00:06, 8 October 2007 (UTC)[reply]
- I'm not sure if I get you here, if there is a problem on one run, I will simply be notified, and I'll fix it. The bot op will know to wait till the fix before running the bot. —— Eagle101Need help? 00:21, 8 October 2007 (UTC)[reply]
- This does bring up the question of who is responsible for what. The complications of having two people (you and WJBScribe) involved is that it is slightly less clear who is responsible for what. If something were to go wrong, and it was some unforeseen thing and it was not clear who should or could fix it (you or WJBScribe), or who should be taking responsibility, then this could cause confusion. A clearer way to put this is to ask whether any real consequences (you both, well one of you two at least, talk offhand, jokingly I hope, about saying you will hand in your sysop flags if things go wrong) would be borne by both of you collectively. ie. if something goes wrong in the programming, will you both take responsibility, or will WJBScribe say "not my problem, ask him", and if something goes wrong with the computer it is running on, will Eagle 101 say "not my problem, ask him"? Or will you both take responsibility? Not likely to be a problem here, but I hope this illustrates how two people working as a team can create extra difficulties (as well as solving some difficulties - ie. if you are in different time zones, you are theoretically twice as responsive). Of course, most things can wait for a post-mortem after an emergency shut-down, which brings me to a final point - could such post-mortems be just a little bit more comprehensive and clearly explained and documented than normal? If it would be better, I can add this to my questions in the discussion section above. Thanks. Carcharoth 00:33, 8 October 2007 (UTC)[reply]
- You summarized my concerns better than I could. I'm worried about a "it's his fault, go bug him" situation. Titoxd(?!? - cool stuff) 03:10, 8 October 2007 (UTC)[reply]
- OK, let me try and break that down so I can answer:
- Responsibility. I offered up my sysop flag if this Bot's scope was expanded beyond the scope of the task for which permission is being asked in this RfA, not for any possible malfunction however minor. As I see it, the responsibility (by which I think we are here talking "blame") is something that falls primarily on me - if the code changes it will (a) be tested and (b) the onus must be on me to watch it to ensure nothing is going wrong when it first starts to run on-wiki with that code.
- "Not my problem ask him". Erm - I want the Bot to work, not to be sitting there doing nothing. If the Bot stops working or has to be shut down due to error and I cannot solve the problem, I will be bugging Eagle to patch the code. If he isn't available then some of our other Perl programmers who have kindly offered their services will get bugged. I don't see why other people would end up running round trying to get this fixed though...
- Timezones. The Bot's run has to be manually started by me (for one thing this means no copy of the password is stored), meaning that I will be online when it is running. If anything goes wrong (or it doesn't complete its run due to an error) I should become aware of this pretty quickly and be able to take steps to get it fixed. Any period when the Bot isn't running just puts us back to the present situation - a human admin will have to delete the redirects.
- I hope that addresses what you were getting at... WjBscribe 04:31, 8 October 2007 (UTC)[reply]
- It does. Thank-you. Carcharoth 09:32, 8 October 2007 (UTC)[reply]
- Sigh, as the code is now it should not do anything harmful besides perhaps delete a few redirects if things go wrong. If things go wrong, simply block the bot and notify the bot operator and me (or just him; he will contact me). I don't understand exactly what you mean by blame; if the bot were ever to, say, block everyone on wiki, then it's not the same code that's here, and that means that the bot op ran the wrong code. I would hope that simple non-malicious bugs would simply be reported and I be allowed to fix them without drastic measures beyond the blocking of the bot (which is not drastic at all). It is WJBscribe's responsibility to execute the correct code (that which is posted on toolserver); I'll be ensuring the code is correct and up to date. If there is a major fault with the code (let's say I go nuts and put some lines in there that block everyone, move all pages to XXX ON WHEELS, etc.) and WJBscribe executes that code (that is on toolserver), it's my fault. I'm having a really hard time thinking of where this bot could go wrong. You have to understand the worst case situation here is a few redirects or articles get deleted and quickly undone. The bot won't be deleting at any rogue speed (probably 4 per minute), and should be easily undone.
- I understand this is the first adminbot and folks are a tad nervous about that. But realistically this bot has very, very little chance of doing any harm unintentionally. If for some reason something goes wrong I'll fix it. I'm not going to put my head on the chopping block because we find a corner case and the bot deletes a redirect it should not have; I'd rather fix the code and move on. Please do understand that any coder could cause massive havoc on their own admin account... we (collectively as programmers) don't need a separate account to do harm. Put it this way: if I maliciously change the code to do something evil, please desysop me. I'm sure the same goes for WJBscribe. I don't know if this addresses what you want or not, but the way I see it, I'll never intentionally harm Wikipedia, and I'm sure the same goes for WJBscribe. —— Eagle101Need help? 04:48, 8 October 2007 (UTC)[reply]
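(The safety margin described above - delete only broken redirects with a single revision, at a slow throttle - amounts to a small eligibility check plus a rate limit. A rough Python sketch follows; all field and function names are hypothetical illustrations, not the bot's actual code, which is written in Perl and works from Special:BrokenRedirects.)

```python
import time

def eligible_for_deletion(page):
    """Apply the CSD R1 guard described in the nomination: delete only a
    redirect whose target is gone and which has exactly one revision,
    so no page history a human would need to review is lost.
    `page` is a hypothetical record; the field names are illustrative."""
    return (
        page["is_redirect"]
        and not page["target_exists"]    # points at a deleted/missing page
        and page["revision_count"] == 1  # no history worth preserving
    )

def run_once(pages, delete, throttle_seconds=15):
    """Delete every eligible page, pausing between deletions (~4/minute
    at the default throttle). Returns the titles actually deleted."""
    deleted = []
    for page in pages:
        if eligible_for_deletion(page):
            delete(page["title"])
            deleted.append(page["title"])
            time.sleep(throttle_seconds)
    return deleted
```

At a 15-second pause per deletion the bot cannot exceed roughly four deletions per minute, so even a worst-case malfunction proceeds slowly enough for any admin to block the account and undo the damage.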
- Well, technically it will be the first mandated admin bot. Don't get me wrong here. I'm very likely to support, and I would have hoped that a series of thoughtful questions would have been met by unfailingly polite responses. The impression I'm getting from the "Sigh" comment is that you are finding the process of answering the questions (and some are repetitive, I admit) to be a little bit exasperating. That alone makes me stop and wonder what the attitude will be when people are asking questions about the bot while it is operating. There is also a lot of "trust us, it will work" attitude. I do trust you both to operate the bot correctly, but please allow us (the community) the time and space to ask the questions without offputting "sigh" comments and "we would never harm Wikipedia" responses. That just distracts from the answers I'm trying to get. Carcharoth 09:32, 8 October 2007 (UTC)[reply]
- I apologize, I just feel as if I'm repeating myself here; I think I have said this 2-3 times already. Then again, to read through all the questions here - 31 and counting! (not counting questions in other sections that are not formally marked as such) - must be a bit of work as well. Please remember we are all volunteers here. :) —— Eagle101Need help? 17:49, 8 October 2007 (UTC)[reply]
- Well, you do have to consider that this will be the first adminbot, so it will create a precedent for other adminbots, and primarily bots that operate under the supervision of two users. Additionally, you do have to consider that there are two unique aspects to a Request for Adminbot: That the task be appropriate for automation, and the particular setup and programming for the task. It is actually a good thing that these questions are being asked, as that demonstrates engagement of a large part of the community in these technical matters. But I digress. I'm not sure where you got the impression that I was wondering about desysopping, as the bot won't go kill off all the accounts in the English Wikipedia. That is just not even in the equation of the things I'm considering. My questions were more of the "If it breaks, how long does it get to fix it and who fixes it" type, and you and WJBscribe have answered most of them to my satisfaction. I do realize that this is probably more tedious than a Request for Barbecuing, so I do appreciate the patience, and I do understand the exasperation. So, I'm switching to support. Titoxd(?!? - cool stuff) 18:34, 8 October 2007 (UTC)[reply]
- OK, let me try and break that down so I can answer:
- You summarized my concerns better than I could. I'm worried about a "it's his fault, go bug him" situation. Titoxd(?!? - cool stuff) 03:10, 8 October 2007 (UTC)[reply]
- This does bring up the question of who is responsible for what. The complications of having two people (you and WJBScribe) involved is that it is slightly less clear who is responsible for what. If something were to go wrong, and it was some unforeseen thing and it was not clear who should or could fix it (you or WJBScribe), or who should be taking responsibility, then this could cause confusion. A clearer way to put this is to ask whether any real consequences (you both, well one of you two at least, talk offhand, jokingly I hope, about saying you will hand in your sysop flags if things go wrong) would be borne by both of you collectively. ie. if something goes wrong in the programming, will you both take responsibility, or will WJBScribe say "not my problem, ask him", and if something goes wrong with the computer it is running on, will Eagle 101 say "not my problem, ask him"? Or will you both take responsibility? Not likely to be a problem here, but I hope this illustrates how two people working as a team can create extra difficulties (as well as solving some difficulties - ie. if you are in different time zones, you are theoretically twice as responsive). Of course, most things can wait for a post-mortem after an emergency shut-down, which brings me to a final point - could such post-mortems be just a little bit more comprehensive and clearly explained and documented than normal? If it would be better, I can add this to my questions in the discussion section above. Thanks. Carcharoth 00:33, 8 October 2007 (UTC)[reply]
- I'm not sure if I get you here. If there is a problem on one run, I will simply be notified, and I'll fix it. The bot op will know to wait until the fix is in before running the bot. —— Eagle101Need help? 00:21, 8 October 2007 (UTC)[reply]
- Hmm. So, let's say that there is a change to Special:Brokenredirects and the way it is presented. After the bot run is done, you'll have three days to check it, but at the same time, the fixes wouldn't go live until the next run, roughly 72 hours after they are made. If there are any problems, be it in the code or in applying a patch, a further fix would not be visible for an extra 72 hours. I guess that what I'm saying is that I'd be more comfortable if you had a shell account on that computer, but that may not be feasible. Titoxd(?!? - cool stuff) 00:06, 8 October 2007 (UTC)[reply]
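For readers of the archive, the deletion rule this whole thread is debating is narrow enough to state in a few lines. The bot's actual source is Perl (linked in the nomination above); what follows is only an illustrative Python sketch of the CSD R1 check as the nomination describes it, run against a toy in-memory wiki. All names here (the `wiki` dict, `is_safe_to_delete`) are assumptions for illustration, not the bot's real code:

```python
import re

# Matches the wikitext redirect syntax and captures the target title.
REDIRECT_RE = re.compile(r"#REDIRECT\s*\[\[([^\]|#]+)", re.IGNORECASE)

def is_safe_to_delete(title, wiki):
    """CSD R1 check as described in the nomination: the page must be a
    redirect, have exactly one revision (so no history worth keeping),
    and point at a page that does not exist."""
    revisions = wiki.get(title)
    if not revisions or len(revisions) != 1:
        return False                      # missing page, or real history
    m = REDIRECT_RE.match(revisions[0])
    if not m:
        return False                      # not a redirect at all
    target = m.group(1).strip()
    return target not in wiki             # broken target -> eligible

# Toy "wiki": title -> list of revision texts, oldest last.
wiki = {
    "Old name": ["#REDIRECT [[Deleted article]]"],            # broken, 1 rev
    "Edited redirect": ["#REDIRECT [[Gone]]", "older text"],  # has history
    "Good redirect": ["#REDIRECT [[Kept article]]"],          # target exists
    "Kept article": ["Some content."],
}
```

The two-part guard (exactly one revision, missing target) is what makes the task amenable to automation: any page with real history or a live target simply falls through to a human reviewer.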
- Neutral, since I'm not sure the bot will function properly, I want to see the bot in CAT:RECALL. @pple complain 09:20, 8 October 2007 (UTC)[reply]
- What makes you say that you're unsure that the bot will function properly? Nick 10:41, 8 October 2007 (UTC)[reply]
- Neutral (leaning to support) While I have 200% confidence in this bot not screwing up anything, and absolute trust in the operator and coder, I stand by the fact that one edit can totally cut off a new contributor's editing career here. All newbies are different in their own ways, and we might get one who likes to fix broken or double redirects. This is the sole concern that I have about this bot's sysopping; otherwise I would've supported. O2 (息 • 吹) 22:25, 08 October 2007 (GMT)
- Please see my comment above about typos (response to O#12); the idea that improving the encyclopedia discourages editors from improving the encyclopedia is circular and perplexing. EVula // talk // ☯ // 01:44, 9 October 2007 (UTC)[reply]
- I have read all that even before offering my input. However, absolutely no one can predict what a potential contributor might do on his/her first edit. As I've already said, this is something that may cause me to oppose, but I'd really like to support, but can't. O2 (息 • 吹) 01:28, 10 October 2007 (GMT)
- Neutral I don't like the precedent of allowing admin bots. This task, deleting single-edit redirects to non-existent targets, is sufficiently narrowly drawn that it does seem amenable to bot activity. And I trust the individual running the bot to stay within the requested task. But I stand in opposition to allowing the bot any further tasks or any expansion of this task without another full RfA. This is a caution to the Bot Approvals Group - as they must be the ones to enforce the restriction once an admin flag is granted. GRBerry 13:54, 10 October 2007 (UTC)[reply]
- Caution noted. It's already been stated several times in this RfA that that is indeed how any further expansion will be done. — madman bum and angel 14:45, 11 October 2007 (UTC)[reply]
- The above adminship discussion is preserved as an archive of the discussion. Please do not modify it. Subsequent comments should be made on the appropriate discussion page (such as the talk page of either this nomination or the nominated user). No further edits should be made to this page.