Before website owners decide to expend their resources on deterring webbots, they should ask themselves a few questions.
What can a webbot do with your website that a person armed with a browser cannot do?
Are your deterrents keeping desirable spiders (like search engines) from accessing your web pages?
Does an automated agent (that you want to thwart) pose an actual threat to your website? Might it even provide a benefit, as a procurement bot would?
If your website contains information that needs to be protected from webbots, should that information really be online in the first place?
If you put information in a public place, do you really have the right to bar certain methods of reading it?
If you still insist on banning webbots from your website, keep in mind that unless you deliberately implement measures like the ones described near the end of this chapter, you will probably have little luck defending your site from rogue webbots.
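To make the trade-off behind these questions concrete, consider robots.txt, the simplest and most common webbot deterrent. The short Python sketch below uses the standard library's urllib.robotparser to test a hypothetical blanket rule (the rule and bot names are illustrative, not recommendations); it shows that such a deterrent shuts out desirable spiders like search engines, while a rogue webbot is free to ignore the file entirely.

from urllib.robotparser import RobotFileParser

# A hypothetical blanket deterrent: disallow everything for every bot.
rules = """\
User-agent: *
Disallow: /
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

# Compliant crawlers (including desirable ones) honor the rule...
for agent in ("Googlebot", "bingbot"):
    print(agent, "allowed:", parser.can_fetch(agent, "https://example.com/"))

# ...but a rogue webbot never consults robots.txt in the first place,
# so this rule costs you search-engine traffic without stopping the rogue.

Both checks print False: the only webbots this rule actually stops are the well-behaved ones, which is exactly the outcome the questions above are meant to help you avoid.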