Library SEO re-revisited 

Five years ago I complained publicly and loudly about commercial discovery layers killing discoverability through search engines with restrictive robots.txt files.

Cornell, what are you doing?

Stanford gets it though:

(This SEO moment prompted by @platypus)



@dbs in Cornell’s defense, they were getting spider-DDoSed. SpiDOSed?


@platypus Sure, but
"Disallow: /" for all user agents is drastic!

There are ways to block the bad bots and slow down the desirable ones, so that the site can still be indexed and searched, and so that the Internet Archive can archive it.

Really bad bots will just ignore robots.txt anyway...
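As a rough sketch of that middle ground, a robots.txt can deny specific misbehaving crawlers while only throttling the rest (the bot names and delay value here are illustrative, not taken from any particular site):

```
# Block a specific abusive crawler outright (name is illustrative)
User-agent: BadBot
Disallow: /

# Explicitly welcome the Internet Archive's crawler
User-agent: ia_archiver
Disallow:

# Everyone else: allowed, but asked to slow down.
# Note: Crawl-delay is a de facto extension, not part of the
# robots.txt standard, and some crawlers (e.g. Googlebot) ignore it.
User-agent: *
Crawl-delay: 10
Disallow:
```

An empty `Disallow:` means "nothing is disallowed", which is the opposite of the drastic `Disallow: /` above.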


@platypus Evergreen is really easy to DDOS so I've had to learn some tricks :/
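One common trick for protecting a catalogue like this (not necessarily what is meant above) is per-IP rate limiting at the web server, so that no single client can hammer the search pages. A minimal nginx sketch, with illustrative paths and limits:

```nginx
# Throttle each client IP to ~2 requests/second on the OPAC,
# with a short burst allowance. Values are illustrative only.
limit_req_zone $binary_remote_addr zone=opac:10m rate=2r/s;

server {
    listen 80;

    location /eg/opac/ {
        # Requests beyond the burst are rejected with 503
        limit_req zone=opac burst=10 nodelay;
        proxy_pass http://localhost:8080;
    }
}
```

This keeps well-behaved crawlers working (they stay under the limit) while blunting the bots that ignore robots.txt entirely.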
