Looks like I will be recreating my bot blocker to some degree. I don't think I will put this into an open-source lib; I think I will just keep it to myself. I've seen the re-emergence of randomly generated garbage User-Agent strings showing up in my logs, and the ChatGPT bot hitting my site like my website is giving away free beer or something. Blah. I was hoping to avoid most of this crap, but downloading 4,000+ pages daily, when I maybe change a few comics a week, is overkill. And I don't see any benefit for my website here in referrals. Most websites don't send back a referrer these days, which is a sad state for the internet. The garbage-UA detection doesn't have to be fancy, either; a real browser UA has recognizable tokens and structure, while the junk is usually a random blob.
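Here's a minimal sketch of the kind of sanity check I mean. This is illustrative Python, not the C# the site actually runs on, and the token list, regex, and the looks_like_garbage helper are all made up for the example:

```python
import re

# Tokens that show up in virtually every legitimate browser UA string
# (illustrative list, not exhaustive).
KNOWN_TOKENS = ("mozilla", "applewebkit", "gecko", "chrome",
                "safari", "firefox", "edg", "opera", "trident")

def looks_like_garbage(user_agent: str) -> bool:
    """Rough heuristic: flag UA strings that share nothing with real browsers."""
    if not user_agent:
        return True
    ua = user_agent.lower()
    # Real UAs name at least one well-known engine/browser token...
    has_known_token = any(tok in ua for tok in KNOWN_TOKENS)
    # ...and have some product/version structure like "Name/1.2 (...)".
    has_structure = re.search(r"\w+/[\d.]+", ua) is not None
    return not (has_known_token or has_structure)

looks_like_garbage("kjqHzR4tWx9")                            # True
looks_like_garbage("Mozilla/5.0 (Windows NT 10.0; Win64)")   # False
```

The real blocker would layer more signals on top of this, but even a dumb check like that catches the random-blob strings cluttering the logs.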
I am thinking of putting up a separate site for a browser DB and a live user-agent parser based on my new version of the Browser Detective. I will need to work on the bot blocker so that website doesn't get killed as well, since scrapers would want to suck down all the files and generated pages. I have HTTP header data gathered from this site going back to 2006. The header data from September 2024 to present comes from the .NET Core version of this website, where I no longer get raw access to the headers but a preprocessed version, so some things are cleaned up now that were not previously. But it works well enough for the new version.
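For that site, the blocker will probably start with plain per-IP rate limiting before anything clever. Again, a rough Python sketch of the idea only; the window, cap, and allow_request helper are made-up numbers and names, not what I'd actually ship:

```python
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 60   # illustrative limits, tuned later from real traffic
MAX_REQUESTS = 120

# Per-IP timestamps of recent requests.
_hits: dict[str, deque] = defaultdict(deque)

def allow_request(client_ip: str, now: float | None = None) -> bool:
    """Sliding-window limiter: True if this IP is still under the cap."""
    now = time.monotonic() if now is None else now
    hits = _hits[client_ip]
    # Drop timestamps that have aged out of the window.
    while hits and now - hits[0] > WINDOW_SECONDS:
        hits.popleft()
    if len(hits) >= MAX_REQUESTS:
        return False  # scraper pace: reject (or tarpit) the request
    hits.append(now)
    return True
```

A human browsing the parser pages will never hit a cap like that, but a scraper trying to pull every generated page will trip it almost immediately.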