"Capitalism creates innovation" - one of the best jokes out there
Olgratin_Magmatoe
For me personally, it was easier to just write a few python scripts to parse the CSV files from my banks.
I switched over to this last month, and I'm already a fan of it. It did take a bit of time to port over my old shitty Excel tracking, and to write a few Python scripts to parse the exported CSV files from each of my bank accounts, but it is very clearly worth it.
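To give an idea of what those scripts look like, here's a rough sketch. The column names ("Transaction Date", "Description", "Amount") and the filename are made up, since every bank formats its exports differently:

```python
import csv
from pathlib import Path

def parse_bank_export(path):
    """Normalize one bank's CSV export into a common row format."""
    rows = []
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            rows.append({
                "date": row["Transaction Date"],
                "payee": row["Description"].strip(),
                # Amounts usually come through as strings with thousands separators.
                "amount": float(row["Amount"].replace(",", "")),
            })
    return rows

if __name__ == "__main__":
    for txn in parse_bank_export(Path("checking_export.csv")):
        print(txn)
```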
One of the nice things is that it uses a SQLite db to store everything, so if shit ever hits the fan with one of my drives or the software, it isn't the end of the world.
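Since everything lives in a single SQLite file, backups are about as simple as it gets. Here's a minimal sketch using Python's built-in sqlite3 module; the filename "budget.sqlite" is just a placeholder, not the app's actual database name:

```python
import sqlite3
from datetime import date

# Placeholder filenames, just to show the idea.
src = sqlite3.connect("budget.sqlite")
dst = sqlite3.connect(f"budget-backup-{date.today()}.sqlite")

with dst:
    # Copies the whole database using SQLite's online backup API,
    # so it's safe even if the app currently has the file open.
    src.backup(dst)

src.close()
dst.close()
```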
You can self host it so you can access it over the web, or you can choose to run it entirely locally. It's pretty nifty, and I've gone with the second option.
Ignoring the myriad of other issues listed in this thread, the bit about training AI is pretty misleading. It's not hard to scrape webpages for whatever kind of data you like, even if loops doesn't outright hand things over to third parties for that purpose.
And the kind of people who are downloading the entire internet to train AIs are exactly the type who are willing to scrape without permission.
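For illustration, this is roughly all it takes with off-the-shelf libraries (requests + BeautifulSoup); the URL is a placeholder:

```python
import requests
from bs4 import BeautifulSoup

# Placeholder URL, just to show how little code scraping takes.
resp = requests.get("https://example.com/some-public-page", timeout=10)
resp.raise_for_status()

soup = BeautifulSoup(resp.text, "html.parser")

# Pull out every paragraph of visible text on the page.
for p in soup.find_all("p"):
    print(p.get_text(strip=True))
```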
Hopefully, instead of turning into a pile of e-waste, all these "useless" desktops flood refurbishers, and refurbished desktops become even cheaper. I wouldn't mind replacing my dying media server.
The Emperor protects
To be fair, some penguins aren't exactly small
Not every GET request is simple enough to cache, and not everyone is running something big enough to need a sysadmin.
Near zero isn't zero though. And not everyone is using caching.
It has nothing to do with a sysadmin. It's impossible for a given request to require zero processing power. Therefore there will always be an upper limit to how many GET requests can be handled, even if it's a small amount of processing power per request.
For a business it's probably not a big deal, but if it's a self hosted site it quickly can become a problem.
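A toy sketch of what I mean, using nothing but Python's standard library: even with a cache sitting in front of the rendering work, every single GET still costs a little CPU for request parsing, the cache lookup, and writing the response, so there's always a ceiling. The 30-second TTL and the fake 50 ms "render" are made-up numbers for the example:

```python
import time
from http.server import BaseHTTPRequestHandler, HTTPServer

CACHE = {}       # path -> (expires_at, body)
CACHE_TTL = 30   # seconds; arbitrary for the example

def render_page(path):
    # Stand-in for whatever real work the server would do per page.
    time.sleep(0.05)
    return f"<html><body>Rendered {path} at {time.time():.0f}</body></html>".encode()

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        now = time.time()
        entry = CACHE.get(self.path)
        if entry is None or entry[0] < now:
            # Cache miss or expired: do the expensive work and store it.
            entry = (now + CACHE_TTL, render_page(self.path))
            CACHE[self.path] = entry
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(entry[1])))
        self.end_headers()
        self.wfile.write(entry[1])

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8000), Handler).serve_forever()
```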
Agreed