For instance, say I search for "The Dark Knight" on my Usenet indexer. It returns a list of uploads and where to get them via my Usenet provider. I can then download the articles, stitch them together, and verify that the result is, indeed, The Dark Knight. All of this costs me only a few dollars a month.
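To make that concrete, here's a rough sketch of the first step: the indexer hands back an NZB file, which is just XML listing every individual Usenet article you need to fetch and reassemble. The filename "movie.nzb" is made up, but the namespace is the standard one indexers use.

```python
# Minimal sketch: enumerate the individual Usenet articles (segments) an NZB
# file points to. "movie.nzb" is a hypothetical file from an indexer.
import xml.etree.ElementTree as ET

NZB_NS = "{http://www.newzbin.com/DTD/2003/nzb}"

def list_segments(nzb_path):
    """Yield (subject, segment number, message-ID) for every article in the NZB."""
    tree = ET.parse(nzb_path)
    for file_el in tree.getroot().iter(f"{NZB_NS}file"):
        subject = file_el.get("subject", "")
        for seg in file_el.iter(f"{NZB_NS}segment"):
            yield subject, int(seg.get("number", "0")), seg.text

if __name__ == "__main__":
    for subject, number, message_id in list_segments("movie.nzb"):
        print(f"{subject} [part {number}]: <{message_id}>")
```

Each of those message-IDs is one article sitting on the provider's servers; a downloader just fetches them all and joins the pieces.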
My question is, why can't copyright holders do this as well?
They could follow the same process, and then send takedown requests for each individual article that makes up the movie. We already know they try to catch people torrenting, so why don't they do this as well?
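The per-article step wouldn't even be hard to automate. Here's a sketch of what that could look like; the message-IDs, abuse address, and notice wording are all made up for illustration, since real providers publish their own DMCA/NTD submission formats.

```python
# Sketch: given the message-IDs collected from an NZB (hard-coded placeholders
# here), format one takedown notice listing every infringing article.
message_ids = [
    "part1of100.abc123@news.example.com",  # placeholder IDs, not real articles
    "part2of100.def456@news.example.com",
]

def format_notice(title, ids, abuse_contact="abuse@usenet-provider.example"):
    body = [
        f"To: {abuse_contact}",
        f"Subject: Copyright takedown request for '{title}'",
        "Please remove the following articles:",
    ]
    body += [f"  <{mid}>" for mid in ids]
    return "\n".join(body)

if __name__ == "__main__":
    print(format_notice("The Dark Knight", message_ids))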
I can think of a few reasons, but they all seem pretty shaky.
- The content is hosted in countries where they don't have to comply with takedown requests.
It seems unlikely to me that literally all of it is hosted in jurisdictions like that. Plus, providers that ignored takedowns wouldn't be able to operate in countries like the US at all without facing legal repercussions.
- The copyright holders feel the upfront cost of indexer and provider access is greater than the cost of people pirating their content.
This also seems fishy. It's cheap enough for me as an individual to do this, and if Usenet weren't an option, I'd have to pay for 3+ streaming services to watch everything I currently do. They'd practically break even on this scheme even if the only person they cut off were me.
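Rough back-of-the-envelope version of that claim (every figure here is an assumption of mine, not a quote from any provider):

```python
# Back-of-the-envelope numbers behind the "they'd break even" claim.
usenet_cost_per_month = 10        # rough monthly cost of indexer + provider access
streaming_services_replaced = 3   # services I'd subscribe to if Usenet weren't an option
streaming_cost_per_service = 12   # rough average monthly price per service

recovered_revenue = streaming_services_replaced * streaming_cost_per_service
print(f"Rough monthly revenue recovered from cutting off one pirate: ${recovered_revenue}")
print(f"Rough monthly cost of indexer + provider access:             ${usenet_cost_per_month}")
```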
- They do actually do this, but it's on a scale small enough for me not to care.
The whole point of doing this would be to make Usenet a non-viable option for piracy. If I don't care about it because it happens so rarely, then what's the point of doing it at all?
Your second point is a good one, but you absolutely can log the IP that requested robots.txt. That's just a standard part of any HTTP server ever, no JavaScript needed.
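Any stock web server (nginx, Apache, etc.) already writes the client IP into its access log for every request, so finding who fetched robots.txt is just a filter over that log. A quick sketch, assuming the usual combined log format and a hypothetical log path:

```python
# Sketch: pull the client IPs that requested robots.txt out of a standard
# combined-format access log. The log path is a hypothetical example.
import re

LOG_PATH = "/var/log/nginx/access.log"  # hypothetical path
line_re = re.compile(r'^(\S+) \S+ \S+ \[[^\]]+\] "GET /robots\.txt')

with open(LOG_PATH) as log:
    robot_ips = {m.group(1) for line in log if (m := line_re.match(line))}

for ip in sorted(robot_ips):
    print(ip)
```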