this post was submitted on 02 Feb 2024
675 points (99.1% liked)

Technology


One of Google Search's oldest and best-known features, cache links, is being retired. Accessed through the "Cached" button, these links showed a snapshot of a web page as it appeared the last time Google indexed it. According to Google, however, they're no longer needed.

"It was meant for helping people access pages when way back, you often couldn’t depend on a page loading,” Google's Danny Sullivan wrote. “These days, things have greatly improved. So, it was decided to retire it."

modus@lemmy.world 8 points 9 months ago

Isn't caching how anti-paywall sites like 12ft.io work?

megaman@discuss.tchncs.de 8 points 9 months ago

At least some of these tools change their "user agent" to match Google's crawler.

When you browse in, say, Firefox, one of the headers Firefox sends to the website says "I am using Firefox." That can affect how the website displays for you, or let the admin know they need Firefox compatibility (or be used to fingerprint you...).

You can simply lie in that header, though. Some privacy tools change it to Chrome's, since that's the most common browser.

Or, you say "I am the Google web crawler", which they let past the paywall so the page can be added to Google.
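The spoofing described above can be sketched in a few lines of Python using only the standard library. The URL and the exact user-agent string here are illustrative assumptions; real paywall behavior varies by site, and this is a minimal sketch rather than how any particular tool (such as 12ft.io) is implemented.

```python
import urllib.request

# An illustrative Googlebot user-agent string; the real crawler's
# string may differ across versions.
GOOGLEBOT_UA = (
    "Mozilla/5.0 (compatible; Googlebot/2.1; "
    "+http://www.google.com/bot.html)"
)

def fetch_as_googlebot(url: str) -> bytes:
    """Fetch a page while claiming, via the User-Agent header,
    to be Google's crawler."""
    req = urllib.request.Request(url, headers={"User-Agent": GOOGLEBOT_UA})
    with urllib.request.urlopen(req) as resp:
        return resp.read()
```

Browser extensions that switch user agents do essentially the same thing: they replace one request header before the request leaves your machine.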

sfgifz@lemmy.world 2 points 9 months ago

> Or, you say "I am the Google web crawler", which they let past the paywall so the page can be added to Google.

If I'm not wrong, Google publishes the IP address ranges its crawlers use, so not all sites will let you through just because your UA claims to be Googlebot.

lud@lemm.ee 5 points 9 months ago

I dunno, but I suspect that they aren't using Google's cache if that's the case.

My guess is that the site uses its own scraper that acts like a search engine, and because websites want to be visible to search engines, they allow it to see everything. This is just my guess, so it might very well be completely wrong.