this post was submitted on 21 Nov 2024
125 points (97.0% liked)


Niantic, the company behind the extremely popular augmented reality mobile games Pokémon Go and Ingress, announced that it is using data collected by its millions of players to create an AI model that can navigate the physical world. 

In a blog post published last week, first spotted by Garbage Day, Niantic says it is building a “Large Geospatial Model.” This name, the company explains, is a direct reference to Large Language Models (LLMs) like OpenAI’s GPT, which are trained on vast quantities of text scraped from the internet in order to process and produce natural language. Niantic explains that a Large Geospatial Model, or LGM, aims to do the same for the physical world, a technology it says “will enable computers not only to perceive and understand physical spaces, but also to interact with them in new ways, forming a critical component of AR glasses and fields beyond, including robotics, content creation and autonomous systems. As we move from phones to wearable technology linked to the real world, spatial intelligence will become the world’s future operating system.”

By training an AI model on millions of geolocated images from around the world, Niantic says the model will be able to predict its immediate environment in the same way an LLM is able to produce coherent and convincing sentences by statistically determining what word is likely to follow another.
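The next-word analogy the article draws can be illustrated with a toy bigram model. This is a minimal sketch of statistical next-word prediction, not anything from Niantic's actual system; the tiny `corpus` string is a made-up stand-in for real training text.

```python
from collections import Counter, defaultdict

# Toy stand-in for "vast quantities of text"; purely illustrative.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows another (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the statistically most likely next word, or None if unseen."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" follows "the" most often in this corpus
```

An LGM would, by analogy, condition on what it has "seen" of a place so far and predict the most likely surrounding geometry, rather than the most likely next word.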

[–] paraphrand@lemmy.world 12 points 6 hours ago (5 children)

I’ve found myself thinking “well, you just helped teach the AI about that one…” many times when reading content online.

It’s a strange thing to know that a form of the basilisk is real. Things posted will help AI get better, if only by teeny tiny increments each time.

[–] webghost0101@sopuli.xyz 11 points 6 hours ago (4 children)

AI learning isn't the issue, and it's not something we will be able to put a lid on either way. Either it destroys or saves the world. It doesn't need to learn much to do so besides evolving actual self-agency and sovereign thought.

What is a huge issue is the secretive, non-consensual mining of people's identities and expressions.

And then acting all normal about it.

[–] paraphrand@lemmy.world 5 points 6 hours ago (1 children)

I didn’t say it was an issue. I just said it was a strange feeling to know AI is watching us talk past each other.

[–] webghost0101@sopuli.xyz 2 points 6 hours ago* (last edited 6 hours ago)

I sort of misread your comment as saying the basilisk is inevitable, which is a thought I would describe as at least oopsie-issue-level.

Still, there are many other people bent on directly poisoning AI to counteract the learning, but I just fear that will get us to a dangerously rogue, incoherent AI faster than if we aimed for maximum coherent intelligence and hoped that benevolence is an emergent behavior of it.

But more to the point: if we build AI by grossly exploiting our fellow humans, how do we expect it will treat us once it reaches a state of independent learning?
