this post was submitted on 18 Mar 2024
130 points (93.3% liked)
Technology
I hadn't really even considered that Apple wouldn't be working on its own LLM. It seems like everyone is building their own these days.
They possibly are (or at least have people doing research); it's just not very good (yet?): https://aimodels.substack.com/p/apple-is-working-on-multimodal-ai
Remember the early days of Apple Maps?
If that's any indication, Apple's AI offerings will someday be as good as or better than Google's. Apple Maps is pretty great these days, but it was absolute garbage when they rolled it out.
Apple is working on models, but they seem to be focusing on ones that fit in tens of gigabytes of RAM, rather than the tens of terabytes the frontier models use.
I wouldn't be surprised if Apple ships an "iPhone Pro" with 32GB of RAM dedicated to AI models. You can do a lot of really useful stuff with a model like that... but it can't compete with GPT-4 or Gemini today, and those are moving targets. OpenAI and Google will have even better models (likely using even more RAM) by the time Apple enters this space.
A split system, where some processing happens on-device and some in the cloud, could work really well. For example, analyse every email/message/call a user has ever sent or received with the local model, but if the user asks how many teeth a crocodile has... send that one to the cloud.
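The split described above is essentially a routing policy in front of two models. A minimal sketch of that idea, assuming a simple keyword heuristic (the keyword list and function names here are illustrative, not anything Apple has announced):

```python
# Hypothetical local-vs-cloud router for a split LLM system.
# The keyword heuristic is a stand-in for whatever classifier
# a real system would use to detect personal-data queries.

PERSONAL_KEYWORDS = {"email", "message", "call", "calendar", "photo", "contact"}

def route_query(query: str) -> str:
    """Decide whether a query is handled on-device or in the cloud.

    Queries touching personal data stay on the local model for privacy;
    general-knowledge questions go to a larger cloud model.
    """
    words = {w.strip("?.,!").lower() for w in query.split()}
    if words & PERSONAL_KEYWORDS:
        return "local"   # small on-device model sees the private data
    return "cloud"       # big hosted model answers general questions

# route_query("Summarize my email from yesterday")      -> "local"
# route_query("How many teeth does a crocodile have?")  -> "cloud"
```

In practice the router would likely be a small classifier rather than keywords, but the shape is the same: privacy-sensitive work never leaves the device, and only anonymous general queries hit the cloud.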
Tbf, Google has versions of Gemini (Gemini Nano) that run locally on phones too, and their open-source Gemma models run in 16GB of RAM or so.