this post was submitted on 13 May 2024
81 points (80.0% liked)

Technology

59534 readers
3168 users here now

This is a most excellent place for technology news and articles.


Our Rules


  1. Follow the lemmy.world rules.
  2. Only tech related content.
  3. Be excellent to each another!
  4. Mod approved content bots can post up to 10 articles per day.
  5. Threads asking for personal tech support may be deleted.
  6. Politics threads may be removed.
  7. No memes allowed as posts, OK to post as comments.
  8. Only approved bots from the list below, to ask if your bot can be added please contact us.
  9. Check for duplicates before posting, duplicates may be removed

Approved Bots


founded 1 year ago
MODERATORS
 

GPT-4o (“o” for “omni”) is a step towards much more natural human-computer interaction—it accepts as input any combination of text, audio, and image and generates any combination of text, audio, and image outputs. It can respond to audio inputs in as little as 232 milliseconds, with an average of 320 milliseconds, which is similar to human response time in a conversation. It matches GPT-4 Turbo performance on text in English and code, with significant improvement on text in non-English languages, while also being much faster and 50% cheaper in the API. GPT-4o is especially better at vision and audio understanding compared to existing models.

Prior to GPT-4o, you could use Voice Mode to talk to ChatGPT with latencies of 2.8 seconds (GPT-3.5) and 5.4 seconds (GPT-4) on average. To achieve this, Voice Mode is a pipeline of three separate models: one simple model transcribes audio to text, GPT-3.5 or GPT-4 takes in text and outputs text, and a third simple model converts that text back to audio. This process means that the main source of intelligence, GPT-4, loses a lot of information—it can’t directly observe tone, multiple speakers, or background noises, and it can’t output laughter, singing, or express emotion.
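The three-stage pipeline described above can be sketched as three composed functions. This is purely illustrative: the function bodies are hypothetical stand-ins, not OpenAI APIs, and each hand-off shows where information is lost.

```python
# Illustrative sketch of the pre-GPT-4o Voice Mode pipeline.
# All function names and bodies are hypothetical placeholders.

def transcribe(audio: bytes) -> str:
    """Stage 1: speech-to-text. Tone, speaker identity, and background
    sounds present in the audio are discarded at this step."""
    return "hello there"  # stand-in for a real ASR model

def generate_reply(text: str) -> str:
    """Stage 2: the text-only LLM (GPT-3.5 or GPT-4). It sees only the
    transcript, so it cannot react to *how* something was said."""
    return f"You said: {text}"  # stand-in for a real LLM call

def synthesize(text: str) -> bytes:
    """Stage 3: text-to-speech. It cannot produce laughter or singing,
    because the middle stage could only emit plain text."""
    return text.encode("utf-8")  # stand-in for a real TTS model

def voice_mode(audio: bytes) -> bytes:
    # Each arrow is a lossy hand-off; total latency is the sum of all
    # three stages, which is why averages reached 2.8-5.4 seconds.
    return synthesize(generate_reply(transcribe(audio)))
```

A single end-to-end model like GPT-4o removes the two lossy hand-offs rather than merely speeding up each stage.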

GPT-4o’s text and image capabilities are starting to roll out today in ChatGPT. We are making GPT-4o available in the free tier, and to Plus users with up to 5x higher message limits. We'll roll out a new version of Voice Mode with GPT-4o in alpha within ChatGPT Plus in the coming weeks.

Bishma@discuss.tchncs.de 22 points 6 months ago

Maybe this is wishful thinking, but at first glance this seems like a sign that we're already entering the LLM plateau. Like when phones got to the point where each new version is just more cameras, a smoother UI, and harder glass.

Nevoic@lemm.ee 18 points 6 months ago (last edited 6 months ago)

18 months ago, ChatGPT didn't exist. GPT-3.5 wasn't publicly available.

At that same point 18 months ago, iPhone 14 was available. Now we have the iPhone 15.

People are used to LLMs/AI developing much faster, but you really have to keep in perspective how different this tech was 18 months ago. Comparing LLM and smartphone plateaus is just silly at the moment.

Yes, they've been refining the GPT-4 model for about a year now, but we've also got major competitors in the space that didn't exist 12 months ago. We got multimodality that didn't exist 12 months ago. Sora is mind-bogglingly realistic, and it didn't exist 12 months ago.

GPT5 is just a few months away. If 4->5 is anything like 3->4, my career as a programmer will be over in the next 5 years. GPT4 already consistently outperforms college students that I help, and can often match junior developers in terms of reliability (though with far more confidence, which is problematic obviously). I don't think people realize how big of a deal that is.

KevonLooney@lemm.ee 10 points 6 months ago

There's a basic problem with replacing human experts with AI. Where will they get their info from with no one to scrape? Other AI generated content?

They can't learn anything and are just "standing on the shoulders of giants". These companies will fire their software developers, just to hire them back as AI trainers.

Nevoic@lemm.ee 5 points 6 months ago

"they can't learn anything" is too reductive. Try feeding GPT4 a language specification for a language that didn't exist at the time of its training, and then tell it to program in that language given a library that you give it.

It won't do well, but neither would a junior developer in raw vim/nano without compiler/linter feedback. It will construct something that roughly resembles the new language you fed it, despite never having been trained on it. This is something that LLMs can in theory do well, so GPT-5/6/etc. will do better, perhaps as well as any professional human programmer.

Their context windows have increased many times over. We're no longer operating in the 4/8k range, but instead 128k->1024k range. That's enough context to, from the perspective of an observer, learn an entirely new language, framework, and then write something almost usable in it. And 2024 isn't the end for context window size.
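A back-of-envelope check makes the point about window sizes concrete. This sketch uses the rough ~4-characters-per-token heuristic (an approximation, not a real tokenizer) to ask whether a language spec plus a library's docs would fit in a given window.

```python
# Rough token-budget estimate using the common ~4 chars/token heuristic.
# This is an approximation for illustration, not a real tokenizer.

def estimate_tokens(text: str) -> int:
    """Crude estimate: roughly one token per four characters."""
    return max(1, len(text) // 4)

def fits_in_context(documents: list[str], window: int = 128_000) -> bool:
    """True if the combined documents fit inside the context window."""
    return sum(estimate_tokens(d) for d in documents) <= window

# A ~200-page spec at ~2,000 characters per page is ~400k characters,
# i.e. roughly 100k tokens: inside a 128k window, with room left for
# the library docs and the model's own output.
```

At 4k or 8k tokens, that same spec wouldn't come close to fitting, which is why "learning" a new language in-context only became plausible recently.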

With the right tools (e.g input compiler errors and have the LLM reflect on how to fix said compiler errors), you'd get even more reliability, with just modern day LLMs. Get something more reliable, and effectively it'll do what we can do by learning.

So much work in programming isn't novel. You're not making something really new, but instead piecing together work other people did. Even when you make an entirely new library, you're using a language someone else wrote, libraries other people wrote, in an editor someone else wrote, on an OS someone else wrote. We're all standing on the shoulders of giants.

abhibeckert@lemmy.world 3 points 6 months ago

Where will they get their info from with no one to scrape?

It's not like there's a shortage of human generated content. And the content that has already been generated isn't going anywhere. It will be available effectively forever.

just “standing on the shoulders of giants”.

So? If you ask an LLM a question, you often get a very useful response. That's ultimately all that matters.
