this post was submitted on 05 Jan 2024 to Technology (lemmy.world)
[–] BlackEco@lemmy.blackeco.com 27 points 10 months ago (3 children)

I'm afraid that if AI ends up being just a fad, Mozilla won't be able to recover from this bet.

[–] loobkoob@kbin.social 33 points 10 months ago (3 children)

I don't think AI will be a fad in the way blockchain/cryptocurrency was. I certainly think there's somewhat of a hype bubble surrounding AI, though - it's the hot new buzzword that a lot of companies are mentioning to bring investors on board. "We're planning to use some kind of AI in some way in the future (but we don't know how yet). Make cheques out to ________ please."

I do think AI has actual, practical uses, though, unlike blockchain, which always came off as a "solution looking for a problem". Like, I'm a fairly normal person and I've already found good uses for AI: asking it various questions where it gives better answers than search engines, having it write code for me (I can't write code myself), etc. Whereas I've never touched anything to do with crypto.

AI feels like a space that will continue to grow for years, and that will be implemented into more and more parts of society. The hype will die down somewhat, but I don't see AI going away.

[–] bionicjoey@lemmy.ca 13 points 10 months ago (3 children)

The thing is, AI has been around for a really long time and has lots of established use cases. Unfortunately, none of them have to do with generative language/image models. AI is mainly used for classifying data as part of data science. But data science is extremely unsexy to the average person, so for them AI has become synonymous with the ChatGPTs and DALL-Es of the world.

[–] LWD@lemm.ee 2 points 10 months ago* (last edited 10 months ago)
[–] ramblinguy@sh.itjust.works 2 points 10 months ago

Don't worry, once the hype fades, we can start calling LLMs "machine learning" again

[–] Ephera@lemmy.ml 1 points 10 months ago

Yeah, so far we've had discriminative AI (takes complex input, gives simple output).
Now we have generative AI (takes simple input, gives complex output).

I imagine the discussion above is about generative AI...

[–] rsuri@lemmy.world 8 points 10 months ago (1 children)

I’ve found good uses for AI already in asking it various questions where it gives better answers than search engines, in writing code for me (I can’t write code myself), etc.

I'd caution against using it for these things due to its tendency to make stuff up. I've tried using ChatGPT for both, but in my experience, if I can't find something on Google myself, ChatGPT will claim to know the answer but give me something that just isn't true.

For coding, it can do basic things, but if I want to use a library or do some other more granular task, it'll do something like make up a function call that doesn't exist. The worst part is that it looks right, so I used to waste time trying to figure out why it didn't work for me, when it turns out it didn't work for anybody.

For factual information, I had to correct a friend who gave me fake stats on airline reliability to help me make a flight choice - he got them from GPT-4, and while the numbers looked right, they deviated from other info. In general, you never want to trust any specific numbers from LLMs, because they're trained to look right rather than to actually be right.
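As a toy illustration of the hallucinated-function problem (the suggestions below are hypothetical chatbot output, not something from this thread): a quick existence check catches an invented API call before you waste time debugging it.

```python
import statistics

# Two functions a chatbot might suggest for averaging numbers:
# one real, one plausible-looking but hallucinated.
suggestions = ["mean", "average"]

for name in suggestions:
    if hasattr(statistics, name):
        print(f"statistics.{name} exists")
    else:
        print(f"statistics.{name} does not exist - likely hallucinated")
```

Running this shows that `statistics.mean` is real while `statistics.average` is not, which is exactly the kind of "looks right, isn't" output described above.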

For me LLMs have proven most useful for things like brainstorming or coming up with an image I can use for illustration purposes. Because those things don't need to be exactly right.

[–] loobkoob@kbin.social 3 points 10 months ago

I agree completely. I think AI can be a valuable tool if you use it correctly, but it requires you to prompt it properly, to use its output in the right way, and to know what it's good at and what it's not. Like you said, for things like brainstorming or looking for inspiration, it's great. And while its artistic output is very derivative - both because it's literally derived from all the art it's been trained on and because there's enough other AI art out there that it doesn't really have a unique "voice" most of the time - you could easily use it as a foundation to create your own art.

To expand on my asking it questions: the kind of questions I find it useful for are ones like "what are some reasons why people may do x?" or "what are some of the differences between y and z?". Or an actual question I asked ChatGPT a couple of months ago based on a conversation I'd been having with a few people: "what is an example of a font I could use that looks somewhat professional but that would make readers feel slightly uncomfortable?" (After a little back and forth, it ended up suggesting a perfect font.)

Basically, it's good for divergent questions, evaluative questions, inferential questions, etc. - open-ended questions - where you can either use its response to simulate asking a variety of people (or to save yourself from looking through old AskReddit and Quora posts...) or just to give you different ideas to consider, and it's good for suggestions. And then, of course, you decide which answers are useful/appropriate. I definitely wouldn't take anything "factual" it says as correct, although it can be good for giving you additional things to look into.

As for writing code: I've only used it for simple-ish scripts so far. I can't write code, but I'm just about knowledgeable enough to read code and see what it's doing, and I can make my own basic edits. I'm perfectly okay at following the logic of most code; it's just that I don't know the syntax. So I'm able to explain to ChatGPT exactly what I want my code to do, how it should work, etc., and it can write it for me. I've had some issues, but I've (so far) always been able to troubleshoot them and eventually find a solution. I'm aware that if I want to do anything more complex, I'll need to expand my coding knowledge, though! But so far, I've been able to use it to write scripts that are already beyond my own personal coding capabilities, which I think is impressive.

I generally see LLMs as similar to predictive text or Google searches, in that they're a tool where the user needs to:

  1. have an idea of the output they want
  2. know what to input in order to reach that output (or something close to that output)
  3. know how to use or adapt the LLM's output

And just like how people having access to predictive text or Google doesn't make everyone's spelling/grammar/punctuation/sentence structure perfect or make everyone really knowledgeable, AIs/LLMs aren't going to magically make everyone good at everything either. But if people use them correctly, they can absolutely enhance that person's own output (be it their productivity, their creativity, their presentation or something else).

[–] Meltrax@lemmy.world 7 points 10 months ago (1 children)

It's not. It's massively expensive, though. There's money pouring into it because it's the next big thing. Eventually, the companies that can afford to consistently power a massive LLM training server farm will be the ones to keep going; the rest will flounder, get acquired, or disappear. Mozilla isn't a big enough fish and won't get acquired. AI is not a fad, but it's not a sustainable business model for a company like Mozilla, so I hope all their eggs aren't going in that basket.

[–] spaduf@slrpnk.net 5 points 10 months ago* (last edited 10 months ago)

Hell, I think there's a solid argument to be made that it's not even a sustainable model for the biggest players. As it stands, they're offering remarkably little functionality for how much it costs them. On the other hand, Mozilla's work in this space up until now has largely been about bringing previously unimaginable functionality to locally hosted open-source models and datasets. And that does look like a sustainable business model.

[–] EfreetSK@lemmy.world 5 points 10 months ago

I was very sceptical towards the recent hypes (space exploration, cryptocurrencies, self-driving cars, ...) which turned out to be fads, but this time... this time I'm going to guess it isn't going to be a fad. Well, it depends on what we imagine by "AI" - will you have a robot pal like in the movie I, Robot or A.I. Artificial Intelligence? Probably not. Will AI predictions and learning be put into the majority of programs, and will quite clever AI voice assistants appear like in the movie Her? Yeah, I guess this could happen. My main reasons are:

  1. It actually isn't that difficult. Machine learning isn't new, and very theoretically speaking, as long as you have enough computation power, nothing is stopping you. At the moment I can't think of any hard limit.
  2. Laws to stop it would be very difficult to enforce. You can't just say "No AI!" - people can run it at home, so how would you stop them? Which leads me to the next point.
  3. The open-source community has also made progress in the area.
  4. Major players are heavily investing in it.
[–] autotldr@lemmings.world 12 points 10 months ago

This is the best summary I could come up with:


Over the last few years, Mozilla also started making startup investments, including into Mastodon’s client Mammoth, for example, and acquired Fakespot, a website and browser extension that helps users identify fake reviews.

Indeed, when Mozilla launched its annual report a few weeks ago, it also used that moment to add a number of new members to its board — the majority of which focus on AI.

Surman told me that the leadership team had been planning these efforts for almost a year, but as public interest in AI grew, he “pushed it out of the door.” But then Draief pretty much moved it right back into stealth mode to focus on what to do next.

Surman believes that no matter the details of that, though, the overall principles of transparency and freedom to study the code, modify it and redistribute it will remain key.

The licenses aren’t perfect and we are going to do a bunch of work in the first half of next year with some of the other open source projects around clarifying some of those definitions and giving people some mental models.”

Then, he noted, when the smartphone arrived, there were a few smaller projects that aimed to create alternatives, including Mozilla (and at its core, Android is obviously also open source, even as Google and others have built walled gardens around the actual user experience).


The original article contains 1,252 words, the summary contains 229 words. Saved 82%. I'm a bot and I'm open source!

[–] LWD@lemm.ee 10 points 10 months ago* (last edited 10 months ago) (1 children)
[–] Ephera@lemmy.ml 8 points 10 months ago (1 children)

I didn't ask for it, but I'm lowkey happy to have them in this. I imagine that a few years from now, all the start-ups will have run out of money or been acquired, and, as per usual, only big tech companies will remain.

Traditional search engines will basically be dead, completely swamped with AI-generated spam. And even non-techies will generally depend on generative AIs for information and communication.
If those are exclusively controlled by big tech, we'll have tons of censorship (e.g. if you want to export an LLM to China, it has to pretend to not know about the Uyghurs) and just generally no control.

I don't expect Mozilla to save the world here; they're too small for that. But they're already providing useful tools, lowering the barrier to entry for independent devs.

[–] LWD@lemm.ee -2 points 10 months ago* (last edited 10 months ago) (1 children)
[–] TheGrandNagus@lemmy.world 1 points 10 months ago (1 children)

You should actually read the plans about their AI. It runs entirely locally, using your own data that never leaves your PC.

[–] LWD@lemm.ee 1 points 10 months ago* (last edited 10 months ago) (1 children)
[–] TheGrandNagus@lemmy.world 1 points 10 months ago* (last edited 10 months ago) (1 children)

I've not heard about what you're saying, so I'd like to learn more.

Their AI system will collect zero data, though, and run entirely locally. And that's what this is about.

Like it or not, this is a hyped feature that people want. The cat's out of the bag. It's not a feature that I want, but it is one the market wants.

It's good to have a privacy-respecting option when we all know in a few years the likes of Google, Microsoft, and Apple will dominate the market. And we know that they won't respect our privacy.

[–] LWD@lemm.ee 1 points 10 months ago* (last edited 10 months ago)
[–] randon31415@lemmy.world 3 points 10 months ago

That reminds me, I still need to play around with Llamafiles: https://justine.lol/oneliners/