[–] TommySoda@lemmy.world 38 points 7 months ago (3 children)

Besides novelty, the majority of AI tools I have used have just added extra steps to my workflow instead of making things easier. Can we just stop already? It's not a tool for literally everything and I'm tired of companies thinking it is.

[–] Madrigal@lemmy.world 34 points 7 months ago (1 children)

Just give it a couple of years for the hype/boom/bust cycle to complete, then it’ll settle down and people will start using the tech appropriately.

[–] guy_threepwood@lemmy.world -4 points 7 months ago (2 children)

Yep, in the exact same way as blockchain: nowhere.

[–] hoshikarakitaridia@lemmy.world 29 points 7 months ago (2 children)

Unlike blockchain, there is a solid chunk of new use cases to be conquered with AI. These might be very technical in nature, but, for example, text suggestions on smartphones might already be done with AI, depending on your OS.

[–] micka190@lemmy.world 6 points 7 months ago (2 children)

We already have text prediction that works more efficiently (from a power and computing point of view) by using data structures like tries (prefix trees).

There are very few use cases I've seen where AI is more efficient than an algorithm, and they're mostly in areas where it generates a bunch of test/research/simulation inputs really fast, throwing random shit at the wall that users wouldn't normally try.

AI is basically useless when you're doing something that's easily repeatable, because it's easier to actually implement tools that use algorithms to do that kind of thing.
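
For illustration, here's a minimal sketch of the kind of trie-based prediction I mean (a toy Python example, not any real keyboard's implementation):

```python
# Toy trie-based word suggestion: prediction without a neural net.
class TrieNode:
    def __init__(self):
        self.children = {}  # char -> TrieNode
        self.freq = 0       # >0 means a word ends here

class Trie:
    def __init__(self):
        self.root = TrieNode()

    def insert(self, word, freq=1):
        node = self.root
        for ch in word:
            node = node.children.setdefault(ch, TrieNode())
        node.freq += freq

    def suggest(self, prefix, limit=3):
        # Walk down to the node matching the typed prefix.
        node = self.root
        for ch in prefix:
            if ch not in node.children:
                return []
            node = node.children[ch]
        # Collect all completions under the prefix, highest frequency first.
        results = []
        stack = [(node, prefix)]
        while stack:
            cur, word = stack.pop()
            if cur.freq > 0:
                results.append((cur.freq, word))
            for ch, child in cur.children.items():
                stack.append((child, word + ch))
        return [w for _, w in sorted(results, reverse=True)[:limit]]

trie = Trie()
for word, freq in [("the", 50), ("there", 20), ("their", 15), ("then", 10)]:
    trie.insert(word, freq)
print(trie.suggest("th"))  # ['the', 'there', 'their']
```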

[–] 4am@lemm.ee 5 points 7 months ago

My brother in Christ, an LLM is a tree

[–] jlh@lemmy.jlh.name 2 points 7 months ago

Neural network tools seem really powerful for image filtering and video compression.

[–] zaph@sh.itjust.works -4 points 7 months ago

That could explain why SwiftKey sucks now

[–] Shnog@lemmy.world 2 points 7 months ago

Google and partners have been showing off some pretty cool use cases for Gemini, mostly related to GCP, at Next '24.

[–] QuadratureSurfer@lemmy.world 3 points 7 months ago (1 children)

Depends on your work, what you're trying to do, and how you use it.

As a developer I run my own local copy of Dolphin Mixtral 8x7B (an LLM) and it's great for boosting my productivity. I'm not asking it to do everything all at once, usually just small snippets here and there to see if there's a better or more efficient way.

I, for one, am looking forward to hardware improvements that can help us run larger models, so news like this is very welcome.

But you are correct, a large number of companies misunderstand how to use this technology; they should really be treating it like an intern.

It's great to give small and simple (especially repetitive) tasks, but you'll still need to verify everything.

[–] jamyang@lemmy.world 1 points 7 months ago (1 children)

Hey, I might give Dolphin Mixtral a try. Do you know where I might install it?
Also, are you a web dev?

[–] QuadratureSurfer@lemmy.world 1 points 7 months ago

Well, that's a loaded question.

There are probably some websites that let you try out the model while they run it on their own equipment (or have it rented out through Amazon, etc.). But the biggest advantage of these models is being able to run them locally if you have the hardware to handle it (a beefy GPU for quicker responses and a lot of RAM).

To quickly answer your question, you can download the model from here:
https://huggingface.co/TheBloke/dolphin-2.5-mixtral-8x7b-GGUF
I would recommend Q5_K_M.

But you'll also need some software to run it. A large number of users are using:

- Text-Generation-WebUI: https://github.com/oobabooga/text-generation-webui
- LM Studio: https://lmstudio.ai/
- Ollama: https://github.com/ollama/ollama
- And more.

I know that LM Studio supports both NVIDIA and AMD GPUs.
Text-Generation-WebUI can support AMD GPUs as well; it just requires some additional setup to get it working.
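
To give you an idea of how simple the Ollama route is once it's set up, here's a rough sketch using its Python client (the model tag and the exact response format may vary by version):

```python
# Sketch: chatting with a locally served model through the Ollama Python client.
# Assumes `ollama serve` is running and a Dolphin Mixtral variant has been
# pulled, e.g. `ollama pull dolphin-mixtral` (tag name may vary).
import ollama

response = ollama.chat(
    model="dolphin-mixtral",
    messages=[{"role": "user", "content": "Write a one-line docstring for a binary search."}],
)
print(response["message"]["content"])
```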

Some things to keep in mind...
Hardware requirements:
- RAM is the biggest limiting factor for which model you can run, while your GPU/CPU decides how quickly the LLM can respond.
- If you can fit the entire model inside your GPU's VRAM you'll get the most speed. In that case I would suggest using a GPTQ model instead of GGUF: https://huggingface.co/TheBloke/dolphin-2.5-mixtral-8x7b-GPTQ
- Even the newest consumer-grade GPUs only have 24GB of VRAM right now (RTX 4090, RTX 3090, and RX 7900 XTX), and the next generation of consumer GPUs looks like it will be capped at 24GB as well, unless AMD decides this is their way of competing with NVIDIA.
- GGUF models let you compensate for VRAM limitations: the model is loaded into VRAM first, and anything left over goes into system RAM.

Context length: Think of an LLM as something that only has a fixed amount of short-term memory. The bigger you set the context length, the more short-term memory you give it (the maximum you can set depends on the model you're using, and setting it to the max also requires more RAM). Mixtral 8x7B models have a max context length of 32k.
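
To make the VRAM-offload and context-length knobs concrete, here's a minimal sketch using llama-cpp-python, one of several libraries that can load GGUF files (the file path and layer count are placeholders you'd tune for your own hardware):

```python
# Sketch: loading a GGUF model with partial GPU offload and a large context window.
from llama_cpp import Llama

llm = Llama(
    model_path="./dolphin-2.5-mixtral-8x7b.Q5_K_M.gguf",  # hypothetical local filename
    n_gpu_layers=20,   # layers offloaded to VRAM; the rest stay in system RAM
    n_ctx=32768,       # Mixtral 8x7B's 32k maximum context length
)

out = llm("Q: What is a trie? A:", max_tokens=128, stop=["Q:"])
print(out["choices"][0]["text"])
```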

[–] 4am@lemm.ee 3 points 7 months ago

This always happens when something new and novel has “potential”. VC money has been funding loss-leaders for two decades and they wanna cash in on the next gold rush. Just like blockchain, expect to see this beaten to death and shoehorned into places where it has no real use. There'll be a few really solid things it turns out to be good for, though, and it will excel in those places. Then we'll all laugh about “remember when they thought LLMs were the next big thing? What a bubble that turned out to be, like pets.com all over again”

[–] noxy@yiffit.net 10 points 7 months ago

about time Apple cornered the artificial insemination market

[–] dink@lemmy.world 10 points 7 months ago* (last edited 7 months ago) (4 children)

“Apple is reportedly planning a big loss of revenue due to the M4 Mac upgrade”

EDIT: Do the majority of users really want AI in their computers?

[–] 4am@lemm.ee 12 points 7 months ago* (last edited 7 months ago)

I mean, that depends? Does it actually work? Does it record all my data constantly and send it to Apple in a continuous stream, or is it fully local? Will it actually provide some useful service like better search? Will it be able to bootstrap projects for me? Learn my workflow patterns and assist by preparing stuff preemptively, or make intelligent suggestions on how to improve efficiency? Can it attempt to organize my awful, trash filing system of pictures, movies, and half-started projects into something less painful to look at? Can it help my computer get out of my way when I don't feel like being all computery (which I love, but sometimes it shouldn't be the focus)? Can it do these things well and without leaking them to some cloud?

Because that’s what we’ve been trying to make computers do since the PC first came along and I’d fuckin pay for that.

EDIT: To more directly answer your question, no one wants SaaS AIs from the FANG companies, because they're labor-replacement machines aimed at the ultimate grift: extracting capital out of the capitalists at the expense of everyday workers. But that doesn't mean every AI has to be a speculative language model trained on the stolen data of all humanity to be used as a tool for class warfare. If they can create a neural net that focuses on other things besides language (or hell, even language) that runs locally, if they can eat Microsoft's lunch and create an actual co-pilot, they'd charge a premium as they always do, but if it works well enough, it might be worth it.

[–] terminhell@lemmy.world 5 points 7 months ago

Doesn't matter what customers want, it's the shareholders.

[–] pulaskiwasright@lemmy.ml 4 points 7 months ago* (last edited 7 months ago)

They want tools that do things and toys that are fun. So maybe. It depends on what Apple will use it for. I enjoy being able to search my photos even though I never tagged them; that's a useful kind of AI. I also like how I can automatically select the subject of a photo and place it on other backgrounds.

[–] QuadratureSurfer@lemmy.world 1 points 7 months ago

Do the majority of users really want AI in their computers?

What this could mean is the ability to replace (or upgrade) something like Siri with a model that runs locally on your machine. That means it wouldn't need to route your questions/requests through someone else's computer (the cloud). You wouldn't even need to connect the computer to the internet, and you would still be able to work with that model.

Besides, there are many companies that don't want their internal documents passed on to companies like OpenAI (ChatGPT). With locally run models there's no such problem, since that data never gets uploaded anywhere.

[–] RIP_Cheems@lemmy.world 4 points 7 months ago

So people don't like that Samsung used AI to make white circles look like the moon... so what will people think of this?

[–] Ginger666@lemmy.world -4 points 7 months ago

The beginning of the end for computers