this post was submitted on 04 Aug 2025
343 points (96.7% liked)

Technology

(page 2) 50 comments
[–] McLarny@lemmy.world 17 points 2 days ago (3 children)

So is it smart to short the AI bubble? 👉👈

[–] whyrat@lemmy.world 11 points 2 days ago

The question is when, not if. But guessing the "when" wrong is an expensive mistake. I believe the famous idiom is: the market can stay irrational longer than you can stay solvent.

Best of luck!

[–] 9point6@lemmy.world 23 points 3 days ago (1 children)

I didn't have the US becoming a banana republic on my bingo card tbf

[–] acosmichippo@lemmy.world 21 points 3 days ago (1 children)
[–] aesthelete@lemmy.world 6 points 2 days ago

Yeah, ten years seems like plenty of notice.

[–] brucethemoose@lemmy.world 18 points 3 days ago* (last edited 3 days ago) (1 children)

Open models are going to kick the stool out. Hopefully.

GLM 4.5 is already #2 on LM Arena, above Grok and ChatGPT, and it's runnable on homelab rigs, yet it has just 32B active parameters (which is mad). Extrapolate that a bit and it's just a race to the zero-cost bottom. None of this is sustainable.

[–] dubyakay@lemmy.ca 7 points 2 days ago (7 children)

I did not understand half of what you've written. But what do I need to get this running on my home PC?

[–] brucethemoose@lemmy.world 5 points 2 days ago* (last edited 2 days ago)

I am referencing this: https://z.ai/blog/glm-4.5

The full GLM? Basically a 3090 or 4090 and a budget EPYC CPU. Or maybe 2 GPUs on a Threadripper system.

GLM Air? Now this would work on a 16GB+ VRAM desktop, just slap in 96GB+ (maybe 64GB?) of fast RAM. Or the recent Framework desktop, or any mini PC/laptop with the 128GB Ryzen 395 config, or a 128GB+ Mac.
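
To sanity-check those configs, here's a back-of-envelope sizing sketch in Python. The 355B/106B parameter totals come from the z.ai post above; the quantization bit-widths and the ~1.1x runtime overhead factor (KV cache, buffers) are my own assumptions, so treat the numbers as rough:

```python
# Rough sizing for quantized GLM-4.5 weights. Parameter totals
# (355B full / 106B Air) are from the z.ai announcement; the quant
# bit-widths and ~1.1x runtime overhead (KV cache, buffers) are
# assumptions for illustration only.
GiB = 1024 ** 3

def footprint_gib(params: float, bits_per_weight: float, overhead: float = 1.1) -> float:
    """Approximate resident size of the quantized weights plus runtime overhead."""
    return params * (bits_per_weight / 8) * overhead / GiB

for name, params in [("GLM-4.5 (355B)", 355e9), ("GLM-4.5-Air (106B)", 106e9)]:
    for bits in (4, 5, 8):
        print(f"{name} @ ~{bits}-bit: ~{footprint_gib(params, bits):.0f} GiB")
```

At ~4-bit, Air comes out around 55 GiB, which is why 16GB of VRAM plus 64-96GB of system RAM is plausible; the full model at roughly 180 GiB is server-RAM territory, hence the EPYC/Threadripper suggestion.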

You’d download the weights, quantize yourself if needed, and run them in ik_llama.cpp (which should get support imminently).

https://github.com/ikawrakow/ik_llama.cpp/
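
For the download step, a sketch with huggingface_hub (the repo id is my guess, so verify it against the official model page; conversion and quantization then happen with ik_llama.cpp's own tools):

```python
# Fetch the GLM-4.5-Air weights from Hugging Face.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="zai-org/GLM-4.5-Air",   # assumed repo id; check the model page
    local_dir="models/glm-4.5-air",  # safetensors shards land here
)
```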

But these are…not lightweight models. If you don’t want a homelab, there are better ones that will fit on more typical hardware configs.

[–] Vinstaal0@feddit.nl 8 points 2 days ago

It's not only the tech bubble doing that.

The pyramid scheme that is the US housing sector will cause more financial problems as well, and so will the whole credit card system.

[–] Doomsider@lemmy.world 13 points 3 days ago

Ooowee, they are setting up the US for a major bust, aren't they? I guess all the wealthy people will just have to buy everything up when it becomes dirt cheap. Sucks to have to own everything, I guess.

[–] sbv@sh.itjust.works 11 points 3 days ago (1 children)

Recognizing from history where this all might lead, the prospect of any serious economic downturn being met with a widespread push for mass automation, paired with a regime overwhelmingly friendly to the tech and business class and executing a campaign of oppression and prosecution against precarious manual and skilled laborers, should make us all sit up and pay attention.
