bloodfart

joined 1 year ago
[–] bloodfart@lemmy.ml 1 points 1 month ago

Maybe industry-specific stuff like photoshop or something.

Web browsers and normal stuff will keep on trucking as long as the os has a valid root certificate.

[–] bloodfart@lemmy.ml 2 points 1 month ago (1 children)

Oh this is entirely different than soldering the ram to the motherboard (which is really common on pc laptops now too, it’s harder to find one with sockets now than it’s ever been!).

The ram is inside the cpu package. The processor isn’t “just” a cpu (although you can’t call even the old pentium “just” a cpu, they do so much nowadays!), it’s got the video card, bus controllers, ram and all kinds of other stuff built into that one package!

It’s a SoC, System on a Chip, just like the processors that run phones and tablets and stuff.

[–] bloodfart@lemmy.ml 1 points 1 month ago (3 children)

If you go the cheap m1 route, get the most ram you can find in it. The m series have ram built into the chip, so you can’t upgrade it later.

Also if the previous owner says it’s getting slow, then nuke the ssd with the dd command after you have confirmed ownership is transferred. You’ll have a longer process to reinstall the os from first principles, but it’ll fix slowness caused by the ssd’s old blocks never having been rewritten.
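Something like this from a terminal in the installer/recovery environment works (a sketch, not gospel; the disk numbers below are placeholders, so check diskutil list first and make sure you’re zeroing the right thing):

diskutil list                  # find the internal ssd, e.g. disk0
diskutil unmountDisk disk0     # placeholder, yours may differ
dd if=/dev/zero of=/dev/rdisk0 bs=1m

Then reinstall macos onto the freshly zeroed drive.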

[–] bloodfart@lemmy.ml 1 points 1 month ago (5 children)

Maybe not as expensive as you think. The classic getting-into-the-mac-game choice is the 2012 13” mbp, which can run a supported macos with opencore legacy patcher and costs <$200 with 16gb ram and an ssd.

The next best starter option is probably to make the big long leap to a first gen m1 air which can be had for ~$400 if you keep your eyes open.

Those are both expensive to me lol, but not the multiple thousands for a new computer.

[–] bloodfart@lemmy.ml 1 points 1 month ago (7 children)

The alternative route I took is maintaining a mac computer for when I need to “be normal”.

[–] bloodfart@lemmy.ml 2 points 1 month ago (11 children)

A good project between now and then is to investigate the iot sku. It has everything “unnecessary” cut out because it’s intended to be installed on refrigerators and the like, and for the same reason it has a much longer support window (through January 2032 for the iot enterprise ltsc release).

[–] bloodfart@lemmy.ml 12 points 1 month ago (13 children)

You should set up dual boot now so you don’t get surprised by differences when support ends and you feel the need to switch to an ltsc sku or use Linux.

Don’t wait, prepare!

Keep a hold of windows for a little while so that if something critical comes up that you can’t figure out you have a fallback.

[–] bloodfart@lemmy.ml 5 points 1 month ago* (last edited 1 month ago)

No need to worry, disk failures almost never result in fires or hazardous conditions.

A-yuk-yuk-yuk.

Seriously: based on just that little snippet of the logs, you have a disk that has failed internally (ICRC ABRT). You can either use a tool like spinrite to try and repair it (you may lose all the data in the process) or replace it.

A user suggested bad cabling and that’s a possibility, one you can check easily by swapping the cable if the error is reproducible. Often, before I swap cables, I’ll confirm the diagnosis with smartctl and look for whatever the drive manufacturer calls the errors that happen between the media and the disk controller chip on the drive. If it has those, then there’s no point in trying a cable swap; the problem is not happening there.
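For example, something like this (smartctl is part of smartmontools; the attribute names below are typical but vary by vendor, so treat them as illustrative):

sudo smartctl -x /dev/sdX      # full SMART report, sdX is a placeholder

Then compare UDMA_CRC_Error_Count (interface/cable side) against things like Reallocated_Sector_Ct or Raw_Read_Error_Rate (media side). Climbing CRC counts with clean media attributes point at the cable; the reverse points inside the drive.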

People will say that you can’t “fix” bad disks with tools like spinrite or smartctl. I’ve found that to be incorrect. There are certainly times when the disk is kaput but most of the time it’ll work fine and can go back into service.

Of course, that’s recovering from errors when I get an email or text alert the first time they show up, then putting the disk back into service in a multi-parity array, so lowered criticality and early detection could have a lot to do with that experience.

[–] bloodfart@lemmy.ml 2 points 1 month ago

I don’t know of any msi or asus boards with problems. Of course, I rejected coreboot as a requirement so that plays into it.

My personal experience is: don’t overclock and everything will run fine for at least ten years.

Blender works faster with nvidia and it’s been the optimal hardware for maybe two decades now. There’s just so much support and knowledge out there for getting every feature in the tool working with it that I couldn’t in good faith recommend a person use amd cards to have a slightly nicer Wayland experience or a little better deal.

If you’re only doing llm text work then a case could be made for a non-cuda (non-nvidia) accelerator. Of course at that point you’d be better served by one of those coral doodads.

Were you only doing text based ml work or was there image recognition/diffusion/whatever in there too?

[–] bloodfart@lemmy.ml 4 points 1 month ago

Shit that’s a good point! I’ll edit my post.

[–] bloodfart@lemmy.ml 10 points 1 month ago

Yeah when you build from source you gotta dl some blobs from busybox and some other projects. It works fine with the ones the developer claims their build is based on: the ones whose checksums are listed in the docs and match what you get when you pull them yourself from the repos for the aforementioned busybox or whatever.
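The check is just hashing what you fetched and comparing it against the docs, something like this (the filename is illustrative):

sha256sum busybox-blob
# then eyeball the output against the checksum listed in ventoy’s docs / the upstream release page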

I haven’t pulled apart a binary release of ventoy to check and see if it actually has those documented blobs or something else.

I’ll look at glim. Might be cool.

[–] bloodfart@lemmy.ml 8 points 1 month ago* (last edited 1 month ago) (6 children)

Push ctrl-alt-F3 or F2 or something till you get a terminal. Run the command ls /dev/sd*

Post what it says back to you.

E: if you don’t see two drives, do ls /dev/nvm*
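If the drives are detected you’ll get something like this back (illustrative, your letters and numbers will differ):

/dev/sda  /dev/sda1  /dev/sda2  /dev/sdb

or for nvme drives:

/dev/nvme0  /dev/nvme0n1  /dev/nvme0n1p1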
