waitmarks

joined 1 year ago
[–] waitmarks@lemmy.world 5 points 2 weeks ago

You shouldn't be comparing with DIMMs; those are a dead end at this point. CAMMs are replacing DIMMs and are what future systems will use.

Intel likely designed Lunar Lake before the LPCAMM2 standard was finalized, which is why it went with on-package memory. Now that LPCAMMs are a thing, it makes more sense to use those, as they provide the same speed benefits while still allowing user-replaceable RAM.

[–] waitmarks@lemmy.world 4 points 4 months ago

AWS has multiple tiers of storage options in S3; some replicate and some don't. By default, those that do replicate do so across multiple availability zones, but not across regions, unless you turn on cross-region replication (CRR), which is an additional charge.

So, for example, without CRR, if your bucket is in us-east-1 and one availability zone goes down you can still access the data, but if all of us-east-1 is down, you cannot.
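
If you want to turn CRR on programmatically, a boto3 call along these lines is roughly what it looks like; the bucket names, destination ARN, and IAM role are placeholders, and both buckets need versioning enabled first:

# Minimal sketch of enabling cross-region replication (CRR) with boto3.
# The bucket names and role ARN are placeholders; the role also needs the
# usual s3 replication permissions attached.
import boto3

s3 = boto3.client("s3")

# Versioning is a prerequisite for replication on both source and destination buckets.
s3.put_bucket_versioning(
    Bucket="my-source-bucket",
    VersioningConfiguration={"Status": "Enabled"},
)

# Replicate the whole bucket to a destination bucket in another region.
s3.put_bucket_replication(
    Bucket="my-source-bucket",
    ReplicationConfiguration={
        "Role": "arn:aws:iam::123456789012:role/my-crr-role",
        "Rules": [
            {
                "ID": "replicate-everything",
                "Status": "Enabled",
                "Prefix": "",  # empty prefix = everything in the bucket
                "Destination": {"Bucket": "arn:aws:s3:::my-backup-bucket"},
            }
        ],
    },
)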

[–] waitmarks@lemmy.world 4 points 4 months ago

All that stuff you talked about from the tabletop lore is literally in the game. It's not hitting you in the face in the main quest line, but if you play the side quests you find tons of fucked up shit that the corps are doing.

[–] waitmarks@lemmy.world 27 points 5 months ago (2 children)

As if managers even know what RISC-V is

[–] waitmarks@lemmy.world 3 points 5 months ago

It's the server world that is demanding it. For most consumers, 4.0 is more than enough, but servers are already maxing out 5.0 and will probably immediately max out 6.0 when devices actually become available.
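
To put rough numbers on the jumps, here's a back-of-the-envelope calculation of x16 per-direction bandwidth; I'm ignoring 6.0's FLIT/protocol overhead, so treat its number as a ceiling:

# Approximate per-direction bandwidth of an x16 link per PCIe generation.
# 4.0 and 5.0 use 128b/130b encoding; 6.0 doubles the rate again with PAM4
# (its FLIT overhead is ignored here).
GENS_GT_PER_S = {"4.0": 16, "5.0": 32, "6.0": 64}

for gen, gt in GENS_GT_PER_S.items():
    encoding = 1.0 if gen == "6.0" else 128 / 130
    gb_per_s = gt * encoding / 8 * 16
    print(f"PCIe {gen} x16: ~{gb_per_s:.0f} GB/s per direction")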

[–] waitmarks@lemmy.world 2 points 6 months ago

The thing about reCAPTCHA is that it didn't always gatekeep a Google-provided service, so that logic doesn't really work. I agree, though, that we all benefit from fewer bots.

[–] waitmarks@lemmy.world 2 points 7 months ago* (last edited 7 months ago)

There is one extra step. I have a 6700 XT, and with the docker containers you just have to pass the environment variable HSA_OVERRIDE_GFX_VERSION=10.3.0 to allow that card to work. For cards other than the 6000 series, you would need to look up the version to pass for your generation.

Here's an example compose file that I use for Ollama to run AI models on my 6700 XT.

version: '3'
services:
  ollama:
    image: ollama/ollama:rocm
    container_name: ollama
    devices:
      # pass the GPU through: /dev/kfd is the ROCm compute interface, /dev/dri the render nodes
      - /dev/kfd:/dev/kfd
      - /dev/dri:/dev/dri
    group_add:
      # needed for access to the GPU device nodes
      - video
    ports:
      - "11434:11434"
    environment:
      # tells ROCm to treat the 6700 XT as gfx1030 so the card is recognized
      - HSA_OVERRIDE_GFX_VERSION=10.3.0
    volumes:
      - ollama_data:/root/.ollama

volumes:
  ollama_data:

[–] waitmarks@lemmy.world 2 points 7 months ago (2 children)

Have you tried the ROCm docker containers that AMD makes for your needs? They pretty much make installing ROCm on the base OS unnecessary for me. https://hub.docker.com/u/rocm https://github.com/ROCm/ROCm-docker

[–] waitmarks@lemmy.world 5 points 8 months ago (1 children)

It doesn't. What this is suggesting is that the VPN was routing traffic through it so they could analyze Snapchat traffic: not the contents of it, but essentially metadata analysis of the traffic, i.e. how often it was sending data, how much data, where it was going, etc.
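
To make "metadata, not contents" concrete, here's a toy sketch over made-up flow records; the hostnames are placeholders and this has nothing to do with whatever tooling was actually used. Even without touching a payload, you can see which servers an app talks to, how often, and how much:

# Toy illustration of traffic metadata: made-up (timestamp, destination, bytes) records.
# No payload is ever inspected; the hostnames are placeholders.
from collections import defaultdict

flows = [
    (0.0, "api.snapchat.example", 1200),
    (0.4, "api.snapchat.example", 8400),
    (1.1, "cdn.other.example", 300),
    (2.0, "api.snapchat.example", 45000),
]

bytes_per_host = defaultdict(int)
flows_per_host = defaultdict(int)
for ts, host, size in flows:
    bytes_per_host[host] += size
    flows_per_host[host] += 1

for host, total in bytes_per_host.items():
    print(f"{host}: {flows_per_host[host]} connections, {total} bytes sent")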

[–] waitmarks@lemmy.world 2 points 8 months ago

Just a small correction: /etc does get snapshotted when upgrades happen and will roll back along with everything else. You are correct, though, that /home does not get snapshotted and is fully mutable.

[–] waitmarks@lemmy.world 6 points 8 months ago

I don't have an answer to your Nvidia question, but before you go and spend $2000 on an Nvidia card, you should give the ROCm docker containers a shot with your existing card. https://hub.docker.com/u/rocm https://github.com/ROCm/ROCm-docker

They've made my use of ROCm 1000x easier than actually installing it on my system and were sufficient for my use case of running inference on my 6700 XT.
