I recently bought a new computer, and after trying Windows 11 again for a bit, I decided I wanted to keep using OpenSUSE.
Probably unpopular here: I enjoy screwing around with local LLMs. I used LM Studio on my old computer (which also ran OpenSUSE Leap), and I also tested it on my new computer under Windows 11, where it worked very nicely. Now I'm trying it on my new computer with OpenSUSE Leap 16, and it doesn't work at all.
Specifically: no runtimes or engines are present, and my hardware isn't recognised at all, neither my GPU nor my CPU; nothing.


I’m thinking it’s a driver issue. I’ve looked around quite a bit, and also looked up what seem to me the most important error messages I got when running the AppImage from the console:
[BackendManager] Surveying hardware with backends with options: {"type":"newAndSelected"}
[BackendManager] Surveying new engine 'llama.cpp-linux-x86_64-avx2@2.12.0'
[ProcessForkingProvider][NodeProcessForker] Spawned process 13407
[ProcessForkingProvider][NodeProcessForker] Exited process 13407
21:17:29.644 › Failed to survey hardware with engine 'llama.cpp-linux-x86_64-avx2@2.12.0': LMSCore load lib failed - child process with PID 13407 exited with code 127
[BackendManager] Survey for engine 'llama.cpp-linux-x86_64-avx2@2.12.0' took 9.47ms
[BackendManager] Surveying new engine 'llama.cpp-linux-x86_64-nvidia-cuda-avx2@2.12.0'
[ProcessForkingProvider][NodeProcessForker] Spawned process 13408
[ProcessForkingProvider][NodeProcessForker] Exited process 13408
21:17:29.648 › Failed to survey hardware with engine 'llama.cpp-linux-x86_64-nvidia-cuda-avx2@2.12.0': LMSCore load lib failed - child process with PID 13408 exited with code 127
[BackendManager] Survey for engine 'llama.cpp-linux-x86_64-nvidia-cuda-avx2@2.12.0' took 3.70ms
[BackendManager] Surveying new engine 'llama.cpp-linux-x86_64-vulkan-avx2@2.12.0'
[ProcessForkingProvider][NodeProcessForker] Spawned process 13409
[ProcessForkingProvider][NodeProcessForker] Exited process 13409
21:17:29.651 › Failed to survey hardware with engine 'llama.cpp-linux-x86_64-vulkan-avx2@2.12.0': LMSCore load lib failed - child process with PID 13409 exited with code 127
[BackendManager] Survey for engine 'llama.cpp-linux-x86_64-vulkan-avx2@2.12.0' took 3.57ms
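For what it's worth, exit code 127 is the shell's conventional "command not found" status, and the same status shows up when a binary's interpreter or a required shared library can't be loaded, which would fit the "LMSCore load lib failed" message. A minimal, runnable illustration of that convention:

```shell
# Exit status 127 is reserved for "command not found"; the same status
# appears when the dynamic loader can't resolve a binary's dependencies.
sh -c 'this-command-does-not-exist' 2>/dev/null
echo "exit status: $?"   # prints "exit status: 127"
```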
This is my system with installed drivers:

I did get Ollama to work... Any thoughts?
This looks like a sandboxing issue. Using the --no-sandbox flag has never worked with AppImages from what I remember, except for very light runtimes. Running with sudo will throw that error because the root user has no display manager running.
Just try running the installer if you don't want to mess around with debugging the AppImage. Search the GitHub issues for related keywords and see if others are running into the same problem; maybe it's specific to one release, or SELinux is causing it.
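If you do want to dig into the AppImage, one approach (a sketch; the engine library path below is an assumption, not LM Studio's actual layout) is to extract it with `--appimage-extract` and run `ldd` on the bundled engine libraries to see which shared objects fail to resolve:

```shell
# ./LM-Studio.AppImage --appimage-extract   # unpacks into ./squashfs-root
# Then point ldd at a bundled engine library; /bin/sh stands in here so
# the snippet runs anywhere. Any "not found" line is a missing dependency.
target=/bin/sh   # e.g. squashfs-root/.../libllama.so (assumed path)
if ldd "$target" | grep -q 'not found'; then
    echo "unresolved libraries in $target"
else
    echo "all libraries resolved for $target"
fi
```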
You mean the Debian installer? Seems like a bad idea on OpenSUSE.
EDIT: looking through the bug reports on GitHub, I’ve found a very recent one that matches exactly what I’ve described, so at least the problem doesn’t seem to be isolated.
They have a simple bash installer from what I see. You can also install everything via pip; a couple of quick commands.
That bug report mentions a few versions, so maybe just go back to whatever version was working on your other machine.