They don't intend for you to; it's just easier to emit a giant button combo that their generic HID driver handles as a special case than to define a custom keyboard protocol with their special key enums and a custom driver that only Windows supports.
teawrecks
No one is advocating X11. It's hard to have a constructive conversation about the shortcomings of Wayland when every apologist seems to immediately go off topic.
"I don't want to listen because you don't know the technical challenges. Oh, you have a long list of credentials? I don't want to listen to an argument from authority. X11 bad, therefore Wayland good."
OP even brings up Mir, but you never see Wayland proponents talk about why they think Wayland is better.
I'm learning a lot, so I'm not a fan of the people flaming and downvoting OP for having genuine confusion. I want us to incentivize more posts like this.
Yes, the jitting is specific to the graphics APIs. DXVK does runtime translation from DX to VK. When possible, it certainly just makes a 1:1 call, but since the two APIs don't map cleanly onto each other, in some cases it has to store state and "emulate" certain API behavior using multiple VK calls. This is much more the case when translating DX9/11.
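To make that concrete, here's a toy sketch of the state tracking such a shim needs. All names are invented for illustration; this is not DXVK code. The point is that a DX9-style API sets individual pieces of state one call at a time, while a VK-style backend wants the accumulated state at draw time, so one front-end draw can fan out into several backend calls:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical front-end state buffered by the translation shim. */
typedef struct {
    bool blend_enabled;
    bool depth_test;
    int  backend_calls;   /* how many "VK calls" the last draw expanded into */
} ShimState;

/* 1:1-looking front-end calls: they just record state, no backend call yet. */
static void shim_set_blend(ShimState *s, bool on) { s->blend_enabled = on; }
static void shim_set_depth(ShimState *s, bool on) { s->depth_test = on; }

/* One front-end draw flushes the accumulated state as multiple backend calls. */
static void shim_draw(ShimState *s)
{
    s->backend_calls = 0;
    s->backend_calls++;              /* e.g. vkCmdBindPipeline(...)        */
    if (s->blend_enabled)
        s->backend_calls++;          /* e.g. vkCmdSetBlendConstants(...)   */
    s->backend_calls++;              /* e.g. vkCmdDraw(...)                */
}
```

This bookkeeping (plus the cost of looking the state up on every draw) is the runtime overhead that a straight 1:1 call doesn't have.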
> Ultimately the native vs. non-native distinction doesn't really matter, and arguably this distinction doesn't even really exist
Alright. Just letting you know you're going to have a hard time communicating with people in this industry if you continue rejecting widely accepted terminology. Cheers.
So, here's the thing, I don't consider myself an expert in many things, but this subject is literally my day job, and it's possibly the only thing I do consider myself an expert in. And I'm telling you, you are confused and I would gladly help clear it up if you'll allow me.
> They could do what AMD does on Linux and rely on the upstream OpenGL implementation from Mesa
Nvidia's OGL driver is a driver. Mesa's radv backend is a driver. Nouveau, the open source Nvidia Mesa backend, is a driver. An OpenGL implementation does a driver make.
> There was a time they did, yes
What GPU did Microsoft's driver target? Or are you referring to a software implementation?
> Yes and No... Direct3D was always low-level
You literally said that Mantle was inspired by DX12, which is false. You can try to pivot to regurgitating more Mantle history, but I'm just saying...
> No it's not, see above...
Yes, it is; see above for my disambiguation of the term "low-level". The programming community has always used the term to mean how far "above the metal" you are, not how granular an API is. The first-party DX9 and DX12 drivers are equally "low-level"; take it from someone who literally wrote them for a living. The APIs themselves function very differently to give the app finer control over the hardware, and many news outlets and forums full of confused information (like this one) like to infer that this means DX12 is "lower level".
Your last statement doesn't make sense, so I don't know how to correct it.
> you still have additional function calls and overhead wrapping lower-level libraries
But it all happens at compile time. That's the difference.
> You probably wouldn't consider C code non-native
This goes back to your point above:
> It's like when people say a language is "a compiled language" when that doesn't really have much to do with the language
C is just a language; it isn't native or non-native by itself. "Native" means the binary that will execute on the hardware is decided at compile time; in other words, it's not jitted for the platform it's running on.
> usually you consider compilers that use C as a backend to be native-code compilers too
I assume you're not talking about a compiler that generates C code here, right? If it's outputting C, then no, it's not native code yet.
> so why would you consider HLSL -> SPIR-V to be any different?
Well, first off, games don't ship their HLSL (unlike OGL, where older games DID have to ship GLSL); they ship DXBC/DXIL, which is the DX analog of SPIR-V (or, more accurately, vice versa).
Shader code is jitted on all PC platforms, yes. This is why I said above that shader code has its own quirks, but on platforms where the graphics API effectively needs to be interpreted at runtime, the shaders have to be jitted twice.
SDL doesn't add any runtime translation overhead; that's the difference. SDL is an abstraction layer just like UE's RHI or Unity's render backends. All of the translation is figured out at compile time; there's no runtime jitting of instructions for the given platform.
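Here's a minimal sketch of what "figured out at compile time" means (invented names, not SDL's actual internals). The backend is chosen when the binary is built, so at runtime there's nothing left to translate:

```c
#include <assert.h>
#include <string.h>

/* Hypothetical SDL/RHI-style abstraction: the preprocessor picks the
 * backend at build time. Once compiled, present_frame() IS a direct
 * call into one backend; no per-call decision remains at runtime,
 * unlike a DX->VK translation layer. */
#if defined(_WIN32)
static const char *backend_name(void) { return "d3d"; }
#else
static const char *backend_name(void) { return "vulkan"; }
#endif

static const char *present_frame(void)
{
    /* In a real abstraction this would call the backend's present path
     * directly; here it just reports which backend was baked in. */
    return backend_name();
}
```

The same trick works with link-time selection (compile one backend .c file per platform); either way, the cost is paid before the game ever runs.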
It's a similar situation with dynamic libraries: using a DLL or .so doesn't mean you're not running code natively on the CPU. But the Java or .NET runtimes are jitting bytecode to the CPU ISA at runtime; they are not native.
I'm sorry if I'm not explaining myself well enough, and I'm not sure where the confusion still lies, but using SDL alone does not make an app non-native. As a Linux gamer, I would love it if more indie games used SDL, since it's more than capable for most titles and would support both Windows and Linux natively.
An app built on SDL that targets OGL/Vulkan goes through all the same levels of abstraction on Windows as on Linux. The work needed at runtime is the same regardless of platform, so we say it natively supports both platforms.
But for an app using DX: on Windows, the DX calls talk directly to the DX driver for the GPU, which we call native; on Linux, the DX calls are translated at runtime to Vulkan calls, then the Vulkan calls go to the driver, which goes to the hardware. There's an extra level of translation required on one platform that isn't required on the other, so we call that non-native.
Shader compilation has its own quirks. DX apps don't ship with HLSL; they precompile their shaders to DXIL, which is passed to the next layer. On Windows, the DXIL then gets translated directly to native ISA to be executed on the GPU's EUs/CUs/whatever you want to call them. On Linux, the DXIL gets translated to SPIR-V, which is then passed to the Vulkan driver, where it's translated again to the native ISA.
But also, the native ISA can be serialized out to a file and saved so it doesn't have to be done every time the game runs. So this is only really a problem the first time a given shader is encountered (or until you update the app or your drivers).
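A toy version of that caching idea (file name and structure invented; real drivers key the cache on a shader hash plus driver and app versions, which is why updating either invalidates it):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

#define CACHE_FILE "shader.cache"   /* invented name for this sketch */

static int compiles_done = 0;       /* counts the expensive jit step */

/* Stand-in for the expensive DXIL/SPIR-V -> native ISA translation. */
static void compile_to_isa(char *out, size_t n)
{
    compiles_done++;
    snprintf(out, n, "fake-native-isa-blob");
}

/* Return the native shader: reuse the serialized blob if we have one,
 * otherwise compile once and save the result for the next run. */
static void get_shader(char *out, size_t n)
{
    FILE *f = fopen(CACHE_FILE, "rb");
    if (f) {                                 /* cache hit: no jit needed */
        size_t got = fread(out, 1, n - 1, f);
        out[got] = '\0';
        fclose(f);
        return;
    }
    compile_to_isa(out, n);                  /* cache miss: jit once...  */
    f = fopen(CACHE_FILE, "wb");
    if (f) {                                 /* ...then serialize it     */
        fwrite(out, 1, strlen(out), f);
        fclose(f);
    }
}
```

Same shape as a real pipeline cache: the first run eats the compile (hence first-run stutter), every later run just loads the blob.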
Finally, this extra translation of DXIL through SPIR-V often has to be more conservative to ensure correct behavior, which can add overhead. That is to say, even though you might be running on the same GPU, the native ISA generated through the two paths is unlikely to be identical; one will likely perform better, and it's more likely to be the DXIL->ISA path, because that's the one that gets more attention from driver devs (e.g. Nvidia/AMD engineers optimizing their compilers).
I think you're confused about the difference between the OpenGL spec and an actual implementation of the spec, and about who is responsible for shipping what.
- Nvidia ships their own OpenGL implementation with their drivers, because that's what a driver is.
- Microsoft doesn't ship "OpenGL binaries"; they don't have any hardware. Maybe you mean they published their own fork of the OpenGL spec before giving up and making DX? That may be true.
- Mantle predates DX12, both vulkan and dx12 took inspiration from it, not the other way around.
- There are two interpretations being thrown around for "low level":
- The more traditional meaning is "how far are you from natively talking to hardware?", which is determined not by the rendering API but by the specific implementation. E.g. Nvidia's DX9 driver is just as "low level" as their DX12 driver, in that the API calls you make are one step away from sending commands directly to GPU hardware. Meanwhile, using DX12 via DXVK is two steps away from hardware, which is "higher level" than using Nvidia's DX9 implementation directly. Again, "level" is not determined by the API.
- The other interpretation is what I would call the "granularity" or "terseness" of the API, i.e. how much control over the hardware it exposes. In this sense, yes, DX12 and Vulkan give finer control over the hardware than DX9 and OGL.
- Your last statement...doesn't make sense; I don't understand it. Maybe you're trying to say that DX12/VK are designed to be thinner, with less internal state tracking and less overhead per call, so all that state tracking is now the app's responsibility? Yes, that's true. But I wouldn't say that code is "specific to a GPU".
Honestly, that seems like the nicest way to solve the problem. Afaik Valve would be fully within their rights to C&D them from unofficially rehosting their binaries. In any other situation, that would be a blatant security risk.