Pick dedicated GPU with highest VRAM (Vulkan) #13755
Conversation
The nvidia 5090 has 32 GB of VRAM already, there's nothing to suggest consumer GPUs won't have > 64 GB of VRAM during SDL3's lifetime. Why not make the device rank 64 bits instead of 16 bits? |
Yep! You are right! I'll get on it tomorrow. |
…of future GPUs with huge VRAM
…and if the device already meets all the requirements.
This should make it work just like I said: if there's multiple dedicated GPUs on the computer, pick the one that meets all requirements. If all of them meet the requirements, pick the one with the highest VRAM (which theoretically should be the most powerful one) |
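The selection logic described above can be sketched as follows. This is a simplified, hypothetical illustration, not SDL's actual internals: the struct, field names, and the fixed device-type bonus are all assumptions made for the example.

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative device summary; fields are assumptions, not SDL's types. */
typedef struct
{
    int meetsRequirements;  /* all required features/queues present */
    int isDiscrete;         /* e.g. VK_PHYSICAL_DEVICE_TYPE_DISCRETE_GPU */
    uint64_t vramBytes;     /* total device-local heap size */
} DeviceInfo;

/* 64-bit rank: devices that fail the requirements rank 0, the preferred
 * device type gets a large fixed bonus so it always dominates VRAM size,
 * then VRAM breaks ties among devices of the same type. */
static uint64_t RankDevice(const DeviceInfo *d, int preferLowPower)
{
    if (!d->meetsRequirements) {
        return 0;
    }
    uint64_t rank = 1;  /* usable at all */
    if (preferLowPower ? !d->isDiscrete : d->isDiscrete) {
        rank += (1ULL << 48);  /* far larger than any plausible VRAM size */
    }
    rank += d->vramBytes;
    return rank;
}

/* Returns the index of the best-ranked device, or -1 if none qualify. */
static int PickBest(const DeviceInfo *devices, int count, int preferLowPower)
{
    int best = -1;
    uint64_t bestRank = 0;
    for (int i = 0; i < count; i += 1) {
        const uint64_t r = RankDevice(&devices[i], preferLowPower);
        if (r > bestRank) {
            bestRank = r;
            best = i;
        }
    }
    return best;
}
```

With two qualifying discrete GPUs, the one with more VRAM wins; with the low-power preference set, a qualifying integrated GPU outranks both.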
Looks right to me, just want to hear from @flibitijibibo and then it can go in. |
With the low-power check this makes sense, can be merged when CI comes out green (started it just now!)
The tests actually passed, macOS just stalled on the last part... |
By the way this is actually my first merged PR ever! 🥳 |
Fixes, in the Vulkan backend, the issue discussed in the later comments of #12682.
Description
I turned `deviceRank` into an `Uint16` to avoid overflow issues when the VRAM is very large. It may still misbehave if someone has more than 65 gigabytes of VRAM, but I doubt that's going to happen any time soon.

Testing with the GPU examples shows this change correctly picks my more powerful GPU. This matters because SDL will not expose a way for the user to choose the GPU directly, so players stuck with games forcibly using their less powerful GPU, with no possible fix, would be terrible.
To make sure this truly works, though, I'd advise finding another weirdo with multiple dedicated GPUs in their computer to test. :)
Existing Issue(s)
Sadly, I don't have access to a Windows or Mac PC, so I wasn't able to test this change in the other backends. However, it should be reasonably simple to implement there as well.