greatgib 15 hours ago

   > AMD is unifying its Linux Vulkan driver strategy and has decided to discontinue the AMDVLK open-source project, throwing our full support behind the RADV driver as the officially supported open-source Vulkan driver for Radeon™ graphics adapters.
Scary title but good news in the end I think.
  • willvarfar 14 hours ago

    What level of support will they give RADV? Or is it just that AMD ultimately do less?

    • account42 14 hours ago

      They have done pretty well with the open source OpenGL drivers that were also initially developed outside AMD.

      AMDVLK was always a weird regression in the openness of the development model compared to that. It's maybe understandable that the bean counters wanted to share the effort between the Windows and Linux drivers, but throwing away the community aspect in order to achieve that made that approach doomed from the start IMO. The initial release being incredibly late (even though Vulkan was modeled after AMD's own Mantle) was the cherry on top that allowed RADV to secure the winning seat, but it probably only accelerated the inevitable.

      • pjmlp 11 hours ago

        So well that my Asus Netbook went from OpenGL 4.1 down to OpenGL 3.3, and when it finally got OpenGL 4.1 back, several years later, it died a couple of months later.

        • account42 10 hours ago

          Yes exactly, they (or someone else) did eventually add OpenGL 4.1 support for your GPU to the open source drivers which never had it before.

          That you were "forced" to switch away from the old proprietary driver for some reason does not negatively implicate AMD's contribution to the open source drivers.

          • pjmlp 10 hours ago

            The reason being that the old proprietary driver was dropped from Linux distros without feature parity, and given how great Linux drivers work across kernel versions, everyone got a downgraded experience for several years.

            • bigyabai 5 hours ago

              ...and you're telling us it's Linux' fault that you didn't want to pin the package?

              • michaelmrose 4 hours ago

                Over a period of years, people get new machines, upgrade existing machines to new distro versions, and update other packages in ways that are often incompatible with keeping an older package pinned, as its requirements may become incompatible with those of newer packages, kernels, and of course distro versions.

                I think they are blaming the vendor who received their money not the nebulous and non-specific Linux community.

                Despite being lauded compared to closed-source Nvidia, AMD has had painful support issues as well.

      • tonyhart7 13 hours ago

        Why do we have 2 projects anyway??? What is the history???

        I thought Mesa was always the default, since I use Fedora KDE

        • account42 12 hours ago

          AMD developed their closed source Vulkan driver for Windows based on the proprietary shader compiler from their existing proprietary OpenGL driver (amdgpu-pro). They promised to release this driver as open source, but didn't want to release the shader compiler for who knows what reason, so this took them a while.

          Meanwhile David Airlie (Red Hat) and Bas Nieuwenhuizen (a student at the time) didn't want to wait for that, or were just looking for a challenge, and wrote their own open source Vulkan driver (radv), which got pretty good results from the start. Linux distributions prefer open source drivers, so this one quickly became the default.

          Once AMD released the open-source version of their driver (amdvlk), it was faster than radv in some games but not decidedly so. It was also not an open project but rather just an open source release of their proprietary driver with a different shader compiler. So there wasn't really any reason for the open source developers to abandon their work on radv and switch to amdvlk. But they could and did use amdvlk to learn from it and improve radv, so it was still useful. When Valve decided to contribute directly to Linux graphics drivers, radv was already winning, so they backed that one as well.

          Note that this is only about the user-space portion of the driver - the kernel part of the Linux drivers is shared by all of these as well as the OpenGL drivers - there used to be a proprietary kernel driver from AMD as well but that was abandoned with the switch to the "amdgpu-pro" package.
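
          As a practical aside, this coexistence works through the Vulkan loader's ICD manifests: each driver installs a JSON file and the loader picks one, with an environment override per process. A minimal sketch, assuming the distro-typical manifest path (radeon_icd.x86_64.json is Mesa's radv manifest on most 64-bit installs; names can vary):

```shell
# List the Vulkan ICD manifests the loader can see (path is distro-typical).
ls /usr/share/vulkan/icd.d/ 2>/dev/null || echo "no ICD manifest dir found"

# Force radv for a single process via the loader's override variable
# (VK_ICD_FILENAMES; newer loaders also accept VK_DRIVER_FILES).
if command -v vulkaninfo >/dev/null 2>&1; then
    VK_ICD_FILENAMES=/usr/share/vulkan/icd.d/radeon_icd.x86_64.json \
        vulkaninfo --summary | grep -i driver
else
    echo "vulkaninfo (from vulkan-tools) not installed"
fi
```

          With both drivers installed, pointing the override at amdvlk's manifest instead selected that driver, which is how the two could be compared side by side.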

        • giancarlostoro 9 hours ago

          Idk, but Mesa never worked for me, ever. Any time I installed a distro to try, if Mesa was running, I basically had a non-functioning desktop. I think part of it may have been Wayland related, which is frustrating, but these days it's gotten drastically better.

          • tankenmate 7 hours ago

            What hardware are you running?

    • arghwhat 14 hours ago

      They already work on radv, which is already the better Vulkan driver.

      This is a matter of AMD no longer wasting time on a pointless duplicate project no-one is really interested in. They can allocate more resources for amdgpu and radv and ultimately do less overall by getting rid of the redundant project.

      Win-win.

    • greatgib 4 hours ago

      I think that the customer base of AMD CPUs and GPUs is exploding thanks to their willingness to do the work and provide what is needed for Linux and open source drivers, so I don't see why they would reduce their effort when it so easily yields so much positive effect for them.

      Almost no one is scared anymore to buy AMD for Linux desktops and servers, knowing that it normally works well, and the same kind of person will be the one making recommendations for their families, relatives, or companies, even if those are using Windows.

  • sylware 12 hours ago

    It is dangerous for RADV which already has its own issues. And when you look at AMDVLK, you don't want those devs anywhere near RADV.

CBLT 19 hours ago

https://www.phoronix.com/news/AMDVLK-Discontinued

> This is a good but long overdue decision by AMD. RADV has long been more popular with gamers/enthusiasts on Linux than their own official driver. Thanks to Valve, Google, Red Hat, and others, RADV has evolved very nicely.

andy_ppp 12 hours ago

I always think just open sourcing the whole software stack for graphics cards would be an excellent thing for hardware manufacturers. In the end these are free pieces of software, and I'm certain there would be a big community contributing loads of cool things for free. AMD (say) would also sell a load more hardware as enthusiast features would be added by the community.

Maybe I'm just naive but the downsides of doing this seem absolutely minimal and the upsides quite large.

  • Symmetry 8 hours ago

    Those are essentially the reasons that Intel has always(?) had open source GPU drivers and AMD has been supporting open source since around 2009. As a result, I think most people would recommend AMD cards for people interested in gaming on Linux; the experience can be a lot smoother than using Nvidia's closed source drivers.

  • ChocolateGod 8 hours ago

    That's easier said than done. AMD and Nvidia probably have licensed and patented code etc. in their closed-source drivers, which would make it difficult to open source, whereas a project that is open source from the get-go won't have these issues.

    Nvidia got around this in their kernel driver by moving most of it to the card's firmware.

  • giancarlostoro 8 hours ago

    I still do not understand why they don't. It makes their hardware basically good for life, since now you can run it on any OS if you really want to put in the effort to wire it all up.

    • andy_ppp 5 hours ago

      It's tragic that the patent system, which is meant to make sharing IP better, is actually used to extract rents from everyone... Software patents, much like maths, should clearly be illegal.

  • fidotron 8 hours ago

    > I always think just open sourcing the whole software stack for graphics cards would be an excellent thing for hardware manufacturers,

    > Maybe I'm just naive

    Yep.

    There are things hidden in the design of very widely used hardware that would make people's heads explode from how out there they are. They are trade secrets, used to maintain a moat in which people can make money (as opposed to patents, which require public disclosure).

    If you live in open source land you cannot make money from selling software. If there is no special sauce in the hardware you won't be able to make money from that either. Then we can all act surprised that the entire tech landscape is taken over by ads and fails to meaningfully advance.

    • yencabulator 6 hours ago

      Yes clearly that's why the from-scratch RADV driver was often faster.

      The dirty open secret in the tech industry is that the special sauce almost always just isn't all that special.

      • fidotron 6 hours ago

        > Yes clearly that's why the from-scratch RADV driver was often faster.

        Because AMD didn't actually care about the Linux driver, since it didn't make them money.

        > The dirty open secret in the tech industry is that the special sauce almost always just isn't all that special.

        Only among people where that's true. In the computer industry, just look at the M series of chips, where it's very clear that their direct competitors can't establish why it does what it does.

        • yencabulator 6 hours ago

          > can't establish why it does what it does

          This is weird Apple fanboy head-in-the-sand thinking. The Mx chips have been dug into plenty and are just good engineering, not magic. AMD's horribly-named "Ryzen AI Max+ 395" chip is definitely moving in the same direction.

          • fidotron 6 hours ago

            Right, so we have other ARM64 implementations that are this good?

            Bonus points for ones without ex-Apple employees involved in their design, because maybe those people might know something about it.

            • michaelmrose 4 hours ago

              Qualcomm Snapdragon X Elite. Qualcomm has been making ARM chips since before Apple. Admittedly it's 4nm vs 3nm for the M4.

              • fidotron 3 hours ago

                Those did involve ex-Apple people, and there isn't proof that they are quite as good either, but they are the closest that anyone has publicly come.

                Qualcomm have never actually caught up with Apple performance-wise since the introduction of Arm64. They had a very nice 32-bit implementation and were completely caught off guard. Prior to their NuVia acquisition, their 64-bit efforts were barely improvements on what you can just license from Arm directly, to the point that for a while that is all they were.

tracker1 8 hours ago

I've been running an RX 9070 XT since close to release... I've also been running the Pop!_OS COSMIC alpha for the past few months, and for better game compatibility I've been sticking close to the latest mainline kernel.

Just yesterday, I tried getting ROCm working to see if I could use Stable Diffusion. Well, in the end 6.16 is currently unsupported, and after a few hours of failure I managed to get the in-box kernel module working again and gave up. It is emphatically nice that many/most games now run through Mesa/Vulkan+Proton without issue... but it would be nice to actually be able to use some of the vaunted AI features in AMD's top current card on the leading-edge Linux kernel release with their platform.

Hopefully sooner than later, this will all "just work" mostly and won't be nearly the exercise in frustration for someone who hasn't been actively in the AI culture. I could create a partition for a prior distro/kernel or revert, but I probably shouldn't have to; in general I tend to expect leading-edge releases to work in the Linux ecosystem, or at least to be relatively quickly patched up.

  • mindcrime 3 hours ago

    > Hopefully sooner than later, this will all "just work" mostly and won't be nearly the exercise in frustration for someone who hasn't been actively in the AI culture.

    There's definitely a lot of variation in experiences. In my case, on my box with an RX 9090 XTX, installing ROCm via apt did "just work" and I can compile and run programs against the GPU, and things like Ollama work with GPU acceleration with no weird fiddling or custom setup. And from what I hear, I'm definitely not the only person having this kind of experience.

  • tylerflick 7 hours ago

    ROCm is a mess. I gave up on it and decided to run OpenCL on Vulkan: https://github.com/kpet/clvk

    • tracker1 5 hours ago

      Thanks... I haven't really followed ANY AI stuff up to this point, other than awareness that it exists... so I have to say, my shallow dive yesterday was a bit off-putting, to say the least.

      I'll dig into this over the weekend when I invariably try again.

  • Symmetry 5 hours ago

    On Ubuntu 25.04 I use this to get ROCm up

        sudo apt install hipcc rocm-smi rocminfo clinfo
    
    This won't get the newest, shiniest ROCm but it's fine for my purposes.
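
    If the packages installed cleanly, a quick sanity check is to ask rocminfo for the GPU agents; the gfx architecture name it prints (e.g. gfx1100) is what ROCm support matrices key off. A guarded sketch:

```shell
# Sanity-check a ROCm userspace install: rocminfo lists HSA agents,
# including the GPU's gfx architecture name. Guarded so it degrades
# gracefully when ROCm isn't installed.
if command -v rocminfo >/dev/null 2>&1; then
    rocminfo | grep -i -E 'agent|gfx' | head -n 20
else
    echo "rocminfo not found; ROCm userspace is not installed"
fi
```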

  • JonChesterfield 4 hours ago

    ROCm is sensitive to kernel version. You want Linux 6.11 for ROCm 6.4 and Linux 6.14 for ROCm 7, and that isn't very negotiable.
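
    A quick way to see where a running kernel falls relative to those pairings (the version pairs below come from this comment, not from an official AMD support matrix):

```shell
#!/bin/sh
# Compare the running kernel against the ROCm pairings mentioned above.
# Pairings are from the comment (ROCm 6.4 -> 6.11, ROCm 7 -> 6.14), not AMD docs.
kver=$(uname -r)
major=${kver%%.*}            # e.g. "6" from "6.16.3-generic"
rest=${kver#*.}
minor=${rest%%.*}            # e.g. "16"
echo "running kernel: ${major}.${minor}"
if [ "$major" -gt 6 ] || { [ "$major" -eq 6 ] && [ "$minor" -ge 14 ]; }; then
    echo "6.14 or newer: pair with ROCm 7"
elif [ "$major" -eq 6 ] && [ "$minor" -ge 11 ]; then
    echo "6.11-6.13: pair with ROCm 6.4"
else
    echo "below 6.11: stick with older ROCm releases"
fi
```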

potwinkle 16 hours ago

This is great news for RADV development; I'm hoping someday we can even use ROCm on the open source stack.

  • account42 14 hours ago

    The kernel level of the stack was already open though, this only changes the Vulkan front end which AFAIK is irrelevant to ROCm.

  • suprjami 11 hours ago

    Depending on what you want to do, you already can.

    llama.cpp and other inference servers work fine on the kernel driver.
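
    For instance, llama.cpp's Vulkan backend runs inference through radv with no ROCm at all. A sketch of the build steps, printed as commands rather than executed (the repo URL and the GGML_VULKAN flag are from memory and build options have been renamed before, so check the project's README):

```shell
# Sketch: building llama.cpp with its Vulkan backend, so inference runs on
# RADV without any ROCm install. The commands are printed, not run, since
# the build needs the repo checked out first.
repo=https://github.com/ggml-org/llama.cpp
cat <<EOF
git clone $repo
cd llama.cpp
cmake -B build -DGGML_VULKAN=ON
cmake --build build --config Release -j
# -ngl 99 offloads all layers to the GPU
./build/bin/llama-cli -m model.gguf -ngl 99
EOF
```

    The HIP backend (GGML_HIP) would put you back to needing a working ROCm, which is exactly the dependency the Vulkan path avoids.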

    • yencabulator 6 hours ago

      Where "fine" unfortunately still means "don't push it too hard on a busy desktop system or your graphical session might crash". Make sure to keep enough RAM free or you start seeing GPU resets, the stack can't cope with transient errors :-(

shmerl 19 hours ago

What will AMD do with the Windows Vulkan driver? Didn't they use amdvlk there? There was some radv-on-Windows experiment; it would be cool if AMD would use that.

  • trynumber9 19 hours ago

    No, it was a third driver.

    Per AMD

    >Notably, AMD's closed-source Vulkan driver currently uses a different pipeline compiler, which is the major difference between AMD's open-source and closed-source Vulkan drivers.

    • kimixa 15 hours ago

      The Windows driver has 2 paths: the internal compiler, and the same LLVM as in the open source amdvlk release (though there might be things like not-yet-upstreamed changes, experimental new hardware support, etc. that differ from the public version, it was fundamentally the same codebase). The same goes for DX12 (and any other driver that might use their PAL layer). If you want to confirm, you can see all the LLVM symbols in the driver's amdvlk{32,64}.dll and amdxc{32,64}.dll files. From what I remember, the internal compiler path is just stripped out for the open source amdvlk releases.

      I believe the intent was to slowly deprecate the internal closed compiler, and leave it more as a fallback for older hardware, with most new development happening on LLVM. Though my info is a few months out of date now, I'd be surprised if the trajectory changed that quickly.

      • account42 14 hours ago

        AFAIK the closed source shader compiler was/is also available for Linux in the amdgpu-pro package, just not in the open source releases.

    • shmerl 18 hours ago

      Why are they using different compilers?

      • account42 14 hours ago

        Either licensing issues (maybe they don't own all parts of the closed source shader compiler) or fears that Nvidia/Intel could find out things about the hardware that AMD wants to keep secret (the fears being unfounded doesn't make the possibility of them being a reason any less likely). Or alternatively they considered it not worth releasing (legal review isn't free) because the LLVM back-end was supposed to replace it anyway.

        • AnthonyMouse 12 hours ago

          > or fears that Nvidia/Intel could find out things about the hardware that AMD wants to keep secret (the fears being unfounded doesn't make the possibility of them being a reason any less likely)

          When the fears are unfounded, the reason isn't "Nvidia/Intel could find out things about the hardware", it's "incompetence rooted in believing something that isn't true". Which is an entirely different thing, because in one case they would have a proper dilemma and in the other they would only need to extricate their cranium from their rectum.

          • mschuster91 12 hours ago

            > When the fears are unfounded the reason isn't "Nvidia/Intel could find out things about the hardware"

            Good luck trying to explain that to Legal. The problem at the core of everything FOSS is the patent and patent-licensing minefield. Hardware patents are already risky enough to get torched by some "submarine patent" troll, and the US adds software patents to that mix. And even if you think you got all the licenses you need, it might be the case that the licensing terms ban you from developing FOSS drivers/software implementing the patent, or that you get a situation like the HDMI2/HDCP one where the DRM <insert derogatory term here> insist on keeping their shit secret, or you run into regulatory requirements on RF emissions.

            And unless you got backing from someone very high up the chain, Corporate Legal will default to denying your request for FOSS work if there is even a slight chance it might pose a legal risk for the company.

        • shmerl 3 hours ago

          > the LLVM back-end was supposed to replace it anyway.

          Is this still the case? I.e. why shut down the open amdvlk project then? They could just make it focused on Windows only.

          • kimixa 2 hours ago

            The open source release of amdvlk has never been buildable for Windows, as all the required Microsoft integration stuff has to be stripped out before release.

            So at best it would be of limited utility as a reference. I can see why they might decide that's just not worth the engineering time of maintaining and verifying their cleaning-for-open-source-release process (the MS stuff wasn't the only thing "stripped" from the internal source either).

            I assume the LLVM work will continue to be open, as it's used in other open stacks like ROCm and Mesa.

      • jacquesm 15 hours ago

        Bluntly: because they don't get software and never did. The hardware is actually pretty good but the software has always been terrible and it is a serious problem because NV sure could use some real competition.

        • AnthonyMouse 13 hours ago

          I wish hardware vendors would just stop trying to write software. The vast majority of them are terrible at it and even within the tiny minority that can ship something that doesn't non-deterministically implode during normal operation, the vast majority of those are a hostile lock-in play.

          Hardware vendors: Stop writing software. Instead write and publish hardware documentation sufficient for others to write the code. If you want to publish a reference implementation that's fine, but your assumption should be that its primary purpose is as a form of documentation for the people who are going to make a better one. Focus on making good hardware with good documentation.

          Intel had great success for many years by doing that well and have recently stumbled not because the strategy doesn't work but because they stopped fulfilling the "make good hardware" part of it relative to TSMC.

          • exDM69 12 hours ago

            > I wish hardware vendors would just stop trying to write software.

            How would/should this work? Release hardware that doesn't have drivers on day one and then wait until someone volunteers to do it?

            > Intel had great success for many years by doing that well

            Not sure what you're referring to but Intel's open source GPU drivers are mostly written by Intel employees.

            • adrian_b 10 hours ago

              The documentation can be published in advance of the product launch.

              Intel and AMD did this in the past for their CPUs and accompanying chipsets, when any instruction set extensions or I/O chipset specifications were published some years in advance, giving time to the software developers to update their programs.

              Intel still somewhat does it for CPUs, but for GPUs their documentation is delayed a lot in comparison with the product launch.

              AMD now has significant delays in publishing the features actually supported by their new CPUs, even longer than for their new GPUs.

              In order to have hardware that works on day one, most companies still have to provide specifications for their hardware products to various companies that must design parts of the hardware or software that are required for a complete system that works.

              The difference between now and how this was done a few decades ago is that the advance specifications were then public, which was excellent for competition, even if that meant there were frequently delays between the launch of a product and the existence of complete systems that worked with it.

              Now, these advance specifications are given under NDA to a select group of very big companies, which design companion products. This ensures that now it is extremely difficult for any new company to compete with the incumbents, because they would never obtain access to product documentation before the official product launch, and frequently not even after that.

          • mschuster91 12 hours ago

            The problem is, making hardware is hard. Screw something up and, in the best case, you can fix it in ucode; if you're not that lucky you can get away with a new stepping, but in the worst case you have to do a recall and deal not just with your own wasted effort, but also the wasted downstream efforts and rework costs.

            So a lot of the complexity of what the hardware is doing gets relegated to firmware as that is easier to patch and, especially relevant for wifi hardware before the specs get finalized, extend/adapt later on.

            The problem with that, in turn, is patents and trade secrets. What used to be hideable in the ASIC masks is now computer code that is more or less trivially disassembled or reverse engineered (see e.g. nouveau for older Nvidia cards and Alyssa's work on Apple), and if you want true FOSS support, you sometimes can't fulfill other requirements at the same time (see the drama surrounding HDMI2/HDCP support for AMD on Linux).

            And for anything RF, you get the FCC throwing rocks on top of that. For some years now, the unique combination of RF devices (wifi, BT, 4G/5G), antenna, and OS-side driver has had to be certified. That's why you get Lenovo devices refusing to boot when you have a non-Lenovo USB network adapter attached at boot time, or when you swap the Sierra Wireless modem with an identical modem from a Dell (that only has a different VID/PID), or why you need old, long-outdated Lenovo/Dell/HP/... drivers for RF devices while the "official" manufacturer ones will not work without patching.

            I would love a world in which everyone in the ecosystem were forced to provide interface documentation, datasheets, errata and ucode/firmware blobs with source for all their devices, but unfortunately, DRM, anti-cheat, anti-fraud and overeager RF regulatory authorities have a lot of influence over lawmakers, way more than FOSS advocates.