godelski a day ago

I was expecting something like TensorRT or Triton, but found "Vibe Coding"

The project seems very naive. CUDA programming sucks because there are a lot of little gotchas and nuances that dramatically change performance. These optimizations can also change significantly between GPU architectures: you'll get different performance out of Volta, Ampere, or Blackwell. Parallel programming is hard in the first place, and it gets harder on GPUs because of all these little intricacies. People who have been doing CUDA programming for years are still learning new techniques. It takes a very different type of programming skill. Like actually understanding that Knuth's "premature optimization is the root of all evil" means "get a profiler", not "don't optimize". All this is what makes writing good kernels take so long. And that's after Nvidia engineers have spent tons of time trying to simplify it.
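
To make "little gotchas" concrete, here's the classic toy example (my own sketch, nothing to do with this product): a matrix transpose. The naive version has uncoalesced global writes; the shared-memory fix silently introduces bank conflicts unless you pad the tile. A profiler flags both. Guessing doesn't.

    __global__ void transpose_naive(float *out, const float *in, int n) {
        int x = blockIdx.x * 32 + threadIdx.x;
        int y = blockIdx.y * 32 + threadIdx.y;
        if (x < n && y < n)
            out[x * n + y] = in[y * n + x];  // stride-n writes: uncoalesced
    }

    // launch with dim3(32, 32) blocks
    __global__ void transpose_tiled(float *out, const float *in, int n) {
        __shared__ float tile[32][33];  // 33, not 32: padding avoids bank conflicts
        int x = blockIdx.x * 32 + threadIdx.x;
        int y = blockIdx.y * 32 + threadIdx.y;
        if (x < n && y < n)
            tile[threadIdx.y][threadIdx.x] = in[y * n + x];
        __syncthreads();
        x = blockIdx.y * 32 + threadIdx.x;  // swapped block indices for the write
        y = blockIdx.x * 32 + threadIdx.y;
        if (x < n && y < n)
            out[y * n + x] = tile[threadIdx.x][threadIdx.y];
    }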

So I'm not surprised people are getting 2x or 4x out of the box. I'd expect that much if a person grabbed a profiler. I'd honestly expect more if they spent a week or two with the documentation and put in serious effort. But nothing on the landing page convinces me the LLM can actually help significantly. Maybe I'm wrong! But it is unclear if the lead dev has significant CUDA experience. And I don't want something that optimizes a kernel for an A100; I want kernelS that are optimized for multiple architectures. That's the hard part, and all those little nuances are exactly what LLM coding tends to be really bad at.
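
Even the mechanical part of "multiple architectures" is real work. Before you touch any per-arch algorithm changes, you're already doing runtime dispatch like this (rough sketch):

    #include <cuda_runtime.h>

    __global__ void my_kernel(float *x) { /* ... */ }

    void pick_config() {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, 0);
        int min_grid = 0, block = 0;
        // ask the runtime for a reasonable block size for THIS device
        cudaOccupancyMaxPotentialBlockSize(&min_grid, &block, my_kernel, 0, 0);
        if (prop.major >= 9) {
            // Hopper/Blackwell: clusters, TMA, bigger smem budgets
        } else if (prop.major >= 8) {
            // Ampere: cp.async, async barriers
        }
        // ...and that's before any per-arch algorithmic changes
    }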

  • jaberjaber23 a day ago

    totally agree. we're not trying to replace deep CUDA knowledge:) just wanted to skip the constant guess and check.

    every time we generate a kernel, we profile it on real GPUs (serverless), so you see how it runs on specific architectures. not just "trust the code"; we show you what it does. still early, but it's helping people move faster

    • godelski a day ago

      Btw, I'm not talking about deep CUDA knowledge. That takes years. I'm specifically talking about novices. The knowledge you get from a few weeks. I'd be quite hesitant to call someone an expert in a topic when they have less than a few years of experience. There are exceptions, but expertise isn't quickly gained. Hell, you could have years of experience, but if all you did was read Medium blogs and Stack Overflow you'd probably still be a novice.

      I get that you profile. I liked that part. But even as the other commenter says, it's unclear how to evaluate given the examples. Showing some real examples would be critical to sell people on this. Idk, maybe people buy blindly too, but personally I'd be worried about integrating significant tech debt. It's easy to do that with kernels, or anytime you're close to the metal. The nuances dominate these domains.

      • jaberjaber23 17 hours ago

        Do you have a place where we can chat? LinkedIn, ...?

        • godelski 13 hours ago

          Sorry, I'm not the CUDA expert you should be looking for. My efforts are in ML; I only dabble in CUDA and am friends with systems people. I'd suggest reaching out to systems people.

          I'd suggest you use that NVIDIA connection and reach out to the HPC teams there. Anyone working on CUTLASS, TensorRT, cuTensor, or maybe even the CuPy team could give you a lot better advice than me.

          • jaberjaber23 11 hours ago

            I really appreciate that!! thanks:D

  • germanjoey a day ago

    TBH, the 2x-4x improvement over a naive implementation that they're bragging about sounded kinda pathetic to me! I mean, it depends greatly on the kernel itself and the target arch, but I'm also assuming that the 2x-4x number is their best case scenario. Whereas the best case for hand-optimized could be in the tens or even hundreds of X.
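
    Back-of-envelope for the "hundreds of X" claim: a naive matmul with no data reuse does ~2N FLOPs per 8N bytes of global traffic per output element, i.e. ~0.25 FLOP/byte, so at ~2 TB/s of HBM bandwidth on an A100 it's stuck around 0.5 TFLOP/s. Peak is ~19.5 TFLOPS in FP32 and ~312 TFLOPS on FP16 tensor cores. That's the tens-to-hundreds of X gap, and it's exactly what tiling and tensor-core rewrites buy you.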

    • godelski a day ago

      I'm a bit confused. It sounds like you are disagreeing ("TBH") but the content seems like a summary of my comment. So, I agree.

      Fwiw, they did say they got up to a 20x improvement, but given the issues we both mention that's not too surprising; it seems to be an outlier by their own admission.

    • jaberjaber23 a day ago

      absolutely. it really depends on the kernel type, target architecture, and what you're optimizing for. the 2x-4x isn’t the limit, it's just what users often see out of the box. we do real-time profiling on actual GPUs, so you get results based on real performance on a specific arch, not guesses. when the baseline is rough, we’ve seen well over 10x

cjbgkagh a day ago

The website appears vibe coded, as do the Product Hunt reviews, with "RightNow AI is an impressive..." appearing more often than would be expected by random chance.

Either someone is good at writing CUDA kernels, in which case a 1-10% perf improvement is impressive, or they're bad at writing CUDA kernels, in which case 2x-4x over naive very often isn't impressive.

What percentage of people who do write custom CUDA kernels are bad at it? How many are so bad at it that they leave 20x on the table as claimed on the website?

What could have helped sell it to me as a concept is an example of a before and after.

EDIT: One of the reviews states "RightNow AI is an innovative tool designed to help developers profile and optimize CUDA code efficiently. Users have praised its ability to identify bottlenecks and enhance GPU performance. For example, one user stated, "RightNow AI is a game-changer for GPU optimization."" I think some of the AI prompt has leaked into the output.

  • godelski a day ago

      > helps me optimize  kernels without spending nights debugging.
        - Vender Relations Manager
    
      > Wishing you best of luck.You managed to take one of the most painful parts of CUDA dev and turn it effortless.
        - Smart Home Innovators
    
      > No more wrestling with annual performance tuning - just hit go and let AI handle the heavy lifting boosting your CUDA code by up to 20x with zero extra effort.
        - B2B SaaS Growth Marketing Consultant
    
      > great!This is what I want!
        - Serial entrepreneur, started in finance 
    
    I didn't even look at the Product Hunt reviews until you mentioned it. Is it always this botty?

    • cjbgkagh 17 hours ago

      This is the only product I've ever looked up on Product Hunt, and I only looked it up because I was wondering who was giving them positive reviews. It appears that Product Hunt is a growth-hacker den, and I guess they 'hacked' it with bots. There does appear to be a lack of quality control for reviews on that site - which I guess is Product Hunt's own version of a growth hack. I think this is why people are retreating to cloistered private communities where users can have a higher degree of confidence that they're interacting with real people.

      • godelski 10 hours ago

        Fair enough. I've even been considering dumping HN for months, and it's my only refuge left. Can't seem to find anywhere I can have strong confidence I'm talking to real people anymore. Dark Forest, I guess...

  • jaberjaber23 a day ago

    2x-4x improvements are normal when starting from a naive kernel, but sometimes we see gains well over 10x. Every kernel is profiled live on real GPUs (serverless), so you get accurate performance data for the specific architecture.

    Before-and-after examples would definitely help, and we’re adding those soon. Thanks for the feedback.

techbro92 a day ago

CUDA optimization actually doesn't suck that much. I think Nsight is amazing and super helpful for profiling and identifying bottlenecks in kernels.
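
Wrapping the hot regions in NVTX ranges makes the Nsight timeline even more readable (quick sketch, kernel names hypothetical):

    #include <nvtx3/nvToolsExt.h>  // header-only NVTX, ships with the CUDA toolkit

    __global__ void preprocess_kernel(float *x) { /* hypothetical stage */ }
    __global__ void solve_kernel(float *x)      { /* hypothetical stage */ }

    void step(float *d_x, int n) {
        int blocks = (n + 255) / 256;

        nvtxRangePushA("preprocess");  // shows up as a named span in the timeline
        preprocess_kernel<<<blocks, 256>>>(d_x);
        nvtxRangePop();

        nvtxRangePushA("solve");
        solve_kernel<<<blocks, 256>>>(d_x);
        nvtxRangePop();
    }
    // capture with: nsys profile -o report ./app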

  • jaberjaber23 a day ago

    Totally, Nsight is great. We do something similar: generate kernels, profile them on real GPUs, then optimize based on that:D

saberience 19 hours ago

A vibe-coded product on top of a vibe-coded website, with a load of AI generated product hunt comments.

PontifexCipher a day ago

No examples of before/after? Maybe I missed something.

jaberjaber23 3 days ago

We’re RightNow AI. We built a tool that automatically profiles, detects bottlenecks, and generates optimized CUDA kernels using AI.

If you’ve written CUDA before, you know how it goes. You spend hours tweaking memory access, digging through profiler dumps, swapping out intrinsics, and praying it’ll run faster. Most of the time, you're guessing.

We got tired of it. So we built something that just works.

What RightNow AI Actually Does

Prompt-based CUDA Kernel Generation: Describe what you want in plain English. Get fast, optimized CUDA code back. No need to know the difference between global and shared memory layouts.

Serverless GPU Profiling: Run your code on real GPUs without having local hardware. Get detailed reports about where it's slow and why.

Performance Optimizations That Deliver: Not vague advice like "try more threads." We return rewritten code. Our users are seeing 2x to 4x improvements out of the box. Some hit 20x.
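
To give a flavor of the kind of rewrite we mean (an illustrative sketch, not verbatim tool output), here's the shape of a typical before/after on a reduction:

    // before: every thread serializes on a single global atomic
    __global__ void sum_naive(const float *in, float *out, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) atomicAdd(out, in[i]);
    }

    // after: reduce within each block in shared memory, one atomic per block
    // (launch with 256 threads per block)
    __global__ void sum_block(const float *in, float *out, int n) {
        __shared__ float s[256];
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        s[threadIdx.x] = (i < n) ? in[i] : 0.0f;
        __syncthreads();
        for (int stride = blockDim.x / 2; stride > 0; stride >>= 1) {
            if (threadIdx.x < stride) s[threadIdx.x] += s[threadIdx.x + stride];
            __syncthreads();
        }
        if (threadIdx.x == 0) atomicAdd(out, s[0]);
    }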

Why We Built It

We needed it for our own work. Our ML stack was bottlenecked by GPU code we didn't have time to optimize. Existing tools felt ancient. The workflow was slow, clunky, and filled with trial and error.

We thought: what if you could just say "optimize this kernel for A100" and get something useful?

So we built it.

RightNow AI is live. You can try it for free: https://www.rightnowai.co/

If you use it and hit something rough, tell us. We’ll fix it.

  • 3abiton a day ago

    How is this different from what unsloth is doing?

    • jaberjaber23 a day ago

      We profile and optimize kernels live on real GPUs!! So we're different from unsloth.