Ask HN: What tools are you using for AI evals? Everything feels half-baked

4 points by fazlerocks a day ago

We're running LLMs in production for content generation, customer support, and code review assistance. Been trying to build a proper evaluation pipeline for months but every tool we've tested has significant limitations.

What we've evaluated:

- OpenAI's Evals framework: Works well for benchmarking but challenging for custom use cases. Configuration through YAML files can be complex and extending functionality requires diving deep into their codebase. Primarily designed for batch processing rather than real-time monitoring.

- LangSmith: Strong tracing capabilities but eval features feel secondary to their observability focus. Pricing starts at $0.50 per 1k traces after the free tier, which adds up quickly with high volume. UI can be slow with larger datasets.

- Weights & Biases: Powerful platform but designed primarily for traditional ML experiment tracking. Setup is complex and requires significant ML expertise. Our product team struggles to use it effectively.

- Humanloop: Clean interface focused on prompt versioning with basic evaluation capabilities. Limited eval types available and pricing is steep for the feature set.

- Braintrust: Interesting approach to evaluation but feels like an early-stage product. Documentation is sparse and integration options are limited.

What we actually need:

- Real-time eval monitoring (not just batch)
- Custom eval functions that don't require PhD-level setup (rough sketch below)
- Human-in-the-loop workflows for subjective tasks
- Cost tracking per model/prompt
- Integration with our existing observability stack
- Something our product team can actually use
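
To make the "custom eval functions" item concrete, here's roughly what we'd like to be able to write and attach to live traffic. This is a sketch, not any vendor's API: `client.generate` and `obs.record` are hypothetical stand-ins for an LLM wrapper and an observability/eval sink.

    # Sketch only. `client` and `obs` are hypothetical stand-ins for an LLM
    # wrapper and an observability/eval sink; nothing here is vendor-specific.
    import time

    def eval_support_reply(reply: str) -> dict:
        # Cheap, deterministic checks we'd want scored on every response.
        return {
            "non_empty": bool(reply.strip()),
            "under_length_limit": len(reply) <= 2000,
            "mentions_refund_policy": "refund" in reply.lower(),
        }

    def generate_with_evals(client, obs, model: str, prompt: str) -> str:
        start = time.time()
        reply = client.generate(model=model, prompt=prompt)  # hypothetical call
        obs.record(                                          # hypothetical sink
            model=model,
            latency_s=round(time.time() - start, 3),
            scores=eval_support_reply(reply),
        )
        return reply

That's the level of ceremony we're after: a plain function per check, run on every request, feeding the same place as latency and cost.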

Current solution:

Custom scripts + monitoring dashboards for basic metrics. Weekly manual reviews in spreadsheets. It works but doesn't scale and we miss edge cases.
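
For reference, the scripts are roughly this shape (simplified; the `logs` list of prompt/output dicts stands in for whatever our log store returns):

    # Simplified weekly-review script: sample logged completions, run a few
    # basic checks, and dump a CSV for manual review in a spreadsheet.
    import csv
    import random

    def basic_checks(output: str) -> dict:
        return {
            "empty": not output.strip(),
            "too_long": len(output) > 4000,
            "looks_like_refusal": "can't help with that" in output.lower(),
        }

    def export_review_sample(logs: list[dict], path: str, n: int = 200) -> None:
        # `logs` is a list of {"prompt": ..., "output": ...} dicts.
        sample = random.sample(logs, min(n, len(logs)))
        fields = ["prompt", "output", "empty", "too_long", "looks_like_refusal"]
        with open(path, "w", newline="") as f:
            writer = csv.DictWriter(f, fieldnames=fields)
            writer.writeheader()
            for row in sample:
                writer.writerow({"prompt": row["prompt"],
                                 "output": row["output"],
                                 **basic_checks(row["output"])})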

Has anyone found tools that handle production LLM evaluation well? Are we expecting too much or is the tooling genuinely immature? Especially interested in hearing from teams without dedicated ML engineers.

VladVladikoff 9 hours ago

>We're running LLMs in production for content generation, customer support, and code review assistance.

Sounds like a nightmare. How do you deal with the nondeterministic behaviour of the LLMs when trying to debug why they did something wrong?

PaulHoule a day ago

I worked at more than one startup that was trying to develop and commercialize foundation models before the technology was ready. We didn't have the "chatbot" paradigm and were always focused on evaluation for a specific task.

I built a model trainer with eval capabilities that I felt was a failure. I mean, it worked, but it felt like a terrible bodge, just like the tools you're talking about. Part of it is that some of the models we were training were small enough to run inside scikit-learn's model selection tools, which I've come to see as "basically adequate" for classical ML. Other models might take a few days to train on a big machine, which forced us to develop our own model selection tooling that could handle processes too big to fit in a single address space, and that tooling was worse even for the small models scikit-learn handled fine. (The facilities for model selection in Hugging Face are just atrocious in my mind.)
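
(For anyone who hasn't used it, this is the kind of scikit-learn model selection I mean: a completely standard cross-validated grid search. It's "basically adequate" right up until a single fit takes days or won't run in one process. Toy example on public data, not code from the trainer I built.)

    # Standard scikit-learn model selection: cross-validated grid search over
    # a small text-classification pipeline. Assumes every candidate trains
    # quickly inside a single process.
    from sklearn.datasets import fetch_20newsgroups
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import GridSearchCV
    from sklearn.pipeline import Pipeline

    data = fetch_20newsgroups(subset="train", categories=["sci.med", "sci.space"])
    pipe = Pipeline([
        ("tfidf", TfidfVectorizer()),
        ("clf", LogisticRegression(max_iter=1000)),
    ])
    search = GridSearchCV(
        pipe,
        param_grid={"tfidf__min_df": [1, 5], "clf__C": [0.1, 1.0, 10.0]},
        cv=5,
        scoring="f1_macro",
    )
    search.fit(data.data, data.target)
    print(search.best_params_, search.best_score_)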

I see a lot of bad frameworks for LLMs that make the same mistakes I was making back then. I'm not sure what the general answer is, although I think it can be solved for particular domains. For instance, I have a design for a text classifier trainer which I think could handle a wide range of problems where the training set is anywhere from 50 to 500,000 examples.

I saw a lot of lost opportunities in the 2010s where people could have built a workable A.I. application if they had been willing to build training and eval sets, but they wouldn't. I got pretty depressed when I talked to tens of vendors in the full-text search space and didn't find any that were using systematic evaluation to improve their relevance. I'm really hopeful today that evaluation is a growing part of the conversation.