AI Benchmarks for Human Flourishing

Monday · 2026-05-04 · 6:45 PM - 8:45 PM

Most AI benchmarks ask whether a model is smart. HumaneBench asks whether it's actually good for the person using it. The team tested how models hold up when users are stressed or vulnerable, and found that 67% of the time models stopped looking out for the user. A talk on the methodology, followed by a working session on what a benchmark like this would look like for your own product. Best for builders who want to ship AI that doesn't quietly fail the users who need it most.

Presented by Rally SF

