You can now control a computer with just your voice.
We built on Replit’s template to combine Hume’s empathic voice interface (EVI) with Anthropic’s computer use API. EVI processes speech in real time, sends instructions to the agentic computer control loop, explains its actions aloud, and can even be interrupted mid-task to change course.
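To make the pattern concrete, here is a minimal sketch of that control flow: a spoken instruction drives a step-by-step agent loop that narrates each action and can be interrupted by a new utterance. All names here (`AgentLoop`, `interrupt`, the narration strings) are illustrative stand-ins, not the actual Hume or Anthropic SDK surface; in the real system, audio would arrive over EVI’s streaming connection and the steps would be computer-use tool calls.

```python
import asyncio
from dataclasses import dataclass, field

@dataclass
class AgentLoop:
    """Toy agentic control loop: runs an instruction step by step,
    narrating each action, and can be interrupted mid-task.
    (Hypothetical sketch -- not the real EVI / computer use API.)"""
    narrations: list = field(default_factory=list)
    _cancel: asyncio.Event = field(default_factory=asyncio.Event)

    async def run(self, instruction: str, steps: list) -> str:
        self.narrations.append(f"Okay, I'll {instruction}.")
        for step in steps:
            if self._cancel.is_set():  # user spoke up: change course
                self.narrations.append("Stopping -- what should I do instead?")
                return "interrupted"
            self.narrations.append(f"Now I'm {step}.")
            await asyncio.sleep(0)  # yield so a barge-in can land between steps
        return "done"

    def interrupt(self):
        """Called when new speech arrives while the agent is working."""
        self._cancel.set()

async def demo():
    agent = AgentLoop()
    # In the real system the instruction comes from EVI's real-time
    # speech understanding; here we pass the transcribed text directly.
    task = asyncio.create_task(agent.run(
        "open the browser",
        ["moving the mouse", "clicking the icon", "typing the URL"],
    ))
    await asyncio.sleep(0)  # let the first step execute
    agent.interrupt()       # barge-in: the user changed their mind
    result = await task
    return result, agent.narrations

result, narrations = asyncio.run(demo())
print(result)  # → interrupted
```

The key design point the sketch illustrates is that the agent yields control between steps, so a voice interruption is checked at every step boundary rather than only after the whole task finishes.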
This works because Hume’s frontier speech-LLM, EVI 2, can generate its own language, but can also read out lines from other language models like an actor reading a script. EVI is the only voice-to-voice model that’s both interoperable with any LLM and available today as an API.
Learn more: