New features, improvements, and fixes.
LiveKit integration is now generally available: drop Anam’s expressive real-time avatars into any LiveKit Agents app so your AI can join LiveKit rooms as synchronised voice + video participants.
It turns voice-only agents into face-and-voice experiences for calls, livestreams, and collaborative WebRTC spaces, with LiveKit handling infra and Anam handling the human layer. Docs
Server-side optimisations cut average end-to-end latency by 330 ms for all customers, thanks to cumulative engine optimisations across transcription, frame generation, and frame writing, plus upgraded Deepgram Flux endpointing for faster, best-in-class turn-taking without regressions in voice quality or TTS.
• Overhauled the avatar video upload and management system
• Upgraded default Cartesia voices to Sonic 3
• Standardised voice model selection across the platform
• Enhanced share link management capabilities
• Corrected LiveKit persona type identification logic
• Server-side optimisations to our frame buffering, reducing response latency by ~250 ms for all personas.
• Changed timeout behaviour to never time out based on heartbeats; sessions now only time out when the websocket has been disconnected for 10 seconds or more.
• Fixed an intermittent issue where a persona stopped responding.
• Set pix_fmt for video output, moving from yuvj420p (JPEG full-range) to yuv420p to avoid incorrect encoding/output.
• Added a timeout to our silence-breaking logic to prevent hangs.
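The new heartbeat-independent timeout rule can be sketched as a pure check. This is an illustrative sketch only; the field names and constant are our assumptions, not Anam's implementation:

```typescript
// Sessions no longer time out on missed heartbeats; they only time out once
// the websocket has been disconnected for the full grace period.
const DISCONNECT_GRACE_MS = 10_000; // 10 seconds, per the changelog

interface SessionState {
  socketConnected: boolean;
  disconnectedAtMs: number | null; // set when the socket drops, cleared on reconnect
}

function shouldTimeOut(state: SessionState, nowMs: number): boolean {
  // A connected session never times out, regardless of heartbeat activity.
  if (state.socketConnected || state.disconnectedAtMs === null) return false;
  return nowMs - state.disconnectedAtMs >= DISCONNECT_GRACE_MS;
}
```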
Build and deploy AI agents in Anam that can engage alongside you.
With Anam Agents, your Personas can now interact with your applications, access your knowledge, and trigger workflows directly through natural conversation. This marks Anam’s evolution from conversational Personas to agentic Personas that think, decide, and execute.
Give your Personas access to your company’s knowledge. Upload documents to the Lab, and they’ll use semantic retrieval to find and integrate the right information into responses, from product docs to internal manuals. Docs for Knowledge Base
Personas can now control your interface in real time. They can open checkout pages, display modals, navigate to specific sections, or update UI states creating guided, voice-driven experiences that feel effortless for users. Docs for Client Tools
Connect your Personas to external APIs and services. They can check order status, create support tickets, update CRM records, or fetch live data from your systems. Configure endpoints, headers, and response types directly in the Lab. Docs for Webhook Tools
Each Persona’s LLM determines when to call a tool based on user intent, not scripts. If a user asks for an order update, the Persona knows to fetch data. If they request a demo, it books one instantly.
You can create and manage tools on the new Tools page in the Lab and attach them to any Persona from the Build page.
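To illustrate the idea, here is a hypothetical webhook tool configuration and a minimal dispatch step. All field names and the example endpoint are assumptions for illustration, not Anam's actual tool schema:

```typescript
// The tool is described declaratively; the persona's LLM reads the
// description and decides from user intent when to call it.
interface WebhookTool {
  name: string;
  description: string; // what the LLM matches against user intent
  endpoint: string;
  method: "GET" | "POST";
}

// Hypothetical example tool and endpoint.
const orderStatusTool: WebhookTool = {
  name: "get_order_status",
  description: "Look up the current status of a customer's order by ID.",
  endpoint: "https://api.example.com/orders/{orderId}/status",
  method: "GET",
};

// When the LLM emits a tool call, resolve it to the configured endpoint.
function resolveTool(
  tools: WebhookTool[],
  calledName: string,
): WebhookTool | undefined {
  return tools.find((t) => t.name === calledName);
}
```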
Anam Agents are available today in beta for all Anam users: https://lab.anam.ai/login
• Cartesia Sonic-3 voices: Cartesia's most expressive TTS model to date.
• Voice modal with expanded options, support for 50+ languages, and voice samples. Cartesia is now the default TTS provider.
• Session reports now work for custom LLMs
• Prevented auto-logout when switching contexts
• Fixed race conditions in cookie handling
• Resolved legacy session token issues
• Removed issue-prone voices
• Player and streaming: corrected aspect ratios for mobile devices.
• Deepgram Flux support for turn-taking (still using Whisper for transcription)
• Server-side optimisation to reduce GIL-contention and reduce latency
• Server-side optimisation to reduce connection time
• Bug-fix to stop dangling LiveKit connections
• Our first open-source library!
A big milestone for our customers and partners. Anam now meets the standards required for HIPAA compliance, the U.S. regulation that protects sensitive health information. This means healthcare organizations and companies handling medical data can use Anam with confidence that their data is protected and processed securely.
What HIPAA compliance means.
HIPAA (Health Insurance Portability and Accountability Act) sets national standards for safeguarding medical information. Compliance confirms that Anam maintains strict administrative, physical, and technical safeguards, covering how data is stored, encrypted, accessed, and shared.
An independent assessment verified that Anam’s systems and policies meet the HIPAA Security Rule and Privacy Rule requirements.
Security is built into Anam.
Security has been a core principle since day one. Achieving HIPAA compliance reinforces our commitment to keeping your data private and secure while ensuring reliability and transparency for regulated industries.
Access our Trust Center.
You can review our security policies, data handling procedures, subprocessors, and compliance documentation, including our HIPAA attestation, at the Anam Trust Center.
• Enhanced voice selection
You can now search voices by use case or conversational style! We also support 50+ languages, all of which can now be previewed in the Lab.
• Product tour update
We updated our product tour to help you find the right features and the right plan for you.
• Streamlined One-Shot avatar creation
Redesigned one-shot flow with clearer step progression and enhanced mobile responsiveness.
• Naming personas is now automatic
New persona names are now auto-generated based on the selected avatar.
• Session start time
Session start-up time is expected to improve by 1.1 seconds per session.
• Share links
Fixed share-link sessions taking extra concurrency slots.
• Improved TTS pronunciation
Improved TTS pronunciation for all languages by adapting our input text chunking.
• Traceability and monitoring of session IDs
Session IDs are now sent through all LLM calls to improve traceability and monitoring.
• Increased audio sampling rate
Internal audio sampling rate increased from 16 kHz to 24 kHz, allowing even richer audio for Anam Personas.
• Websocket size increase
Increased the maximum websocket message size for larger talk stream chunks (from 1 MB to 16 MB).
• Concurrency calculation fix
Fixed concurrency calculation to only consider sessions from last 2 hours.
• Less freezing for slower LLMs
Slower LLMs now cause less freezing, at the cost of shorter "chunks" of speech.
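The chunking behaviour described above can be sketched as buffering streamed LLM text and flushing to TTS at sentence boundaries, so a slow LLM yields shorter but well-formed speech chunks instead of stalls. This is a minimal illustration of the general technique, not Anam's actual code:

```typescript
// Given the text buffered so far, flush complete sentences to TTS and keep
// the unterminated remainder for the next LLM token batch.
function splitIntoTtsChunks(buffer: string): { chunks: string[]; rest: string } {
  const chunks: string[] = [];
  let rest = buffer;
  // Greedily match everything up to the last sentence-ending punctuation
  // that is followed by whitespace or end-of-buffer.
  const match = rest.match(/^[\s\S]*[.!?。](?=\s|$)/);
  if (match) {
    const flushed = match[0];
    rest = rest.slice(flushed.length).trimStart();
    // Emit one chunk per sentence for finer-grained TTS scheduling.
    for (const sentence of flushed.split(/(?<=[.!?。])\s+/)) {
      if (sentence.trim()) chunks.push(sentence.trim());
    }
  }
  return { chunks, rest };
}
```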
Once a conversation ends, how do you review what happened? To help you understand and improve your Persona's performance, we're launching Session Analytics in the Lab. Now you can access a detailed report for every conversation, complete with a full transcript, performance metrics, and AI-powered analysis.
You can find your session history on the Sessions page in the Lab. Click on any past session to explore the new analytics report. This is available today for all session types, except for LiveKit sessions. For privacy-sensitive applications, session logging can be disabled via the SDK.
• Improved Voice Discovery
The Voices page has been updated to be more searchable, allowing you to preview voices with a single click and view new details like gender, TTS model, and language.
• Fixed share-link session bug
Fixed bug of share-link sessions taking an extra concurrency slot.
• Small improvement to connection time
Tweaks to how we perform WebRTC signalling allow for slightly faster connection times (~900 ms faster at p95).
• Improvement to output audio quality for poor connections
Enabled Opus in-band FEC to improve audio quality under packet loss.
• Small reduction in network latency
Optimisations have been made to our outbound media streams to reduce A/V jitter (and hence jitter buffer delay). Expected latency improvement is modest (<50ms).
• Fix for livekit sessions with slow TTS audio
Stabilised LiveKit streaming by pacing output and duplicating frames during slowdowns to prevent buffer underflow.
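The Opus in-band FEC change above is typically enabled in WebRTC by adding `useinbandfec=1` to the Opus `fmtp` line in the SDP, so the encoder embeds redundant data that the decoder can use under packet loss. The sketch below is the generic technique, not Anam's implementation; payload type 111 is the common browser default for Opus but is not guaranteed:

```typescript
// Munge an SDP string so the Opus fmtp line requests in-band FEC.
function enableOpusFec(sdp: string): string {
  return sdp
    .split("\r\n")
    .map((line) =>
      // 111 is the usual dynamic payload type for Opus; a robust version
      // would look it up from the rtpmap line instead of assuming it.
      line.startsWith("a=fmtp:111") && !line.includes("useinbandfec")
        ? `${line};useinbandfec=1`
        : line,
    )
    .join("\r\n");
}
```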
The performance of LLM endpoints can be highly variable, with time-to-first-token latencies sometimes fluctuating by as much as 500ms from one day to the next depending on regional load. To solve this and ensure your personas respond as quickly and reliably as possible, we've rolled out a new intelligent routing system for LLM requests. This is active for both our turnkey customers and for customers using their own server-side Custom LLMs if they deploy multiple endpoints.
This new system constantly monitors the health and performance of all configured LLM endpoints by sending lightweight probes at regular intervals. Using a time-aware moving average, it builds a real-time picture of network latency and processing speed for each endpoint. When a request is made, the system uses this data to calculate the optimal route, automatically shedding load from any overloaded or slow endpoints within a region.
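A time-aware moving average over probe latencies can be sketched as an exponentially weighted average whose weight grows with the time since the previous probe. Constants, names, and the scoring rule here are illustrative assumptions about the approach described above, not the actual implementation:

```typescript
// Tracks a smoothed latency estimate for one LLM endpoint.
class EndpointHealth {
  private ewmaMs: number | null = null;

  constructor(private readonly halfLifeMs = 60_000) {}

  // Fold in a new probe; the longer since the last probe, the more the
  // new sample dominates the average (time-aware decay).
  record(latencyMs: number, elapsedSinceLastMs: number): void {
    if (this.ewmaMs === null) {
      this.ewmaMs = latencyMs;
      return;
    }
    const alpha = 1 - Math.pow(0.5, elapsedSinceLastMs / this.halfLifeMs);
    this.ewmaMs = alpha * latencyMs + (1 - alpha) * this.ewmaMs;
  }

  score(): number {
    return this.ewmaMs ?? Number.POSITIVE_INFINITY; // unprobed = worst
  }
}

// Route a request to the endpoint with the lowest smoothed latency,
// naturally shedding load from slow or overloaded endpoints.
function pickEndpoint(health: Map<string, EndpointHealth>): string | null {
  let best: string | null = null;
  let bestScore = Number.POSITIVE_INFINITY;
  for (const [name, h] of health) {
    if (h.score() < bestScore) {
      bestScore = h.score();
      best = name;
    }
  }
  return best;
}
```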
• Generate one-shot avatars from text prompts
You can now generate one-shot avatars from text prompts within the lab, powered by Gemini’s new Nano Banana model. The one-shot creation flow has been redesigned for speed and ease-of-use, and is now available to all plans. Image upload and webcam avatars remain exclusive to Pro and Enterprise.
• Improved management of published embed widgets
Published embed widgets can now be configured and monitored from the lab at https://lab.anam.ai/personas/published.
• Automatic failover to backup data centres
To ensure maximum uptime and reliability for our personas, we’ve implemented automatic failover to backup data centres.
• Prevent session crash on long user speech
Previously, unbroken user speech exceeding 30 seconds would trigger a transcription error and crash the session. We now automatically truncate continuous speech to 30 seconds, preventing sessions from failing in these rare cases.
• Allow configurable session lengths of up to 2 hours for Enterprise plans
We had a bug where sessions had a maximum timeout of 30 minutes instead of 2 hours on Enterprise plans. This has now been fixed.
• Resolved slow connection times caused by incorrect database region selection
An undocumented issue with our database provider led to incorrect region selection for our databases. Simply refreshing our credentials resolved the problem, resulting in a ~1s improvement in median connection times and ~3s faster p95 times. While our provider works on a permanent fix, we’re actively monitoring for any recurrence.
Embed personas directly into your website with our new widget. Within the Lab's Build page, click Publish, then generate your unique HTML snippet. The snippet works in most common website builders, e.g. WordPress.org or Squarespace.
For added security, we recommend whitelisting your domain URL, which locks the persona down to work only on your website. You can also cap the number of sessions or give the widget an expiration period.
• One-shot avatars available via API
Professional and Enterprise accounts can now create one-shot avatars via API. Docs here.
• Spend caps
It's now possible to set a spend cap on your account. Available in profile settings.
• Prevent Cartesia from timing out when using slow custom LLMs.
We’ve added a safeguard to prevent Cartesia contexts from unexpectedly closing during pauses in text streaming. With slower LLMs, or if there’s a break or slow-down in the text being sent, your connection will now stay alive, ensuring smoother, uninterrupted interactions.
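The safeguard can be thought of as a keep-alive check that fires when the LLM stream goes quiet for too long, signalling the TTS context so it is not closed as idle. This is a simplified sketch with an illustrative threshold, not the actual implementation:

```typescript
// Illustrative threshold; the real idle window is unknown to us.
const KEEP_ALIVE_AFTER_MS = 3_000;

// Decide whether to send a no-op keep-alive to the TTS context: only while
// the LLM stream is still open and text has stalled past the threshold.
function needsKeepAlive(
  lastTextAtMs: number,
  nowMs: number,
  streamDone: boolean,
): boolean {
  return !streamDone && nowMs - lastTextAtMs >= KEEP_ALIVE_AFTER_MS;
}
```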