AI
Overview
VoIPBin’s AI is a built-in AI agent that enables automated, intelligent voice interactions during live calls. Designed for seamless integration within VoIPBin’s flow, the AI utilizes ChatGPT as its AI engine to process and respond to user inputs in real time. This allows developers to create dynamic and interactive voice experiences without requiring manual intervention.
How it works
Action component
The AI is integrated as one of the configurable components within a VoIPBin flow. When a call reaches an AI action, the system triggers the AI to generate a response based on the provided prompt. The response is then processed and played back to the caller using text-to-speech (TTS). If the response is in a structured JSON format, VoIPBin executes the defined actions accordingly.
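As a rough illustration of such an action, the snippet below builds a hypothetical AI action payload. The field names (`type`, `option`, `ai_id`, `language`) are assumptions for illustration, not the documented action schema; consult the flow action reference for the actual fields.

```python
import json

# Hypothetical AI action inside a VoIPBin flow. The field names here are
# illustrative assumptions, not the documented action schema.
ai_action = {
    "type": "ai_talk",
    "option": {
        "ai_id": "a092c5d9-632c-48d7-b70b-499f2ca084b1",
        "language": "en-US",
    },
}

payload = json.dumps(ai_action)
print(payload)
```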

TTS/STT + AI Engine
VoIPBin’s AI is built using TTS/STT + AI Engine, where speech-to-text (STT) converts spoken words into text, and text-to-speech (TTS) converts responses back into audio. The system processes these in real time, enabling seamless conversations.
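The pipeline above can be sketched as three stages chained together. The `stt()`, `ai_reply()`, and `tts()` helpers below are placeholders standing in for the real speech-to-text, ChatGPT, and text-to-speech components; only the data flow is illustrated.

```python
# Minimal sketch of the TTS/STT + AI engine pipeline: audio in -> text ->
# AI response -> audio out. All three helpers are illustrative stubs.

def stt(audio: str) -> str:
    """Placeholder: recognize caller speech as text."""
    return audio.removeprefix("<audio:").removesuffix(">")

def ai_reply(text: str) -> str:
    """Placeholder: the AI engine generates a text response."""
    return f"You said: {text}"

def tts(text: str) -> str:
    """Placeholder: synthesize the response as audio."""
    return f"<audio:{text}>"

caller_audio = "<audio:I need help with my order>"
reply_audio = tts(ai_reply(stt(caller_audio)))
print(reply_audio)  # <audio:You said: I need help with my order>
```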

Voice Detection and Play Interruption
In addition to basic TTS and STT functionalities, VoIPBin incorporates voice detection to create a more natural conversational flow. While the AI is speaking (i.e., playing TTS media), if the system detects the caller’s voice, it immediately stops the TTS playback and routes the caller’s speech (via STT) to the AI engine. This play interruption feature ensures that if the user starts talking, their input is prioritized, enabling a dynamic interaction that more closely resembles a real conversation.
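The barge-in behavior described above can be sketched as a frame loop: while TTS audio plays, a voice-activity check runs on the inbound stream, and the moment caller speech is detected, playback stops and subsequent caller audio is routed toward STT. The function and frame representations below are illustrative, not the production implementation.

```python
# Sketch of play interruption (barge-in). Names and data shapes are
# illustrative assumptions.

def play_tts_with_barge_in(tts_frames, inbound_frames, is_voice):
    """Play TTS frames until caller voice is detected; return what was
    played and the caller frames captured after the interruption."""
    played, captured = [], []
    interrupted = False
    for tts_frame, in_frame in zip(tts_frames, inbound_frames):
        if not interrupted and is_voice(in_frame):
            interrupted = True          # stop TTS playback immediately
        if interrupted:
            captured.append(in_frame)   # route caller audio to STT
        else:
            played.append(tts_frame)
    return played, captured

# Simulated call: caller stays silent for two frames, then speaks.
tts_frames = ["t1", "t2", "t3", "t4"]
inbound = ["silence", "silence", "speech", "speech"]
played, captured = play_tts_with_barge_in(
    tts_frames, inbound, is_voice=lambda f: f == "speech")
print(played, captured)  # ['t1', 't2'] ['speech', 'speech']
```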
Context Retention
VoIPBin’s AI supports context saving. During a conversation, the AI remembers prior exchanges, allowing it to maintain continuity and respond based on earlier parts of the interaction. This provides a more natural and human-like dialogue experience.
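One common way to implement this kind of context retention, shown here as a hedged sketch rather than VoIPBin's internal mechanism, is to keep the conversation as an ordered message list (the shape used by chat-style engines such as ChatGPT) and send the whole history with each new turn.

```python
# Sketch of context retention via an accumulated message history.
# The message shape follows the common chat-completion convention;
# whether VoIPBin stores context exactly this way is an assumption.

history = [{"role": "system",
            "content": "Pretend you are an expert customer service agent."}]

def add_turn(history, user_text, assistant_text):
    """Append one user/assistant exchange to the running context."""
    history.append({"role": "user", "content": user_text})
    history.append({"role": "assistant", "content": assistant_text})

add_turn(history, "My order number is 1234.", "Thanks, I have order 1234.")
add_turn(history, "When will it arrive?", "Order 1234 ships tomorrow.")

# The second answer can mention the order number only because the
# earlier turn is still in the context sent to the engine.
print(len(history))  # 5 messages: system prompt + two exchanges
```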
Multilingual support
VoIPBin’s AI supports multiple languages; see the supported languages page for the full list.
External AI Agent Integration
For users who prefer to use external AI services, VoIPBin offers media stream access via MCP (Media Control Protocol). This allows third-party AI engines to process voice data directly, enabling deeper customization and advanced AI capabilities.
MCP Server
A recommended open-source implementation is available here:
Using the AI
Initial Prompt
The initial prompt serves as the foundation for the AI’s behavior. A well-crafted prompt ensures accurate and relevant responses. There is no enforced limit on prompt length, but we recommend keeping it concise to ensure consistent performance.
Example Prompt:
Pretend you are an expert customer service agent.
Please respond kindly.
AI Talk
AI Talk enables real-time conversational AI with voice in VoIPBin, powered by ElevenLabs’ voice engine for natural-sounding speech.

Key Features
Real-time Voice Interaction: AI generates responses in real-time based on user input and delivers them as speech.
Interruption Detection & Listening: If the other party speaks while the AI is talking, the system immediately stops the AI’s speech and switches to capturing the user’s voice via STT. This ensures a smooth and continuous conversation flow.
Low Latency Response: For longer prompts, AI Talk does not wait for the entire response to finish. Instead, it generates and plays speech in smaller chunks, reducing perceived response time for the user.
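The chunked playback described above can be sketched by splitting a response into sentence-sized pieces and handing each to TTS as soon as it is ready. The naive sentence splitter below is an illustration only, not the production chunking logic.

```python
# Sketch of low-latency chunked TTS: split the response into sentence
# chunks and play each immediately instead of waiting for the full text.
import re

def chunk_response(text):
    """Split a long response into sentence-sized chunks for streaming TTS."""
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]

response = "Your order has shipped. It arrives Friday. Anything else?"
chunks = chunk_response(response)
for chunk in chunks:
    # each chunk would be sent to the TTS engine and played immediately
    print(chunk)
```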
ElevenLabs Voice Engine: High-quality, natural-sounding voice output ensures the AI feels like a real conversation partner.
Built-in ElevenLabs Voice IDs
VoIPBin uses a predefined set of voice IDs for various languages and genders. Here are the default ElevenLabs Voice IDs currently in use:
| Language | Male Voice ID (Name) | Female Voice ID (Name) | Neutral Voice ID (Name) |
|---|---|---|---|
| English (Default) | | | |
| Japanese | | | |
| Chinese | | | |
| German | | | |
| French | | | |
| Hindi | | | |
| Korean | | | |
| Italian | | | |
| Spanish (Spain) | | | |
| Portuguese (Brazil) | | | |
| Dutch | | | |
| Russian | | | |
| Arabic | | | |
| Polish | | | |
Other ElevenLabs Voice ID Options
VoIPBin allows you to personalize the text-to-speech output by specifying a custom ElevenLabs Voice ID. Setting the voipbin.tts.elevenlabs.voice_id variable overrides the default voice selection.
voipbin.tts.elevenlabs.voice_id: <Your Custom Voice ID>
See how to set the variables here.
AI Summary
The AI Summary feature in VoIPBin generates structured summaries of call transcriptions, recordings, or conference discussions. It provides a concise summary of key points, decisions, and action items based on the provided transcription source.

Supported Resources
AI summaries work with a single resource at a time. The supported resources are:
Real-time Summary:
* Live call transcription
* Live conference transcription

Non-Real-time Summary:
* Transcribed recordings (post-call)
* Recorded conferences (post-call)
Choosing Between Real-time and Non-Real-time Summaries
Developers must decide whether to use a real-time or non-real-time summary based on their needs:
| Use Case | Summary Type | Recommendation |
|---|---|---|
| Live call monitoring | Real-time | Use AI summary with a live call transcription |
| Live conference insights | Real-time | Use AI summary with a live conference transcription |
| Post-call analysis | Non-real-time | Use AI summary with transcribe_id from a completed call |
| Recorded conference summary | Non-real-time | Use AI summary with recording_id |
AI Summary Behavior
The summary action processes only one resource at a time.
If multiple AI summary actions are used in a flow, each executes independently.
If an AI summary action is triggered multiple times for the same resource, it only returns the most recent segment.
In conference calls, the summary is unified across all participants rather than per speaker.
Ensuring Full Coverage
Since starting an AI summary action late in the call results in missing earlier conversation, developers should follow these best practices:
* Enable transcribe_start early: This ensures that transcriptions are available even if an AI summary action is triggered later.
* Use transcribe_id instead of call_id: This allows summarizing the full transcription rather than just the latest segment.
* For post-call summaries, use recording_id: This ensures that the full conversation is summarized from the recorded audio.
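The best practices above reduce to a simple preference order when choosing the summary resource. The sketch below illustrates that decision; the returned field names (`reference_type`, `reference_id`) are assumptions for illustration, not the documented request shape.

```python
# Hedged sketch of picking the resource for an AI summary request:
# prefer transcribe_id (covers the full transcription), fall back to
# recording_id for post-call summaries. Field names are illustrative.

def summary_target(call_ended: bool, transcribe_id=None, recording_id=None):
    """Return the preferred summary reference for the given situation."""
    if transcribe_id:
        return {"reference_type": "transcribe", "reference_id": transcribe_id}
    if call_ended and recording_id:
        return {"reference_type": "recording", "reference_id": recording_id}
    raise ValueError("start transcribe early or record the call")

target = summary_target(call_ended=True, recording_id="rec-0001")
print(target)
```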
AI
{
"id": "<string>",
"customer_id": "<string>",
"name": "<string>",
"detail": "<string>",
"engine_type": "<string>",
"init_prompt": "<string>",
"tm_create": "<string>",
"tm_update": "<string>",
"tm_delete": "<string>"
}
id: AI’s ID.
customer_id: Customer’s ID.
name: AI’s name.
detail: AI’s detail.
engine_type: AI’s engine type. See the Type section below.
init_prompt: The AI’s initial prompt, which defines the AI engine’s behavior.
Example
{
"id": "a092c5d9-632c-48d7-b70b-499f2ca084b1",
"customer_id": "5e4a0680-804e-11ec-8477-2fea5968d85b",
"name": "test AI",
"detail": "test AI for simple scenario",
"engine_type": "chatGPT",
"tm_create": "2023-02-09 07:01:35.666687",
"tm_update": "9999-01-01 00:00:00.000000",
"tm_delete": "9999-01-01 00:00:00.000000"
}
Type
AI’s type.
| Type | Description |
|---|---|
| chatGPT | OpenAI’s Chat AI. https://chat.openai.com/chat |
| clova | Naver’s Clova AI (WIP). https://clova.ai/ |