AI
Overview
VoIPBin’s AI is a built-in AI agent that enables automated, intelligent voice interactions during live calls. Designed for seamless integration within VoIPBin’s flow, the AI utilizes ChatGPT as its AI engine to process and respond to user inputs in real time. This allows developers to create dynamic and interactive voice experiences without requiring manual intervention.
How it works
Action component
The AI is integrated as one of the configurable components within a VoIPBin flow. When a call reaches an AI action, the system triggers the AI to generate a response based on the provided prompt. The response is then processed and played back to the caller using text-to-speech (TTS). If the response is in a structured JSON format, VoIPBin executes the defined actions accordingly.
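As an illustrative sketch, an AI action might sit alongside other actions in a flow definition like this. The action type name (`ai_talk`) and option field names below are assumptions for illustration, not confirmed VoIPBin values; consult the flow documentation for the actual schema.

```json
{
    "name": "support flow",
    "actions": [
        {
            "type": "answer"
        },
        {
            "type": "ai_talk",
            "option": {
                "ai_id": "a092c5d9-632c-48d7-b70b-499f2ca084b1"
            }
        }
    ]
}
```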

TTS/STT + AI Engine
VoIPBin’s AI is built using TTS/STT + AI Engine, where speech-to-text (STT) converts spoken words into text, and text-to-speech (TTS) converts responses back into audio. The system processes these in real time, enabling seamless conversations.

Voice Detection and Play Interruption
In addition to basic TTS and STT functionalities, VoIPBin incorporates voice detection to create a more natural conversational flow. While the AI is speaking (i.e., playing TTS media), if the system detects the caller’s voice, it immediately stops the TTS playback and routes the caller’s speech (via STT) to the AI engine. This play interruption feature ensures that if the user starts talking, their input is prioritized, enabling a dynamic interaction that more closely resembles a real conversation.
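The barge-in behavior described above can be summarized as a small state machine: TTS playback stops the moment caller speech is detected, and the caller's speech is routed to the AI engine. The sketch below is purely illustrative; none of these class or method names are part of the VoIPBin API, and the STT step is stubbed out.

```python
class AISession:
    """Illustrative model of play interruption (barge-in); not VoIPBin code."""

    def __init__(self):
        self.tts_playing = False
        self.events = []  # records what the session does, in order

    def start_tts(self, text):
        # AI response begins playing to the caller.
        self.tts_playing = True
        self.events.append(("play_tts", text))

    def on_voice_detected(self, caller_audio):
        # If the caller speaks while TTS is playing, stop playback first...
        if self.tts_playing:
            self.tts_playing = False
            self.events.append(("stop_tts",))
        # ...then route the caller's speech (via STT) to the AI engine.
        text = self.stt(caller_audio)
        self.events.append(("to_ai_engine", text))

    def stt(self, audio):
        # Placeholder: pretend the audio has already been transcribed.
        return audio
```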
External AI Agent Integration
For users who prefer to use external AI services, such as VAPI or other AI agent service providers, VoIPBin offers media stream access. This allows third-party AI engines to process voice data directly, enabling deeper customization and advanced AI capabilities.
Multiple AI Actions in a Flow
VoIPBin allows multiple AI actions within a single flow. Developers can configure different AI interactions at various points, enabling flexible and context-aware automation.
Handling Responses
Text String Response: The AI’s response is played as speech using TTS.
JSON Response: The AI returns a structured JSON array of action objects, which VoIPBin executes accordingly.
Error Handling: If the AI generates an invalid JSON response, VoIPBin treats it as a normal text response and plays it via TTS.
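The dispatch logic described in these three rules can be sketched as follows. This is an illustrative reimplementation of the documented behavior, not VoIPBin's actual code: a valid JSON array of action objects is executed, and anything else falls back to TTS playback.

```python
import json

def handle_ai_response(response_text):
    """Dispatch an AI response: execute a valid JSON action array,
    otherwise treat the response as plain text and play it via TTS.
    (Illustrative sketch of the documented behavior only.)"""
    try:
        actions = json.loads(response_text)
        if isinstance(actions, list) and all(
            isinstance(a, dict) and "action" in a for a in actions
        ):
            return ("execute_actions", actions)
    except json.JSONDecodeError:
        pass
    # Invalid JSON (or JSON that is not an action array) falls back to TTS.
    return ("play_tts", response_text)
```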
Using the AI
Initial Prompt
The initial prompt serves as the foundation for the AI’s behavior. A well-crafted prompt ensures accurate and relevant responses. There is currently no hard limit on prompt length, though this is subject to change in future releases.
Example Prompt:
Pretend you are an expert customer service agent.
Please respond kindly.
But, if you receive a request to connect to the agent, respond with the next message in JSON format.
Do not include any explanations in the response.
Only provide an RFC8259-compliant JSON response following this format without deviation.
[
    {
        "action": "connect",
        "option": {
            "source": {
                "type": "tel",
                "target": "+821100000001"
            },
            "destinations": [
                {
                    "type": "tel",
                    "target": "+821100000002"
                }
            ]
        }
    }
]
Action Object Structure
VoIPBin supports a wide range of actions. Refer to VoIPBin’s action documentation for the complete list of available actions and each action’s option structure.
Technical Considerations
Escalation to Live Agents
VoIPBin does not provide an automatic escalation mechanism for transferring calls to human agents. Instead, developers must configure AI responses accordingly by ensuring that AI logic returns a JSON action when escalation is required.
Logging & Debugging
Developers can debug AI interactions through VoIPBin’s transcription logs, which capture AI responses and interactions.
Current Limitations & Future Enhancements
TTS Customization: Voice, language, and speed customization is not currently available but will be added in future updates.
Multilingual Support: The AI currently supports only English, but additional language support is planned.
Context Retention: Each AI request is processed independently, meaning there is no built-in conversation memory.
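Because each request is processed independently, developers who need multi-turn context can carry the conversation history themselves, for example by prepending prior turns to each prompt. A minimal sketch of this workaround follows; the function name and role labels are hypothetical, not part of VoIPBin's API.

```python
def build_prompt(init_prompt, history, user_input):
    """Workaround for the lack of built-in conversation memory:
    prepend earlier turns to each request so the AI engine sees
    the full context. (Illustrative; names are hypothetical.)"""
    lines = [init_prompt]
    for role, text in history:
        lines.append(f"{role}: {text}")
    lines.append(f"caller: {user_input}")
    return "\n".join(lines)
```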
VoIPBin’s AI feature offers a flexible and intelligent way to automate voice interactions within flows. By leveraging AI-powered responses and structured action execution, developers can enhance call experiences with minimal effort. As VoIPBin continues to evolve, future updates will introduce greater customization options and multilingual capabilities.
AI Summary
The AI Summary feature in VoIPBin generates structured summaries of call transcriptions, recordings, or conference discussions. It provides a concise summary of key points, decisions, and action items based on the provided transcription source.

Supported Resources
AI summaries work with a single resource at a time. The supported resources are:

Real-time Summary:
* Live call transcription
* Live conference transcription

Non-Real-time Summary:
* Transcribed recordings (post-call)
* Recorded conferences (post-call)
Choosing Between Real-time and Non-Real-time Summaries
Developers must decide whether to use a real-time or non-real-time summary based on their needs:
Use Case | Summary Type | Recommendation
---|---|---
Live call monitoring | Real-time | Use AI summary with a live call transcription
Live conference insights | Real-time | Use AI summary with a live conference transcription
Post-call analysis | Non-real-time | Use AI summary with transcribe_id from a completed call
Recorded conference summary | Non-real-time | Use AI summary with recording_id
AI Summary Behavior
The summary action processes only one resource at a time.
If multiple AI summary actions are used in a flow, each executes independently.
If an AI summary action is triggered multiple times for the same resource, it only returns the most recent segment.
In conference calls, the summary is unified across all participants rather than per speaker.
Ensuring Full Coverage
Since starting an AI summary action late in the call results in missing earlier conversation, developers should follow these best practices:
* Enable transcribe_start early: This ensures that transcriptions are available even if an AI summary action is triggered later.
* Use transcribe_id instead of call_id: This allows summarizing the full transcription rather than just the latest segment.
* For post-call summaries, use recording_id: This ensures that the full conversation is summarized from the recorded audio.
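Putting these practices together, a flow might start transcription early and summarize the full transcription later. The sketch below is illustrative only: the action type names and option fields are assumptions, not confirmed VoIPBin values, and the placeholder reference ID must be replaced with a real transcribe_id.

```json
[
    {
        "type": "transcribe_start",
        "option": {
            "language": "en-US"
        }
    },
    {
        "type": "ai_summary",
        "option": {
            "reference_type": "transcribe",
            "reference_id": "<transcribe_id from the transcribe_start above>"
        }
    }
]
```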
AI
{
    "id": "<string>",
    "customer_id": "<string>",
    "name": "<string>",
    "detail": "<string>",
    "engine_type": "<string>",
    "init_prompt": "<string>",
    "tm_create": "<string>",
    "tm_update": "<string>",
    "tm_delete": "<string>"
}
id: AI’s ID.
customer_id: Customer’s ID.
name: AI’s name.
detail: AI’s detail.
engine_type: AI’s engine type. See the Type table below.
init_prompt: The AI’s initial prompt, which defines the AI engine’s behavior.
Example
{
    "id": "a092c5d9-632c-48d7-b70b-499f2ca084b1",
    "customer_id": "5e4a0680-804e-11ec-8477-2fea5968d85b",
    "name": "test AI",
    "detail": "test AI for simple scenario",
    "engine_type": "chatGPT",
    "tm_create": "2023-02-09 07:01:35.666687",
    "tm_update": "9999-01-01 00:00:00.000000",
    "tm_delete": "9999-01-01 00:00:00.000000"
}
Type
AI’s type.
Type | Description
---|---
chatGPT | OpenAI’s Chat AI. https://chat.openai.com/chat
clova | Naver’s Clova AI (WIP). https://clova.ai/