Transcribe

Overview

VoIPBIN’s transcription functionality is designed to cater to a range of communication needs, covering calls, conferences, and recordings. This comprehensive support ensures that users can transcribe various types of interactions accurately and efficiently.

Whether it’s a one-on-one conversation, a large conference call, or a recorded discussion, VoIPBIN’s transcription service handles it with ease. By distinguishing between audio input and output, it provides nuanced transcriptions that accurately reflect the dialogue exchanged during communication sessions. This differentiation ensures that users can clearly identify who said what, enhancing the clarity and usefulness of the transcribed content.

Real-Time capability

One notable aspect of VoIPBIN’s transcription service is its real-time capability. This feature enables users to transcribe conversations as they occur, providing instant access to written records of ongoing discussions. Real-time transcription not only facilitates live communication but also streamlines documentation processes by eliminating the need for manual transcription after the fact. This functionality is particularly valuable in fast-paced environments where quick access to accurate information is essential.

Additionally, VoIPBIN offers enhanced flexibility through websocket event subscription. Users can subscribe to or unsubscribe from transcript events over a websocket connection, ensuring seamless integration with their applications or systems. This allows dynamic control over real-time transcription notifications, tailored to specific needs and workflows.

Moreover, VoIPBIN offers an added feature for enhanced integration and convenience. By including webhook information in your customer settings, you can receive real-time updates through the transcript_created event as transcripts are produced. This enables seamless integration with your existing systems or applications, ensuring that you stay informed of transcription progress without manual intervention.

Overall, VoIPBIN’s transcription service offers a comprehensive solution for capturing and documenting verbal communication across various platforms. Whether users need transcriptions for analysis, reference, or archival purposes, VoIPBIN’s transcription feature delivers accurate and timely results, enhancing communication workflows and productivity.

{
    "type": "transcript_created",
    "data": {
        "id": "9d59e7f0-7bdc-4c52-bb8c-bab718952050",
        "transcribe_id": "8c5a9e2a-2a7f-4a6f-9f1d-debd72c279ce",
        "direction": "out",
        "message": "Hello, this is transcribe test call.",
        "tm_transcript": "0001-01-01 00:00:08.991840",
        "tm_create": "2024-04-04 07:15:59.233415"
    }
}

Transcription

VoIPBIN’s transcription service not only captures the spoken word but also provides additional context by distinguishing between audio input and output. This unique feature enables users to discern the direction of the voice within each transcription, offering valuable insight into the flow of communication.

By indicating whether the audio is incoming or outgoing, VoIPBIN’s transcription service adds an extra layer of clarity to the transcribed content. Users can easily identify who initiated a statement or response, enhancing their understanding of the conversation dynamics.

+--------+                               +-------+
|Customer|------ Direction: in --------->|VoIPBIN|
|        |                               |       |
|        |<----- Direction: out ---------|       |
+--------+                               +-------+

For example, in a call or conference scenario, users can quickly determine whether a particular remark was made by the caller or the recipient. Similarly, in recorded discussions, the audio in/out indication helps differentiate between speakers, facilitating more accurate transcription and analysis.

This audio in/out distinction empowers users to gain a deeper understanding of the context and dynamics of communication, leading to more effective collaboration, documentation, and analysis. Whether it’s monitoring customer interactions, conducting research, or reviewing meeting minutes, VoIPBIN’s transcription service offers enhanced clarity and insight into verbal communication.

[
    {
        "id": "06af78f0-b063-48c0-b22d-d31a5af0aa88",
        "transcribe_id": "bbf08426-3979-41bc-a544-5fc92c237848",
        "direction": "in",
        "message": "Hi, good to see you. How are you today.",
        "tm_transcript": "0001-01-01 00:01:04.441160",
        "tm_create": "2024-04-01 07:22:07.229309"
    },
    {
        "id": "3c95ea10-a5b7-4a68-aebf-ed1903baf110",
        "transcribe_id": "bbf08426-3979-41bc-a544-5fc92c237848",
        "direction": "out",
        "message": "Welcome to the transcribe test scenario in this scenario. All your voice will be transcribed and delivered it to the web hook.",
        "tm_transcript": "0001-01-01 00:00:43.116830",
        "tm_create": "2024-04-01 07:17:27.208337"
    }
]

Enable transcribe

VoIPBIN provides two different methods to start transcription.

Automatic Trigger in the Flow

Add the transcribe_start action to the action flow. This action automatically triggers transcription when the flow reaches it. See detail here.

{
    "id": "95c7a67f-9643-4237-8b69-7320a70b382b",
    "next_id": "44e1dabc-a8c1-4647-90ba-16d414231058",
    "type": "transcribe_start",
    "option": {
        "language": "en-US"
    }
}

Interrupt Trigger (Manual API Request)

The client can start transcription by sending an API request. This allows you to start transcription manually in the middle of a call or conference. However, note that this method requires someone to initiate the API request.

  • POST /v1.0/transcribes: See detail here.

$ curl -X POST --location 'https://api.voipbin.net/v1.0/transcribes?token=token' \
    --header 'Content-Type: application/json' \
    --data '{
        "reference_type": "call",
        "reference_id": "8c71bcb6-e7e7-4ed2-8aba-44bc2deda9a5",
        "language": "en-US",
        "direction": "both"
    }'
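The same request can be issued from code. Below is a minimal Python sketch using only the standard library; the endpoint and payload mirror the curl example above, and the token and reference_id values shown in the commented usage are placeholders you must supply.

```python
# Minimal sketch of POST /v1.0/transcribes using Python's standard library.
import json
import urllib.request

API_BASE = "https://api.voipbin.net/v1.0"

def build_transcribe_request(token, reference_type, reference_id,
                             language="en-US", direction="both"):
    """Build the POST request that starts a transcription."""
    payload = {
        "reference_type": reference_type,
        "reference_id": reference_id,
        "language": language,
        "direction": direction,
    }
    return urllib.request.Request(
        f"{API_BASE}/transcribes?token={token}",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# req = build_transcribe_request("<YOUR_AUTH_TOKEN>", "call",
#                                "8c71bcb6-e7e7-4ed2-8aba-44bc2deda9a5")
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp))
```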

Supported Languages

VoIPBIN supports transcription in over 70 languages and regional variants, enabling global communication scenarios. You can specify the desired language using the language option (e.g., “en-US”, “ko-KR”). Below is a non-exhaustive list of available language codes:

Language Code  Language
-------------  --------
af-ZA          Afrikaans (South Africa)
am-ET          Amharic (Ethiopia)
ar-AE          Arabic (U.A.E.)
ar-BH          Arabic (Bahrain)
ar-DZ          Arabic (Algeria)
ar-EG          Arabic (Egypt)
ar-IQ          Arabic (Iraq)
ar-IL          Arabic (Israel)
ar-JO          Arabic (Jordan)
ar-KW          Arabic (Kuwait)
ar-LB          Arabic (Lebanon)
ar-MA          Arabic (Morocco)
ar-OM          Arabic (Oman)
ar-PS          Arabic (Palestinian Territories)
ar-QA          Arabic (Qatar)
ar-SA          Arabic (Saudi Arabia)
ar-TN          Arabic (Tunisia)
ar-YE          Arabic (Yemen)
az-AZ          Azerbaijani (Azerbaijan)
bg-BG          Bulgarian (Bulgaria)
bn-BD          Bengali (Bangladesh)
bn-IN          Bengali (India)
bs-BA          Bosnian (Bosnia and Herzegovina)
ca-ES          Catalan (Spain)
cs-CZ          Czech (Czech Republic)
da-DK          Danish (Denmark)
de-AT          German (Austria)
de-CH          German (Switzerland)
de-DE          German (Germany)
el-GR          Greek (Greece)
en-AU          English (Australia)
en-CA          English (Canada)
en-GB          English (United Kingdom)
en-GH          English (Ghana)
en-HK          English (Hong Kong)
en-IE          English (Ireland)
en-IN          English (India)
en-KE          English (Kenya)
en-NG          English (Nigeria)
en-NZ          English (New Zealand)
en-PH          English (Philippines)
en-SG          English (Singapore)
en-TZ          English (Tanzania)
en-US          English (United States)
en-ZA          English (South Africa)
es-AR          Spanish (Argentina)
es-BO          Spanish (Bolivia)
es-CL          Spanish (Chile)
es-CO          Spanish (Colombia)
es-CR          Spanish (Costa Rica)
es-DO          Spanish (Dominican Republic)
es-EC          Spanish (Ecuador)
es-ES          Spanish (Spain)
es-GT          Spanish (Guatemala)
es-HN          Spanish (Honduras)
es-MX          Spanish (Mexico)
es-NI          Spanish (Nicaragua)
es-PA          Spanish (Panama)
es-PE          Spanish (Peru)
es-PR          Spanish (Puerto Rico)
es-PY          Spanish (Paraguay)
es-SV          Spanish (El Salvador)
es-US          Spanish (United States)
es-UY          Spanish (Uruguay)
es-VE          Spanish (Venezuela)
et-EE          Estonian (Estonia)
eu-ES          Basque (Spain)
fa-IR          Persian (Iran)
fi-FI          Finnish (Finland)
fil-PH         Filipino (Philippines)
fr-BE          French (Belgium)
fr-CA          French (Canada)
fr-CH          French (Switzerland)
fr-FR          French (France)
gl-ES          Galician (Spain)
gu-IN          Gujarati (India)
he-IL          Hebrew (Israel)
hi-IN          Hindi (India)
hr-HR          Croatian (Croatia)
hu-HU          Hungarian (Hungary)
hy-AM          Armenian (Armenia)
id-ID          Indonesian (Indonesia)
is-IS          Icelandic (Iceland)
it-CH          Italian (Switzerland)
it-IT          Italian (Italy)
ja-JP          Japanese (Japan)
jv-ID          Javanese (Indonesia)
ka-GE          Georgian (Georgia)
kk-KZ          Kazakh (Kazakhstan)
km-KH          Khmer (Cambodia)
kn-IN          Kannada (India)
ko-KR          Korean (South Korea)
lo-LA          Lao (Laos)
lt-LT          Lithuanian (Lithuania)
lv-LV          Latvian (Latvia)
mk-MK          Macedonian (North Macedonia)
ml-IN          Malayalam (India)
mn-MN          Mongolian (Mongolia)
mr-IN          Marathi (India)
ms-MY          Malay (Malaysia)
my-MM          Burmese (Myanmar)
ne-NP          Nepali (Nepal)
nl-BE          Dutch (Belgium)
nl-NL          Dutch (Netherlands)
no-NO          Norwegian (Norway)
pa-Guru-IN     Punjabi (Gurmukhi, India)
pl-PL          Polish (Poland)
pt-BR          Portuguese (Brazil)
pt-PT          Portuguese (Portugal)
ro-RO          Romanian (Romania)
ru-RU          Russian (Russia)
si-LK          Sinhala (Sri Lanka)
sk-SK          Slovak (Slovakia)
sl-SI          Slovenian (Slovenia)
sq-AL          Albanian (Albania)
sr-RS          Serbian (Serbia)
su-ID          Sundanese (Indonesia)
sv-SE          Swedish (Sweden)
sw-KE          Swahili (Kenya)
sw-TZ          Swahili (Tanzania)
ta-IN          Tamil (India)
ta-LK          Tamil (Sri Lanka)
ta-MY          Tamil (Malaysia)
ta-SG          Tamil (Singapore)
te-IN          Telugu (India)
th-TH          Thai (Thailand)
tr-TR          Turkish (Turkey)
uk-UA          Ukrainian (Ukraine)
ur-IN          Urdu (India)
ur-PK          Urdu (Pakistan)
uz-UZ          Uzbek (Uzbekistan)
vi-VN          Vietnamese (Vietnam)
zh-CN          Chinese (Mandarin, Simplified)
zh-HK          Chinese (Cantonese, Traditional)
zh-TW          Chinese (Mandarin, Traditional)
zu-ZA          Zulu (South Africa)

To ensure optimal transcription results, choose the correct code that best matches your speaker’s language and dialect.
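As a sketch, you can check the language option against the supported codes before starting transcription. SUPPORTED_LANGUAGES below holds only an illustrative subset of the table above, and validate_language is a hypothetical helper, not part of any VoIPBIN SDK.

```python
# Hedged sketch: validate a language code before starting transcription.
# Only a subset of the supported codes is listed here for brevity.
SUPPORTED_LANGUAGES = {
    "en-US", "en-GB", "ko-KR", "ja-JP", "es-ES", "fr-FR",
    "de-DE", "zh-CN", "pt-BR", "ru-RU",
}

def validate_language(code):
    """Return code unchanged if supported, else raise with close matches."""
    if code in SUPPORTED_LANGUAGES:
        return code
    base = code.split("-")[0]
    candidates = sorted(c for c in SUPPORTED_LANGUAGES
                        if c.startswith(base + "-"))
    raise ValueError(
        f"Unsupported language {code!r}; close matches: {candidates}")
```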

Tutorial

Start Transcription with Flow Action

The easiest way to enable transcription is by adding a transcribe_start action to your call flow. This automatically begins transcription when the call reaches that action.

Create Call with Automatic Transcription:

$ curl --location --request POST 'https://api.voipbin.net/v1.0/calls?token=<YOUR_AUTH_TOKEN>' \
    --header 'Content-Type: application/json' \
    --data-raw '{
        "source": {
            "type": "tel",
            "target": "+15551234567"
        },
        "destinations": [
            {
                "type": "tel",
                "target": "+15559876543"
            }
        ],
        "actions": [
            {
                "type": "answer"
            },
            {
                "type": "transcribe_start",
                "option": {
                    "language": "en-US"
                }
            },
            {
                "type": "talk",
                "option": {
                    "text": "This call is being transcribed for quality assurance",
                    "language": "en-US"
                }
            }
        ]
    }'

Transcription starts when the call reaches the transcribe_start action and continues until the call ends.

Start Transcription via API (Manual)

For existing calls or conferences, start transcription manually by making an API request.

Transcribe an Active Call:

$ curl --location --request POST 'https://api.voipbin.net/v1.0/transcribes?token=<YOUR_AUTH_TOKEN>' \
    --header 'Content-Type: application/json' \
    --data-raw '{
        "reference_type": "call",
        "reference_id": "a1b2c3d4-e5f6-7890-abcd-ef1234567890",
        "language": "en-US"
    }'

{
    "id": "8c5a9e2a-2a7f-4a6f-9f1d-debd72c279ce",
    "customer_id": "12345678-1234-1234-1234-123456789012",
    "reference_type": "call",
    "reference_id": "a1b2c3d4-e5f6-7890-abcd-ef1234567890",
    "language": "en-US",
    "status": "progressing",
    "tm_create": "2026-01-20 12:00:00.000000",
    "tm_update": "2026-01-20 12:00:00.000000",
    "tm_delete": "9999-01-01 00:00:00.000000"
}

Transcribe a Conference:

$ curl --location --request POST 'https://api.voipbin.net/v1.0/transcribes?token=<YOUR_AUTH_TOKEN>' \
    --header 'Content-Type: application/json' \
    --data-raw '{
        "reference_type": "conference",
        "reference_id": "c1d2e3f4-a5b6-7890-cdef-123456789abc",
        "language": "en-US"
    }'

Transcribe a Recording:

$ curl --location --request POST 'https://api.voipbin.net/v1.0/transcribes?token=<YOUR_AUTH_TOKEN>' \
    --header 'Content-Type: application/json' \
    --data-raw '{
        "reference_type": "recording",
        "reference_id": "r1s2t3u4-v5w6-x789-yz01-234567890def",
        "language": "en-US"
    }'

Get Transcription Results

Retrieve transcription data after the transcription completes or during real-time transcription.

Get Transcription by ID:

$ curl --location --request GET 'https://api.voipbin.net/v1.0/transcribes/8c5a9e2a-2a7f-4a6f-9f1d-debd72c279ce?token=<YOUR_AUTH_TOKEN>'

{
    "id": "8c5a9e2a-2a7f-4a6f-9f1d-debd72c279ce",
    "customer_id": "12345678-1234-1234-1234-123456789012",
    "reference_type": "call",
    "reference_id": "a1b2c3d4-e5f6-7890-abcd-ef1234567890",
    "language": "en-US",
    "status": "done",
    "tm_create": "2026-01-20 12:00:00.000000",
    "tm_update": "2026-01-20 12:05:00.000000",
    "tm_delete": "9999-01-01 00:00:00.000000"
}

Get Transcripts (Text Results):

$ curl --location --request GET 'https://api.voipbin.net/v1.0/transcripts?token=<YOUR_AUTH_TOKEN>&transcribe_id=8c5a9e2a-2a7f-4a6f-9f1d-debd72c279ce'

{
    "result": [
        {
            "id": "06af78f0-b063-48c0-b22d-d31a5af0aa88",
            "transcribe_id": "8c5a9e2a-2a7f-4a6f-9f1d-debd72c279ce",
            "direction": "in",
            "message": "Hi, good to see you. How are you today?",
            "tm_transcript": "0001-01-01 00:01:04.441160",
            "tm_create": "2024-04-01 07:22:07.229309"
        },
        {
            "id": "3c95ea10-a5b7-4a68-aebf-ed1903baf110",
            "transcribe_id": "8c5a9e2a-2a7f-4a6f-9f1d-debd72c279ce",
            "direction": "out",
            "message": "Welcome to the transcribe test. All your voice will be transcribed.",
            "tm_transcript": "0001-01-01 00:00:43.116830",
            "tm_create": "2024-04-01 07:17:27.208337"
        }
    ]
}
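Because each transcript carries a direction and a tm_transcript offset from the start of the transcribe, the "in" and "out" streams can be merged back into spoken order. A hedged Python sketch (the conversation helper is illustrative, not a VoIPBIN API; transcript dicts match the response shown above):

```python
# Merge "in" and "out" transcripts into one conversation, ordered by
# tm_transcript (an offset measured from "0001-01-01 00:00:00").
from datetime import datetime

def conversation(transcripts):
    """Return (offset_seconds, speaker, message) tuples in spoken order."""
    epoch = datetime(1, 1, 1)
    rows = []
    for t in transcripts:
        ts = datetime.strptime(t["tm_transcript"], "%Y-%m-%d %H:%M:%S.%f")
        offset = (ts - epoch).total_seconds()
        speaker = "customer" if t["direction"] == "in" else "voipbin"
        rows.append((offset, speaker, t["message"]))
    return sorted(rows)
```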

Understanding Transcription Direction

VoIPBIN distinguishes between incoming and outgoing audio:

Direction: “in” - Audio from the customer/caller to VoIPBIN

Direction: “out” - Audio from VoIPBIN to the customer/caller

Customer  -----"in"------>  VoIPBIN
         <----"out"-------

This helps identify who said what in the conversation:

  • "in": What the customer said

  • "out": What VoIPBIN played (TTS, recordings, or the other party in the call)

Real-Time Transcription with WebSocket

Subscribe to real-time transcription events via WebSocket to get transcripts as they’re generated during the call.

1. Connect to WebSocket:

wss://api.voipbin.net/v1.0/ws?token=<YOUR_AUTH_TOKEN>

2. Subscribe to Transcription Events:

{
    "type": "subscribe",
    "topics": [
        "customer_id:12345678-1234-1234-1234-123456789012:transcribe:8c5a9e2a-2a7f-4a6f-9f1d-debd72c279ce"
    ]
}

3. Receive Real-Time Transcripts:

{
    "event_type": "transcript_created",
    "timestamp": "2026-01-20T12:00:00.000000Z",
    "data": {
        "id": "9d59e7f0-7bdc-4c52-bb8c-bab718952050",
        "transcribe_id": "8c5a9e2a-2a7f-4a6f-9f1d-debd72c279ce",
        "direction": "out",
        "message": "Hello, this is a transcribe test call.",
        "tm_transcript": "0001-01-01 00:00:08.991840",
        "tm_create": "2024-04-04 07:15:59.233415"
    }
}

Python WebSocket Example:

# Requires the websocket-client package (pip install websocket-client)
import websocket
import json

def on_message(ws, message):
    data = json.loads(message)

    if data.get('event_type') == 'transcript_created':
        transcript = data['data']
        direction = transcript['direction']
        text = transcript['message']

        print(f"[{direction}] {text}")

        # Process transcription in real-time
        # - Display in UI
        # - Run sentiment analysis
        # - Detect keywords

def on_open(ws):
    # Subscribe to transcription events
    subscription = {
        "type": "subscribe",
        "topics": [
            "customer_id:12345678-1234-1234-1234-123456789012:transcribe:*"
        ]
    }
    ws.send(json.dumps(subscription))
    print("Subscribed to transcription events")

token = "<YOUR_AUTH_TOKEN>"
ws_url = f"wss://api.voipbin.net/v1.0/ws?token={token}"

ws = websocket.WebSocketApp(
    ws_url,
    on_open=on_open,
    on_message=on_message
)

ws.run_forever()

Receive Transcripts via Webhook

Configure webhooks to automatically receive transcription events.

1. Create Webhook:

$ curl --location --request POST 'https://api.voipbin.net/v1.0/webhooks?token=<YOUR_AUTH_TOKEN>' \
    --header 'Content-Type: application/json' \
    --data-raw '{
        "name": "Transcription Webhook",
        "uri": "https://your-server.com/webhook",
        "method": "POST",
        "event_types": [
            "transcribe.started",
            "transcribe.completed",
            "transcript.created"
        ]
    }'

2. Webhook Payload Example:

POST https://your-server.com/webhook

{
    "event_type": "transcript_created",
    "timestamp": "2026-01-20T12:00:00.000000Z",
    "data": {
        "id": "9d59e7f0-7bdc-4c52-bb8c-bab718952050",
        "transcribe_id": "8c5a9e2a-2a7f-4a6f-9f1d-debd72c279ce",
        "direction": "in",
        "message": "I need help with my account",
        "tm_transcript": "0001-01-01 00:00:15.500000",
        "tm_create": "2024-04-04 07:16:05.100000"
    }
}

3. Process Webhook in Your Server:

# Python Flask example (helpers such as store_transcript, analyze_sentiment,
# extract_keywords, and alert_supervisor are application-defined)
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route('/webhook', methods=['POST'])
def transcription_webhook():
    payload = request.get_json()
    event_type = payload.get('event_type')

    if event_type == 'transcript_created':
        transcript = payload['data']
        transcribe_id = transcript['transcribe_id']
        message = transcript['message']
        direction = transcript['direction']

        # Store transcript in database
        store_transcript(transcribe_id, message, direction)

        # Analyze content
        sentiment = analyze_sentiment(message)
        keywords = extract_keywords(message)

        # Trigger actions based on content
        if 'urgent' in message.lower():
            alert_supervisor(transcribe_id)

    return jsonify({'status': 'received'}), 200

Supported Languages

VoIPBIN supports transcription in multiple languages. See supported languages.

Common Languages:

  • en-US - English (United States)

  • en-GB - English (United Kingdom)

  • es-ES - Spanish (Spain)

  • fr-FR - French (France)

  • de-DE - German (Germany)

  • ja-JP - Japanese (Japan)

  • ko-KR - Korean (South Korea)

  • zh-CN - Chinese (Simplified)

Example with Different Language:

{
    "type": "transcribe_start",
    "option": {
        "language": "ja-JP"
    }
}

Common Use Cases

1. Customer Service Quality Assurance:

# Monitor customer service calls
def on_transcript(transcript):
    # Check for quality metrics
    if contains_greeting(transcript):
        mark_greeting_present()

    if contains_problem_resolution(transcript):
        mark_resolved()

    # Flag negative sentiment
    if analyze_sentiment(transcript) < 0.3:
        flag_for_review()

2. Compliance and Record-Keeping:

# Store all call transcripts for compliance
def store_for_compliance(call_id, transcribe_id):
    transcripts = get_transcripts(transcribe_id)

    # Create formatted record
    record = {
        'call_id': call_id,
        'date': datetime.now(),
        'full_transcript': format_transcript(transcripts),
        'participants': get_participants(call_id)
    }

    # Store in compliance database
    compliance_db.store(record)

3. Real-Time Agent Assistance:

# Help agents during calls
def on_real_time_transcript(transcript):
    # Detect customer questions
    if is_question(transcript['message']):
        # Suggest answers to agent
        answers = knowledge_base.search(transcript['message'])
        display_to_agent(answers)

    # Detect customer frustration
    if detect_frustration(transcript['message']):
        suggest_supervisor_escalation()

4. Automated Call Summarization:

# Generate call summaries
def summarize_call(transcribe_id):
    transcripts = get_all_transcripts(transcribe_id)

    # Combine all transcripts
    full_text = ' '.join([t['message'] for t in transcripts])

    # Generate summary using AI
    summary = ai_summarize(full_text)

    # Extract key points
    action_items = extract_action_items(full_text)
    topics = extract_topics(full_text)

    return {
        'summary': summary,
        'action_items': action_items,
        'topics': topics
    }

5. Keyword Detection and Alerting:

# Monitor for important keywords
ALERT_KEYWORDS = ['urgent', 'emergency', 'cancel', 'complaint', 'lawsuit']

def on_transcript(transcript):
    message = transcript['message'].lower()

    for keyword in ALERT_KEYWORDS:
        if keyword in message:
            # Send immediate alert
            send_alert(
                transcribe_id=transcript['transcribe_id'],
                keyword=keyword,
                context=message
            )

            # Escalate to supervisor
            escalate_call(transcript['transcribe_id'])

6. Multi-Language Customer Support:

# Auto-detect and transcribe in customer's language
def start_multilingual_transcription(call_id):
    # Detect language from first few seconds
    detected_language = detect_language(call_id)

    # Start transcription in detected language
    start_transcribe(
        reference_id=call_id,
        language=detected_language
    )

    # Optionally translate to agent's language
    if detected_language != 'en-US':
        enable_translation(call_id, target_lang='en-US')

Best Practices

1. Choose the Right Trigger Method:

  • Flow Action: Use when transcription is always needed for specific flows

  • Manual API: Use when transcription is conditional or triggered by user action

2. Handle Real-Time Events Efficiently:

  • Process transcripts asynchronously to avoid blocking

  • Buffer transcripts if processing takes time

  • Use queues for high-volume scenarios

3. Language Selection:

  • Auto-detect language when possible

  • Set the correct language for better accuracy

  • Test with actual customer accents and dialects

4. Data Management:

  • Store transcripts separately from call records

  • Implement retention policies (GDPR, compliance)

  • Encrypt sensitive transcriptions

5. Error Handling:

  • Handle cases where transcription fails

  • Add retry logic for temporary failures

  • Log failures for debugging

6. Testing:

  • Test with various audio qualities

  • Verify accuracy with different accents

  • Test real-time latency
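The error-handling practice above can be sketched as a small retry helper with exponential backoff. Here start_transcribe is a placeholder for your own API call, and which exceptions count as transient is application-specific.

```python
# Hedged sketch: retry a failed transcription start with exponential backoff.
import time

def start_with_retry(start_transcribe, retries=3, base_delay=1.0):
    """Call start_transcribe, retrying transient failures with backoff."""
    for attempt in range(retries):
        try:
            return start_transcribe()
        except ConnectionError:              # treat as transient; adjust to taste
            if attempt == retries - 1:
                raise                        # out of retries: surface the error
            time.sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...
```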

Transcription Lifecycle

1. Start Transcription:

POST /v1.0/transcribes
→ Returns transcribe_id

2. Active Transcription:

Status: "progressing"
→ Transcripts being generated in real-time

3. Receive Transcripts:

Via WebSocket: transcript_created events
Via Webhook: POST to your endpoint
Via API: GET /v1.0/transcripts?transcribe_id=...

4. Completion:

Status: "done"
→ All transcripts available via API
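The lifecycle above can be driven by polling the transcribe status until it reaches its final value. This sketch assumes a get_transcribe helper of your own that wraps GET /v1.0/transcribes/{id} and returns the JSON object documented below, with the documented status values "progressing" and "done".

```python
# Hedged sketch: poll a transcribe until its status becomes "done".
import time

def wait_until_done(get_transcribe, transcribe_id,
                    poll_interval=2.0, timeout=600):
    """Poll get_transcribe(transcribe_id) until status is "done" or timeout."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        transcribe = get_transcribe(transcribe_id)
        if transcribe["status"] == "done":
            return transcribe
        time.sleep(poll_interval)
    raise TimeoutError(f"transcribe {transcribe_id} not done after {timeout}s")
```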

Troubleshooting

Common Issues:

No transcripts generated:

  • Verify the call has audio

  • Check the language setting is correct

  • Ensure transcription started successfully

Poor transcription accuracy:

  • Use the correct language code

  • Check audio quality

  • Verify clear speech (no background noise)

Missing real-time events:

  • Verify the WebSocket subscription is active

  • Check the topic pattern matches the transcribe_id

  • Ensure the network connection is stable

Delayed transcripts:

  • Real-time transcription has a ~2-5 second delay (normal)

  • Check network latency

  • Verify your server can handle the webhook volume

For more information about transcription features and configuration, see Transcribe Overview.

Transcribe

Transcribe

{
    "id": "<string>",
    "customer_id": "<string>",
    "reference_type": "<string>",
    "reference_id": "<string>",
    "status": "<string>",
    "language": "<string>",
    "tm_create": "<string>",
    "tm_update": "<string>",
    "tm_delete": "<string>"
}
  • id: Transcribe’s ID.

  • customer_id: Customer’s ID.

  • reference_type: Reference type. See detail here.

  • reference_id: Reference ID.

  • status: Transcribe’s status. See detail here.

  • language: Transcribe’s language. BCP47 format.

example

{
    "id": "bbf08426-3979-41bc-a544-5fc92c237848",
    "customer_id": "5e4a0680-804e-11ec-8477-2fea5968d85b",
    "reference_type": "call",
    "reference_id": "12f8f1c9-a6c3-4f81-93db-ae445dcf188f",
    "status": "done",
    "language": "en-US",
    "tm_create": "2024-04-01 07:17:04.091019",
    "tm_update": "2024-04-01 13:25:32.428602",
    "tm_delete": "9999-01-01 00:00:00.000000"
}

reference_type

Reference’s type

Type         Description
call         Reference type is call.
recording    Reference type is recording.
confbridge   Reference type is confbridge.

status

Transcribe’s status

Type          Description
progressing   Transcribe is in progress.
done          Transcribe is done.

Transcription

Transcription

{
    "id": "<string>",
    "transcribe_id": "<string>",
    "direction": "<string>",
    "message": "<string>",
    "tm_transcript": "<string>",
    "tm_create": "<string>"
}
  • id: Transcription’s id.

  • transcribe_id: Transcribe’s id.

  • direction: Transcription’s direction. See detail here.

  • message: Transcription’s message.

  • tm_transcript: Transcription’s timestamp, expressed as an offset from “0001-01-01 00:00:00”, which marks the beginning of the transcribe.

example

{
    "id": "06af78f0-b063-48c0-b22d-d31a5af0aa88",
    "transcribe_id": "bbf08426-3979-41bc-a544-5fc92c237848",
    "direction": "in",
    "message": "Hi, good to see you. How are you today.",
    "tm_transcript": "0001-01-01 00:05:04.441160",
    "tm_create": "2024-04-01 07:22:07.229309"
}

direction

Transcription’s direction

Type   Description
in     Incoming voice toward VoIPBIN.
out    Outgoing voice from VoIPBIN.