POST /v1/recordings/{id}/finalize
Finalize Streaming Recording

Overview

Finalizes a streaming recording after all chunks have been uploaded. This endpoint:
  1. Marks the recording as complete
  2. Queues a background job to stitch audio chunks into a single file
  3. Triggers transcription once stitching is complete
  4. Returns the recording status
Streaming Upload Flow:
  1. Device creates recording with streaming: true → Returns chunk upload URL template
  2. Device uploads chunks while recording (chunk_1.opus, chunk_2.opus, …)
  3. Device calls /finalize when recording ends → Backend stitches chunks
  4. Transcription begins automatically

Authentication

Requires a valid device token (dtok_*) or API key with recordings:write scope.
curl -X POST https://api.bota.dev/v1/recordings/rec_abc123xyz/finalize \
  -H "Authorization: Bearer dtok_live_..." \
  -H "Content-Type: application/json" \
  -d '{
    "total_chunks": 12,
    "final_duration_ms": 3595420,
    "checksum_md5": "5d41402abc4b2a76b9719d911017c592"
  }'

Path Parameters

id
string
required
Recording ID from the initial streaming recording creation (e.g., rec_abc123xyz)

Request Body

total_chunks
integer
required
Total number of chunks uploaded (must match actual chunk count)
final_duration_ms
integer
required
Actual recording duration in milliseconds
checksum_md5
string
MD5 checksum of the complete audio (optional, for integrity verification)

Response

id
string
Recording ID
status
string
Recording status: uploaded, stitching, or failed
transcription_status
string
Transcription status: queued, processing, completed, or failed
chunks_received
integer
Number of chunks received by backend
stitching_job_id
string
Background job ID for chunk stitching (for debugging)
{
  "id": "rec_abc123xyz",
  "status": "uploaded",
  "transcription_status": "queued",
  "chunks_received": 12,
  "stitching_job_id": "job_stitcher_abc123"
}
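
For reference, the request and response payloads above can be expressed as typed dictionaries. This is an illustrative sketch (not an official SDK type); the field names and allowed values follow the tables above.

# Typed sketch of the finalize payloads. NotRequired needs Python 3.11+.
from typing import TypedDict, NotRequired

class FinalizeRequest(TypedDict):
    total_chunks: int               # must match the number of chunks actually uploaded
    final_duration_ms: int          # actual recording duration in milliseconds
    checksum_md5: NotRequired[str]  # optional MD5 checksum for integrity verification

class FinalizeResponse(TypedDict):
    id: str                    # recording ID, e.g. "rec_abc123xyz"
    status: str                # "uploaded", "stitching", or "failed"
    transcription_status: str  # "queued", "processing", "completed", or "failed"
    chunks_received: int       # number of chunks received by the backend
    stitching_job_id: str      # background stitching job ID (for debugging)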

Error Responses

If the recording ID does not refer to a streaming recording, the API returns a recording_not_found error:
{
  "error": {
    "type": "recording_not_found",
    "message": "Streaming recording not found"
  }
}

Background Processing

After calling /finalize, chunk stitching and transcription run asynchronously in the background.
Typical processing times:
  • Chunk stitching: 5-30 seconds (depending on chunk count)
  • Transcription: 1-5 minutes (depending on duration)

Webhooks

You’ll receive webhooks for:
  1. recording.uploaded - Fired immediately after finalization
  2. transcription.queued - Transcription job queued
  3. transcription.completed - Transcription finished successfully
  4. transcription.failed - Transcription failed

Streaming Upload Example (Full Flow)

Step 1: Create Streaming Recording

POST /v1/recordings
{
  "device_id": "dev_xyz789",
  "started_at": "2025-01-13T10:00:00Z",
  "estimated_duration_ms": 3600000,
  "codec": "opus",
  "streaming": true,
  "chunk_duration_ms": 300000
}
Response:
{
  "id": "rec_abc123",
  "status": "streaming",
  "chunk_upload_url_template": "https://s3.../rec_abc123/chunk_{part}.opus?...",
  "max_chunks": 100,
  "expires_at": "2025-01-13T14:00:00Z"
}
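
As a rough sketch, Step 1 could look like this in Python using the requests library. The base URL and token placeholder follow the curl example in the Authentication section; adjust them for your environment.

# Step 1 sketch: create a streaming recording and capture the chunk URL template.
import requests

API_BASE = "https://api.bota.dev"
TOKEN = "dtok_live_..."  # device token or API key with recordings:write scope

resp = requests.post(
    f"{API_BASE}/v1/recordings",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={
        "device_id": "dev_xyz789",
        "started_at": "2025-01-13T10:00:00Z",
        "estimated_duration_ms": 3_600_000,  # 60 minutes
        "codec": "opus",
        "streaming": True,
        "chunk_duration_ms": 300_000,        # 5-minute chunks
    },
)
resp.raise_for_status()
recording = resp.json()

recording_id = recording["id"]                               # e.g. "rec_abc123"
chunk_url_template = recording["chunk_upload_url_template"]  # contains "{part}"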

Step 2: Upload Chunks (Device)

# Upload chunk 1 (0-5 minutes)
PUT https://s3.../rec_abc123/chunk_1.opus?...
[binary audio data]

# Upload chunk 2 (5-10 minutes)
PUT https://s3.../rec_abc123/chunk_2.opus?...
[binary audio data]

# ... continue uploading chunks ...

# Upload chunk 12 (55-60 minutes)
PUT https://s3.../rec_abc123/chunk_12.opus?...
[binary audio data]
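
A sketch of Step 2 in Python follows. It assumes the chunks already exist as local files (chunk_1.opus, chunk_2.opus, ...) and that the "{part}" placeholder in the upload URL template is 1-based, matching the numbering in the example above.

# Step 2 sketch: PUT each chunk to its presigned URL.
import requests

def upload_chunks(chunk_url_template: str, chunk_paths: list[str]) -> None:
    for part, path in enumerate(chunk_paths, start=1):
        url = chunk_url_template.replace("{part}", str(part))
        with open(path, "rb") as f:
            resp = requests.put(url, data=f)  # streams the chunk body
        resp.raise_for_status()

# Example usage for a 12-chunk, hour-long recording:
# upload_chunks(chunk_url_template, [f"chunk_{i}.opus" for i in range(1, 13)])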

Step 3: Finalize Recording

POST /v1/recordings/rec_abc123/finalize
{
  "total_chunks": 12,
  "final_duration_ms": 3595420,
  "checksum_md5": "5d41402abc4b2a76b9719d911017c592"
}
Response:
{
  "id": "rec_abc123",
  "status": "uploaded",
  "transcription_status": "queued",
  "chunks_received": 12
}
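
A sketch of Step 3 in Python is below. It assumes checksum_md5 is computed over the chunk bytes concatenated in upload order; confirm what the checksum should cover for your integration.

# Step 3 sketch: compute an MD5 checksum over the chunks and finalize.
import hashlib
import requests

API_BASE = "https://api.bota.dev"
TOKEN = "dtok_live_..."

def finalize_recording(recording_id: str, chunk_paths: list[str], duration_ms: int) -> dict:
    md5 = hashlib.md5()
    for path in chunk_paths:
        with open(path, "rb") as f:
            for block in iter(lambda: f.read(1024 * 1024), b""):
                md5.update(block)

    resp = requests.post(
        f"{API_BASE}/v1/recordings/{recording_id}/finalize",
        headers={"Authorization": f"Bearer {TOKEN}"},
        json={
            "total_chunks": len(chunk_paths),
            "final_duration_ms": duration_ms,
            "checksum_md5": md5.hexdigest(),
        },
    )
    resp.raise_for_status()
    return resp.json()  # {"id": ..., "status": "uploaded", "transcription_status": "queued", ...}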

Step 4: Wait for Webhooks

// Webhook 1: recording.uploaded
{
  "type": "recording.uploaded",
  "data": {
    "id": "rec_abc123",
    "uploaded_at": "2025-01-13T11:00:15Z"
  }
}

// Webhook 2: transcription.completed
{
  "type": "transcription.completed",
  "data": {
    "id": "txn_def456",
    "recording_id": "rec_abc123",
    "status": "completed"
  }
}
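
A minimal webhook receiver for Step 4 might look like the Flask sketch below. The endpoint path is arbitrary, and webhook signature verification is omitted because the signing scheme is not covered on this page; verify signatures in production.

# Step 4 sketch: receive recording and transcription webhooks.
from flask import Flask, request

app = Flask(__name__)

@app.route("/webhooks/recordings", methods=["POST"])
def handle_webhook():
    event = request.get_json(force=True)
    event_type = event.get("type")
    data = event.get("data", {})

    if event_type == "recording.uploaded":
        print(f"Recording {data['id']} uploaded at {data['uploaded_at']}")
    elif event_type == "transcription.completed":
        print(f"Transcription {data['id']} ready for recording {data['recording_id']}")
    elif event_type == "transcription.failed":
        print(f"Transcription failed for recording {data.get('recording_id')}")

    return "", 204  # acknowledge quickly; do heavy work asynchronously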

Best Practices

Verify Chunk Count: Ensure total_chunks matches the number of chunks you actually uploaded. The backend will verify chunk existence before stitching.
Include Checksum: Provide an MD5 checksum for integrity verification, especially for critical recordings (e.g., medical, legal).
Don’t Finalize Early: Only call /finalize after all chunks are uploaded. Finalizing with missing chunks will result in an error.
Idempotency: Calling /finalize multiple times with the same parameters is safe (idempotent). Duplicate requests will return the same response without re-processing.
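Because /finalize is idempotent, it is safe to retry the same request after a timeout or transient network error. A simple retry loop (the backoff values are arbitrary):

# Sketch: retry finalize on transient failures, relying on idempotency.
import time
import requests

def finalize_with_retries(url: str, headers: dict, body: dict, attempts: int = 3) -> dict:
    for attempt in range(1, attempts + 1):
        try:
            resp = requests.post(url, headers=headers, json=body, timeout=30)
            resp.raise_for_status()
            return resp.json()
        except (requests.Timeout, requests.ConnectionError):
            if attempt == attempts:
                raise
            time.sleep(2 ** attempt)  # exponential backoff: 2s, 4s, ...
    raise RuntimeError("unreachable")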

Troubleshooting

Problem: “Chunk count mismatch” error
Solution: Check that all chunks were uploaded successfully. Use the error response to identify missing chunks.
{
  "error": {
    "type": "chunk_mismatch",
    "missing_chunks": [7, 11]  // Re-upload these chunks
  }
}
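A sketch of recovering from this error is shown below: re-upload the chunks listed in missing_chunks, then call /finalize again. The error shape follows the example above, and the helper is hypothetical (it reuses the presigned-URL approach from the Step 2 sketch).

# Sketch: finalize, and on chunk_mismatch re-upload the missing chunks and retry.
import requests

def finalize_or_repair(finalize_url: str, headers: dict, body: dict,
                       chunk_url_template: str, chunk_paths: list[str]) -> dict:
    resp = requests.post(finalize_url, headers=headers, json=body)
    if resp.status_code < 400:
        return resp.json()

    error = resp.json().get("error", {})
    if error.get("type") != "chunk_mismatch":
        resp.raise_for_status()

    for part in error.get("missing_chunks", []):
        url = chunk_url_template.replace("{part}", str(part))
        with open(chunk_paths[part - 1], "rb") as f:  # chunk numbering is 1-based
            requests.put(url, data=f).raise_for_status()

    retry = requests.post(finalize_url, headers=headers, json=body)
    retry.raise_for_status()
    return retry.json()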
Problem: Stitching takes too long
Solution: This is normal for recordings with many chunks (>50). Check stitching job status via webhooks or polling the recording endpoint.
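
If you poll instead of relying on webhooks, a loop like the following can work. It assumes a GET /v1/recordings/{id} endpoint that returns the same status fields as the finalize response; the exact read endpoint is not documented on this page.

# Sketch: poll for stitching/transcription progress (assumed GET endpoint).
import time
import requests

API_BASE = "https://api.bota.dev"
TOKEN = "dtok_live_..."

def wait_for_transcription(recording_id: str, interval_s: int = 15, timeout_s: int = 600) -> dict:
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        resp = requests.get(
            f"{API_BASE}/v1/recordings/{recording_id}",
            headers={"Authorization": f"Bearer {TOKEN}"},
        )
        resp.raise_for_status()
        rec = resp.json()
        if rec.get("transcription_status") in ("completed", "failed") or rec.get("status") == "failed":
            return rec
        time.sleep(interval_s)
    raise TimeoutError(f"Recording {recording_id} did not finish within {timeout_s}s")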