Building generative models to modify and synthesize humans in video. Our synchronizer is a cutting-edge lip-syncing model that matches video to any audio in any language, removing language barriers. Transforming how we consume media and learn. sync. labs's Python SDK generated by Konfig (https://konfigthis.com/).


Sync. labs

The Synchronize API lets you lip-sync a video to any audio in any language.

Table of Contents

- Requirements
- Installation
- Getting Started
- Async
- Raw HTTP Response
- Reference
- Author

Requirements

Python >=3.7

Installation
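
No install command survives in this page, so the package name below is an assumption: Konfig-generated SDKs are typically published to PyPI under a dashed version of the import name, which for sync_labs_python_sdk would make installation:

```shell
pip install sync-labs-python-sdk
```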

Getting Started

from pprint import pprint
from sync_labs_python_sdk import SyncLabs, ApiException

synclabs = SyncLabs(
    api_key="YOUR_API_KEY",
)

try:
    animate_response = synclabs.animate.animate(
        video_url="string_example",
        transcript="string_example",
        voice_id="string_example",
        model="sync-1.5.0",
        max_credits=3.14,
        webhook_url="string_example",
    )
    print(animate_response)
except ApiException as e:
    print("Exception when calling AnimateApi.animate: %s\n" % e)
    pprint(e.body)
    pprint(e.headers)
    pprint(e.status)
    pprint(e.reason)
    pprint(e.round_trip_time)
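
Every POST endpoint starts a job that completes asynchronously, and the matching get_* methods are meant to be polled until the status is 'completed' (see the Reference section below). A minimal polling helper, as a sketch: the "status" field comes from the Raw HTTP Response example, "failed" is an assumed failure value, and fetch_status stands in for any bound call such as lambda: synclabs.animate.get_animation(id=job_id):

```python
import time
from typing import Callable

def poll_until_done(
    fetch_status: Callable[[], dict],
    interval_s: float = 5.0,
    timeout_s: float = 600.0,
) -> dict:
    """Poll fetch_status until the job reaches a terminal status.

    fetch_status should return the job object as a dict containing a
    "status" key (e.g. the body of a get_animation / get_lipsync call).
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        job = fetch_status()
        # "completed" is documented; "failed" is an assumed failure status.
        if job.get("status") in ("completed", "failed"):
            return job
        time.sleep(interval_s)
    raise TimeoutError("job did not reach a terminal status in time")
```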

Async

Async support is available by prepending a to any method name; for example, animate becomes aanimate.

import asyncio
from pprint import pprint
from sync_labs_python_sdk import SyncLabs, ApiException

synclabs = SyncLabs(
    api_key="YOUR_API_KEY",
)


async def main():
    try:
        animate_response = await synclabs.animate.aanimate(
            video_url="string_example",
            transcript="string_example",
            voice_id="string_example",
            model="sync-1.5.0",
            max_credits=3.14,
            webhook_url="string_example",
        )
        print(animate_response)
    except ApiException as e:
        print("Exception when calling AnimateApi.animate: %s\n" % e)
        pprint(e.body)
        pprint(e.headers)
        pprint(e.status)
        pprint(e.reason)
        pprint(e.round_trip_time)


asyncio.run(main())
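
Because each a-prefixed method is a coroutine, several jobs can be submitted concurrently with asyncio.gather. A sketch of the pattern; submit_one is a placeholder for any bound async call such as the aanimate method shown above:

```python
import asyncio
from typing import Awaitable, Callable, Iterable, List

async def submit_all(
    submit_one: Callable[[str], Awaitable[dict]],
    video_urls: Iterable[str],
) -> List[dict]:
    """Submit one job per video URL and gather the responses concurrently."""
    return await asyncio.gather(*(submit_one(url) for url in video_urls))
```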

Raw HTTP Response

To access raw HTTP response values, use the .raw namespace.

from pprint import pprint
from sync_labs_python_sdk import SyncLabs, ApiException

synclabs = SyncLabs(
    api_key="YOUR_API_KEY",
)

try:
    animate_response = synclabs.animate.raw.animate(
        video_url="string_example",
        transcript="string_example",
        voice_id="string_example",
        model="sync-1.5.0",
        max_credits=3.14,
        webhook_url="string_example",
    )
    pprint(animate_response.body)
    pprint(animate_response.body["id"])
    pprint(animate_response.body["transcript_url"])
    pprint(animate_response.body["status"])
    pprint(animate_response.body["video_url"])
    pprint(animate_response.body["audio_url"])
    pprint(animate_response.headers)
    pprint(animate_response.status)
    pprint(animate_response.round_trip_time)
except ApiException as e:
    print("Exception when calling AnimateApi.animate: %s\n" % e)
    pprint(e.body)
    pprint(e.headers)
    pprint(e.status)
    pprint(e.reason)
    pprint(e.round_trip_time)

Reference

synclabs.animate.animate

Generates audio from the given text and voice, then synchronizes it with the given video.

🛠️ Usage

animate_response = synclabs.animate.animate(
    video_url="string_example",
    transcript="string_example",
    voice_id="string_example",
    model="sync-1.5.0",
    max_credits=3.14,
    webhook_url="string_example",
)

⚙️ Parameters

video_url: str

A url to the video file to be synchronized -- must be publicly accessible

transcript: str

A string of text to be spoken by the AI

voice_id: str

The voice to use for audio generation

model: str

The model to use for video generation

max_credits: Union[int, float]

Maximum number of credits to use for audio generation. If the job exceeds this value, it will be aborted

webhook_url: str

A url to send a notification to upon completion of audio generation

⚙️ Request Body

AnimateDto Required data for animating video. Includes video URL, transcript, voice, and optional parameters for webhook integration and credit limits.

🔄 Return

AnimateInitial

🌐 Endpoint

/animate post

🔙 Back to Table of Contents


synclabs.animate.animate_cost

🛠️ Usage

synclabs.animate.animate_cost(
    transcript="string_example",
    transcript_url="string_example",
)

⚙️ Parameters

transcript: str

A string of text to be spoken by the AI

transcript_url: str

A url pointing to a file of text to be spoken by the AI

🌐 Endpoint

/animate/cost get

🔙 Back to Table of Contents


synclabs.animate.get_animation

Use the ID from the POST request to check status. Keep checking until status is 'completed' and a download URL is provided.

🛠️ Usage

get_animation_response = synclabs.animate.get_animation(
    id="id_example",
)

⚙️ Parameters

id: str

🔄 Return

AnimateExtended

🌐 Endpoint

/animate/{id} get

🔙 Back to Table of Contents


synclabs.lipsync.get_lipsync

Use the video ID from the POST request to check video status. Keep checking until status is 'completed' and a download URL is provided.

🛠️ Usage

get_lipsync_response = synclabs.lipsync.get_lipsync(
    id="id_example",
)

⚙️ Parameters

id: str

🔄 Return

LipSyncExtended

🌐 Endpoint

/lipsync/{id} get

🔙 Back to Table of Contents


synclabs.lipsync.lip_sync

Submit a set of URLs to publicly hosted audio and video files or to YouTube videos. Our synchronizer will sync the video's lip movements to match the audio and return the synced video.

🛠️ Usage

lip_sync_response = synclabs.lipsync.lip_sync(
    audio_url="string_example",
    video_url="string_example",
    synergize=True,
    max_credits=3.14,
    webhook_url="string_example",
    model="sync-1.5.0",
)

⚙️ Parameters

audio_url: str

A url to the audio file to be synchronized -- must be publicly accessible

video_url: str

A url to the video file to be synchronized -- must be publicly accessible

synergize: bool

A flag to enable / disable post-processing

max_credits: Union[int, float]

Maximum number of credits to use for video generation. If the job exceeds this value, it will be aborted

webhook_url: str

A url to send a notification to upon completion of video generation

model: str

The model to use for video generation

⚙️ Request Body

LipsyncDto The audio + video data to be synced. Set synergize = false to skip our synergizer post-processor for a 10x speedup, but with a degradation in output quality.

🔄 Return

LipSyncInitial

🌐 Endpoint

/lipsync post

🔙 Back to Table of Contents


synclabs.lipsync.lipsync_cost

🛠️ Usage

synclabs.lipsync.lipsync_cost(
    audio_url="audioUrl_example",
    video_url="videoUrl_example",
)

⚙️ Parameters

audio_url: str

A url to the audio file to be synchronized -- must be publicly accessible

video_url: str

A url to the video file to be synchronized -- must be publicly accessible

🌐 Endpoint

/lipsync/cost get

🔙 Back to Table of Contents
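
The cost endpoints pair naturally with max_credits: estimate first, skip jobs over budget, and pass the same budget on submission so the server aborts if the estimate was low. A sketch of that gate; the two callables stand in for lipsync_cost and lip_sync, and since the shape of the cost response is not documented here, estimate_cost is assumed to return a plain credit count:

```python
from typing import Callable, Optional

def submit_if_affordable(
    estimate_cost: Callable[[], float],
    submit: Callable[[float], dict],
    max_credits: float,
) -> Optional[dict]:
    """Submit a job only when its estimated credit cost fits the budget.

    estimate_cost stands in for a call like synclabs.lipsync.lipsync_cost(...);
    submit stands in for synclabs.lipsync.lip_sync(...) with the budget
    passed through as its max_credits argument.
    """
    cost = estimate_cost()
    if cost > max_credits:
        return None  # over budget: do not submit
    return submit(max_credits)
```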


synclabs.speak.get_speech

Use the video ID from the POST request to check video status. Keep checking until status is 'completed' and a download URL is provided.

🛠️ Usage

get_speech_response = synclabs.speak.get_speech(
    id="id_example",
)

⚙️ Parameters

id: str

🔄 Return

SpeakExtended

🌐 Endpoint

/speak/{id} get

🔙 Back to Table of Contents


synclabs.speak.speak

🛠️ Usage

speak_response = synclabs.speak.speak(
    transcript="string_example",
    voice_id="string_example",
    max_credits=3.14,
    webhook_url="string_example",
)

⚙️ Parameters

transcript: str

A string of text to be spoken by the AI

voice_id: str

The voice to use for audio generation

max_credits: Union[int, float]

Maximum number of credits to use for audio generation. If the job exceeds this value, it will be aborted

webhook_url: str

A url to send a notification to upon completion of audio generation

⚙️ Request Body

SpeakDto

🔄 Return

SpeakInitial

🌐 Endpoint

/speak post

🔙 Back to Table of Contents


synclabs.speak.speak_cost

🛠️ Usage

synclabs.speak.speak_cost(
    transcript="string_example",
    transcript_url="string_example",
)

⚙️ Parameters

transcript: str

A string of text to be spoken by the AI

transcript_url: str

A url pointing to a file of text to be spoken by the AI

🌐 Endpoint

/speak/cost get

🔙 Back to Table of Contents


synclabs.translate.get_translation

Use the video ID from the POST request to check video status. Keep checking until status is 'completed' and a download URL is provided.

🛠️ Usage

get_translation_response = synclabs.translate.get_translation(
    id="id_example",
)

⚙️ Parameters

id: str

🔄 Return

TranslationJobExtended

🌐 Endpoint

/translate/{id} get

🔙 Back to Table of Contents


synclabs.translate.translate

Translates and synchronizes the given video to the specified target language.

🛠️ Usage

translate_response = synclabs.translate.translate(
    video_url="string_example",
    target_language="string_example",
    max_credits=3.14,
    webhook_url="string_example",
    model="sync-1.5.0",
)

⚙️ Parameters

video_url: str

A url to the video file to be translated and synchronized -- must be publicly accessible

target_language: str

Target language to translate the video to

max_credits: Union[int, float]

Maximum number of credits to use for video generation. If the job exceeds this value, it will be aborted

webhook_url: str

A url to send a notification to upon completion of video generation

model: str

The model to use for video generation.

⚙️ Request Body

TranslateDto Required data for translating and synchronizing video. Includes video URL, target language, and optional parameters for model selection, webhook integration, and credit limits.

🔄 Return

TranslationJobInitial

🌐 Endpoint

/translate post

🔙 Back to Table of Contents


synclabs.translate.translation_cost

🛠️ Usage

synclabs.translate.translation_cost(
    video_url="videoUrl_example",
)

⚙️ Parameters

video_url: str

A url to the video file to be synchronized -- must be publicly accessible

🌐 Endpoint

/translate/cost get

🔙 Back to Table of Contents


synclabs.video.cost

🛠️ Usage

synclabs.video.cost(
    audio_url="audioUrl_example",
    video_url="videoUrl_example",
)

⚙️ Parameters

audio_url: str

A url to the audio file to be synchronized -- must be publicly accessible

video_url: str

A url to the video file to be synchronized -- must be publicly accessible

🌐 Endpoint

/video/cost get

🔙 Back to Table of Contents


synclabs.video.get_lip_sync_job

[Deprecated] Use the video ID from the POST request to check video status. Keep checking until status is 'completed' and a download URL is provided.

🛠️ Usage

get_lip_sync_job_response = synclabs.video.get_lip_sync_job(
    id="id_example",
)

⚙️ Parameters

id: str

🔄 Return

VideoExtended

🌐 Endpoint

/video/{id} get

🔙 Back to Table of Contents


synclabs.video.lip_sync

[Deprecated] Submit a set of URLs to publicly hosted audio and video files or to YouTube videos. Our synchronizer will sync the video's lip movements to match the audio and return the synced video.

🛠️ Usage

lip_sync_response = synclabs.video.lip_sync(
    audio_url="string_example",
    video_url="string_example",
    synergize=True,
    max_credits=3.14,
    webhook_url="string_example",
    model="sync-1.5.0",
)

⚙️ Parameters

audio_url: str

A url to the audio file to be synchronized -- must be publicly accessible

video_url: str

A url to the video file to be synchronized -- must be publicly accessible

synergize: bool

A flag to enable / disable post-processing

max_credits: Union[int, float]

Maximum number of credits to use for video generation. If the job exceeds this value, it will be aborted

webhook_url: str

A url to send a notification to upon completion of video generation

model: str

The model to use for video generation

⚙️ Request Body

CreateVideoDto The audio + video data to be synced. Set synergize = false to skip our synergizer post-processor for a 10x speedup, but with a degradation in output quality.

🔄 Return

VideoInitial

🌐 Endpoint

/video post

🔙 Back to Table of Contents


synclabs.voices.voices

Get all voices

🛠️ Usage

synclabs.voices.voices()

🌐 Endpoint

/voices get

🔙 Back to Table of Contents


Author

This Python package is automatically generated by Konfig.
