
Synthesize speech in documents

🤖/text/speak synthesizes speech in documents.

You can use the audio that we return in your application, or pass it down to other Robots, for example to add a voice track to a video.

Another common use case is making your product accessible to people with a reading disability.

Usage example

Synthesize speech from uploaded text documents, using a female voice in American English:

{
  "steps": {
    "synthesized": {
      "robot": "/text/speak",
      "use": ":original",
      "provider": "aws",
      "voice": "female-1",
      "target_language": "en-US"
    }
  }
}
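
As mentioned above, you can also pass the synthesized audio on to other Robots. The sketch below chains it into 🤖/video/merge to add a voice track to a video; the "encoded" video Step is hypothetical, and the "as" labels and "bundle_steps" usage follow that Robot's conventions, so please consult its documentation for the exact parameters:

{
  "steps": {
    "synthesized": {
      "robot": "/text/speak",
      "use": ":original",
      "provider": "aws",
      "voice": "female-1",
      "target_language": "en-US"
    },
    "voiced": {
      "robot": "/video/merge",
      "use": {
        "steps": [
          { "name": "encoded", "as": "video" },
          { "name": "synthesized", "as": "audio" }
        ],
        "bundle_steps": true
      }
    }
  }
}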

Parameters

  • output_meta

    Record<string, boolean> | boolean

    Allows you to specify a set of metadata that is more CPU-intensive to calculate, and is therefore disabled by default to keep your Assemblies processing fast.

    For images, you can add "has_transparency": true in this object to extract whether the image contains transparent parts, and "dominant_colors": true to extract an array of hexadecimal color codes from the image.

    For videos, you can add "colorspace": true to extract the colorspace of the output video.

    For audio, you can add "mean_volume": true to get a single value representing the average volume of the audio file.

    You can also set this to false to skip metadata extraction and speed up transcoding.
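
    For the audio that this Robot produces, a minimal sketch of enabling the "mean_volume" option described above could look like this:

      {
        "steps": {
          "synthesized": {
            "robot": "/text/speak",
            "use": ":original",
            "provider": "aws",
            "output_meta": { "mean_volume": true }
          }
        }
      }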

  • result

    boolean (default: false)

    Whether the results of this Step should be present in the Assembly Status JSON.

  • queue

    "batch"

    Setting the queue to "batch" manually downgrades the priority of jobs for this Step, so that they do not consume Priority job slots for jobs that do not need zero queue waiting times.

  • force_accept

    boolean (default: false)

    Force a Robot to accept a file type it would have ignored.

    By default, Robots ignore files they are not familiar with. 🤖/video/encode, for example, will happily ignore input images.

    With the force_accept parameter set to true, you can force Robots to accept all files thrown at them. This will typically lead to errors and should only be used for debugging or combating edge cases.

  • use

    string | Array<string> | Array<object> | object

    Specifies which Step(s) to use as input.

    • You can pick any names for Steps except ":original" (reserved for user uploads handled by Transloadit).
    • You can provide several Steps as input with arrays:
      {
        "use": [
          ":original",
          "encoded",
          "resized"
        ]
      }
      
  • prompt

    string | null

    Which text to speak. You can also set this to null and supply an input text file.

  • provider

    string · required

    Which AI provider to leverage.

    Transloadit outsources this task and abstracts the interface so you can expect the same data structures, but different latencies and returned information. Different cloud vendors shine in different areas, and we recommend trying them out to see what yields the best results for your use case.

  • target_language

    string (default: "en-US")

    The written language of the document. This will also be the language of the spoken text.

    The language should be specified in the BCP-47 format, such as "en-GB", "de-DE" or "fr-FR". Please consult the list of supported languages and voices.

  • voice

    (default: "female-1")

    The gender to be used for voice synthesis. Please consult the list of supported languages and voices.

  • ssml

    boolean (default: false)

    Supply Speech Synthesis Markup Language instead of raw text in order to gain more control over how your text is voiced, including rests and pronunciations.

    Please see the supported syntaxes for AWS and GCP.
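
    For example, if the uploaded document contains SSML markup such as <speak>Hello <break time="1s"/> world.</speak>, a sketch like the following (assuming an SSML document as the input) tells the Robot to interpret it as such:

      {
        "steps": {
          "synthesized": {
            "robot": "/text/speak",
            "use": ":original",
            "provider": "aws",
            "target_language": "en-GB",
            "ssml": true
          }
        }
      }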
