Transloadit

Import files from web servers

🤖/http/import imports any file that is publicly available via a web URL into Transloadit.

The results of this Robot carry a field import_url in their metadata, which references the URL they were imported from. Further conversion results that use such a file will also carry this import_url field. This allows you to match conversion results with the original import URL that you used.
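As an abbreviated, illustrative sketch (the exact result shape may differ), a result's metadata might include:

```json
{
  "meta": {
    "import_url": "https://demos.transloadit.com/inputs/chameleon.jpg"
  }
}
```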

This Robot knows how to interpret links to files on these services:

  • Dropbox
  • Google Drive
  • Google Docs
  • OneDrive

Instead of downloading the HTML page that previews the file, the actual file itself will be imported.
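For example, a Dropbox share link can be passed directly as the url. The share URL below is a placeholder, not a working link:

```json
{
  "steps": {
    "imported": {
      "robot": "/http/import",
      "url": "https://www.dropbox.com/s/abc123/photo.jpg?dl=0"
    }
  }
}
```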

Usage example

Import an image from a specific URL:

{
  "steps": {
    "imported": {
      "robot": "/http/import",
      "url": "https://demos.transloadit.com/inputs/chameleon.jpg"
    }
  }
}

Parameters

  • output_meta

    Record<string, boolean> | boolean

    Allows you to specify a set of metadata that is more CPU-intensive to calculate, and is therefore disabled by default to keep your Assemblies processing fast.

    For images, you can add "has_transparency": true in this object to extract whether the image contains transparent parts, and "dominant_colors": true to extract an array of hexadecimal color codes from the image.

    For videos, you can add "colorspace": true to extract the colorspace of the output video.

    For audio, you can add "mean_volume": true to get a single value representing the mean volume of the audio file.

    You can also set this to false to skip metadata extraction and speed up transcoding.
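Based on the description above, a sketch of requesting image-specific metadata might look like this:

```json
{
  "steps": {
    "imported": {
      "robot": "/http/import",
      "url": "https://demos.transloadit.com/inputs/chameleon.jpg",
      "output_meta": {
        "has_transparency": true,
        "dominant_colors": true
      }
    }
  }
}
```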

  • result

    boolean (default: false)

    Whether the results of this Step should be present in the Assembly Status JSON.

  • queue

    "batch"

    Setting the queue to "batch" manually downgrades the priority of jobs for this Step, to avoid consuming Priority job slots for jobs that do not need zero queue waiting times.
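A minimal sketch of a Step that opts into the batch queue:

```json
{
  "steps": {
    "imported": {
      "robot": "/http/import",
      "url": "https://demos.transloadit.com/inputs/chameleon.jpg",
      "queue": "batch"
    }
  }
}
```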

  • force_accept

    boolean (default: false)

    Force a Robot to accept a file type it would have ignored.

    By default Robots ignore files they are not familiar with. 🤖/video/encode, for example, will happily ignore input images.

    With the force_accept parameter set to true, you can force Robots to accept all files thrown at them. This will typically lead to errors and should only be used for debugging or combating edge cases.
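A minimal sketch of enabling this parameter on a Step:

```json
{
  "steps": {
    "imported": {
      "robot": "/http/import",
      "url": "https://demos.transloadit.com/inputs/chameleon.jpg",
      "force_accept": true
    }
  }
}
```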

  • force_name

    string | Array<string> | null (default: null)

    Custom name for the imported file(s). By default file names are derived from the source.

  • url_delimiter

    string (default: "|")

    Provides the delimiter that is used to split the URLs in your url parameter value.
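For instance, two URLs can be supplied in a single url value, split by the default "|" delimiter. The second URL below is illustrative:

```json
{
  "steps": {
    "imported": {
      "robot": "/http/import",
      "url": "https://demos.transloadit.com/inputs/chameleon.jpg|https://example.com/second-image.jpg",
      "url_delimiter": "|"
    }
  }
}
```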

  • headers

    Array<string> (default: [])

    Custom headers to be sent for file import.

    This is an empty array by default, such that no additional headers except the necessary ones (e.g. Host) are sent.
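A sketch of sending a custom Authorization header; the URL and token value are placeholders:

```json
{
  "steps": {
    "imported": {
      "robot": "/http/import",
      "url": "https://example.com/protected/file.jpg",
      "headers": ["Authorization: Bearer YOUR_TOKEN"]
    }
  }
}
```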

  • import_on_errors

    Array<string> (default: [])

    Setting this to ["meta"] will still import the file when metadata extraction fails. The ignore_errors parameter is similar: it also ignores the error and makes sure the Robot doesn't stop, but it doesn't import the file.
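A sketch of importing a file even when metadata extraction fails:

```json
{
  "steps": {
    "imported": {
      "robot": "/http/import",
      "url": "https://demos.transloadit.com/inputs/chameleon.jpg",
      "import_on_errors": ["meta"]
    }
  }
}
```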

  • fail_fast

    boolean (default: false)

    Disables the internal retry mechanism and fails immediately if a resource can't be imported. This can be useful for performance-critical applications.

  • return_file_stubs

    boolean (default: false)

    If set to true, the Robot will not yet import the actual files but instead return an empty file stub that includes a URL from where the file can be imported by subsequent Robots. This is useful for cases where subsequent Steps need more control over the import process, such as with 🤖/video/ondemand. This parameter should only be set if all subsequent Steps use Robots that support file stubs.
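A sketch of returning file stubs instead of importing immediately; the video URL below is illustrative:

```json
{
  "steps": {
    "imported": {
      "robot": "/http/import",
      "url": "https://example.com/large-video.mp4",
      "return_file_stubs": true
    }
  }
}
```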
