
Import files from Google Storage
🤖/google/import imports whole directories of files from Google Storage.
Usage example
Import files from the path/to/files directory and its subdirectories:
{
"steps": {
"imported": {
"robot": "/google/import",
"credentials": "YOUR_GOOGLE_CREDENTIALS",
"path": "path/to/files/",
"recursive": true
}
}
}
Parameters
output_meta
Record<string, boolean> | boolean
Allows you to specify a set of metadata that is more expensive on CPU power to calculate, and thus is disabled by default to keep your Assemblies processing fast.
For images, you can add "has_transparency": true in this object to extract whether the image contains transparent parts, and "dominant_colors": true to extract an array of hexadecimal color codes from the image.
For videos, you can add the "colorspace": true parameter to extract the colorspace of the output video.
For audio, you can add "mean_volume": true to get a single value representing the mean volume of the audio file.
You can also set this parameter to false to skip metadata extraction and speed up transcoding.
result
boolean
(default: false)
Whether the results of this Step should be present in the Assembly Status JSON.
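For example, to extract dominant colors and transparency information from imported images while keeping this Step's results out of the Assembly Status JSON, you could combine the output_meta and result parameters like so (a sketch; the "imported" Step name and path are placeholders from the usage example above):

```json
{
  "steps": {
    "imported": {
      "robot": "/google/import",
      "credentials": "YOUR_GOOGLE_CREDENTIALS",
      "path": "path/to/files/",
      "output_meta": {
        "has_transparency": true,
        "dominant_colors": true
      },
      "result": false
    }
  }
}
```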
queue
"batch"
Setting the queue to "batch" manually downgrades the priority of jobs for this Step, to avoid consuming Priority job slots for jobs that do not need zero queue waiting times.
force_accept
boolean
(default: false)
Force a Robot to accept a file type it would otherwise have ignored.
By default, Robots ignore files they are not familiar with. 🤖/video/encode, for example, will happily ignore input images.
With the force_accept parameter set to true, you can force Robots to accept all files thrown at them. This will typically lead to errors and should only be used for debugging or combating edge cases.
force_name
string | Array<string> | null
(default: null)
Custom name for the imported file(s). By default, file names are derived from the source.
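For instance, to rename imported files you could pass a single name or an array of names (a sketch; the file names shown are placeholders, not values from this documentation):

```json
{
  "steps": {
    "imported": {
      "robot": "/google/import",
      "credentials": "YOUR_GOOGLE_CREDENTIALS",
      "path": "path/to/files/",
      "force_name": ["first-file.jpg", "second-file.jpg"]
    }
  }
}
```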
credentials
string
Create a new Google service account. Set its role to "Storage Object Creator". Choose "JSON" for the key file format and download it to your computer. You will need to upload this file when creating your Template Credentials.
Go back to your Google credentials project and enable the "Google Cloud Storage JSON API" for it. Wait around ten minutes for the action to propagate through the Google network. Grab the project ID from the dropdown menu in the header bar on the Google site. You will also need it later on.
Now you can set up the storage.objects.create and storage.objects.delete permissions. The latter is optional and only required if you intend to overwrite existing paths.
To do this from the Google Cloud console, navigate to "IAM & Admin" and select "Roles". From here, select "+CREATE ROLE", enter a name, set the role launch stage as general availability, and set the permissions stated above.
Next, relocate to your storage browser and select the ellipsis on your bucket to edit bucket permissions. From here, select "ADD MEMBER", enter your service account as a new member and select your newly created role.
Then, create your associated Template Credentials in your Transloadit account and use the name of your Template Credentials as this parameter's value.
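Once your Template Credentials are created, referencing them in a Step only requires their name (a sketch; "my_gcs_credentials" is a hypothetical Template Credentials name, not one defined in this documentation):

```json
{
  "steps": {
    "imported": {
      "robot": "/google/import",
      "credentials": "my_gcs_credentials",
      "path": "path/to/files/"
    }
  }
}
```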
recursive
boolean
(default: false)
Setting this to true will enable importing files from subdirectories and sub-subdirectories (etc.) of the given path. Please use the pagination parameters start_file_name and files_per_page wisely here.
next_page_token
string
(default: "")
A string token used for pagination. The returned files of one paginated call carry the next page token inside their metadata, which needs to be used for the subsequent paging call.
files_per_page
string | number
(default: 1000)
The pagination page size. For now, this only works when recursive is true, in order not to break backwards compatibility in non-recursive imports.
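Putting the pagination parameters together, a recursive import that fetches 100 files per page and resumes from an earlier run might look like this (a sketch; the token value is a placeholder standing in for the next page token found in the metadata of a previous call's returned files):

```json
{
  "steps": {
    "imported": {
      "robot": "/google/import",
      "credentials": "YOUR_GOOGLE_CREDENTIALS",
      "path": "path/to/files/",
      "recursive": true,
      "files_per_page": 100,
      "next_page_token": "TOKEN_FROM_PREVIOUS_RUN"
    }
  }
}
```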