s3_sync - Efficiently upload multiple files to S3¶
New in version 2.3.
Synopsis¶
- The S3 module is great, but it is very slow for a large volume of files; even a dozen will be noticeable. In addition to speed, this module handles globbing, inclusions/exclusions, MIME types, expiration mapping, recursion, cache control and smart directory mapping.
Requirements¶
The below requirements are needed on the host that executes this module.
- boto
- boto3 >= 1.4.4
- botocore
- python >= 2.6
- python-dateutil
Parameters¶
| Parameter | Choices/Defaults | Comments |
|---|---|---|
| aws_access_key (aliases: ec2_access_key, access_key) |  | AWS access key. If not set then the value of the AWS_ACCESS_KEY_ID, AWS_ACCESS_KEY or EC2_ACCESS_KEY environment variable is used. |
| aws_secret_key (aliases: ec2_secret_key, secret_key) |  | AWS secret key. If not set then the value of the AWS_SECRET_ACCESS_KEY, AWS_SECRET_KEY, or EC2_SECRET_KEY environment variable is used. |
| bucket (required) |  | Bucket name. |
| cache_control (added in 2.4) |  | Cache-Control header set on uploaded objects, as a string. Directives are separated by commas. |
| delete (added in 2.4) | Default: no | Remove remote files that exist in the bucket but are not present in the file root. |
| ec2_url |  | URL to use to connect to EC2 or your Eucalyptus cloud (by default the module will use EC2 endpoints). Ignored for modules where region is required. Must be specified for all other modules if region is not used. If not set then the value of the EC2_URL environment variable, if any, is used. |
| exclude | Default: .* | Shell pattern-style file matching. Applied after include to remove files (for instance, skip "*.txt"). For multiple patterns, comma-separate them. |
| file_change_strategy |  | Difference determination method to allow changes-only syncing. Unlike rsync, files are not patched; they are fully skipped or fully uploaded. date_size will upload if file sizes don't match or if the local file's modified date is newer than the S3 version. checksum will compare etag values based on S3's implementation of chunked md5s. force will always upload all files. |
| file_root (required) |  | File/directory path for synchronization. This is a local path. This root path is scrubbed from the key name, so subdirectories will remain as keys. |
| include | Default: * | Shell pattern-style file matching. Applied before exclude to determine eligible files (for instance, only "*.gif"). For multiple patterns, comma-separate them. |
| key_prefix |  | In addition to the file path, prepend the S3 path with this prefix. The module will add a slash at the end of the prefix if necessary. |
| mime_map |  | Dict entry from extension to MIME type. This will override any default/sniffed MIME type. For example: {".txt": "application/text", ".yml": "application/text"} |
| mode (required) |  | Sync direction. |
| permission |  | Canned ACL to apply to synced files. Changing this ACL only changes newly synced files; it does not trigger a full reupload. |
| profile (added in 1.6) |  | Uses a boto profile. Only works with boto >= 2.24.0. |
| region (aliases: aws_region, ec2_region) |  | The AWS region to use. If not specified then the value of the AWS_REGION or EC2_REGION environment variable, if any, is used. See http://docs.aws.amazon.com/general/latest/gr/rande.html#ec2_region |
| security_token (added in 1.6; aliases: access_token) |  | AWS STS security token. If not set then the value of the AWS_SECURITY_TOKEN or EC2_SECURITY_TOKEN environment variable is used. |
| validate_certs (bool; added in 1.5) |  | When set to "no", SSL certificates will not be validated for boto versions >= 2.6.0. |
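As an illustration of how delete and file_change_strategy fit together, here is a minimal sketch; the bucket name and paths are placeholders rather than values from this documentation:

```yaml
# Illustrative only: mirror a local directory, comparing files by checksum and
# removing bucket objects that no longer exist under file_root.
- name: mirror static assets (placeholder bucket and paths)
  s3_sync:
    bucket: my-example-bucket          # placeholder bucket name
    file_root: files/static/           # placeholder local directory
    key_prefix: assets                 # keys become assets/<relative path>
    file_change_strategy: checksum     # compare local etag against the S3 etag
    delete: yes                        # remove remote files missing from file_root
```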
Notes¶
Note
- If parameters are not set within the module, the following environment variables can be used in decreasing order of precedence: AWS_URL or EC2_URL, AWS_ACCESS_KEY_ID or AWS_ACCESS_KEY or EC2_ACCESS_KEY, AWS_SECRET_ACCESS_KEY or AWS_SECRET_KEY or EC2_SECRET_KEY, AWS_SECURITY_TOKEN or EC2_SECURITY_TOKEN, AWS_REGION or EC2_REGION.
- Ansible uses the boto configuration file (typically ~/.boto) if no credentials are provided. See https://boto.readthedocs.io/en/latest/boto_config_tut.html
- AWS_REGION or EC2_REGION can typically be used to specify the AWS region, when required, but this can also be configured in the boto config file.
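For example, credentials and region can also be passed to the module directly instead of relying on environment variables or the boto config file; the sketch below assumes the playbook already defines the referenced variables:

```yaml
# Illustrative only: explicit credentials; the variable names are hypothetical.
- name: upload with explicit credentials
  s3_sync:
    bucket: my-example-bucket
    file_root: roles/s3/files/
    region: us-east-1                          # any valid AWS region
    aws_access_key: "{{ my_access_key }}"      # hypothetical variable
    aws_secret_key: "{{ my_secret_key }}"      # hypothetical variable
    security_token: "{{ my_session_token }}"   # only needed for STS credentials
```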
Examples¶
```yaml
- name: basic upload
  s3_sync:
    bucket: tedder
    file_root: roles/s3/files/

- name: all the options
  s3_sync:
    bucket: tedder
    file_root: roles/s3/files
    mime_map:
      .yml: application/text
      .json: application/text
    key_prefix: config_files/web
    file_change_strategy: force
    permission: public-read
    cache_control: "public, max-age=31536000"
    include: "*"
    exclude: "*.txt,.*"
```
Return Values¶
Common return values are documented here, the following are the fields unique to this module:
| Key | Returned | Description |
|---|---|---|
| filelist_actionable (list) | always | File listing (dicts) of files that will be uploaded after the strategy decision. Sample: [{'s3_path': 's3sync/policy.json', 'whysize': '151 / 151', 'modified_epoch': 1477931256, 'bytes': 151, 'whytime': '1477931256 / 1477929260', 'fullpath': 'roles/cf/files/policy.json', 'chopped_path': 'policy.json', 'mime_type': 'application/json'}] |
| filelist_initial (list) | always | File listing (dicts) from initial globbing. Sample: [{'modified_epoch': 1477416706, 'fullpath': 'roles/cf/files/policy.json', 'chopped_path': 'policy.json', 'bytes': 151}] |
| filelist_local_etag (list) | always | File listing (dicts) including calculated local etag. Sample: [{'s3_path': 's3sync/policy.json', 'modified_epoch': 1477416706, 'fullpath': 'roles/cf/files/policy.json', 'chopped_path': 'policy.json', 'bytes': 151, 'mime_type': 'application/json'}] |
| filelist_s3 (list) | always | File listing (dicts) including information about previously-uploaded versions. Sample: [{'s3_path': 's3sync/policy.json', 'modified_epoch': 1477416706, 'fullpath': 'roles/cf/files/policy.json', 'chopped_path': 'policy.json', 'bytes': 151, 'mime_type': 'application/json'}] |
| filelist_typed (list) | always | File listing (dicts) with calculated or overridden mime types. Sample: [{'modified_epoch': 1477416706, 'fullpath': 'roles/cf/files/policy.json', 'chopped_path': 'policy.json', 'bytes': 151, 'mime_type': 'application/json'}] |
| uploaded (list) | always | File listing (dicts) of files that were actually uploaded. Sample: [{'s3_path': 's3sync/policy.json', 'whysize': '151 / 151', 'fullpath': 'roles/cf/files/policy.json', 'chopped_path': 'policy.json', 'bytes': 151, 'whytime': '1477931637 / 1477931489'}] |
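Since these listings are always returned, the module result can be registered and inspected after a run; a minimal sketch (the registered variable name is illustrative):

```yaml
# Illustrative only: capture the module result and report what was transferred.
- name: sync files and capture the result
  s3_sync:
    bucket: my-example-bucket
    file_root: roles/s3/files/
  register: sync_result

- name: show the S3 keys that were actually uploaded
  debug:
    msg: "{{ sync_result.uploaded | map(attribute='s3_path') | list }}"
```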
Status¶
This module is flagged as preview, which means that it is not guaranteed to have a backwards-compatible interface.
Maintenance¶
This module is flagged as community, which means that it is maintained by the Ansible Community. See Module Maintenance & Support for more info.
For a list of other modules that are also maintained by the Ansible Community, see here.
Author¶
- Ted Timmons (@tedder)