When working with Amazon S3 (Simple Storage Service), you're probably using the S3 web console to download, copy, or upload files to S3 buckets. Using the console is perfectly fine; that's what it was designed for, to begin with.
Especially for admins who are used to more mouse clicks than keyboard commands, the web console is probably the easiest. However, admins will eventually encounter the need to perform bulk file operations with Amazon S3, like an unattended file upload. The GUI is not the best tool for that.
For such automation requirements with Amazon Web Services, including Amazon S3, the AWS CLI tool provides admins with command-line options for managing Amazon S3 buckets and objects.
In this article, you will learn how to use the AWS CLI command-line tool to upload, copy, download, and synchronize files with Amazon S3. You will also learn the basics of providing access to your S3 bucket and configuring that access profile to work with the AWS CLI tool.
Prerequisites
Since this is a how-to article, there will be examples and demonstrations in the succeeding sections. For you to follow along successfully, you will need to meet several requirements.
An AWS account. If you don't have an existing AWS subscription, you can sign up for an AWS Free Tier.
An AWS S3 bucket. You can use an existing bucket if you'd prefer. Still, it is recommended to create an empty bucket instead. Please refer to Creating a bucket.
A Windows 10 computer with at least Windows PowerShell 5.1. In this article, PowerShell 7.0.2 will be used.
The AWS CLI version 2 tool must be installed on your computer.
Local folders and files that you will upload or synchronize with Amazon S3.
Preparing Your AWS S3 Access
Suppose that you already have the requirements in place. You'd think you could already go ahead and start operating AWS CLI with your S3 bucket. I mean, wouldn't it be nice if it were that simple?
For those of you who are just starting to work with Amazon S3 or AWS in general, this section aims to help you set up access to S3 and configure an AWS CLI profile.
The full documentation for creating an IAM user in AWS can be found in the link below: Creating an IAM User in Your AWS Account.
Creating an IAM User with S3 Access Permission
When accessing AWS using the CLI, you will need to create one or more IAM users with enough access to the resources you intend to work with. In this section, you will create an IAM user with access to Amazon S3.
To create an IAM user with access to Amazon S3, you first need to log in to your AWS IAM console. Under the Access management group, click on Users. Next, click on Add user.
Type in the IAM user's name you are creating inside the User name* box, such as s3Admin. In the Access type* selection, put a check on Programmatic access. Then, click the Next: Permissions button.
Next, click on Attach existing policies directly. Then, search for the AmazonS3FullAccess policy name and put a check on it. When done, click on Next: Tags.
Creating tags is optional on the Add tags page, and you can just skip this and click on the Next: Review button.
On the Review page, you are presented with a summary of the new account being created. Click Create user.
Finally, once the user is created, you must copy the Access key ID and the Secret access key values and save them for later use. Note that this is the only time that you can see these values.
Setting Up an AWS Profile on Your Computer
Now that you've created the IAM user with the appropriate access to Amazon S3, the next step is to set up the AWS CLI profile on your computer.
This section assumes that you already installed the AWS CLI version 2 tool as required. For the profile creation, you will need the following information:
The Access key ID of the IAM user.
The Secret access key associated with the IAM user.
The Default region name corresponding to the location of your AWS S3 bucket. You can check out the list of endpoints using this link. In this article, the AWS S3 bucket is located in the Asia Pacific (Sydney) region, and the corresponding endpoint is ap-southeast-2.
The default output format. Use JSON for this.
To create the profile, open PowerShell, type the command below, and follow the prompts.
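The example below uses the aws configure command with a named profile. The profile name s3admin is only an illustration; pick any name you like, or drop the --profile option to configure the default profile instead.

# 's3admin' is an example profile name; substitute your own
aws configure --profile s3admin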
Enter the Access key ID, Secret access key, Default region name, and default output format. Refer to the demonstration below.
Testing AWS CLI Access
After configuring the AWS CLI profile, you can confirm that the profile is working by running the command below in PowerShell.
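A minimal test, assuming the example s3admin profile from the previous step (omit --profile if you configured the default profile):

aws s3 ls --profile s3admin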
The command above should list the Amazon S3 buckets that you have in your account. The demonstration below shows the command in action. The result showing the list of available S3 buckets indicates that the profile configuration was successful.
To learn more about the AWS CLI commands specific to Amazon S3, you can visit the AWS CLI Command Reference S3 page.
Managing Files in S3
With AWS CLI, typical file management operations can be done, such as uploading files to S3, downloading files from S3, deleting objects in S3, and copying S3 objects to another S3 location. It's all just a matter of knowing the right command, syntax, parameters, and options.
In the following sections, the environment used consists of the following:
Two S3 buckets, namely atasync1 and atasync2. The screenshot below shows the existing S3 buckets in the Amazon S3 console.
Local directory and files located under c:\sync.
Uploading Individual Files to S3
When you upload files to S3, you can upload one file at a time, or upload multiple files and folders recursively. Depending on your requirements, you may choose one over the other as you deem appropriate.
To upload a file to S3, you'll need to provide two arguments (source and destination) to the aws s3 cp command.
For example, to upload the file c:\sync\logs\log1.xml to the root of the atasync1 bucket, you can use the command below.
aws s3 cp c:\sync\logs\log1.xml s3://atasync1/
Note: S3 bucket names are always prefixed with s3:// when used with the AWS CLI.
Run the above command in PowerShell, but change the source and destination to fit your environment first. The output should look similar to the demonstration below.
The demo above shows that the file named c:\sync\logs\log1.xml was uploaded without errors to the S3 destination s3://atasync1/.
Use the command below to list the objects at the root of the S3 bucket.
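For example, with the atasync1 bucket used in this article:

aws s3 ls s3://atasync1/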
Running the command above in PowerShell would result in a similar output, as shown in the demo below. As you can see in the output below, the file log1.xml is present in the root of the S3 location.
Uploading Multiple Files and Folders to S3 Recursively
The previous section showed you how to copy a single file to an S3 location. What if you need to upload multiple files from a folder and sub-folders? Surely you wouldn't want to run the same command multiple times for different filenames, right?
The aws s3 cp command has an option to process files and folders recursively, and this is the --recursive option.
As an example, the directory c:\sync contains 166 objects (files and sub-folders).
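If you want to verify a similar count for your own folder, an optional PowerShell check like the one below will do (the c:\sync path is the example folder used throughout this article):

# count all files and sub-folders under c:\sync
(Get-ChildItem c:\sync -Recurse | Measure-Object).Count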
Using the --recursive option, all the contents of the c:\sync folder will be uploaded to S3 while also retaining the folder structure. To test, use the example code below, but make sure to change the source and destination as appropriate to your environment.
You'll notice from the code below that the source is c:\sync, and the destination is s3://atasync1/sync. The /sync key that follows the S3 bucket name indicates to the AWS CLI to upload the files into the /sync folder in S3. If the /sync folder does not exist in S3, it will be automatically created.
aws s3 cp c:\sync s3://atasync1/sync --recursive
The code above will result in the output shown in the demonstration below.
Uploading Multiple Files and Folders to S3 Selectively
In some cases, uploading ALL types of files is not the best choice, like when you only need to upload files with specific file extensions (e.g., *.ps1). Two other options available to the cp command are --include and --exclude.
While the command in the previous section includes all files in the recursive upload, the command below will include only the files that match the *.ps1 file extension and exclude every other file from the upload.
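A sketch of such a command, built from the recursive upload in the previous section. The filters are evaluated in order: --exclude "*" first excludes everything, then --include "*.ps1" re-includes only the PowerShell scripts (the patterns are quoted so they reach the AWS CLI as-is):

aws s3 cp c:\sync s3://atasync1/sync --recursive --exclude "*" --include "*.ps1"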
Running the code above in PowerShell would present you with a similar result, as shown below.
Downloading Objects from S3
Based on the examples you've learned in this section, you can also perform the copy operations in reverse. Meaning, you can download objects from the S3 bucket location to the local machine.
Copying from S3 to local requires you to switch the positions of the source and the destination: the source being the S3 location, and the destination being the local path, like the one shown below.
aws s3 cp s3://atasync1/sync c:\sync
Note that the same options used when uploading files to S3 are also applicable when downloading objects from S3 to local. For example, you can download all objects using the command below with the --recursive option.
aws s3 cp s3://atasync1/sync c:\sync --recursive
Copying Objects Between S3 Locations
Apart from uploading and downloading files and folders, using the AWS CLI, you can also copy or move files between two S3 bucket locations.
You'll notice the command below uses one S3 location as the source and another S3 location as the destination.
aws s3 cp s3://atasync1/Log1.xml s3://atasync2/
The demonstration below shows the source file being copied to another S3 location using the command above.
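If you'd rather move the file than copy it, the aws s3 mv command takes the same source and destination arguments but removes the source object after the transfer. A sketch using the same example locations:

# moves the object: copies it to atasync2, then deletes it from atasync1
aws s3 mv s3://atasync1/Log1.xml s3://atasync2/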
Synchronizing Files and Folders with S3
You've learned how to upload, download, and copy files in S3 using the AWS CLI commands so far. In this section, you'll learn about one more file operation command available in AWS CLI for S3: the sync command. The sync command only processes the updated, new, and deleted files.
There are some cases where you need to keep the contents of an S3 bucket updated and synchronized with a local directory on a server. For example, you may have a requirement to keep transaction logs on a server synchronized to S3 at an interval.
Using the command below, the *.XML log files located under the c:\sync folder on the local server will be synced to the S3 location at s3://atasync1.
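A minimal sketch of that command, assuming the c:\sync folder contains only the *.XML log files (add --exclude/--include filters, as with cp, if the folder holds other file types too):

aws s3 sync c:\sync s3://atasync1/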
The demonstration below shows that after running the command above in PowerShell, all *.XML files were uploaded to the S3 destination s3://atasync1/.
Synchronizing New and Updated Files with S3
In this next example, it is assumed that the contents of the log file Log1.xml were modified. The sync command should pick up that modification and upload the changes done on the local file to S3, as shown in the demo below.
The command to use is still the same as in the previous example.
As you can see from the output above, since only the file Log1.xml was changed locally, it was also the only file synchronized to S3.
Synchronizing Deletions with S3
By default, the sync command does not process deletions. Any file deleted from the source location is not removed at the destination. Well, not unless you use the --delete option.
In this next example, the file named Log5.xml has been deleted from the source. The command to synchronize the files will be appended with the --delete option, as shown in the code below.
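A sketch of the same sync command from the earlier example, with the --delete option appended:

aws s3 sync c:\sync s3://atasync1/ --delete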
When you run the command above in PowerShell, the deleted file named Log5.xml should also be deleted at the destination S3 location. The sample result is shown below.
Summary
Amazon S3 is an excellent resource for storing files in the cloud. With the use of the AWS CLI tool, the way you use Amazon S3 is further expanded, opening the opportunity to automate your processes.
In this article, you've learned how to use the AWS CLI tool to upload, download, and synchronize files and folders between local locations and S3 buckets. You've also learned that S3 bucket contents can be copied or moved to other S3 locations.
There can be many more use-case scenarios for using the AWS CLI tool to automate file management with Amazon S3. You can even try to combine it with PowerShell scripting and build your own reusable tools or modules. It is up to you to find those opportunities and show off your skills.
Further Reading
What Is the AWS Command Line Interface?
What is Amazon S3?
How To Sync Local Files And Folders To AWS S3 With The AWS CLI