Prerequisites
First of all, I'm going to assume a few things are in place before we get too far into this article. I'm going to assume you already have an Identity and Access Management (IAM) user created with permissions to write to the S3 bucket we'll be building. I'll be using an account with the AdministratorAccess IAM policy, but in production you'll obviously want to scope those permissions down. I'll also assume you have downloaded and installed the AWS CLI.
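If you'd rather not use full administrator access even for testing, a minimal IAM policy scoped to the bucket used in this article might look something like the sketch below. This is only an example: it assumes the bucket is named 4sysops, and your own policy should list only the actions you actually need.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:CreateBucket",
                "s3:ListBucket",
                "s3:PutObject",
                "s3:GetObject",
                "s3:DeleteObject"
            ],
            "Resource": [
                "arn:aws:s3:::4sysops",
                "arn:aws:s3:::4sysops/*"
            ]
        }
    ]
}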
Setting up the IAM user and defaults
Once we've met all the prerequisites, we need to provide our IAM user's access and secret keys. We can do this by opening a command prompt (cmd.exe) and typing aws configure. This command will prompt you for your access key, secret key, default region name, and default output format. If you don't know your region name, log in to the AWS Management Console and you'll see it in the URL. For example, the default region I use is us-east-1, shown in this URL:
https://console.aws.amazon.com/iam/home?region=us-east-1#/home
C:\>aws configure
AWS Access Key ID [None]: XXXXXXXXXX
AWS Secret Access Key [None]: XXXXXXXXXXXXXXX
Default region name [None]: us-east-1
Default output format [None]:
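If you work with more than one AWS account, the CLI also supports named profiles, so you don't have to overwrite your default keys. The profile name backup below is just an example.

C:\>aws configure --profile backup

Any later command can then be pointed at that profile, for instance aws s3 ls --profile backup.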
Creating an S3 Bucket
Next, we'll create an S3 bucket, or you could use an existing one if you already have one. I'll create an S3 bucket named 4sysops. Notice the syntax below where we're telling the aws command-line utility to use the s3 service with the mb (make bucket) command, followed by the path of the bucket itself.
aws s3 mb s3://4sysops
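By default, the bucket lands in the region you configured earlier. If you want to be explicit about it, or create the bucket somewhere else, mb also accepts a --region switch; us-east-1 here is simply the region from my configuration.

aws s3 mb s3://4sysops --region us-east-1

Keep in mind that S3 bucket names are globally unique, so you'll need to pick a name of your own rather than reuse 4sysops.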
Uploading files
After creating the bucket, I can now begin uploading files to it. To do this, we'll use the cp command, specifying the local path of the file and the path of the bucket we just created. Again, we'll use the aws command-line utility and specify the s3 service.
C:\>aws s3 cp "C:\file.txt" s3://4sysops upload: .\file.txt to s3://4sysops/file.txt
If you have an entire directory of contents you'd like to upload to an S3 bucket, you can also use the --recursive switch to force the AWS CLI to read all files and subfolders in an entire folder and upload them all to the S3 bucket. You can see below that as a test, I told it to upload my entire C: drive, but it skipped some files it couldn't read. It would have uploaded my whole hard drive if I hadn't canceled it!
C:\>aws s3 cp C:\ s3://4sysops --recursive
warning: Skipping file C:\\Documents and Settings. File/Directory is not readable.
warning: Skipping file C:\\hiberfil.sys. File/Directory is not readable.
warning: Skipping file C:\\pagefile.sys. File/Directory is not readable.
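If uploading everything under a folder is too much, cp also understands --exclude and --include filters, which are evaluated in order. The sketch below, using a hypothetical C:\backup folder, excludes everything first and then adds back only the .txt files.

aws s3 cp C:\backup s3://4sysops --recursive --exclude "*" --include "*.txt"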
Downloading files
If we need to download files from our S3 bucket, we can go the other way by simply reversing the order of the two parameters: the S3 bucket path and the local path, as shown below.
C:\>aws s3 cp "s3://4sysops/file.txt" ./ download: s3://4sysops/file.txt to .\file.txt
Removing files
Maybe it's been a while, and you're ready to do some cleanup. Using the aws utility, we can also remove the file we've backed up. This time, we do this using the rm command.
aws s3 rm "s3://4sysops/file.txt" delete: s3://4sysops/file.txt
Nice article. Also, configuration parameters such as max_concurrent_requests, max_queue_size, max_bandwidth, and use_accelerate_endpoint make this solution production-ready.
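For anyone who wants to try those settings, they live under the s3 section of the CLI configuration and can be adjusted with aws configure set; the values below are only illustrative.

aws configure set default.s3.max_concurrent_requests 20
aws configure set default.s3.max_queue_size 10000
aws configure set default.s3.max_bandwidth 50MB/s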
The download command is not working. The newest CLI says: fatal error: An error occurred (403) when calling the HeadObject operation: Forbidden