In a strict sense, the tools discussed today are not services because, by themselves, they offer nothing useful to end users. However, Amazon lists them as independent services in the AWS Management Console. In a way, you can consider them services for IT admins.
Some people believe that the cloud will make many admins jobless. I wrote about the topic in 2008 and in 2010; now, in 2014, the demand for IT pros is bigger than ever. The main reason for this is that, despite the fact that the cloud fosters the automation and commoditization of IT, IT matters even more today than it did five years ago because it is required for more and more business processes.
I also think many analysts and journalists who never worked in IT have a naïve understanding of IT management and administration. Just because we move our servers to the cloud doesn’t mean that only a couple of engineers at the cloud provider are required to keep the cloud running. The cloud is a very complex beast that allows companies to do things that weren’t possible before. But these new capabilities have to be managed, and they demand new hard skills from the admins who work for the organizations that rented the cloud space. Today’s post gives you a glimpse of the kinds of tasks cloud admins have to perform.
Auto Scaling ^
With Auto Scaling, you determine how many EC2 instances (virtual machines) your applications require. We speak of clustering in the Windows world, but Auto Scaling works a bit differently. Four Scaling Plans are currently supported: fixed number of instances, manual scaling, dynamic scaling, and scheduled scaling.
EC2 Auto Scaling
If you configured a fixed number of instances, Auto Scaling automatically replaces unhealthy instances. Manual scaling means that you increase or decrease the number of instances yourself using the AWS Management Console, the CLI, or the API (SDKs are available). Dynamic scaling allows you to launch or terminate instances depending on various metrics (CPU, disk, and network utilization). With scheduled scaling, you increase or decrease the number of instances at configurable times. At the time of this writing, scheduled scaling is the only Scaling Plan that can’t be configured in the AWS Management Console.
Create Auto Scaling Group
Auto Scaling is a free service; you only pay for the EC2 instances your application requires. You can use Auto Scaling in combination with Elastic Load Balancing (see below) to distribute load evenly among the instances in an Auto Scaling group.
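To make the Scaling Plans more concrete, here is a sketch of the parameters behind an Auto Scaling group and a scheduled scaling action. The names and values (group name, zones, cron schedule) are illustrative placeholders, not taken from the article; the code only builds the request documents locally.

```python
import json

# Hypothetical Auto Scaling group that keeps between 2 and 8 instances running.
auto_scaling_group = {
    "AutoScalingGroupName": "web-asg",
    "LaunchConfigurationName": "web-launch-config",
    "MinSize": 2,                      # unhealthy instances are replaced to stay >= 2
    "MaxSize": 8,                      # dynamic scaling may grow up to 8
    "DesiredCapacity": 2,
    "AvailabilityZones": ["us-east-1a", "us-east-1b"],
}

# Scheduled scaling: raise capacity before an expected weekday traffic peak.
scheduled_action = {
    "AutoScalingGroupName": "web-asg",
    "ScheduledActionName": "scale-up-for-peak",
    "Recurrence": "0 8 * * MON-FRI",   # cron syntax: weekdays at 08:00 UTC
    "DesiredCapacity": 6,
}

print(json.dumps(scheduled_action, indent=2))
```

Because scheduled scaling cannot be configured in the console, a request document like the second one would be submitted via the CLI or API.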
Elastic Load Balancing (ELB) ^
Elastic Load Balancing (ELB) allows you to distribute network traffic evenly across multiple EC2 instances. You can use the API or the AWS Management Console to add instances to the ELB, which can be placed in multiple Availability Zones (data centers). If you use ELB together with Auto Scaling, EC2 instances will be automatically registered in ELB when new instances are launched and deregistered when instances are terminated. ELB can also run health checks on instances and deregister them if they become unavailable. ELB works together with Virtual Private Cloud (VPC), which I will cover in another post.
Elastic Load Balancer
The main advantage of ELB over on-premises load balancers is that you don’t have to worry about scaling. Your on-premises load balancer can easily become the bottleneck of your application. By contrast, a cloud-based load balancer is, well, a bit more elastic.
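The health checks mentioned above are configured with a handful of settings. This sketch mirrors the field names of the ELB health check configuration, but the target URL and thresholds are illustrative; the code only assembles the document locally.

```python
import json

# Illustrative ELB health check: probe each instance over HTTP and
# deregister it after repeated failures.
health_check = {
    "Target": "HTTP:80/index.html",  # protocol, port, and path to probe
    "Interval": 30,                  # seconds between probes
    "Timeout": 5,                    # seconds before a probe counts as failed
    "UnhealthyThreshold": 2,         # consecutive failures before deregistering
    "HealthyThreshold": 3,           # consecutive successes before re-registering
}

print(json.dumps(health_check, indent=2))
```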
Identity and Access Management (IAM) ^
Identity and Access Management (IAM) allows you to control access to AWS services in two different ways: policies and roles.
You can assign multiple policies to users or user groups that define the AWS resources a user can access and what actions a user may perform with a service. Amazon offers templates for all AWS services to grant either full access or read-only access.
IAM - Users
But you can also create your own policies if you need more fine-grained permissions. The number of different actions for each service is amazingly large. Here are a few examples for EC2 that will give you an idea of how complex IAM is: PurchaseReservedInstanceOffering, RegisterImage, RebootInstances, ReleaseAddress, and ReplaceRoute. Since every AWS service has different features and functions, there are hundreds of different actions.
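A custom policy is just a JSON document. The sketch below builds a minimal one granting two EC2 actions on all resources; the Version/Statement structure is the standard IAM policy grammar, while the choice of actions is only an example.

```python
import json

# Minimal custom IAM policy: allow only two EC2 actions, on any resource.
policy = {
    "Version": "2012-10-17",   # IAM policy language version
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["ec2:RebootInstances", "ec2:ReleaseAddress"],
            "Resource": "*",   # a real policy would usually restrict this
        }
    ],
}

print(json.dumps(policy, indent=2))
```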
IAM - Actions
The second way to manage access is via roles. You can assign the same policies to roles as to users and groups. Three types of roles exist: Service Roles, Cross-Account Roles, and Identity Provider Access Roles.
IAM - Roles
Service Roles allow you to determine what a certain object of a particular service can do with another service object. For instance, you can define a role that allows EC2 instances to write to a particular S3 bucket (storage container). If you then assign this IAM role to a particular EC2 instance, you can store files to the S3 bucket from this instance without providing credentials. In a way, Service Roles resemble trust relationships in the Windows world, but Service Roles are much more sophisticated.
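Behind such a Service Role sit two JSON documents: a trust policy that lets the EC2 service assume the role, and a permissions policy scoped to one bucket. The bucket name below is a placeholder; the structure follows the standard IAM grammar.

```python
import json

# Trust policy: which service may assume this role (here, EC2).
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "ec2.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

# Permissions policy: what the role may do (write to one S3 bucket).
bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:PutObject"],
        "Resource": "arn:aws:s3:::example-bucket/*",  # placeholder bucket
    }],
}

print(json.dumps(trust_policy, indent=2))
```

An instance launched with this role can then write to the bucket without any stored credentials, which is exactly the scenario described above.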
Cross-Account Roles allow you to manage access rights to other AWS accounts or IAM users of other AWS accounts. This is useful if you work on a project with a partner organization and want to give their AWS admins or developers access to AWS resources in your AWS account. In the Windows world, you would have to create accounts for these external users in your AD domain or work with Active Directory Federation Services. The latter is probably overkill for most scenarios and small organizations. The public cloud is more flexible and easier to use here because all resources and users are under one “roof.”
The third type of IAM role, the Identity Provider Access Role, extends this roof by integrating other identity providers. At the moment, AWS supports Facebook and Google accounts. In addition, you can work with identity providers that support Security Assertion Markup Language (SAML) 2.0, such as Shibboleth and Windows Active Directory Federation Services.
CloudFormation ^
CloudFormation allows cloud admins to automate the deployment of AWS resources through templates (JSON text documents). Each template is associated with a collection of AWS resources that Amazon calls a stack. You can create the templates in the AWS Management Console, with CloudFormation command line tools, or with the API. Once you create the stack, CloudFormation provisions the AWS resources and takes care of their dependencies.
The AWS Management Console offers a few sample templates for popular web applications that you can use to learn the template syntax. For instance, CloudFormation comes with a WordPress template that automatically launches an EC2 instance with installed and configured Apache, MySQL, PHP, and WordPress. It configures the security group (firewall) for you, sets the database password, configures sendmail, etc.
CloudFormation - Sample templates
Of course, this is only a very simple example. The main purpose of CloudFormation is to automate the deployment of large-scale applications that need many EC2 instances, auto scaling, load balancing, etc. CloudFormation supports the most important AWS resources and is a free service.
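To give an idea of the template syntax, here is a minimal template expressed as a Python dict: one EC2 instance whose AMI ID and instance type are placeholders. Real templates, like the WordPress sample, add parameters, security groups, and outputs on top of this skeleton.

```python
import json

# Minimal CloudFormation template: a single EC2 instance.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Description": "Single EC2 instance (illustrative)",
    "Resources": {
        "WebServer": {
            "Type": "AWS::EC2::Instance",
            "Properties": {
                "ImageId": "ami-12345678",  # placeholder AMI ID
                "InstanceType": "t1.micro",
            },
        }
    },
}

print(json.dumps(template, indent=2))
```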
OpsWorks ^
Like CloudFormation, OpsWorks is a management tool that allows you to automate the provisioning of AWS resources. The main difference between CloudFormation and OpsWorks is that the latter enables you to manage the whole lifecycle of your applications.
The philosophies behind both tools resemble the two different deployment approaches in the Windows world. You can either clone an OS image where all applications are already installed, or you can deploy a basic image and then install your applications with a software deployment tool.
The main advantages of the “automated application installation” approach over the “cloning” approach are that you don’t have to create a new image whenever new application versions are available and that you can update applications with the same management tool. The downside of the automated installation approach is that the initial deployment process is a bit more complicated and usually requires more scripting.
CloudFormation stands for the cloning approach, and OpsWorks stands for automated installation. However, you also use OpsWorks to deploy OS images—that is, you tell OpsWorks what AMIs (OS images) you want to use for your EC2 instances.
As in CloudFormation, a collection of all the resources in a particular deployment is called a stack. Each OpsWorks stack consists of so-called layers, which are essentially different application tiers (logically distinct server applications such as a database system or a web server). Each layer can contain multiple AWS resources, such as EC2 instances, EBS volumes, load balancers, and security groups (firewalls).
OpsWorks - Stack
Application packages, or apps as they are called in OpsWorks terminology, can be loaded from public application repositories (GitHub or Subversion), or you can install them from your private repository (S3 or via HTTP). Once you have configured an app, you can deploy it to multiple EC2 instances. OpsWorks also allows you to undeploy (uninstall) apps. Via so-called Chef Recipes (JSON scripts), you can configure your application during the installation process.
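One way to parameterize those recipes is to pass custom JSON to the stack, which the Chef recipes read as attributes at deployment time. The keys and values below are purely illustrative placeholders; the sketch only serializes the document.

```python
import json

# Hypothetical custom JSON for an OpsWorks stack; Chef recipes would read
# these attributes (e.g., node["wordpress"]["db_name"]) during deployment.
custom_json = {
    "wordpress": {
        "db_host": "db.example.internal",  # placeholder hostname
        "db_name": "wp",
        "cache_enabled": True,
    }
}

serialized = json.dumps(custom_json, indent=2)
print(serialized)
```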
OpsWorks - App definition
OpsWorks is a very new tool, so the AWS resources you can manage with it are still somewhat limited. For instance, at the time of this writing, OpsWorks only supports Amazon Linux and Ubuntu EC2 instances.
Today’s post was of special interest to IT admins. The same applies to the next part in my Amazon cloud series, which covers networking and monitoring services: Virtual Private Cloud (VPC), Direct Connect, Simple Email Service (SES), Route 53, CloudWatch, and CloudTrail.