DevSecOps is quickly becoming a reality in many software development organizations. These companies put security in the front seat and acknowledge it as an important factor to take into account. Knowledge increases steadily, but many developers who do not have much infrastructure-related knowledge need to gather extra skills to deploy cloud resources in a secure way. Less experienced developers can use Infrastructure as Code (IaC) templates to quickly deploy cloud resources that contain insecure configurations. Besides this, anomalies that remain undetected pose another threat.
In this article we highlight 10 AWS cloud security issues that require your developers’ attention.
Key criteria to select and list the misconfigurations are as follows:
- The affected service or the issue itself applies to a large group of users
- The severity of the security issue should be high or critical
- Ideally, the configuration falls under one or more compliance frameworks
Number 1: Multi Factor Authentication not enabled
Let’s start with the most important account in your AWS subscription: the root account. If your root account does not have Multi-Factor Authentication (MFA) enabled, there is a higher risk that it gets compromised. Once that happens, anyone who has the credentials can access every resource across every account in AWS. Enable MFA for all (root) accounts as quickly as possible. Keep in mind that you cannot enable it for AWS GovCloud.
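As an illustration, one way to spot users without MFA is to parse the credential report that `aws iam get-credential-report` returns (a CSV containing, among other columns, `user` and `mfa_active`). A minimal Python sketch, run against a made-up inline sample instead of a live report:

```python
import csv
import io

def users_without_mfa(report_csv: str) -> list:
    """Return the users whose mfa_active column is not 'true'."""
    reader = csv.DictReader(io.StringIO(report_csv))
    return [row["user"] for row in reader if row.get("mfa_active") != "true"]

# Hypothetical, trimmed-down credential report for illustration only.
sample_report = (
    "user,mfa_active\n"
    "<root_account>,false\n"
    "alice,true\n"
    "bob,false\n"
)
print(users_without_mfa(sample_report))  # ['<root_account>', 'bob']
```

A check like this is easy to wire into a scheduled job that alerts on any account, root included, that still lacks MFA.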
It is also possible to enforce MFA for IAM users who use the AWS CLI. AWS itself provides a policy that checks whether an IAM user has MFA enabled before it can access specific AWS resources. In addition, you can also enforce MFA for other accounts that access your S3 storage buckets.
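The policy AWS documents for this hinges on the `aws:MultiFactorAuthPresent` condition key: deny every action (except the calls needed to set up MFA) when the request was not MFA-authenticated. A condensed sketch of that statement as a Python dict (the `Sid` and the exact `NotAction` list are illustrative, check AWS’s published example for the full version):

```python
import json

# Sketch of the condition-based statement: deny everything except MFA
# management calls when the caller did not authenticate with MFA.
deny_without_mfa = {
    "Sid": "DenyAllExceptMfaSetupIfNoMfa",  # illustrative Sid
    "Effect": "Deny",
    "NotAction": [
        "iam:ListMFADevices",
        "iam:EnableMFADevice",
        "iam:ResyncMFADevice",
        "sts:GetSessionToken",
    ],
    "Resource": "*",
    "Condition": {"BoolIfExists": {"aws:MultiFactorAuthPresent": "false"}},
}
policy = {"Version": "2012-10-17", "Statement": [deny_without_mfa]}
print(json.dumps(policy, indent=2))
```

Attached to an IAM user or group, a statement like this makes CLI access without an MFA-backed session token effectively useless.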
Popular cloud security posture management tools can detect this problem and trigger an alert if you wish. The misconfiguration is part of the CIS and NIST compliance frameworks.
Number 2: don’t allow all traffic to your services
Whatever your service is, keep access to it as limited as possible. In AWS it is very easy to create a security group in front of your service that allows access from every location in the world. It is far better to explicitly specify from which locations (IP addresses or network subnets) your users and/or applications can access your services. Nearly every compliance standard or framework covers this security misconfiguration.
Newer ways to grant access are not based on network infrastructure components such as IP addresses, but use identity-based access control. The latter method uses a unique identity for the component that needs access to your services. By doing so you don’t need to maintain frequently changing properties such as IP addresses, ports, and protocols.
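The “explicit allow-list” idea can be sketched as a small helper that builds an ingress permission in the shape of EC2’s `IpPermissions` structure and refuses the catch-all CIDRs (the function name and sample CIDRs are illustrative):

```python
def restricted_ingress(port: int, protocol: str, allowed_cidrs: list) -> dict:
    """Build an ingress permission limited to an explicit CIDR allow-list."""
    if "0.0.0.0/0" in allowed_cidrs or "::/0" in allowed_cidrs:
        raise ValueError("refusing to open the service to the whole internet")
    return {
        "IpProtocol": protocol,
        "FromPort": port,
        "ToPort": port,
        "IpRanges": [
            {"CidrIp": cidr, "Description": "trusted source"}
            for cidr in allowed_cidrs
        ],
    }

# Allow HTTPS only from a corporate subnet and one partner host (made up).
rule = restricted_ingress(443, "tcp", ["10.0.0.0/16", "192.0.2.10/32"])
print([r["CidrIp"] for r in rule["IpRanges"]])  # ['10.0.0.0/16', '192.0.2.10/32']
```

Guard rails like this belong in the IaC layer, so a world-open rule never reaches the deployment stage in the first place.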
Default security group
Every VPC in AWS comes with a default security group. If you do not specify a custom security group, the default one is used for resources that require one. Its initial configuration denies all incoming traffic and allows all outgoing traffic. Nearly all compliance frameworks cover this security misconfiguration.
So if you forget to select a more restrictive security group, your resources can send unwanted traffic to the internet or to other cloud resources. This creates the possibility to install malicious software packages such as trojan horses or crypto miners. And think of DDoS attacks originating from your account. It could cause a lot of (reputational) damage to you and/or your customers.
Number 3: IAM policy allows assume role permissions across all services
Identity and Access Management (IAM), with its roles, policies, and permissions, is a key component of AWS. This service lets you define who can access which service with which permissions.
If you create an IAM policy statement that contains a “0.0.0.0/0” or “::/0” source condition, you permit the policy’s permissions to be used from any location, effectively exposing every resource the policy covers. To prevent unauthorized access and data leakage you need to limit this to the services and sources you really need. Security tools such as Cloudsplaining help to scan your IAM policies to detect violations of this so-called “least privilege principle”.
To solve this, select the dedicated IP addresses from which you want to accept traffic to the service. You can also specify a subnet to support more hosts, or select another security group that acts as a trusted source.
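Tools like Cloudsplaining do far more, but the core least-privilege check can be sketched as a scan of policy statements for wildcard actions or resources. A simplified illustration (not Cloudsplaining’s actual logic):

```python
def overly_permissive(policy: dict) -> list:
    """Return Allow statements that use a wildcard action or resource."""
    statements = policy.get("Statement", [])
    if isinstance(statements, dict):  # a single statement may be a bare dict
        statements = [statements]
    flagged = []
    for stmt in statements:
        if stmt.get("Effect") != "Allow":
            continue
        actions = stmt.get("Action", [])
        resources = stmt.get("Resource", [])
        actions = [actions] if isinstance(actions, str) else actions
        resources = [resources] if isinstance(resources, str) else resources
        if "*" in actions or "*" in resources:
            flagged.append(stmt)
    return flagged

# Hypothetical policy: sts:AssumeRole on every role in the account.
policy = {
    "Version": "2012-10-17",
    "Statement": [{"Effect": "Allow", "Action": "sts:AssumeRole", "Resource": "*"}],
}
print(len(overly_permissive(policy)))  # 1
```

Running a check like this in a CI/CD pipeline catches “assume role on everything” statements before they ever reach an account.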
Number 4: S3 buckets are publicly writable
AWS has excellent storage solutions such as S3. You can use S3 for a huge number of use cases, ranging from static website hosting to storing a private collection of binaries. If you use S3 as the source for your static website, everyone needs read-only access. For most other cases, your data must be private.
A number of best practices to secure your data:
- Use bucket policies to control who can access your data.
- Encrypt the data you store, both in transit and at rest.
- Use IAM roles for applications that require access to your bucket.
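The first two points can be combined in a single, well-known bucket policy statement that denies any request made without TLS, using the `aws:SecureTransport` condition key that AWS documents. A sketch (the bucket name is hypothetical):

```python
def deny_insecure_transport(bucket: str) -> dict:
    """Bucket policy statement that rejects non-TLS (plain HTTP) requests."""
    arn = "arn:aws:s3:::" + bucket
    return {
        "Sid": "DenyInsecureTransport",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": [arn, arn + "/*"],  # the bucket itself and all objects
        "Condition": {"Bool": {"aws:SecureTransport": "false"}},
    }

print(deny_insecure_transport("my-example-bucket")["Resource"])
```

Because it is a Deny, this statement wins over any Allow elsewhere in the policy, which makes it a safe default to stamp onto every bucket.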
Things become complicated because S3 has multiple ways to protect your data or open it up to the outside world. It’s easy to make a mistake here. For example, your bucket can be private while individual data objects (binaries) are open to the public. Your developers need to master S3 bucket policies, Access Control Lists (ACLs), the appropriate way to encrypt the contents, how to handle replication, etc.
Publicly writable buckets
Even more problematic are S3 buckets that are publicly writable by anyone on the internet. This is a huge security problem, since it enables any bad actor to upload malicious files and scripts. Think of crypto miner software or scripts that illegally collect personal information. Your bucket can be the source of attacks on other systems, and you are responsible for it. Besides this example, also think of someone misusing your S3 bucket as a dump store for huge files. At the end of the month, you get the bill. Use S3 server access logging to capture every GET, PUT, POST, and DELETE request so you can spot unwanted ones.
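A publicly writable bucket usually shows up as an ACL grant to one of the global grantee groups. A sketch of a detector over the `Grants` structure that S3’s get-bucket-acl call returns (the sample ACL below is made up):

```python
PUBLIC_GROUPS = {
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
}
WRITE_PERMISSIONS = {"WRITE", "WRITE_ACP", "FULL_CONTROL"}

def public_write_grants(acl: dict) -> list:
    """Flag ACL grants that let the public write to the bucket."""
    return [
        grant for grant in acl.get("Grants", [])
        if grant.get("Grantee", {}).get("URI") in PUBLIC_GROUPS
        and grant.get("Permission") in WRITE_PERMISSIONS
    ]

# Hypothetical ACL: the AllUsers group holds WRITE on the bucket.
acl = {"Grants": [{
    "Grantee": {"Type": "Group",
                "URI": "http://acs.amazonaws.com/groups/global/AllUsers"},
    "Permission": "WRITE",
}]}
print(len(public_write_grants(acl)))  # 1
```

Any non-empty result from such a check should page someone immediately.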
Number 5: ECS task definition resource limits not set
ECS stands for Elastic Container Service, a popular service to run containers in AWS. Scheduling and executing Docker containers, called tasks, is a key feature of ECS. In a task definition you need to specify how much CPU power and memory your application requires. If you fail to do so, the task that holds the definition of your application’s requirements will fail and your application won’t run. Capture these kinds of errors in your CI/CD pipelines so you don’t have to wait until a full deployment of your IaC templates has finished. That would be too late.
Just like in Kubernetes, it is also important to set upper limits to make sure your application does not consume too much CPU power or memory; failing to do so might give you scale-out problems. Your cloud bill goes up very rapidly if you do not have a scale-down plan. It’s best to define a maximum number of (worker) nodes for your cluster.
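A minimal sketch of what “limits set” looks like in an ECS task definition, plus the kind of check you could run in a CI/CD pipeline before deploying (the family name, image, and values are illustrative):

```python
task_definition = {
    "family": "example-app",            # hypothetical family name
    "cpu": "256",                        # task-level CPU units (0.25 vCPU)
    "memory": "512",                     # task-level memory, MiB
    "containerDefinitions": [
        {
            "name": "web",
            "image": "example/web:1.0",  # hypothetical image
            "cpu": 128,                  # container CPU units
            "memory": 256,               # hard memory limit, MiB
            "memoryReservation": 128,    # soft memory limit, MiB
        }
    ],
}

def containers_without_limits(task_def: dict) -> list:
    """Names of containers that set neither a hard nor a soft memory limit."""
    return [
        c["name"] for c in task_def.get("containerDefinitions", [])
        if "memory" not in c and "memoryReservation" not in c
    ]

print(containers_without_limits(task_definition))  # []
```

Failing the pipeline when this list is non-empty catches the problem long before a deployment does.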
Number 6: Unsupported EKS Master node version
As we can see from the number of articles on the internet, there is a lot of attention for Kubernetes-related security issues. Since there are so many moving parts on different (infrastructure) layers, each with its own characteristics and life cycle, you need to check carefully and constantly for security issues.
The master node is one of the core components of any Kubernetes cluster, and that also holds for EKS. To benefit from the latest security updates and other features you need to make sure your EKS master node version is supported at all times. Every EKS cluster managed by your organization needs to be updated regularly; if you do not upgrade to a newer version yourself, EKS eventually does it for you. That scenario can be an unwanted one, since breaking changes can occur. Kubernetes supports the latest version and the two previous minor versions. See the website of the Kubernetes community to discover which versions are currently supported.
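The “latest plus two previous minors” support window can be checked mechanically in a pipeline. A sketch (the version strings are examples; look up the actual latest release yourself):

```python
def is_supported(cluster_version: str, latest_version: str, window: int = 3) -> bool:
    """True if the cluster's minor version falls within the upstream
    support window: the latest minor plus the two previous ones."""
    major, minor = (int(p) for p in cluster_version.split(".")[:2])
    latest_major, latest_minor = (int(p) for p in latest_version.split(".")[:2])
    return major == latest_major and 0 <= latest_minor - minor < window

print(is_supported("1.27", "1.29"))  # True  (two minors behind)
print(is_supported("1.25", "1.29"))  # False (four minors behind)
```

Alerting well before a cluster leaves the window gives you time to test for breaking changes instead of having AWS force the upgrade.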
Number 7: Security group allows all traffic on ICMP Ping protocol
Ping uses ICMP (Internet Control Message Protocol) and is used to troubleshoot TCP/IP-related connections and to manage traffic flow.
Many people use Ping to verify whether a certain host is “up” or “down”. Unlike TCP and UDP traffic, which belongs to the Transport Layer, ICMP is a Network Layer protocol, a relatively low layer in the OSI model.
An attacker can misuse the ICMP Ping protocol in several ways, such as the following:
- ICMP nuke attack: send packets of information that the receiving host can’t handle.
- Send ICMP requests that are larger than a certain packet size (all traffic is sent as so-called “packets”), causing the system to crash. This is called the Ping of Death.
- Send too many (bad) Ping messages in a given time (Ping flood).
- Gain extra information about your network topology (e.g. find out the number of “hops”/connections between their server and yours).
No one on the internet whom you don’t trust should be able to probe your network for its topology. Knowing the topology makes it a lot easier to carry out attacks from one server to another. It’s advised to filter out ICMP Ping requests on hosts that are publicly accessible. Besides this, see the discussion on the AWS community forum about Ping and security.
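Detecting this misconfiguration comes down to scanning ingress rules for ICMP that is open to the world. A sketch over the `IpPermissions` shape that EC2’s describe-security-groups call returns (the sample rules are made up):

```python
OPEN_CIDRS = {"0.0.0.0/0", "::/0"}

def open_icmp_rules(ip_permissions: list) -> list:
    """Flag ingress rules that allow ICMP from the whole internet."""
    flagged = []
    for rule in ip_permissions:
        if rule.get("IpProtocol") not in {"icmp", "icmpv6"}:
            continue
        cidrs = {r.get("CidrIp") for r in rule.get("IpRanges", [])}
        cidrs |= {r.get("CidrIpv6") for r in rule.get("Ipv6Ranges", [])}
        if cidrs & OPEN_CIDRS:
            flagged.append(rule)
    return flagged

permissions = [
    {"IpProtocol": "icmp", "IpRanges": [{"CidrIp": "0.0.0.0/0"}], "Ipv6Ranges": []},
    {"IpProtocol": "tcp", "IpRanges": [{"CidrIp": "0.0.0.0/0"}], "Ipv6Ranges": []},
]
print(len(open_icmp_rules(permissions)))  # 1 (only the ICMP rule is flagged)
```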
Number 8: Don’t use classic Load Balancers for internet-facing applications
AWS supports many types of load balancers: Layer 4 load balancers such as the Network Load Balancer and the Classic Load Balancer, and the Application Load Balancer (which works on Layer 7 of the OSI model).
It’s an AWS best practice to use Classic Load Balancers only for applications running on a traditional EC2-Classic network. Use an Application Load Balancer (ALB) for internet-facing HTTP/HTTPS-based applications. Since an ALB is more intelligent, it is better suited for a microservices-oriented environment.
Number 9: Account hijacking attempts
Besides misconfigurations, you need to protect your services against anomalies. An anomaly is unexpected behavior that deviates from the “common” behavior you would expect from your services and users.
Access control in the cloud becomes more difficult every day since users are spread among different cloud providers and accounts. It also becomes more difficult to define a so-called “baseline” of what “normal” behavior is. Suppose you experience a traffic spike at an unexpected moment. It could be a marketing campaign of another department in your own organization or a sudden rise in popularity of your product. However, it could also be an attack on your systems.
Therefore, the least privilege principle applies here as well. You need to carefully watch for unusual user activity, account hijacking attempts, and excessive login failures.
This all starts with setting up a baseline of normal activity. The implementation of a User and Entity Behavior Analytics (UEBA) engine and machine learning that analyzes logs from multiple sources can help you get a better picture of expected user behavior.
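A full UEBA engine is far beyond this article, but the core idea of a baseline can be illustrated with a simple statistical check: flag a metric, say hourly failed logins, when it deviates strongly from its history. The numbers below are made up:

```python
import statistics

def is_anomalous(history: list, current: float, threshold: float = 3.0) -> bool:
    """Flag `current` if it deviates more than `threshold` standard
    deviations from the historical baseline."""
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history)
    if stdev == 0:
        return current != mean
    return abs(current - mean) / stdev > threshold

failed_logins = [2, 3, 1, 4, 2, 3, 2, 3]  # hourly failures during a normal week
print(is_anomalous(failed_logins, 3))     # False: within the baseline
print(is_anomalous(failed_logins, 50))    # True: likely a brute-force attempt
```

Real UEBA products model far richer behavior per user and entity, but the principle is the same: learn what is normal, then alert on deviations.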
Number 10: RDS/EBS snapshots publicly accessible
Simply speaking: RDS is one of the main database services of AWS, and EBS is a common storage solution for EC2 instances. Snapshots are backups, or “moments in time”, of those data storage solutions.
A couple of years ago, the RedLock security research team discovered a huge number of RDS and EBS snapshots that were open to the public. This is a major problem since anyone could download your entire database and/or every piece of information stored on your EBS volumes. Critical information such as usernames, passwords, and healthcare information was leaked. The affected organizations included many Fortune 50 companies.
For RDS, make sure your developers do not select the “Public” option for the DB snapshot visibility unless this is really intended, and do not modify the EBS snapshot configuration that sets the snapshot to public. A continuous security solution that monitors your entire public cloud environment helps to detect when this occurs and can auto-remediate it when needed.
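Both checks can be automated against the attribute structures the EC2 and RDS APIs return for snapshots: a `createVolumePermission` containing the `all` group for EBS, and a `restore` attribute containing `all` for RDS. A sketch with made-up inputs:

```python
def ebs_snapshot_is_public(create_volume_permissions: list) -> bool:
    """True if the snapshot's createVolumePermission includes the 'all'
    group, i.e. anyone can create a volume from it."""
    return any(p.get("Group") == "all" for p in create_volume_permissions)

def rds_snapshot_is_public(attributes: list) -> bool:
    """True if the RDS snapshot's 'restore' attribute lists 'all'."""
    return any(
        a.get("AttributeName") == "restore" and "all" in a.get("AttributeValues", [])
        for a in attributes
    )

print(ebs_snapshot_is_public([{"Group": "all"}]))  # True: world-readable
print(rds_snapshot_is_public([
    {"AttributeName": "restore", "AttributeValues": ["123456789012"]}
]))  # False: shared with one specific account only
```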
Every organization that uses public cloud technology needs to be aware of common misconfigurations to protect its valuable applications and data. In this article we’ve highlighted 10 common AWS security misconfigurations that should be avoided. These issues range from unwanted publicly accessible resources to resources with the wrong settings applied. All of them make you more vulnerable to attacks. Scan your IaC templates for security misconfigurations and continuously monitor your cloud environment to prevent them.