
Cloud Security Fitness Guide – Exercise #11: CloudTrail and Encryption

Moving your architecture to AWS, in whole or in part, also means that your team reaps the rewards of new features and services that are sometimes deployed very rapidly.  This is a distinguishing feature of cloud operations, and it is actually a good thing.

But let’s not quibble over the merits of change; rather, we need to insert some recommendations into the Top Ten AWS Security Best Practices.  After going over the list at several AWS Loft events, a couple of local venues, and even a webinar, we realized the need to highlight CloudTrail logging before you do much of anything else.

At the same time, we also want you to start thinking about encryption, all the time.  Both of these should be in place before moving on to #1.  Why first?  Well, let’s talk a little about CloudTrail.

What is AWS CloudTrail?  It is a service AWS makes available to you so that, much like the story of Hansel and Gretel, you will always have a trail of breadcrumbs to follow back to see the details of changes to your AWS environment. Without it, much like in the story, you may get lost, as there is no way to retroactively enable CloudTrail logs.

Remember that AWS is an API-driven environment.  Even if you use the AWS console, behind the scenes API calls are being made on your behalf.  All of these API calls can be logged via CloudTrail, including the call made, the time of the call, who made the call (even if it is AWS or a third party), where they made the call from (the source IP), details about the call, and whether it was a success, an error, or something in between.  All of this detail is available to you, but only if you enable it.
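
To see what those records look like, here is a minimal sketch using boto3 that pulls recent events from CloudTrail's event history and prints the who, what, when, and where described above. The region is a placeholder; adjust it and the credentials setup to your own environment.

```python
import json

import boto3

# Pull a few recent events from CloudTrail's event history and print the
# details discussed above: when, what, who, and from which IP address.
cloudtrail = boto3.client("cloudtrail", region_name="us-east-1")

for event in cloudtrail.lookup_events(MaxResults=10)["Events"]:
    record = json.loads(event["CloudTrailEvent"])  # full JSON record for the call
    print(
        event["EventTime"],
        event["EventName"],
        event.get("Username", "unknown"),
        record.get("sourceIPAddress", "n/a"),
        record.get("errorCode", "Success"),  # errorCode only appears on failed calls
    )
```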

Don’t wait until you need the logs to enable them.  That will only make the situation worse.

Today, AWS has made it easy to enable CloudTrail logs in all your regions at once with a few mouse clicks in the console. The AWS documentation outlines the steps to set up CloudTrail logs, so we won’t duplicate that here. The main point is to enable them in all regions.
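
If you prefer to script it, here is a hedged sketch of creating a single multi-region trail with boto3. The trail name and bucket name are placeholders, and the bucket must already exist with a CloudTrail bucket policy attached.

```python
import boto3

cloudtrail = boto3.client("cloudtrail")

# Create one trail that covers every region, then start logging.
cloudtrail.create_trail(
    Name="org-wide-trail",                     # placeholder trail name
    S3BucketName="my-cloudtrail-logs-bucket",  # placeholder bucket, must already exist
    IsMultiRegionTrail=True,
    IncludeGlobalServiceEvents=True,
)
cloudtrail.start_logging(Name="org-wide-trail")
```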

Now, there are a couple of security recommendations we want to make sure you consider when enabling CloudTrail.  First and foremost is access control.  Last year Tim published a blog on Protecting CloudTrail Data; now is an excellent time to review it, as it is still applicable. Make sure the S3 bucket you designate for CloudTrail logs is encrypted and locked down.
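
As a rough illustration of locking the bucket down, the sketch below applies the standard CloudTrail bucket policy, turns on default server-side encryption, and blocks public access. The bucket name and account ID are placeholders; your own policies may differ.

```python
import json

import boto3

s3 = boto3.client("s3")
bucket = "my-cloudtrail-logs-bucket"  # placeholder bucket name
account_id = "123456789012"           # placeholder account ID

# Standard CloudTrail bucket policy: the service may check the bucket ACL and
# write log objects; nothing else is granted here.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AWSCloudTrailAclCheck",
            "Effect": "Allow",
            "Principal": {"Service": "cloudtrail.amazonaws.com"},
            "Action": "s3:GetBucketAcl",
            "Resource": f"arn:aws:s3:::{bucket}",
        },
        {
            "Sid": "AWSCloudTrailWrite",
            "Effect": "Allow",
            "Principal": {"Service": "cloudtrail.amazonaws.com"},
            "Action": "s3:PutObject",
            "Resource": f"arn:aws:s3:::{bucket}/AWSLogs/{account_id}/*",
            "Condition": {"StringEquals": {"s3:x-amz-acl": "bucket-owner-full-control"}},
        },
    ],
}
s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))

# Default server-side encryption for everything written to the bucket.
s3.put_bucket_encryption(
    Bucket=bucket,
    ServerSideEncryptionConfiguration={
        "Rules": [{"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "aws:kms"}}]
    },
)

# Block any form of public access to the log bucket.
s3.put_public_access_block(
    Bucket=bucket,
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)
```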

Another call-out is who can delete the logs.  Tim covers that too, but at the same time, consider how long you want to keep them. They are log files and thus compress well, so storage costs stay low. However, they will accumulate over time.

Here again, AWS has provided the tools necessary to manage this and keep costs down.  Set up S3 lifecycle policies on the bucket used for CloudTrail data. These policies automatically purge older files. A basic rule of thumb is to keep 30-90 days of logs in S3 and move older files into Glacier for longer-term storage. Check your data retention policies to ensure compliance.
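
Here is a minimal lifecycle sketch along those lines, assuming the placeholder bucket name from earlier. The 90-day transition and two-year expiration are examples, not recommendations; line them up with your own retention policy.

```python
import boto3

s3 = boto3.client("s3")

# Keep recent logs in S3, move older ones to Glacier, and eventually expire
# them once your retention policy allows.
s3.put_bucket_lifecycle_configuration(
    Bucket="my-cloudtrail-logs-bucket",  # placeholder bucket name
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "cloudtrail-retention",
                "Status": "Enabled",
                "Filter": {"Prefix": "AWSLogs/"},
                "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
                "Expiration": {"Days": 730},
            }
        ]
    },
)
```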

Enable it in all regions, even those you do not plan to use.  When a new region comes online, enable it there as soon as possible.  Why?  If you don’t enable it, then when you need it, you won’t have it.  Also, if there is a region you won’t use, there won’t be any CloudTrail activity to log, and thus no storage costs.

However, if CloudTrail data starts to build up in regions you are not using, that could be an indication of subversive behavior.  Overall, the benefits far outweigh the costs, and remember to enable lifecycle management on the bucket to optimize storage costs.
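
If you want a quick check for that kind of activity, a sketch like the one below asks CloudTrail in each "unused" region for its most recent events. The region list is an example; substitute the regions you genuinely never touch.

```python
import boto3

# Regions this account is not supposed to be using -- example values only.
unused_regions = ["sa-east-1", "ap-northeast-1"]

for region in unused_regions:
    cloudtrail = boto3.client("cloudtrail", region_name=region)
    events = cloudtrail.lookup_events(MaxResults=5).get("Events", [])
    if events:
        print(f"Unexpected activity in {region}:")
        for event in events:
            print(" ", event["EventTime"], event["EventName"], event.get("Username", "unknown"))
```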

Make sure you are logging global API calls in only one region.  You may have a region you use more than the others; that is a good choice for the global API calls.  Services that are not tied to a specific region, such as IAM, will log their calls in this one region.

At the same time, remember that you did this, so you don’t search through another region’s log files for a global change only to realize that global changes are not logged in that region. As a best practice, don’t enable this in more than one region, or you will start to see duplicate entries.
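
If you run one trail per region rather than a single multi-region trail, a sketch like this keeps global service events on just one of them. Trail names and regions are placeholders.

```python
import boto3

# Record global service calls (IAM, STS, and so on) on the primary trail only.
home = boto3.client("cloudtrail", region_name="us-east-1")
home.update_trail(Name="primary-trail", IncludeGlobalServiceEvents=True)

# Turn global events off on the other per-region trails to avoid duplicates.
other = boto3.client("cloudtrail", region_name="us-west-2")
other.update_trail(Name="secondary-trail", IncludeGlobalServiceEvents=False)
```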

And then there is encryption. Encryption is now prevalent and an easy-to-use feature that you want to consider enabling everywhere, all the time. CloudTrail logs are encrypted when stored, using S3 server-side encryption, but you can also use AWS KMS to handle the heavy lifting of key management.

The bottom line is to make sure your data is encrypted from the start.  It is much more challenging to go back and sort through data to re-encrypt it after the fact.  Much like enabling the service itself, doing this from day one helps keep your data secure.
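
As a sketch of moving a trail onto a customer-managed key, the call below sets KmsKeyId on an existing trail. The trail name and key ARN are placeholders, and the key policy must already allow CloudTrail to use the key.

```python
import boto3

cloudtrail = boto3.client("cloudtrail")

# Switch the trail's log file encryption to a customer-managed KMS key.
cloudtrail.update_trail(
    Name="org-wide-trail",  # placeholder trail name
    KmsKeyId="arn:aws:kms:us-east-1:123456789012:key/00000000-0000-0000-0000-000000000000",
)
```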

Now is also a good time to start thinking about encryption overall. AWS provides encryption for almost all data types now, both in transit and at rest.  As your usage of AWS grows, enable encryption.  One of the last holdouts for encryption was boot volumes.

Now that encrypted EBS boot volumes can be deployed, the recommendation is to enable encryption everywhere, all the time.  Ideally, data should only be decrypted briefly in memory for processing; in all other respects, keep it encrypted.  It just makes for good security.
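
For boot volumes specifically, one common approach is to copy an AMI with encryption enabled and launch instances from the encrypted copy; newer accounts can also switch on EBS encryption by default for a region. Both are sketched below with placeholder IDs.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Copy an existing AMI with encryption enabled, then launch from the copy so
# the boot volume is encrypted. The source AMI ID is a placeholder.
copy = ec2.copy_image(
    Name="my-app-ami-encrypted",
    SourceImageId="ami-0123456789abcdef0",
    SourceRegion="us-east-1",
    Encrypted=True,
)
print("Encrypted AMI:", copy["ImageId"])

# Alternatively, turn on EBS encryption by default for this region so every
# new volume is encrypted without extra flags (a newer account-level setting).
ec2.enable_ebs_encryption_by_default()
```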

The top ten security best practices referenced below still apply. This one is just to make sure you have the breadcrumbs in place to track your progress on them.

A quick recap of our past AWS Best Practice posts:

  1. Disable Root API Access Key and Secret Key
  2. Enable MFA Tokens Everywhere
  3. Reduce Number of IAM Users with Admin Rights
  4. Use Roles for EC2
  5. Least Privilege: Limit what IAM Entities Can Do with Strong Policies
  6. Rotate all the Keys Regularly
  7. Use IAM Roles with STS AssumeRole Where Possible
  8. Use AutoScaling to Dampen DDoS Effects
  9. Do Not Allow 0.0.0.0/0 Unless You Mean It
  10. Watch World-Readable and Listable S3 Bucket Policies