AWS Security Mindmap

Amazon Web Services (AWS) has a broad offering of cloud services. It goes without saying that when you run your workload in the cloud, you want to ensure that it is secure. To benefit their customers, AWS has built plenty of security tools in-house, and they also comply with a myriad of industry standards such as PCI DSS, HIPAA, and FedRAMP/FISMA, just to name a few.

Given AWS's long list of security services, it can sometimes be overwhelming to identify which one fits your use case. To solve that puzzle, AWS has published a Security White Paper. While that paper provides thorough detail for its intended audience, a mind map comes to the rescue for remembering it over the long term.

I have come up with a mind map of AWS security best practices. I am sure this is not the first one in the AWS community, but it serves my purpose, so I am keeping it here.

AWS Security Mind Map PDF



OWASP Pro Active Controls

The OWASP Top 10 Proactive Controls 2016 is published by OWASP (the Open Web Application Security Project). It is a list of security techniques that should exist as part of the SDLC (Software Development Life Cycle).

1)  Verify for Security Early and Often:

This is the most important aspect of any secure software development life cycle. Applications must be tested and verified for security at the beginning of the project and throughout its lifecycle, so any issue discovered early can be fixed early and doesn't block the entire project.

2)  Parameterize Queries:

SQL injection is one of the most dangerous vulnerabilities for web applications. It allows an attacker to change the structure of a web application's SQL statement in a way that can steal data, modify data, or potentially facilitate native OS command injection. By using parameterized queries, one can prevent SQL injection.

Here is an example of a SQL injection flaw.
This is unsafe Java code that allows an attacker to inject code into the query that will be executed by the database.

String query = "SELECT acct_balance FROM acct_data WHERE customer_name = "
    + request.getParameter("custName");   // user input concatenated directly
try {
    Statement statement = connection.createStatement();
    ResultSet results = statement.executeQuery(query);
    // ... process results ...
} catch (SQLException e) {
    // ... handle the error without leaking details to the user ...
}

Now, the "custName" parameter is simply appended to the query, allowing an attacker to inject any SQL code they want.

So what can we do to avoid it? Using prepared statements with variable binding (i.e., parameterized queries) alleviates this issue.

The following code example uses a PreparedStatement, Java’s implementation of a parameterized query, to execute the same database query.

String custname = request.getParameter("custName"); // this should still be validated
String query = "SELECT acct_balance FROM acct_data WHERE user_name = ?";
PreparedStatement pstmt = connection.prepareStatement(query);
pstmt.setString(1, custname);
ResultSet results = pstmt.executeQuery();

3) Encode Data

Encoding helps protect against many types of attacks, particularly XSS (cross-site scripting). Encoding translates special characters into an equivalent that is safe for the target interpreter.
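As a sketch of what output encoding does, here is a minimal HTML encoder. This is illustrative only; a real application should use a vetted library such as the OWASP Java Encoder rather than hand-rolled code.

```java
// Minimal HTML output encoder (illustrative sketch, not production code).
public class HtmlEncoder {
    public static String encode(String input) {
        StringBuilder sb = new StringBuilder();
        for (char c : input.toCharArray()) {
            switch (c) {
                case '<':  sb.append("&lt;");   break;
                case '>':  sb.append("&gt;");   break;
                case '&':  sb.append("&amp;");  break;
                case '"':  sb.append("&quot;"); break;
                case '\'': sb.append("&#x27;"); break;
                default:   sb.append(c);
            }
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // A classic XSS payload becomes inert text once encoded.
        System.out.println(encode("<script>alert('xss')</script>"));
    }
}
```

Once encoded, the browser renders the payload as literal text instead of executing it.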

4) Validate All Inputs

Input validation reduces or minimizes the amount of malformed data entering the system. It should not, however, be used as the primary method to prevent XSS or SQL injection.
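A minimal allow-list validation sketch for the custName parameter used earlier; the permitted character set is my own assumption about what a customer name may contain, so adjust it to your data.

```java
import java.util.regex.Pattern;

// Allow-list ("whitelist") input validation sketch: accept only the
// characters a customer name is expected to contain.
public class InputValidator {
    private static final Pattern CUST_NAME =
        Pattern.compile("^[A-Za-z][A-Za-z .'-]{0,49}$");

    public static boolean isValidCustName(String input) {
        return input != null && CUST_NAME.matcher(input).matches();
    }

    public static void main(String[] args) {
        System.out.println(isValidCustName("O'Brien"));      // well-formed name
        System.out.println(isValidCustName("x' OR '1'='1")); // rejected (contains digits/=)
    }
}
```

Note that this complements, rather than replaces, parameterized queries and output encoding.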

5) Implement Identity and Authentication Controls

Use standard methods for authentication, identity management, and session management. Ideally, use appropriate guidelines for user IDs, password strength controls, securing the password recovery mechanism, storing passwords, transmitting passwords, etc. Additionally, it is vital to ensure that all authentication failures, password failures, and account lockouts are logged and reviewed.

Another option could be using authentication protocols that require no passwords – such as OAuth, OpenID, SAML, FIDO, etc.

For session management, one should consider a variety of factors, such as session ID properties (name fingerprinting, ID length, ID entropy, and ID content). Use the built-in, language-specific (and up-to-date) session management implementation. Utilize secure cookies as much as possible. Follow best practices for the session ID lifecycle, and apply controls for session expiration and possible session hijacking.
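As one concrete example of the session ID entropy point, here is a sketch of generating a high-entropy session identifier with a CSPRNG. The 128-bit length is a common recommendation, not a value mandated above.

```java
import java.security.SecureRandom;
import java.util.Base64;

// Sketch: 16 random bytes (128 bits) from a CSPRNG comfortably exceeds the
// commonly recommended minimum of 64 bits of session ID entropy.
public class SessionIdGenerator {
    private static final SecureRandom RNG = new SecureRandom();

    public static String newSessionId() {
        byte[] bytes = new byte[16];
        RNG.nextBytes(bytes);
        // URL-safe Base64 keeps the ID cookie-friendly.
        return Base64.getUrlEncoder().withoutPadding().encodeToString(bytes);
    }

    public static void main(String[] args) {
        System.out.println(newSessionId());
    }
}
```

In practice you would rely on the framework's session implementation and only verify that it meets these properties.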

6) Implement Appropriate Access Controls

Deny access by default. Utilize role-based, discretionary, or mandatory access controls where applicable. By using access control, we intentionally create one more layer of security, known as authorization. Authorization is the process by which requests to access a particular resource are granted or denied. By creating an access control policy, we ensure that it meets the security requirements as described.
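A deny-by-default, role-based check might look like the following sketch; the roles and permissions here are hypothetical.

```java
import java.util.Map;
import java.util.Set;

// Deny-by-default, role-based authorization sketch. Nothing is granted
// unless a role explicitly maps to the requested permission.
public class AccessControl {
    private static final Map<String, Set<String>> ROLE_PERMS = Map.of(
        "admin",  Set.of("read", "write", "delete"),
        "viewer", Set.of("read")
    );

    public static boolean isAllowed(String role, String permission) {
        // Unknown roles and unknown permissions fall through to "deny".
        return ROLE_PERMS.getOrDefault(role, Set.of()).contains(permission);
    }

    public static void main(String[] args) {
        System.out.println(isAllowed("viewer", "read"));   // granted
        System.out.println(isAllowed("viewer", "delete")); // denied by default
    }
}
```

The key design choice is the `getOrDefault` fallback: any request not explicitly granted is denied.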

7) Protect Data

Encrypt your data in transit, at rest and during execution. Make sure to use strong encryption methods and libraries.
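As an illustration of using a strong, well-vetted library, here is a sketch of an AES-256-GCM round trip using the JDK's built-in javax.crypto API. The key is generated in place purely for the demo; a real system would manage keys separately.

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;

// Sketch of protecting data at rest with AES-GCM, an authenticated mode
// that protects both confidentiality and integrity.
public class DataProtection {
    public static String roundTrip(String plaintext) {
        try {
            KeyGenerator kg = KeyGenerator.getInstance("AES");
            kg.init(256);
            SecretKey key = kg.generateKey();

            byte[] iv = new byte[12];            // 96-bit IV, unique per message
            new SecureRandom().nextBytes(iv);

            Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
            cipher.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
            byte[] ciphertext = cipher.doFinal(plaintext.getBytes(StandardCharsets.UTF_8));

            cipher.init(Cipher.DECRYPT_MODE, key, new GCMParameterSpec(128, iv));
            return new String(cipher.doFinal(ciphertext), StandardCharsets.UTF_8);
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(roundTrip("acct_balance=1000"));
    }
}
```

GCM will also throw on decryption if the ciphertext has been tampered with, which is why it is preferred over unauthenticated modes such as plain CBC.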

8) Implement Logging and Intrusion Detection

Log analysis and intrusion detection go hand in hand. There are two ways of doing intrusion detection: network-based and log-based. For this control, we need to design our logging strategy so that we are able to detect intrusions across systems, networks, applications, and devices.

9) Leverage Security Frameworks and Libraries

Leverage security frameworks and libraries as much as possible for your application's language and domain.

10) Error and Exception Handling

Error messages give an attacker great insight into the inner workings of your code. Thus, an important aspect of secure application development is preventing errors and exceptions from leaking any information.

CIS Critical Security Controls

In an earlier post, we discussed CIS Security Benchmarks and how they can be useful to public or private organizations. In this post, we will explore the CIS Critical Security Controls.

The CIS Critical Security Controls, also known as CIS Controls, are a concise, prioritized set of cyber practices created to stop today's most pervasive and dangerous cyber attacks. The CIS Controls are developed, refined, and validated by a community of leading experts around the world. Though it is widely held that by applying the top 5 CIS Controls an organization can reduce its cyberattack risk by 85 percent, we will review all 20 CIS Controls here for clarity's sake.

  1. CSC # 1: Inventory of Authorized and Unauthorized Devices
  2. CSC # 2: Inventory of Authorized and Unauthorized Software
  3. CSC # 3: Secure Configurations for Hardware and Software
  4. CSC # 4: Continuous Vulnerability Assessment and Remediation
  5. CSC # 5: Controlled Use of Administrative Privileges
  6. CSC # 6: Maintenance, Monitoring, and Analysis of Audit Logs
  7. CSC # 7: Email and Web Browser Protections
  8. CSC # 8: Malware Defenses
  9. CSC # 9: Limitation and Control of Network Ports
  10. CSC # 10: Data Recovery Capability
  11. CSC # 11: Secure Configurations for Network Devices
  12. CSC # 12: Boundary Defense
  13. CSC # 13: Data Protection
  14. CSC # 14: Controlled Access Based on the Need to Know
  15. CSC # 15: Wireless Access Control
  16. CSC # 16: Account Monitoring and Control
  17. CSC # 17: Security Skills Assessment and Appropriate Training to Fill Gaps
  18. CSC # 18: Application Software Security
  19. CSC # 19: Incident Response and Management
  20. CSC # 20: Penetration Tests and Red Team Exercises

Each of these controls has its own sub-controls, each with its own threshold metrics (Low Risk, Medium Risk, or High Risk). For example, the first control states that we should have an inventory of authorized and unauthorized devices. Its first sub-control requires us to deploy an "automated" asset inventory discovery tool, and the corresponding metric is how many "unauthorized" devices are present on our network at a given time. If that number is between 0-1%, it's considered Low Risk; between 1-4% is Medium Risk; anything above 4% is High Risk, and appropriate actions should be taken to mitigate such risks!
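The threshold logic described above is simple enough to express directly. The percentage bands mirror the sub-control metric; the device counts in the example are made up.

```java
// Sketch of the CSC #1 threshold metric: classify the percentage of
// unauthorized devices discovered on the network into a risk band.
public class CscMetric {
    public static String riskLevel(double unauthorizedPct) {
        if (unauthorizedPct <= 1.0) return "Low Risk";
        if (unauthorizedPct <= 4.0) return "Medium Risk";
        return "High Risk";
    }

    public static void main(String[] args) {
        // e.g. 3 unauthorized devices out of 120 discovered = 2.5%
        System.out.println(riskLevel(100.0 * 3 / 120));
    }
}
```

The 1% and 4% boundaries are treated as inclusive of the lower band here, which the control text leaves ambiguous.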

CIS Security Benchmarks

In an earlier post we talked about CIS (the Center for Internet Security), and now we will take a deep dive into one of the areas CIS is focused on: Security Benchmarks.



CIS Security Benchmarks are consensus-based best practices derived from industry, and they are completely vendor agnostic, so there is no need to worry if you are working with one vendor today and decide to move to another next week.

They cover multiple grounds for managing security in private or public organizations, mainly:

  • secure configurations benchmarks
    • These are the recommended technical settings for operating systems, middleware, software applications, and network devices. They also include some cloud-related benchmarks, such as the AWS Foundations Benchmark, which covers how to secure your AWS components, e.g. best practices for IAM, CloudTrail, CloudWatch, etc.
  • automated configuration assessment tools and content
    • CIS's Configuration Assessment Tool (CIS-CAT) analyzes and monitors the security status of information systems and the effectiveness of internal security controls and processes. The tool reports a target system's conformance with the recommended settings in the Security Benchmarks.
  • security metrics
    • CIS has identified a set of security metrics to watch: create data related to those metrics, identify their results, and present them effectively to stakeholders. Per CIS, there are twenty metrics to choose from, distributed across business functions such as Incident Management, Vulnerability Management, Patch Management, Configuration Management, Change Management, Application Security, and Financial metrics.
  • security software product certifications


CISO Mindmap – Business Enablement

While doing some research about the CISO function, I noticed a very good mind map created by Rafeeq Rehman.

While what he has come up with is a mind map, I will try to deconstruct it to elaborate on the various functions performed by a CISO.

Let’s begin:

  1. Business Enablement
  2. Security Operations
  3. Selling Infosec (internally)
  4. Compliance and Audit
  5. Security Architecture
  6. Project Delivery lifecycle
  7. Risk Management
  8. Governance
  9. Identity Management
  10. Budget
  11. HR and Legal
So why did I number them, and in this order?
I believe business enablement is the most important function of a CISO. If (s)he doesn't know the business in which (s)he operates, it will be very difficult to carry out the duties of a CISO. Consider a person coming from a technology background with no knowledge of the retail business. If that person is hired as a CISO just because (s)he knows the technology, that may not be a good deal. To become a successful CISO, one must know the business (s)he is involved in. To understand the security function, one must understand the business climate.

If this retail business has a requirement to store credit card information in its systems, the CISO's job is to make sure appropriate PCI DSS controls are in place so the data doesn't get into the wrong hands, while at the same time making sure that PCI DSS doesn't get in the way of enabling the business to accept credit card transactions. Yes, security is a requirement, but not at the cost of not doing business.

That's why I rate business enablement as a very important function of a CISO.

What are some of the ways a CISO can enable the business to adopt technology and still not get in its way?

  • Cloud Computing
  • Mobile technologies
  • Internet of things
  • Artificial Intelligence
  • Data Analytics
  • Crypto currencies / Blockchain
  • Mergers and Acquisitions
We will review each of these items in detail in the following blog posts.

TPM (Trusted Platform Module)

TPM, or Trusted Platform Module, as defined by the TCG (Trusted Computing Group), is a microcontroller used in laptops and now also in servers to ensure the integrity of the platform. The TPM can securely store artifacts used to authenticate the platform, such as passwords, certificates, or encryption keys. It can also store platform measurements that help ensure the platform remains trustworthy. Authentication (ensuring that the platform can prove it is what it claims to be) and attestation (a process helping to prove that a platform is trustworthy and has not been breached) are necessary steps to ensure safer computing in all environments.

The image above depicts the overall function of the TPM module. A standard use case I have seen is ensuring a secure boot process for servers. Secure boot validates the code run at each step in the process and stops the boot if the code is incorrect. The first step is to measure each piece of code before it is run. In this context, a measurement is effectively a SHA-1 hash of the code, taken before it is executed. The hash is stored in a platform configuration register (PCR) in the TPM.

TPM 1.2 supports only the SHA-1 algorithm.

Each TPM has at least 24 PCRs. The TCG Generic Server Specification, v1.0, March 2005, defines the PCR assignments for boot-time integrity measurements. The table below shows a typical PCR configuration. The context indicates if the values are determined based on the node hardware (firmware) or the software provisioned onto the node. Some values are influenced by firmware versions, disk sizes, and other low-level information.

Therefore, it is important to have good practices in place around configuration management to ensure that each system deployed is configured exactly as desired.

Register | What is measured | Context
PCR-00 | Core Root of Trust Measurement (CRTM), BIOS code, Host platform extensions | Hardware
PCR-01 | Host platform configuration | Hardware
PCR-02 | Option ROM code | Hardware
PCR-03 | Option ROM configuration and data | Hardware
PCR-04 | Initial Program Loader (IPL) code, e.g. the master boot record | Software
PCR-05 | IPL code configuration and data | Software
PCR-06 | State transition and wake events | Software
PCR-07 | Host platform manufacturer control | Software
PCR-08 | Platform specific, often kernel, kernel extensions, and drivers | Software
PCR-09 | Platform specific, often initramfs | Software
PCR-10 to PCR-23 | Platform specific | Software
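The measure-then-extend flow described above can be simulated with plain SHA-1. This is a sketch of the TPM 1.2 semantics (PCR_new = SHA-1(PCR_old || SHA-1(code))), not real TPM hardware access.

```java
import java.security.MessageDigest;

// Simulation of the TPM 1.2 "extend" operation: a PCR can only be extended,
// never set directly, so its final value depends on every measurement.
public class PcrExtend {
    public static byte[] extend(byte[] pcr, byte[] code) {
        try {
            MessageDigest sha1 = MessageDigest.getInstance("SHA-1");
            byte[] measurement = sha1.digest(code); // measure the code first
            sha1.reset();
            sha1.update(pcr);                       // fold it into the old PCR value
            sha1.update(measurement);
            return sha1.digest();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        byte[] pcr = new byte[20];                  // PCRs are zeroed at boot
        pcr = extend(pcr, "BIOS code".getBytes());
        pcr = extend(pcr, "boot loader".getBytes());
        // Any change to measured code yields a completely different final value.
        System.out.println(pcr.length == 20);
    }
}
```

This is why remote attestation works: the verifier can recompute the expected chain of measurements and compare it to the signed PCR value.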

So there are very good use cases for TPM to ensure secure boot and hardware integrity. Who is using it? Many institutions that run private clouds use TPM chipsets on their servers, while many public clouds do not support TPM. Why? That's a mystery!

Log Management

What are available options for Log Management?

There are logs everywhere: systems, applications, users, devices, thermostats, refrigerators, microwaves, you name it. As your deployment grows, your complexity increases. When you need to analyze a situation or an outage, logs are your lifesaver.
There are tons of tools available: open source, pay-per-use, and a few others. Let's take a look at some of them here:

What different tools/frameworks are available to store and analyze these logs, in real time if possible, or for after-the-fact analysis?


Splunk is powerful log analysis software, with the choice of running in an enterprise data center or over the cloud.

1. Splunk Enterprise: Search, monitor and analyze any machine data for powerful new insights.

2. Splunk Cloud: This provides Splunk Enterprise and all its features as SaaS over the cloud.

3. Splunk Light: A miniature-scale Splunk Enterprise; log search and analysis for small IT environments.

4. Hunk: Hunk provides the power to rapidly detect patterns and find anomalies across petabytes of raw data in Hadoop without the need to move or replicate data.

Apache Flume: 

Flume is a distributed, reliable, and available service for efficiently collecting, aggregating, and moving large amounts of log data. It has a simple and flexible architecture based on streaming data flows. It is robust and fault tolerant with tunable reliability mechanisms and many failover and recovery mechanisms. It uses a simple extensible data model that allows for online analytic application. 

Flume deploys as one or more agents, each contained within its own instance of the JVM (Java Virtual Machine). An agent has three components: sources, sinks, and channels, and must have at least one of each in order to run. Sources collect incoming data as events, sinks write events out, and channels provide a queue connecting source and sink. Flume allows Hadoop users to ingest high-volume streaming data directly into HDFS for storage.
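The source/channel/sink wiring is expressed in a simple properties file. Below is a minimal sketch of one agent; the agent name, the netcat source, and the HDFS path are illustrative assumptions, not values from any real deployment.

```properties
# Single Flume agent "agent1": a netcat source feeding an HDFS sink
# through an in-memory channel.
agent1.sources = src1
agent1.channels = ch1
agent1.sinks = sink1

# Source: listen for newline-separated events on a TCP port
agent1.sources.src1.type = netcat
agent1.sources.src1.bind = localhost
agent1.sources.src1.port = 44444
agent1.sources.src1.channels = ch1

# Channel: in-memory queue connecting source and sink
agent1.channels.ch1.type = memory
agent1.channels.ch1.capacity = 1000

# Sink: write events into HDFS
agent1.sinks.sink1.type = hdfs
agent1.sinks.sink1.hdfs.path = hdfs://namenode:8020/flume/events
agent1.sinks.sink1.channel = ch1
```

Swapping the memory channel for a file channel trades throughput for durability, which is the "tunable reliability" mentioned above.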


Apache Kafka:

Apache Kafka is publish-subscribe messaging rethought as a distributed commit log. Kafka is fast, scalable, durable, and distributed by design. It started as a LinkedIn project, was later open-sourced, and is now a top-level Apache open source project. Many companies have deployed Kafka in their infrastructure.

Kafka is a distributed, partitioned, replicated commit log service. It provides the functionality of a messaging system, but with a unique design.

  • Kafka maintains feeds of messages in categories called topics.
  • We’ll call processes that publish messages to a Kafka topic producers.
  • We’ll call processes that subscribe to topics and process the feed of published messages consumers.
  • Kafka is run as a cluster comprising one or more servers, each of which is called a broker.

So, at a high level, producers send messages over the network to the Kafka cluster which in turn serves them up to consumers like this:


Kafka has a good ecosystem surrounding the main product. With a wide range of choices to select from, it might be a good "free" version of a log management tool. For large system deployments, Kafka can act as a broker with multiple publishers; syslog-ng (with an agent running on each system) or Fluentd (again, with fluentd agents running on nodes and a plugin for Kafka) may serve the purpose of log collection. With the log4j appender, applications that use the log4j framework can feed it seamlessly. Once you have logs ingested via these subsystems, searching them can be cumbersome. With Kafka, there are some alternatives: you can dump the data into HDFS and run a Hive query against it, and voila, you get your analysis.

Still, there is some work to be done in terms of how easily someone can retrieve the data, for example via a Kibana dashboard.


When we are talking about logs, how can we not mention the ELK stack? When I was introduced to the ELK stack, it was presented as an open source Splunk alternative. I agree it has the feature set to compete with the core Splunk product, and with the right sizing (think small or medium), we don't need Splunk at all; the ELK stack might be good enough. Though in recent usage, we have found some scalability issues when we reach a few hundred gigs of logs per day.

One good feature I like about the ELK stack is that it's all-in-one: I have my log aggregator, search indexer, and dashboard within one suite of applications.

With so many choices, it becomes difficult to rely on one or the other. If someone has enough money to spend, Splunk might be the right choice, but if someone can throw a developer at it, either the ELK stack or Kafka, depending on the scale at which they are growing, might be the better option.

Amazon Web Services (AWS) Risk and Compliance

This is a summary of AWS’s Risk and Compliance White Paper
AWS publishes a SOC 1 report, formerly known as the Statement on Auditing Standards (SAS) 70, Service Organization report, a widely recognized auditing standard developed by the AICPA (American Institute of Certified Public Accountants).
A SOC 1 audit is an in-depth audit of the design and operating effectiveness of AWS's defined control objectives and control activities.
Type II means that each of the controls described in the report is not only evaluated for adequacy of design but also tested for operating effectiveness by the external auditor.
With ISO 27001 certification, AWS complies with a broad, comprehensive security standard and follows best practices in maintaining a secure environment.
With the PCI Data Security Standard (PCI DSS), AWS complies with a set of controls important to companies that handle credit card information.
Through its compliance with FISMA standards, AWS meets a wide range of specific control requirements of US government agencies.
Risk Management:
AWS management has developed a strategic business plan that includes risk identification and the implementation of controls to mitigate and manage risks. Based on my understanding, AWS management re-evaluates those plans at least twice a year.
Also, the AWS compliance team has adopted various information security and compliance frameworks, including but not limited to COBIT, ISO 27001/27002, the AICPA Trust Services Principles, NIST 800-53, and PCI DSS v3.1.
Additionally, AWS regularly scans all its Internet-facing services for possible vulnerabilities and notifies the parties involved in remediation. External penetration tests (VA tests) are also performed by reputable independent companies, and reports are shared with AWS management.
FedRAMP: AWS is a Federal Risk and Authorization Management Program (FedRAMP) compliant cloud service provider.
FIPS 140-2: The Federal Information Processing Standard (FIPS) Publication 140-2 is a US government security standard that specifies the security requirements for cryptographic modules protecting sensitive information. AWS operates its GovCloud (US) with FIPS 140-2 validated hardware.
To allow US government agencies to comply with FISMA (the Federal Information Security Management Act), AWS infrastructure has been evaluated by independent assessors for a variety of government systems as part of their system owner's approval process.
Many agencies have successfully achieved security authorization for systems hosted in AWS in accordance with the Risk Management Framework (RMF) process defined in NIST 800-37 and the DoD Information Assurance Certification and Accreditation Process (DIACAP).
By providing a secure environment to process, maintain, and store protected health information, AWS enables entities that need to comply with the US Health Insurance Portability and Accountability Act (HIPAA) to work in the AWS cloud.
ISO 9001:
AWS has achieved ISO 9001 certification to directly support customers who develop, migrate, and operate their quality-controlled IT systems in the AWS cloud. This allows customers to use AWS's compliance report as evidence for their own ISO 9001 programs and industry-specific quality programs such as ISO/TS 16949 in the automotive sector, ISO 13485 in medical devices, GxP in life sciences, and AS9100 in aerospace.
ISO 27001:
AWS has achieved ISO 27001 certification of their Information Security Management Systems (ISMS) covering AWS infrastructure, data centers, and multiple cloud services. 
AWS GovCloud (US) supports US International Traffic in Arms Regulations (ITAR) compliance. Companies subject to ITAR export regulations must control unintended exports by restricting access to protected data to US persons and restricting the physical location of that data to the US. AWS GovCloud provides such facilities and meets the associated compliance requirements.
PCI DSS Level 1:
AWS is Level 1 compliant under PCI DSS (the Payment Card Industry Data Security Standard). Based on the February 2013 guidelines of the PCI Security Standards Council, AWS incorporated those guidelines into the AWS PCI Compliance Package for customers. The package includes the AWS PCI Attestation of Compliance (AoC), which shows that AWS has been successfully validated against the standard applicable to a Level 1 service provider under PCI DSS version 3.1.
AWS publishes the Service Organization Controls 1 (SOC 1), Type II report. The audit for this report is conducted in accordance with AICPA AT 801 (formerly SSAE 16) and International Standard on Assurance Engagements No. 3402 (ISAE 3402).
This dual report is intended to meet a broad range of financial auditing requirements of US and international bodies.
In addition to SOC 1, AWS also publishes a SOC 2, Type II report that expands the evaluation of controls to the criteria set forth by the AICPA Trust Services Principles. These principles define leading-practice controls relevant to security, availability, processing integrity, confidentiality, and privacy applicable to service organizations such as AWS.
The SOC 3 report is a publicly available summary of the AWS SOC 2 report. It includes the external auditor's opinion on the operation of controls (based on the AICPA Security Trust Principles included in the SOC 2 report), the assertion from AWS management regarding the effectiveness of controls, and an overview of AWS infrastructure and services.

Amazon Web Services (AWS) Security – an outside view

Shared Responsibility Model:
Secure SDLC
     – static code analysis run as a part of build process
     – threat modeling
     – Google authenticator/RSA
MFA for AWS service API
     – terminating EC2 instance
     – sensitive data in S3 bucket
Security of Access Keys
     – must be secured 
     – use IAM roles for EC2 management
Enable CloudTrail
Run Trusted Advisor
– encrypted file systems
– disabling password-only access to your guests, 
– utilizing some form of multi-factor authentication to gain access to instances (or at a minimum certificate-based SSH Version 2 access).
– privilege escalation mechanism with logging on a per-user basis.
– utilize certificate-based SSHv2 to access the virtual instance,
– disable remote root login,
– use command-line logging, 
– use ‘sudo’ for privilege escalation.
– generate your own key pairs in order
– ports which are required
– certain CIDR blocks
– think about IPTables
– encrypt volume
– use DoD methods to wipe volume before deleting
– any particular cipher to use? for PCI/SOX compliance?
– use Server Order preference
– use of Perfect Forward Secrecy
– VPC security group
– IP range, Internet gateway, virtual private gateway
– Need Secret Access Key of the account
– To consider subnet and route tables
– To consider firewall/security groups
– Network ACLs:  inbound/outbound from a subnet within VPC
– ENI: Elastic Network Interface for management network / security appliance on network
– By default, you can deliver content to viewers over HTTPS using the default CloudFront domain name. If you want to deliver your content over HTTPS using your own domain name and your own SSL certificate, you can use SNI Custom SSL or Dedicated IP Custom SSL.
– With Server Name Identification (SNI) Custom SSL, CloudFront relies on the SNI extension of the TLS protocol,
– With Dedicated IP Custom SSL, CloudFront dedicates IP addresses to your SSL certificate at each CloudFront edge location so that CloudFront can associate the incoming requests with the proper SSL certificate.
S3 security:
– Use IAM policies
– Use of ACLs to grant read/write access to users of other AWS accounts
– Bucket policies: add or deny permissions for a single object in a bucket
– Restrict access to specific resources using POLICY KEYS, based on request time (Date condition), whether the request was sent using SSL (Boolean condition), the requester's IP address (IP condition), or the requester's client (String condition)
– Use SSL endpoints for S3, via the Internet or via EC2
– Use a client-side encryption library
– Use server-side encryption (SSE), S3-managed encryption
– S3 metadata is not encrypted
– Move S3 data to Glacier archival at a regular frequency
– S3 delete control via MFA
– CORS (cross-origin resource sharing) allows S3 objects to be referenced in HTML pages; otherwise such cross-origin requests are blocked
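The policy keys above map to Condition blocks in a bucket policy. Below is a hedged sketch of such a policy; the account ID, user name, bucket name, IP range, and date are placeholders, not real values.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowGetOverSslFromOfficeUntilYearEnd",
      "Effect": "Allow",
      "Principal": {"AWS": "arn:aws:iam::111122223333:user/analyst"},
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::example-bucket/*",
      "Condition": {
        "Bool": {"aws:SecureTransport": "true"},
        "IpAddress": {"aws:SourceIp": "203.0.113.0/24"},
        "DateLessThan": {"aws:CurrentTime": "2025-12-31T23:59:59Z"}
      }
    }
  ]
}
```

All conditions in a statement must hold for access to be granted, which is how the Boolean, IP, and date policy keys combine.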
AWS DynamoDB:
– DynamoDB resources and API permissions via IAM
– Database-level permissions that allow/deny at the item (row) and attribute (column) level
– Fine-grained access control allows you to specify, via policy, under what circumstances a user/application can access a DynamoDB table.
– An IAM policy can restrict access to individual items in a table, attributes in those items, or both.
– Allow web identity federation instead of IAM users via AWS STS (Security Token Service)
– Each request sent to DynamoDB must contain an HMAC-SHA256 signature in the header
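The HMAC-SHA256 computation behind that signature can be illustrated in isolation. This is only the raw MAC step, not the full AWS Signature Version 4 canonicalization and key-derivation process, and the key and string-to-sign below are made up.

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;

// Sketch of the raw HMAC-SHA256 computation underlying AWS request signing.
public class RequestSigner {
    public static String hmacSha256Hex(byte[] key, String data) {
        try {
            Mac mac = Mac.getInstance("HmacSHA256");
            mac.init(new SecretKeySpec(key, "HmacSHA256"));
            byte[] tag = mac.doFinal(data.getBytes(StandardCharsets.UTF_8));
            StringBuilder hex = new StringBuilder();
            for (byte b : tag) hex.append(String.format("%02x", b));
            return hex.toString();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        // Hypothetical key and string-to-sign, for illustration only.
        System.out.println(hmacSha256Hex("demo-secret-key".getBytes(), "POST /"));
    }
}
```

Because only the key holder can produce a valid tag, the service can both authenticate the caller and detect tampering with the request.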
Amazon RDS:
– Access control: a master user account and password; create additional user accounts; a DB security group (similar to an EC2 security group) that defaults to "deny all". Access can be granted by opening the database port in the firewall to a network IP range or an EC2 security group.
– More granular access can be granted using IAM.
– Network isolation in multi-AZ deployments using DB subnet groups.
– An RDS instance in a VPC can be accessed from EC2 instances outside the VPC using an SSH bastion host and an Internet gateway.
– Encryption in transit is available for RDS: an SSL certificate is installed on MySQL and SQL Server instances, so the app-to-DB connection is secure.
– Encryption at rest is supported via TDE (Transparent Data Encryption) for SQL Server and Oracle Enterprise Edition.
– Encryption at rest is not supported natively for MySQL; the application must send encrypted data if it wants data-at-rest encryption.
– Point-in-time recovery via automated backups, with DB logs and transaction logs stored for a user-specified retention period.
– Restore up to the last 5 minutes; backups can be stored for up to 35 days.
– During backup, storage I/O is suspended, but with a multi-AZ deployment the backup is done on the standby, so there is no performance impact.
AWS RedShift:
– The cluster is closed to everyone by default.
– Utilize security groups for network access to the cluster.
– Database user permissions are granted per cluster rather than per table. A user can see the data in table rows generated by his own activities; rows generated by others are not visible to him.
– The user who creates an object is its owner, and only the owner/superuser can query, grant, or modify permissions on the object.
– Redshift data is spread across multiple compute nodes in a cluster. Snapshot backups are uploaded to S3 at a user-defined frequency.
– Four-tier Key Based architecture:
  • Data Encryption Keys: Encrypts Data Blocks in Cluster
  • Database Key: Encrypts Data Encryption Keys in Cluster
  • Cluster Key: Encrypts Database Keys in Cluster. Use AWS or HSM to store the cluster key.
  • Master Key: Encrypts Cluster Key, if stored in AWS. Encrypts the Cluster-Key-Encrypted-Database-Key if Cluster key is in HSM.
– Redshift uses hardware-accelerated SSL.
– Offers strong cipher suites that use the Elliptic Curve Diffie-Hellman Ephemeral (ECDHE) protocol, allowing PFS (Perfect Forward Secrecy).
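One link of such a key hierarchy can be sketched with the JDK's standard AES Key Wrap cipher: a higher-tier key encrypts ("wraps") a lower-tier key. This illustrates the wrap/unwrap idea only, not Redshift's actual implementation.

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;

// Sketch of one tier of a key hierarchy: a cluster key wraps a database key.
public class KeyHierarchy {
    public static byte[] wrap(SecretKey kek, SecretKey lowerKey) {
        try {
            Cipher c = Cipher.getInstance("AESWrap");
            c.init(Cipher.WRAP_MODE, kek);
            return c.wrap(lowerKey);
        } catch (Exception e) { throw new RuntimeException(e); }
    }

    public static SecretKey unwrap(SecretKey kek, byte[] wrapped) {
        try {
            Cipher c = Cipher.getInstance("AESWrap");
            c.init(Cipher.UNWRAP_MODE, kek);
            return (SecretKey) c.unwrap(wrapped, "AES", Cipher.SECRET_KEY);
        } catch (Exception e) { throw new RuntimeException(e); }
    }

    public static boolean roundTripOk() {
        try {
            KeyGenerator kg = KeyGenerator.getInstance("AES");
            kg.init(128);
            SecretKey clusterKey = kg.generateKey();   // higher tier (KEK)
            SecretKey databaseKey = kg.generateKey();  // lower tier
            SecretKey recovered = unwrap(clusterKey, wrap(clusterKey, databaseKey));
            return java.util.Arrays.equals(databaseKey.getEncoded(), recovered.getEncoded());
        } catch (Exception e) { throw new RuntimeException(e); }
    }

    public static void main(String[] args) {
        System.out.println(roundTripOk());
    }
}
```

Rotating a higher-tier key then only requires re-wrapping the keys beneath it, not re-encrypting the data blocks themselves.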
AWS ElastiCache:
– Cache Security group like firewall
– By default, network access is turned off
– Use the Authorize Cache Security Group Ingress API/CLI to authorize an EC2 security group (which in turn allows its EC2 instances)
– Backup/snapshot of an ElastiCache Redis cluster: point-in-time or scheduled backup.
AWS CloudSearch:
– Access to a search domain's endpoint is restricted by IP address so that only authorized hosts can submit documents and send search requests.
– IP address authorization is used only to control access to the document and search endpoints.
– Access is based on the AWS account/IAM user, and once authenticated, the user has full access to all user operations.
AWS SQS:
– Default access to an individual queue is restricted to the AWS account that created it.
– Data stored in SQS is not encrypted by AWS but can be encrypted/decrypted by the application.
AWS SNS:
– Amazon SNS delivers notifications to clients using a “push” mechanism that eliminates the need to periodically check or “poll” for new information and updates. SNS can be leveraged to build highly reliable, event-driven workflows and messaging applications without complex middleware and application management. Potential uses include monitoring applications, workflow systems, time-sensitive information updates, and mobile applications.
– SNS provides access control mechanisms so that topics and messages are secured against unauthorized access.
– Topic owners can set policies on who can publish or subscribe to a topic.
– Access is granted based on an AWS account/IAM user.
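The topic-owner policy model described above can be sketched as a standard JSON policy document granting another account permission to publish. The account IDs and topic ARN below are placeholders, not real resources.

```python
import json

# Topic policy: the owner of example-topic (account 444455556666)
# allows account 111122223333 to publish to it.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowPartnerPublish",
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::111122223333:root"},
        "Action": "SNS:Publish",
        "Resource": "arn:aws:sns:us-east-1:444455556666:example-topic",
    }],
}
print(json.dumps(policy, indent=2))
```

A subscribe grant looks the same with `SNS:Subscribe` as the action; without such a statement, only the topic owner's account has access.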
AWS SWF:
– Actors that participate in the execution of a workflow – deciders, activity workers, workflow administrators – must be IAM users under the AWS account that owns the SWF resources. Other AWS accounts cannot be granted access to SWF workflows.
AWS SES:
– SES requires users to verify their email address or domain in order to confirm ownership and to prevent others from using it. To verify a domain, SES requires the sender to publish a DNS record that SES supplies as proof of control over the domain.
– SES uses content-filtering technologies to help detect and block messages containing viruses or malware before they can be sent.
– SES maintains complaint feedback loops with major ISPs.
– SES supports authentication mechanisms such as Sender Policy Framework (SPF) and DomainKeys Identified Mail (DKIM). When you authenticate an email, you provide evidence to ISPs that you own the domain.
– SES over SMTP requires the connection to be encrypted using TLS; supported mechanisms are STARTTLS and TLS Wrapper.
– SES over HTTP is protected by TLS through the SES HTTPS endpoint.
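The STARTTLS path can be sketched with Python's standard `smtplib`: connect in plaintext on the submission port, then upgrade to TLS before authenticating. The hostname follows the regional SES SMTP endpoint pattern but is an assumption here, and the credentials are placeholders; the function is defined but not executed.

```python
import smtplib
import ssl

SMTP_HOST = "email-smtp.us-east-1.amazonaws.com"  # assumed regional endpoint
SMTP_PORT = 587

# Require certificate validation and a modern TLS floor (our choice for
# this sketch, not an SES-mandated minimum).
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2

def send_via_ses(sender, recipient, message, smtp_user, smtp_password):
    """Sketch: STARTTLS upgrade happens before credentials are sent."""
    with smtplib.SMTP(SMTP_HOST, SMTP_PORT) as server:
        server.starttls(context=context)     # upgrade the connection to TLS
        server.login(smtp_user, smtp_password)
        server.sendmail(sender, recipient, message)
```

The ordering matters: `starttls()` before `login()` ensures the SMTP credentials never cross the wire unencrypted.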
AWS Kinesis:
– Logical access to Kinesis is via AWS IAM, controlling which Kinesis operations users have permission to perform. 
– By associating an EC2 instance with an IAM role, the credentials supplied by the role are available to applications on that instance, avoiding the use of long-term AWS security credentials.
AWS IAM:
– Allows you to create multiple users and manage permissions for each user within an AWS account.
– User permissions must be granted explicitly.
– IAM is integrated with AWS Marketplace to control software subscriptions, usage, and cost.
– A role uses temporary security credentials to delegate access to a user/service that normally doesn’t have access to AWS resources.
– Temporary security credentials have a short lifespan (default 12 hours) and can’t be reused after expiry.
– Temporary security credentials consist of a security token, an Access Key ID, and a Secret Access Key.
– Useful in situations such as:
  • Federated (non-AWS) User access:
    • Identity federation between AWS and users in a corporate identity and authorization system.
    • Using SAML, AWS acts as the Service Provider and gives users federated Single Sign-On (SSO) to the AWS Management Console or federated access to call AWS APIs.
  • Cross-Account Access: For organizations that use multiple AWS accounts to manage their resources, a role can give users who have permissions in one account access to resources in another account.
  • Applications running on an EC2 instance that need to access AWS resources: If applications on an EC2 instance need to call S3 or DynamoDB, the instance can use a role, which eases management of a large fleet of instances with auto scaling.
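The cross-account pattern above rests on two documents attached to the role in the target account: a trust policy naming who may assume the role, and a permissions policy stating what the assumed session may do. Both are sketched below with placeholder account IDs and resources.

```python
import json

# Trust policy on a role in account B: account A (111122223333) is the
# principal allowed to call sts:AssumeRole on it.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::111122223333:root"},
        "Action": "sts:AssumeRole",
    }],
}

# Permissions policy on the same role: the temporary credentials issued
# on AssumeRole may only read objects from this one bucket.
permissions_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject"],
        "Resource": "arn:aws:s3:::example-bucket/*",
    }],
}
print(json.dumps(trust_policy, indent=2))
```

Assuming the role returns the short-lived credential triple noted earlier (security token, Access Key ID, Secret Access Key), so account A's users never hold long-term keys for account B.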
AWS CloudHSM:
– A dedicated Hardware Security Module (HSM) appliance that provides secure cryptographic key storage and operations within an intrusion-resistant, tamper-evident device.
– Variety of use cases, such as database encryption, Digital Rights Management (DRM), Public Key Infrastructure (PKI), authentication and authorization, document signing, and transaction processing.
– Supports some of the strongest cryptographic algorithms available – AES, RSA, ECC, etc.
– Connections to CloudHSM are available from EC2 and VPC via SSL/TLS using two-way digital certificate authentication.
– A cryptographic partition is a logical and physical security boundary that restricts access to keys, so only the owner of the keys can control them and perform operations on the HSM.
– CloudHSM’s tamper detection erases the cryptographic key material and generates event logs if tampering (physical or logical) is detected. After three unsuccessful attempts to access an HSM partition with admin credentials, the HSM appliance erases that partition.
AWS CloudTrail:
– When enabled, CloudTrail delivers events to an S3 bucket approximately every 5 minutes. Data captured: information about every API call and the origin of that call (console, CLI, or SDK); it also captures console sign-in events, creating a log record every time the AWS account owner, a federated user, or an IAM user signs in.
– CloudTrail access can be limited to only certain users via IAM.
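A delivered log file is a JSON document with a `Records` array, so the sign-in events mentioned above can be pulled out with a few lines of standard-library code. The record below is synthetic: the field names follow the CloudTrail record shape, but every value is invented for illustration.

```python
import json

# Synthetic CloudTrail log file: one console sign-in and one API call.
log_file = json.dumps({
    "Records": [
        {"eventTime": "2014-10-01T12:00:00Z",
         "eventSource": "signin.amazonaws.com",
         "eventName": "ConsoleLogin",
         "awsRegion": "us-east-1",
         "sourceIPAddress": "198.51.100.7",
         "userIdentity": {"type": "IAMUser", "userName": "alice"}},
        {"eventTime": "2014-10-01T12:05:00Z",
         "eventSource": "ec2.amazonaws.com",
         "eventName": "RunInstances",
         "awsRegion": "us-east-1",
         "sourceIPAddress": "198.51.100.7",
         "userIdentity": {"type": "IAMUser", "userName": "alice"}},
    ]
})

# Filter for console sign-in events, which CloudTrail logs alongside
# ordinary API calls.
records = json.loads(log_file)["Records"]
sign_ins = [r for r in records if r["eventName"] == "ConsoleLogin"]
print(len(sign_ins))
```

The same filter-by-field approach works for any audit question (calls from an unexpected `sourceIPAddress`, root-account activity, and so on) once the files are pulled from the S3 bucket.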

XEN security bug may be forcing AWS, RackSpace to reboot their servers

The Xen security team reported a bug whereby a buggy or malicious HVM guest can crash the host or read data belonging to other guests or to the hypervisor itself. This poses a significant security risk in public cloud environments – such as Amazon Web Services or Rackspace Public Cloud – where Xen is the hypervisor of choice for guest VMs.

That is probably the reason for the AWS reboots reported across regions last week.