Amazon Web Services (AWS)

Índex / Index

General

Casos d'ús / Use cases

  • Use cases
  • Digital Media
    • Digital Media in the Cloud: Best Practices for Processing Media on AWS (YouTube)
      1. Media processing in AWS (AWS)
        • Cloud transcoding architecture
          • Phase 1:
            • Add transcoder instances to EC2
            • Use S3 to store file-based sources
            • Use S3 to store file-based outputs
            • Use CloudFront to distribute output streams
          • Phase 2:
            • Use acceleration and/or Direct Connect for ingest
            • Use Amazon Virtual Private Cloud to ringfence
            • Use EC2 Reserved Instances
            • Use EC2 Spot Instances
          • Phase 3:
            • Create a fleet of transcode workers
            • Use your on-premises workflow controller to orchestrate using SWF
            • Use SQS to create a cloud transcode queue (see the sketch after this list)
            • Use SNS for notifications
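          • Example: a minimal sketch of driving such a cloud transcode queue with the aws CLI (queue name and message body are illustrative assumptions, not from the talk):
            • # create the transcode queue
              aws sqs create-queue --queue-name transcode-jobs
              # get the queue URL
              queue_url=$(aws sqs get-queue-url --queue-name transcode-jobs --query QueueUrl --output text)
              # enqueue a transcode job for a worker to pick up
              aws sqs send-message --queue-url ${queue_url} --message-body '{"source":"s3://my-bucket/in/video.mov","output":"s3://my-bucket/out/"}'
              # a transcode worker long-polls the queue, processes the job and then deletes the message
              aws sqs receive-message --queue-url ${queue_url} --wait-time-seconds 20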
        • Securing content (14:00)
          • Local encryption: encrypt and maintain your own keys
          • Network encryption: use secured network transfer (SSL, VPC)
          • At-rest encryption: S3 encrypts data at rest using AES-256
          • DRM: integrate certificate-based DRM through third parties
          • Watermarking: integrate digital watermarking through third parties
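          • Example: a minimal sketch of requesting at-rest encryption when uploading to S3 with the aws CLI (bucket and key names are placeholders):
            • # upload an object with server-side encryption (SSE-S3, AES-256)
              aws s3 cp mezzanine.mov s3://my-bucket/sources/mezzanine.mov --sse AES256
              # verify the encryption setting on the stored object
              aws s3api head-object --bucket my-bucket --key sources/mezzanine.mov --query ServerSideEncryption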
      2. Best practices for hybrid transcoding workflows (Elemental Technologies)
      3. Cloud-based content management (Ericsson)
      4. High performance media processing (Intel)

Cloudformation

  • AWS CloudFormation Product Details
  • User guide
  • 19 Best Practices for Creating Amazon CloudFormation Templates
  • Designer
  • Preserve resources after stack destruction
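    • This is done with the DeletionPolicy attribute; a minimal sketch (resource names are placeholders):
      • "Resources": {
              "MyBucket": {
                  "Type": "AWS::S3::Bucket",
                  "DeletionPolicy": "Retain"
              },
              "MyVolume": {
                  "Type": "AWS::EC2::Volume",
                  "DeletionPolicy": "Snapshot",
                  "Properties": {
                      "Size": "10",
                      "AvailabilityZone": "eu-west-1a"
                  }
              }
          }
      • "Retain" keeps the resource when the stack is deleted; "Snapshot" (EBS volumes, RDS instances...) creates a snapshot before deleting; the default is "Delete"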
  • Info
  • CLI Cloudformation
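    • Typical stack lifecycle from the command line; a minimal sketch (stack and template names are placeholders):
      • # validate the template
        aws cloudformation validate-template --template-body file://single_ec2.json
        # create the stack (add --parameters ParameterKey=...,ParameterValue=... when the template declares Parameters)
        aws cloudformation create-stack --stack-name my-stack --template-body file://single_ec2.json
        # check status and events
        aws cloudformation describe-stacks --stack-name my-stack --query "Stacks[].StackStatus"
        aws cloudformation describe-stack-events --stack-name my-stack
        # update and, when no longer needed, delete the stack
        aws cloudformation update-stack --stack-name my-stack --template-body file://single_ec2.json
        aws cloudformation delete-stack --stack-name my-stack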
  • EC2
    • Instance
    • Volume
    • VPC
    • single EC2 instance
      • single_ec2.json
        • {
              "Description": "Single EC2 instance",
              "AWSTemplateFormatVersion": "2010-09-09",
              "Metadata": {},
              "Resources": {
                  "singleEC2": {
                      "Type": "AWS::EC2::Instance",
                      "Properties": {
                          "ImageId":"ami-xxxxxxxx",
                          "KeyName":"my_key_pair",
                          "InstanceType":"t2.micro"
                      }
                  }
              }
          }
    • single ec2 instance with an extra volume
      • single_ec2_volume.json
        • "Resources": {
          ...
              "EC2Instance": {
                  "Type": "AWS::EC2::Instance",
                  "Properties": {
                  "ImageId":{"Ref" : "ImageId"},
                  "SecurityGroups" : [ { "Ref" : "InstanceSecurityGroup" } ],
                  "KeyName":"my_server_key",
                  "InstanceType":{"Ref" : "InstanceType"},
                  "UserData": {
                      "Fn::Base64": {
                      "Fn::Join" : [ "", [
                          "#!/bin/bash -xe\n",
                          "sudo mkfs -t xfs /dev/xvdh
          \n",
                          "sudo mkdir /mnt/vol
          \n",
                          "sudo chmod 777 /mnt/vol
          \n",
                          "sudo mount /dev/xvdh /mnt/vol
          \n",
                      ] ]
                      }
                  },
                  "Tags":[{"Key":"Name","Value":{"Ref":"BaseName"}}]
                  }
              },

          ...

              "NewVolume" : {
                  "Type" : "AWS::EC2::Volume",
                  "Properties" : {
                      "Size" : "100",
                      "AvailabilityZone" : { "Fn::GetAtt" : [ "EC2Instance", "AvailabilityZone" ]}
                  }
              },
             
              "MountPoint" : {
                  "Type" : "AWS::EC2::VolumeAttachment",
                  "Properties" : {
                      "InstanceId" : { "Ref" : "EC2Instance" },
                      "VolumeId"  : { "Ref" : "NewVolume" },
                      "Device" : "/dev/xvdh"
                  }
              },
    • single EC2 with UserData and extra volume
      • Notes:
        • When using direct bash commands:
          • add "\n" at the end of each command
          • no need to call sudo
      • single_ec2_userdata_volume.json
        • {
              "Description": "Single EC2 instance with extra volume",
              "AWSTemplateFormatVersion": "2010-09-09",
              "Metadata": {},
              "Parameters" : {
              "InstanceType" : {
                  "Description" : "EC2 instance type",
                  "Type" : "String",
                  "Default" : "t2.micro",
                  "AllowedValues" : [ "t1.micro", "t2.micro", "t2.small", "t2.medium", "m1.small", "m1.medium", "m1.large", "m1.xlarge", "m2.xlarge", "m2.2xlarge", "m2.4xlarge", "m3.medium", "m3.large", "m3.xlarge", "m3.2xlarge", "c1.medium", "c1.xlarge", "c3.large", "c3.xlarge", "c3.2xlarge", "c3.4xlarge", "c3.8xlarge", "c4.large", "c4.xlarge", "c4.2xlarge", "c4.4xlarge", "c4.8xlarge", "g2.2xlarge", "r3.large", "r3.xlarge", "r3.2xlarge", "r3.4xlarge", "r3.8xlarge", "i2.xlarge", "i2.2xlarge", "i2.4xlarge", "i2.8xlarge", "d2.xlarge", "d2.2xlarge", "d2.4xlarge", "d2.8xlarge", "hi1.4xlarge", "hs1.8xlarge", "cr1.8xlarge", "cc2.8xlarge", "cg1.4xlarge"]
                  ,
                  "ConstraintDescription" : "must be a valid EC2 instance type."
              },
             
              "HostedZone" : {
                  "Type" : "String",
                  "Description" : "The DNS name of an existing Amazon Route 53 hosted zone",
                  "AllowedPattern" : "(?!-)[a-zA-Z0-9-.]{1,63}(?<!-)",
                  "ConstraintDescription" : "must be a valid DNS zone name.",
                  "Default" : "example.net"
              },
              "ImageId" : {
                  "Type" : "String",
                  "Description" : "The image_id for the ec2 instance"
              },
              "NewVolumeSize" : {
                  "Type" : "String",
                  "Description" : "The size of the new volume (GB)",
                  "Default": "5"
              }
              },   
              "Resources": {

              "EC2Instance": {
                  "Type": "AWS::EC2::Instance",
                  "Properties": {
                  "ImageId":{"Ref" : "ImageId"},
                  "SecurityGroups" : [ { "Ref" : "InstanceSecurityGroup" } ],
                  "KeyName":"wct_streaming_server",
                  "InstanceType":{"Ref" : "InstanceType"},
                  "UserData": {
                      "Fn::Base64": {
                      "Fn::Join" : [ "", [
                          "#!/bin/bash -xe \n",
                          "while [ ! -e /dev/xvdh ]; do echo waiting for /dev/xvdh to attach; sleep 10; done \n",
                          "mkfs -t xfs /dev/xvdh \n",
                          "mkdir -p /mnt/vol1 \n",
                          "mount /dev/xvdh /mnt/vol1 \n",
                          "chmod 777 /mnt/vol1 \n"
                      ] ]
                      }
                  },
                  "Tags":[{"Key":"Name","Value":{"Ref":"BaseName"}}]
                  }
              },
            
              "NewVolume" : {
                  "Type" : "AWS::EC2::Volume",
                  "Properties" : {
                  "Size" : {"Ref" : "NewVolumeSize"},
                  "AvailabilityZone" : { "Fn::GetAtt" : [ "EC2Instance", "AvailabilityZone" ]}
                  }
              },
             
              "MountPoint" : {
                  "Type" : "AWS::EC2::VolumeAttachment",
                  "Properties" : {
                  "InstanceId" : { "Ref" : "EC2Instance" },
                  "VolumeId"  : { "Ref" : "NewVolume" },
                  "Device" : "/dev/xvdh"
                  }
              }
              }
          }
    • single EC2 entry with Route53
      • single_ec2_r53.json
        • {
              "Description": "Single EC2 instance",
              "AWSTemplateFormatVersion": "2010-09-09",
              "Metadata": {},
              "Parameters" : {
              "InstanceType" : {
                  "Description" : "EC2 instance type",
                  "Type" : "String",
                  "Default" : "m1.small",
                  "AllowedValues" : [ "t1.micro", "t2.micro", "t2.small", "t2.medium", "m1.small", "m1.medium", "m1.large", "m1.xlarge", "m2.xlarge", "m2.2xlarge", "m2.4xlarge", "m3.medium", "m3.large", "m3.xlarge", "m3.2xlarge", "c1.medium", "c1.xlarge", "c3.large", "c3.xlarge", "c3.2xlarge", "c3.4xlarge", "c3.8xlarge", "c4.large", "c4.xlarge", "c4.2xlarge", "c4.4xlarge", "c4.8xlarge", "g2.2xlarge", "r3.large", "r3.xlarge", "r3.2xlarge", "r3.4xlarge", "r3.8xlarge", "i2.xlarge", "i2.2xlarge", "i2.4xlarge", "i2.8xlarge", "d2.xlarge", "d2.2xlarge", "d2.4xlarge", "d2.8xlarge", "hi1.4xlarge", "hs1.8xlarge", "cr1.8xlarge", "cc2.8xlarge", "cg1.4xlarge"]
                  ,
                  "ConstraintDescription" : "must be a valid EC2 instance type."
              },
             
              "HostedZone" : {
                  "Type" : "String",
                  "Description" : "The DNS name of an existing Amazon Route 53 hosted zone",
                  "AllowedPattern" : "(?!-)[a-zA-Z0-9-.]{1,63}(?<!-)",
                  "ConstraintDescription" : "must be a valid DNS zone name."
              }
              },   
              "Resources": {
              "EC2Instance": {
                  "Type": "AWS::EC2::Instance",
                  "Properties": {
                  "ImageId":"ami-437da730",
                  "SecurityGroups" : [ { "Ref" : "InstanceSecurityGroup" } ],
                  "KeyName":"my_key_pair",
                  "InstanceType":"t2.micro"
                  }
              },
              "InstanceSecurityGroup" : {
                  "Type" : "AWS::EC2::SecurityGroup",
                  "Properties" : {
                  "GroupDescription" : "Enable SSH, HTTP, RTMP",
                  "SecurityGroupIngress" : [
                      {
                      "IpProtocol" : "tcp",
                      "FromPort" : "22",
                      "ToPort" : "22",
                      "CidrIp" : "0.0.0.0/0"
                      },
                      {
                      "IpProtocol" : "tcp",
                      "FromPort" : "80",
                      "ToPort" : "80",
                      "CidrIp" : "0.0.0.0/0"
                      },
                      {
                      "IpProtocol" : "tcp",
                      "FromPort" : "1935",
                      "ToPort" : "1935",
                      "CidrIp" : "0.0.0.0/0"
                      }
                  ]
                  }
              },
              "MyDNSRecord" : {
                  "Type" : "AWS::Route53::RecordSet",
                  "Properties" : {
                  "HostedZoneName" : { "Fn::Join" : [ "", [{"Ref" : "HostedZone"}, "." ]]},
                  "Comment" : "DNS name for my instance.",
                  "Name" : { "Fn::Join" : [ "", [{"Ref" : "EC2Instance"}, ".", {"Ref" : "AWS::Region"}, ".", {"Ref" : "HostedZone"} ,"."]]},
                  "Type" : "A",
                  "TTL" : "900",
                  "ResourceRecords" : [ { "Fn::GetAtt" : [ "EC2Instance", "PublicIp" ] } ]
                  }
              }   
              },
              "Outputs" : {
              "InstanceId" : {
                  "Description" : "InstanceId of the newly created EC2 instance",
                  "Value" : { "Ref" : "EC2Instance" }
              },
              "AZ" : {
                  "Description" : "Availability Zone of the newly created EC2 instance",
                  "Value" : { "Fn::GetAtt" : [ "EC2Instance", "AvailabilityZone" ] }
              },
              "PublicDNS" : {
                  "Description" : "Public DNSName of the newly created EC2 instance",
                  "Value" : { "Fn::GetAtt" : [ "EC2Instance", "PublicDnsName" ] }
              },
              "PublicIP" : {
                  "Description" : "Public IP address of the newly created EC2 instance",
                  "Value" : { "Fn::GetAtt" : [ "EC2Instance", "PublicIp" ] }
              },
              "DomainName" : {
                  "Description" : "Fully qualified domain name",
                  "Value" : { "Ref" : "MyDNSRecord" }
              }
              }
             
          }
  • EFS
    • AWS::EFS::FileSystem
    • AWS::EFS::MountTarget
    • exemple / example
      • ...
        "Resources" : {
            "MyFileSystem" : {
                "Type": "AWS::EFS::FileSystem",
                "Properties": {
                "PerformanceMode": "generalPurpose",
                "FileSystemTags": [
                    {
                    "Key": "Name",
                    "Value": "my-fs"
                    }
                ]
                }
            },
           
            "MyMountTargetSecurityGroup" : {
                "Type" : "AWS::EC2::SecurityGroup",
                "Properties" : {
                "GroupDescription" : "Enable ports 2049 (nfs)",
                "VpcId" : {"Ref" : "VPCId"},
                "SecurityGroupIngress" : [
                    {
                    "IpProtocol" : "tcp",
                    "FromPort" : "2049",
                    "ToPort" : "2049",
                    "CidrIp" : {"Ref": "CidrSubnet"}
                    }
                ]
                }
            },

            "MyMountTarget" : {
                "Type": "AWS::EFS::MountTarget",
                "Properties": {
                "FileSystemId": { "Ref": "MyFileSystem" },
                "SubnetId": { "Ref": "MySubnet" },
                "SecurityGroups": [ { "Ref": "MyMountTargetSecurityGroup" } ]       
                }
            },

        ...
        "# mount efs from instance \n",
        "# mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport ", {"Ref": "ProcessFileSystem"},".efs.",{"Ref" : "AWS::Region"},".amazonaws.com:/ /mnt/efs \n",

        "mkdir -p /mnt/efs \n",
        "echo ", {"Ref": "ProcessFileSystem"},".efs.",{"Ref" : "AWS::Region"},".amazonaws.com:/ /mnt/efs nfs4 nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport 0 0 >>/etc/fstab \n",
        "
        mount /mnt/efs \n",
        ...

          }
  • Route53
  • Alarm
  • CloudFront
    • AWS::CloudFront::Distribution
      • "PriceClass" :
        • "PriceClass_100" -> Use Only U.S., Canada and Europe
        • "PriceClass_200" -> Use U.S., Canada, Europe, Asia and Africa
        • "PriceClass_All" -> Use All Edge Locations (Best Performance)
    • No cache for 404
      • "MyCloudFront" : {
            "Type" : "AWS::CloudFront::Distribution",
            "Properties" : {
                "DistributionConfig" : {
                    "CustomErrorResponses" : [ {
                        "ErrorCode" : "404",
                        "ErrorCachingMinTTL" : "2"
                    } ]
                    ...
                 }
                }
        }
    • Cache behaviour /  Forward headers: Whitelist
      • Configuring CloudFront to Cache Objects Based on Request Headers
      • To avoid problems with CORS and 403 responses from CloudFront
      •                     "DefaultCacheBehavior" : {
                                "TargetOriginId" : { "Fn::Join" : [ "", ["S3-", {"Ref":"BucketName"}, "-my_dir" ] ]},
                                "ForwardedValues" : {
                                    "Headers" : ["Origin","Access-Control-Request-Headers","
        Access-Control-Request-Method"],
                                    "QueryString" : "false",
                                    "Cookies" : { "Forward" : "none" }
                                },
                                "AllowedMethods" : ["GET", "HEAD", "OPTIONS"],
                                "CachedMethods" : ["GET", "HEAD", "OPTIONS"],
                                "ViewerProtocolPolicy" : "allow-all"
                            },
    • Full examples:
      • Origin is S3, with whitelist for forwarded headers ("Origin"):
        •     "Resources": {
                  "MyCloudFront" : {
                      "Type" : "AWS::CloudFront::Distribution",
                      "Properties" : {
                          "DistributionConfig" : {
                              "Origins" : [ {
                                  "DomainName": { "Fn::Join" : [ "", [{"Ref":"BucketName"}, ".s3.amazonaws.com"]]},
                                  "OriginPath": "my_dir",
                                  "Id" : { "Fn::Join" : [ "", ["S3-", {"Ref":"BucketName"}, "-my_dir" ] ]},
                                  "S3OriginConfig": {}
                              }],
                              "Enabled" : "true",
                              "Comment" : "My comments",
                              "DefaultCacheBehavior" : {
                                  "TargetOriginId" : { "Fn::Join" : [ "", ["S3-", {"Ref":"BucketName"}, "-my_dir" ] ]},
                                  "ForwardedValues" : {
                                      "Headers" : ["Origin"],
                                      "QueryString" : "false",
                                      "Cookies" : { "Forward" : "none" }
                                  },
                                  "ViewerProtocolPolicy" : "allow-all"
                              },
                              "PriceClass" : "PriceClass_100"
                          }      
                      }
                  }
              },
      • Origin is own http server, with no cache for 404 responses:
        •     "Resources": {
                  "MyCloudFront" : {
                      "Type" : "AWS::CloudFront::Distribution",
                      "Properties" : {
                          "DistributionConfig" : {
                              "Origins" : [ {
                                  "DomainName": "myserver.toto.org",
                                  "OriginPath": "/root_dir",
                                  "Id" : "oid-root_dir",
                                  "CustomOriginConfig": {
                                      "HTTPPort": "80",
                                      "HTTPSPort": "443",
                                      "OriginProtocolPolicy": "http-only"
                                  }
                              }],
                              "Enabled" : "true",
                              "Comment" : "My comments",
                              "DefaultCacheBehavior" : {
                                  "TargetOriginId" :
          "oid-root_dir",
                                  "ForwardedValues" : {
                                      "QueryString" : "false",
                                      "Cookies" : { "Forward" : "none" }
                                  },
                                  "ViewerProtocolPolicy" : "allow-all"
                              },
                              "CustomErrorResponses" : [ {
                                  "ErrorCode" : "404",
                                  "ErrorCachingMinTTL" : "2"
                              } ],
                              "PriceClass" : "PriceClass_100"
                          }      
                      }
                  }
              },
    • Amazon CloudFront Template Snippets
    • Amazon CloudFront - Introduction
  • LoadBalancer

    • Comparison: Application Load Balancer / Network Load Balancer vs Classic Load Balancer
      • Protocols: HTTP, HTTPS, HTTP/2, WebSockets vs HTTP, HTTPS, TCP, SSL
      • CloudFormation type: AWS::ElasticLoadBalancingV2::LoadBalancer vs AWS::ElasticLoadBalancing::LoadBalancer
      • Fn::GetAtt: CanonicalHostedZoneID vs CanonicalHostedZoneNameID; DNSName vs CanonicalHostedZoneName; LoadBalancerName vs -
      • Listeners: AWS::ElasticLoadBalancingV2::Listener and AWS::ElasticLoadBalancingV2::TargetGroup (TargetGroups) vs the Listeners property
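    • Application/Network Load Balancer (V2): a minimal sketch of the three V2 resources listed in the comparison above (names, subnets and ports are placeholders):
      • "MyALB" : {
              "Type" : "AWS::ElasticLoadBalancingV2::LoadBalancer",
              "Properties" : {
                  "Type" : "application",
                  "Subnets" : [ { "Ref" : "MyFirstSubnet" }, { "Ref" : "MySecondSubnet" } ],
                  "SecurityGroups" : [ { "Ref" : "MySecurityGroup" } ]
              }
          },

          "MyTargetGroup" : {
              "Type" : "AWS::ElasticLoadBalancingV2::TargetGroup",
              "Properties" : {
                  "VpcId" : { "Ref" : "MyVPC" },
                  "Port" : 80,
                  "Protocol" : "HTTP",
                  "HealthCheckPath" : "/"
              }
          },

          "MyListener" : {
              "Type" : "AWS::ElasticLoadBalancingV2::Listener",
              "Properties" : {
                  "LoadBalancerArn" : { "Ref" : "MyALB" },
                  "Port" : 80,
                  "Protocol" : "HTTP",
                  "DefaultActions" : [ {
                      "Type" : "forward",
                      "TargetGroupArn" : { "Ref" : "MyTargetGroup" }
                  } ]
              }
          }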
    • Classic Load Balancer
      • listener forwarding ports http/80, http/8088, https/8089 and tcp/1935, with LBCookieStickinessPolicy:
        • "MyLoadBalancer": {
              "Type": "AWS::ElasticLoadBalancing::LoadBalancer
          ",
              "Properties": {
                  "LoadBalancerName": "MyLoadbalancerName",
                  "SecurityGroups" : [ ... ],
                  "AvailabilityZones": {
                      "Fn::GetAZs": ""
                  },
                  "CrossZone": "true",
                  "ConnectionSettings": {
                      "IdleTimeout" : 60
                  },
                  "Listeners": [
                      {
                          "LoadBalancerPort": "80",
                          "InstancePort": "80",
                          "Protocol": "HTTP",
                          "PolicyNames": ["MyFirstLBCookieStickinessPolicy"]
                      },
                      {
                          "LoadBalancerPort": "8088",
                          "InstancePort": "8088",
                          "Protocol": "HTTP",
                          "PolicyNames": ["MySecondLBCookieStickinessPolicy"]
                      },
                      {
                          "LoadBalancerPort": "8089",
                          "Protocol": "HTTPS",
                          "InstancePort": "8089",
                          "InstanceProtocol": "HTTPS",
                          "SSLCertificateId": "arn:aws:acm:eu-west-1:...",
                          "PolicyNames": ["MySecondLBCookieStickinessPolicy"]
                      },
                      {
                          "LoadBalancerPort": "1935",
                          "InstancePort": "1935",
                          "Protocol": "TCP"
                      }
                  ],
                  "LBCookieStickinessPolicy" : [
                      {
                          "CookieExpirationPeriod" : "500",
                          "PolicyName" : "MyFirstLBCookieStickinessPolicy"
                      },
                      {
                          "CookieExpirationPeriod" : "1000",
                          "PolicyName" : "MySecondLBCookieStickinessPolicy"
                      }
                  ],
                  "HealthCheck": {
                      "Target": "HTTP:80/",
                      "HealthyThreshold": "3",
                      "UnhealthyThreshold": "5",
                      "Interval": "30",
                      "Timeout": "5"
                  }
              }
          }
      • listener with a certificate from ACM
        • ...
    • SecurityGroup
    • UDP
  • AutoScalingGroup
    • Prerequisites:
      • cfn-signal
        • Download from: Bootstrapping Applications using AWS CloudFormation
        • kixorz/ubuntu-cloudformation.json
        • Install
          • mkdir aws-cfn-bootstrap-latest
          • curl https://s3.amazonaws.com/cloudformation-examples/aws-cfn-bootstrap-latest.tar.gz | tar xz -C aws-cfn-bootstrap-latest --strip-components 1
          • easy_install aws-cfn-bootstrap-latest
        • Problemes / Problems
          • Traceback (most recent call last):
              File "/bin/easy_install", line 9, in <module>
                load_entry_point('setuptools==0.9.8', 'console_scripts', 'easy_install')()
              [...]
              File "/usr/lib/python2.7/site-packages/setuptools/command/easy_install.py", line 701, in process_distribution
                distreq.project_name, distreq.specs, requirement.extras
            TypeError: __init__() takes exactly 2 arguments (4 given)
            • Diagnose
              • python
                • >>> from pkg_resources import load_entry_point
                  >>> load_entry_point('setuptools==0.9.8', 'console_scripts', 'easy_install')()
                  ...
                  pkg_resources.VersionConflict: (setuptools 25.1.4 (/usr/lib/python2.7/site-packages/setuptools-25.1.4-py2.7.egg), Requirement.parse('setuptools==0.9.8'))
            • Solució / Solution
              • sudo rm -rf /usr/lib/python2.7/site-packages/setuptools-25.1.4-py2.7.egg
      • Your instance (AMI) must run cfn-signal from its UserData; this signal is what the CreationPolicy waits for
        •     "LaunchConfig" : {
                  "Type" : "AWS::AutoScaling::LaunchConfiguration",
                  "Properties" : {
                  "ImageId":{"Ref" : "MyImageId"},
                  "SecurityGroups" : [ { "Ref" : "MySecurityGroup" } ],
                  "KeyName":"my_key",
                  "InstanceType":{"Ref" : "MyInstanceType"},
                  "IamInstanceProfile": "role_my_server",
                  "UserData": {
                      "Fn::Base64": {
                      "Fn::Join" : [ "", [
                          "#!/bin/bash -xe\n",
                          "/usr/bin/cfn-signal -e 0 --stack ", { "Ref": "AWS::StackName" },
                          " --resource MyAutoscalingGroup ",
                          " --region ", { "Ref" : "AWS::Region" }, "\n"
                      ] ]
                      }
                  }
                  }
              },
    • Auto Scaling Template Snippets
    • AutoScalingGroup
      • AWS::AutoScaling::AutoScalingGroup
      • example.json
        •     "MyAutoScalingGroup" : {
                  "Type" : "AWS::AutoScaling::AutoScalingGroup",
                  "Properties" : {
                  "LaunchConfigurationName" : { "Ref" : "MyLaunchConfig" },
                  "MinSize" : "1",
                  "MaxSize" : "3",
                  "LoadBalancerNames" : [ { "Ref" : "MyLoadBalancer" } ],
                  "Tags":[
                      {
                      "Key":"Name",
                      "Value":{ "Fn::Join" : [ "", ["myinstance-", {"Ref" : "MyName"}]]},
                      "PropagateAtLaunch" : "true"
                      }
                  ]
                  },
                  "CreationPolicy" : {
                  "ResourceSignal" : {
                      "Timeout" : "PT15M",
                      "Count"   : "1"
                  }
                  },
                  "UpdatePolicy": {
                  "AutoScalingRollingUpdate": {
                      "MinInstancesInService": "1",
                      "MaxBatchSize": "1",
                      "PauseTime" : "PT15M",
                      "WaitOnResourceSignals": "true"
                  }
                  }
              },
    • ScalingPolicy
    • Alarm
      • AWS::...
      • example.json
        •     "MyUpScalingPolicy" : {
                  "Type" : "AWS::AutoScaling::ScalingPolicy",
                  "Properties" : {
                  "AdjustmentType" : "ChangeInCapacity",
                  "AutoScalingGroupName" : { "Ref" : "MyAutoScalingGroup" },
                  "Cooldown" : "60",
                  "ScalingAdjustment" : "1"
                  }
              },

              "MyCPUHighAlarm": {
                  "Type": "AWS::CloudWatch::Alarm",
                  "Properties": {
                  "EvaluationPeriods": "1",
                  "Statistic": "Average",
                  "Threshold": "80",
                  "AlarmDescription": "Alarm if CPU too high or metric disappears indicating instance is down",
                  "Period": "60",
                  "AlarmActions": [ { "Ref": "MyUpScalingPolicy" } ],
                  "Namespace": "AWS/EC2",
                  "Dimensions": [ {
                      "Name": "AutoScalingGroupName",
                      "Value": { "Ref": "MyAutoScalingGroup" }
                  } ],
                  "ComparisonOperator": "GreaterThanThreshold",
                  "MetricName": "CPUUtilization"
                  }
              },
    • LaunchConfiguration
    • Example:
      • my_scaling.json
        • ...
  • ElastiCache
  • ...

IAM

ACM (AWS Certificate Manager)

EC2

  • Amazon EC2 instances
  • ec2instances.info
  • Service Health Dashboard
  • Free tier eligible
    • Amazon Linux AMI 2014.03 (yum)
    • Red Hat Enterprise Linux 6.4
    • SuSE Linux Enterprise Server 11 sp3
    • Ubuntu Server 12.04 LTS
    • Ubuntu Server 13.10
  • Nitro-based instances
    • lsblk
      • nvme0n1
  • Elastic Network Adapter (ENA)
    • used by e.g. c5, t3 instance types
    • Enabling Enhanced Networking with the Elastic Network Adapter (ENA) on Linux Instances
      • check instance ENA installation
        • ssh ...
        • sudo modinfo ena
        • ethtool -i eth0
          • driver: ena
      • check instance ENA support
        • instance_id=...
          aws ec2 describe-instances --instance-ids ${instance_id} --query "Reservations[].Instances[].EnaSupport"
      • check AMI ENA support
        • ami_id=...
          aws ec2 describe-images --image-ids ${ami_id} --query "Images[].EnaSupport"
      • steps
        1. check ENA kernel module
          • sudo modinfo ena
          • CentOS: if not available, update the kernel
            • sudo yum install kernel
        2. check systemd version
          • rpm -qa | grep -e '^systemd-[0-9]\+\|^udev-[0-9]\+'
        3. if it is greater than or equal to 197, disable predictable network interface names:
          • sudo sed -i '/^GRUB\_CMDLINE\_LINUX/s/\"$/\ net\.ifnames\=0\"/' /etc/default/grub
          • sudo grub2-mkconfig -o /boot/grub2/grub.cfg
        4. stop instance
        5. from local computer:
          • set ENA support:
            • instance_id=...
              aws ec2 modify-instance-attribute --instance-id ${instance_id} --ena-support
          • check ENA support:
            • aws ec2 describe-instances --instance-ids ${instance_id} --query "Reservations[].Instances[].EnaSupport"
              • should return True
        6. change instance type to c5 or t3
        7. start instance
        8. you may need to update Route53 with the new public IP address
        9. to create an AMI from this one:
          1. connect to your new instance: ssh ...
          2. sudo rm /etc/udev/rules.d/70-persistent-net.rules
          3. stop instance
          4. create AMI: Actions -> Image -> Create image
          5. check that AMI has ENA enabled:
            • ami_id=...
              aws ec2 describe-images --image-ids ${ami_id} --query "Images[].EnaSupport"
              • should return True
  • cloud-init
  • Network bandwidth
  • Reboot
  • Alarms
  • Volumes: EBS
    • Making an Amazon EBS Volume Available for Use
      • get list of available drives
        • lsblk
          • NAME    MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
            xvda    202:0    0  10G  0 disk
            └─xvda1 202:1    0  10G  0 part /
            xvdg    202:96   0  20G  0 disk
      • get mountability of drives
        • sudo file -s /dev/xvdg
          • "/dev/xvdg: SGI XFS filesystem data (blksz 4096, inosz 256, v2 dirs)"
            • ready to mount
          • "/dev/xvdg: data"
            • you need to create filesystem (e.g. XFS):
              • sudo mkfs -t xfs /dev/xvdg
      • mount the drive
        • temporarily
          • mkdir /mnt/my_point
          • mount /dev/xvdg /mnt/my_point
        • permanently
          • /etc/fstab
            • /dev/xvdg /mnt/my_point xfs     defaults        0 0
    • Redimensionament / Resize
      • Amazon EBS Elastic Volumes
        • Amazon EBS Update – New Elastic Volumes Change Everything
        • Automating Amazon EBS Volume-resizing with AWS Step Functions and AWS Systems Manager
        • Passos / Steps
          1. Comproveu la mida inicial / Check original size:
            • lsblk
            • voleu modificar la partició dins d'un disc / you want to modify a partition in a disk:
              • NAME        MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
                nvme0n1     259:1    0  20G  0 disk
                └─nvme0n1p1 259:2    0  20G  0 part /
            • voleu modificar un disc sense particions / you want to modify a disk without partitions:
              • NAME        MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
                nvme1n1     259:0    0  20G  0 disk /mnt/vol1
          2. Modifiqueu la mida del volum AWS / Modify AWS volume size (e.g. from 20GB to 25GB):
            • AWS console:
              • Volumes -> Modify
            • CLI (see the sketch after these steps)
              • ...
            • boto3
              • ...
          3. Espereu que el disc estigui a punt: passarà de l'estat «modifying» a «optimizing». Encara que el percentatge a «optimizing» sigui 0%, ja podeu passar al següent pas / Wait for disk to be ready: it will go from "modifying" to "optimizing". Even if percentage of optimizing is 0%, you can now proceed with the next step
          4. Comproveu els canvis al disc / Check changes on disk:
            • lsblk
            • NAME        MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
              nvme0n1     259:1    0  25G  0 disk
              └─nvme0n1p1 259:2    0  20G  0 part /
            • NAME        MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
              nvme1n1     259:0    0  25G  0 disk /mnt/vol1
          5. Feu créixer la partició (si n'hi ha) / Grow partition (if any):
            • si el disc té una partició, feu créixer la que volgueu. Per exemple, per a fer créixer la primera (1) partició del disc /dev/nvme0n1 (és a dir nvme0n1p1) / if disk has any partition on it, grow the desired partition. E.g. to grow first (1) partition of disk /dev/nvme0n1 (i.e. nvme0n1p1)
              • growpart /dev/nvme0n1 1
              • check changes with lsblk:
                • NAME        MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
                  nvme0n1     259:1    0  25G  0 disk
                  └─nvme0n1p1 259:2    0  25G  0 part /
            • si el disc no té cap particio, aneu al pas següent / if disk has no partition, you can proceed with the next step
          6. Amplieu el sistema de fitxers / Extend file system (parameter for xfs_growfs is the mount point: /, /mnt/vol1 ...):
            • comproveu el tipus de sistema de fitxers utilitzat / check the used filesystem (e.g. ext4, xfs...):
              • df -hT
            • ext2, ext3, ext4
              • ...
            • xfs
              • sudo yum install xfsprogs
              • sudo xfs_growfs -d /
              • sudo xfs_growfs -d /mnt/vol1
          7. Comproveu el resultat final / Check the final result:
            • df -h
              • Filesystem       Size  Used Avail Use% Mounted on
                /dev/nvme0n1p1    25G   18G  7.7G  70% /
                /dev/nvme1n1      25G   12G   13G  49% /mnt/vol1
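          • CLI sketch for step 2 (assumes ${instance_id} is set; volume id and sizes are placeholders):
            • # find the volume(s) attached to the instance
              aws ec2 describe-volumes --filters "Name=attachment.instance-id,Values=${instance_id}" --query "Volumes[].{Id:VolumeId,Device:Attachments[0].Device,Size:Size}"
              # grow the volume, e.g. from 20GB to 25GB
              aws ec2 modify-volume --volume-id vol-0123456789abcdef0 --size 25
              # follow the modification state (modifying -> optimizing -> completed)
              aws ec2 describe-volumes-modifications --volume-ids vol-0123456789abcdef0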
        • check_disk.sh
          • #!/bin/bash

            # usage threshold: 80%
            usage_threshold=80

            # instance_id
            instance_id="id-1234"
            #instance_id=$(curl -m 2 -s http://169.254.169.254/latest/meta-data/instance-id/)

            # resize factor
            resize_factor=1.5

            # check local (not efs-mounted) disk usage
            full_disks=$(df --local --output=source,fstype,pcent,target | tail -n +2 | awk -v usage_threshold=${usage_threshold} '{gsub(/%/,"",$3)} $3+0 >= usage_threshold {print $1 " " $2 " " $3 " " $4}')

            # if needed, resize on AWS and resize partition/disk
            while IFS= read -r linia
            do
                if (( ${#linia} > 0 ))
                then
                    echo "-- ${linia}"
                   
                    # split line using bash array
                    array_df_line=(${linia// / })
                    device_name=${array_df_line[0]}
                    fs_type=${array_df_line[1]}
                    usage=${array_df_line[2]}
                    mount_point=${array_df_line[3]}
                   
                    # 1. change aws ebs volume size
                    echo " 1. change volume size: /usr/local/bin/aws_resize_volume.py --instance-id ${instance_id} --device-name ${device_name} ${resize_factor}"

                    # grow partition (if any) (e.g. nvme0n1p1)
                    # get partition from device name
                    # - /dev/nvme0n1p1 is a partition and needs to be grown
                    # - /dev/nvme1n1 is not a partition and does not need to be grown
                    # lsblk --inverse --nodeps
                    # NAME      MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
                    # nvme0n1p1 259:2    0  25G  0 part /
                    # nvme1n1   259:0    0  25G  0 disk /disc1

                    # 2. grow partition (if any)
                    short_device_name=${device_name##*/}
                    # get only entry if TYPE is part (not when TYPE is disk)
                    partition=$(lsblk --inverse --nodeps | awk -v pattern=${short_device_name} '$1 ~ pattern && $6 ~ /part/ {print $1}')
                    if [ -n "${partition}" ]
                    then
                        # /dev/nvme0n1p1: growpart /dev/nvme0n1 1
                        # /dev/sda1: growpart /dev/sda 1
                        device_name_partition_index=${device_name: -1}
                        device_name_without_partition_index=${device_name: : -1}
                        device_name_root=${device_name_without_partition_index%p*}
                        echo " 2. grow partition: sudo growpart ${device_name_root} ${device_name_partition_index}"
                    else
                        echo " 2. no partition"
                    fi
                   
                    # 3. grow filesystem
                    case ${fs_type} in
                        "xfs" )
                            echo " 3. grow filesystem: sudo xfs_growfs -d ${mount_point}"
                            ;;
                        "ext4" )
                            echo " 3. grow filesystem: sudo resize2fs ${mount_point}"
                            ;;
                        * ) echo " 3. unknown filesystem ${fs_type}"
                            ;;
                    esac
                fi
            done < <(echo "${full_disks}")

            exit 0
      • How to Resize AWS EC2 EBS Volumes
    • LVM
  • Swap
  • EFS (Elastic File System)
  • AMI (images)
    • Removal
      • When you delete (deregister) an AMI, its snapshots are not deleted automatically
      • Trying to delete a snapshot that is still referenced by an AMI fails with an error. Therefore, you can safely (try to) remove all snapshots; those still referenced by an AMI will not be removed.
      • IMPORTANT: when you delete an AMI that is referenced by a Launch Configuration, AWS does not check this, and the Launch Configuration will point to a non-existent AMI!
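      • A minimal cleanup sketch with the aws CLI (AMI id is a placeholder): list the snapshots that back the AMI before deregistering it, then delete them
        • ami_id=...
          # snapshots referenced by the AMI's block device mappings
          snapshot_ids=$(aws ec2 describe-images --image-ids ${ami_id} --query "Images[].BlockDeviceMappings[].Ebs.SnapshotId" --output text)
          # deregister the AMI
          aws ec2 deregister-image --image-id ${ami_id}
          # the snapshots are no longer protected and can now be deleted
          for snapshot_id in ${snapshot_ids}; do aws ec2 delete-snapshot --snapshot-id ${snapshot_id}; done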
  • Auto Scaling
  • userdata: pas de paràmetres en crear una instància / pass parameters when creating an instance
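    • A minimal sketch (AMI, key and file names are placeholders): pass a script as user data at launch and read it back from the instance metadata
      • # launch an instance with a user-data script (executed by cloud-init on first boot)
        aws ec2 run-instances --image-id ami-xxxxxxxx --instance-type t2.micro --key-name my_key_pair --user-data file://userdata.sh
        # from inside the instance, the raw user data can be read back
        curl http://169.254.169.254/latest/user-data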
  • Balancejador de càrrega / Load balancer

S3

  • Amazon S3 Path Deprecation Plan – The Rest of the Story
    • https://bucketname.s3.amazonaws.com/
    • old versions:
      • https://s3.amazonaws.com/bucketname/
      • https://s3-us-east-2.amazonaws.com/bucketname/
  • Bucket policy
    • AWS Policy Generator
    • Granting Read-Only Permission to an Anonymous User
    • Public Readable Amazon S3 Bucket Policy
      • {
          "Version": "2008-10-17",
          "Statement": [
            {
              "Sid": "AllowPublicRead",
              "Effect": "Allow",
              "Principal": {
                "AWS": "*"
              },
              "Action": [
                "s3:GetObject"
              ],
              "Resource": [
                "arn:aws:s3:::bucket_name/*"
              ]
            }
          ]
        }
      • Boto (Python):
        • # modify policy to make it publicly available
          policy_json = '{"Version":"2008-10-17","Statement":[{"Sid":"AllowPublicRead","Effect":"Allow","Principal":{"AWS":"*"},"Action":["s3:GetObject"],"Resource":["arn:aws:s3:::%s/*"]}]}' % (bucket_name)
          print policy_json
          bucket.set_policy(policy_json)
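      • aws CLI: the same policy can be applied from the command line; a minimal sketch (bucket and file names are placeholders):
        • # save the policy above as policy.json, then attach it to the bucket
          aws s3api put-bucket-policy --bucket bucket_name --policy file://policy.json
          # verify
          aws s3api get-bucket-policy --bucket bucket_name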
  • Cache
  • CORS
  • Static website
    • Virtual Hosting of Buckets
    • Hosting a Static Website on Amazon S3
    • Example: Setting Up a Static Website Using a Custom Domain
    • Redireccionament / Redirect
    • AngularJS
    • Directory list
    • HTTPS
    • Test matrix (original columns: s3 bucket name, Static website hosting, cloudfront, route53, http, https):
      • Bucket "www.toto.org" (name with dots)
        • Bucket only (no Static website hosting, no CloudFront, no Route53):
          • path-style:
            • http://s3-eu-west-1.amazonaws.com/www.toto.org/index.html
            • https://s3-eu-west-1.amazonaws.com/www.toto.org/index.html
          • virtual-hosted style:
            • http://www.toto.org.s3.amazonaws.com/ (Access denied)
            • http://www.toto.org.s3.amazonaws.com/index.html
            • http://www.toto.org.s3.eu-west-1.amazonaws.com/ (Access denied)
            • http://www.toto.org.s3.eu-west-1.amazonaws.com/index.html
            • https://www.toto.org.s3.amazonaws.com/
        • Static website hosting enabled:
          • http://www.toto.org.s3-website-eu-west-1.amazonaws.com/ https://www.toto.org.s3-website-eu-west-1.amazonaws.com/ (timeout: because of dots?)
        • Static website hosting + Route53 "www.toto.org A ALIAS s3-website-eu-west-1.amazonaws.com.":
          • http://www.toto.org/ https://www.toto.org/
        • Static website hosting + CloudFront:
          • Origin Domain Name: www.toto.org.s3.amazonaws.com
          • Default root object: index.html
          • Alternate Domain Names (CNAMEs): www.toto.org
          • Custom SSL Certificate: (choose one of the certificates previously uploaded to path /cloudfront/)
          • Route53: www.toto.org A ALIAS www.toto.org (xxxxxx.cloudfront.net) -> http://www.toto.org/ https://www.toto.org/
      • Bucket "www-toto-org" (name without dots)
        • Bucket only (no Static website hosting, no CloudFront, no Route53):
          • path-style:
            • http://s3-eu-west-1.amazonaws.com/www-toto-org/ (Access denied)
            • http://s3-eu-west-1.amazonaws.com/www-toto-org/index.html
            • https://s3-eu-west-1.amazonaws.com/www-toto-org/
            • https://s3-eu-west-1.amazonaws.com/www-toto-org/index.html
          • virtual-hosted style:
            • http://www-toto-org.s3.amazonaws.com/ (Access denied)
            • http://www-toto-org.s3.amazonaws.com/index.html
            • http://www-toto-org.s3-eu-west-1.amazonaws.com/ (Access denied)
            • http://www-toto-org.s3.eu-west-1.amazonaws.com/index.html
            • https://www-toto-org.s3.amazonaws.com/ (Access denied)
            • https://www-toto-org.s3.amazonaws.com/index.html
            • https://www-toto-org.s3-eu-west-1.amazonaws.com/ (Access denied)
            • https://www-toto-org.s3.eu-west-1.amazonaws.com/index.html
        • Static website hosting enabled:
          • http://www-toto-org.s3-website-eu-west-1.amazonaws.com/ https://www-toto-org.s3-website-eu-west-1.amazonaws.com/ (timeout)
        • Static website hosting + CloudFront:
          • Origin Domain Name: www-toto-org.s3.amazonaws.com
          • Default root object: index.html
          • Alternate Domain Names (CNAMEs): www.toto.org
          • Custom SSL Certificate: (choose one of the certificates previously uploaded to path /cloudfront/)
          • Route53: www.toto.org A ALIAS www.toto.org (xxxxxx.cloudfront.net) -> http://www.toto.org/ https://www.toto.org/
  • s3tools
    • s3cmd
      • Instal·lació / Installation
      • make bucket
        • s3cmd mb s3://...
      • list
        • s3cmd ls s3://bucket_name
      • upload
        • s3cmd put ...
  • yas3fs
    • Instal·lació / Installation
      • CentOS
        • sudo yum -y install fuse fuse-libs
          sudo easy_install pip
          sudo pip install yas3fs
          sudo sed -i'' 's/^# *user_allow_other/user_allow_other/' /etc/fuse.conf
          yas3fs s3://mybucket/path /mnt/local_path

        • fusermount -u /mnt/local_path
  • s3fs-fuse
    • Wiki
    • maximum file size: 64GB
    • dependències / dependencies
      • Mageia
        • urpmi lib64fuse-devel
      • CentOS
        • yum install automake gcc-c++ fuse fuse-devel libcurl-devel libxml2-devel
    • compilació / compilation
      • git clone https://github.com/s3fs-fuse/s3fs-fuse.git
      • cd s3fs-fuse
      • ./autogen.sh
      • ./configure --exec-prefix=/usr
      • make
      • su; make install
      • Problemes / Problems
        • ./configure: line 4964: syntax error near unexpected token `common_lib_checking,'
          ./configure: line 4964: `PKG_CHECK_MODULES(common_lib_checking, fuse >= ${min_fuse_version} libcurl >= 7.0 libxml-2.0 >= 2.6 )'
          • ...
    • ?
      • echo "user_allow_other" >> /etc/fuse.conf
    • utilització / usage
      • Wowza
      • when not using roles
        • ~/.passwd-s3fs (/etc/passwd-s3fs)
          • bucketName:accessKeyId:secretAccessKey
        • chmod 600 ~/.passwd-s3fs
      • mkdir /mnt/bucketName; chmod 755 /mnt/bucketName
      • s3fs bucketName /mnt/bucketName -ouse_cache=/tmp -o allow_other,ahbe_conf=/etc/ahbe.conf
      • with role:
        • s3fs bucketName /mnt/bucketName -o allow_other,ahbe_conf=/etc/ahbe.conf,iam_role=my_rolename
      • mount on boot (FAQ)
        • How to force s3fs mount on boot
          • option 1:
            • /etc/init.d/local
          • option 2:
            • /etc/fstab
              • s3fs#my_bucket /mnt/my_bucket fuse _netdev,nonempty,allow_other,ahbe_conf=/etc/ahbe.conf 0 0
              • s3fs#my_bucket /mnt/my_bucket fuse _netdev,nonempty,allow_other,ahbe_conf=/etc/ahbe.conf,iam_role=my_rolename 0 0
            • Problems:
              • d????????? ? ? ? ?            ? my_bucket
                • Solution
                  • check that _netdev option is present in /etc/fstab
      • public permission for new files:
        • s3fs#my_bucket /mnt/my_bucket fuse _netdev,nonempty,allow_other,ahbe_conf=/etc/ahbe.conf,iam_role=my_rolename,default_acl=public-read 0 0
    • Problemes / Problems
    • Cloudfront Cache-control
      • ahbe.conf
        • sample_ahbe.conf
        • # mpd and m3u8 files are cached for 2 seconds
          .mpd Cache-Control max-age=2
          .m3u8 Cache-Control max-age=1
      • s3fs bucketName /mnt/bucketName -o allow_other,ahbe_conf="/etc/ahbe.conf"

RDS

  • Using RDS with Django
  • Security group
    • The default rule makes the database available from outside
    • To make it reachable from EC2 instances, add a rule for your VPC CIDR (e.g. 172.32.0.0/16); see the sketch below
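    • A minimal sketch with the aws CLI (security group id is a placeholder; port 3306 assumes MySQL):
      • # allow the database port only from instances inside the VPC
        aws ec2 authorize-security-group-ingress --group-id sg-xxxxxxxx --protocol tcp --port 3306 --cidr 172.32.0.0/16
        # remove the rule that opens the database to the world, if present
        aws ec2 revoke-security-group-ingress --group-id sg-xxxxxxxx --protocol tcp --port 3306 --cidr 0.0.0.0/0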

CloudFront

Route53

VPC

  • VPCs and Subnets
  • Differences in CloudFormation depending on whether you use the default VPC, a subnet of your own, or a VPC of your own
    • default VPC, default subnet:
      • AWS::EC2::Instance: "SecurityGroups" : [{ "Ref" : "MySecurityGroup" }]
      • AWS::ElasticLoadBalancing::LoadBalancer: "AvailabilityZones" : {"Fn::GetAZs": ""}
      • AWS::AutoScaling::AutoScalingGroup: "AvailabilityZones" : {"Fn::GetAZs": ""}
    • default VPC, own subnet:
      • additional resources: AWS::EC2::Subnet, AWS::EC2::RouteTable, AWS::EC2::Route, AWS::EC2::SubnetRouteTableAssociation
      • AWS::EC2::Instance: "SecurityGroupIds" : [{ "Ref" : "MySecurityGroup" }], "SubnetId" : {"Ref" : "MyFirstSubnet"}
      • AWS::ElasticLoadBalancing::LoadBalancer: "Subnets" : [{"Ref" : "MyFirstSubnet"}]
      • AWS::AutoScaling::AutoScalingGroup: "VPCZoneIdentifier" : [{"Ref" : "MyFirstSubnet"}]
    • own VPC, own subnet:
      • additional resources: AWS::EC2::VPC, AWS::EC2::InternetGateway, AWS::EC2::VPCGatewayAttachment, AWS::EC2::Subnet, AWS::EC2::RouteTable, AWS::EC2::Route, AWS::EC2::SubnetRouteTableAssociation
      • AWS::EC2::SecurityGroup: "VpcId" : {"Ref" : "MyVPC"}
      • AWS::EC2::Instance: "SecurityGroupIds" : [{ "Ref" : "MySecurityGroup" }], "SubnetId" : {"Ref" : "MyFirstSubnet"}
      • AWS::ElasticLoadBalancing::LoadBalancer: "Subnets" : [{"Ref" : "MyFirstSubnet"}]
      • AWS::AutoScaling::AutoScalingGroup: "VPCZoneIdentifier" : [{"Ref" : "MyFirstSubnet"}]
  • Multicast
    • Overlay Multicast in Amazon Virtual Private Cloud
      • Info
        • Get all instances with a multicast tag, and filter those within a specific community (e.g.: foo)
          • aws --output json ec2 describe-instances --filters "Name=tag-key,Values=multicast" >instances_multicast.json
          • jq '.Reservations[].Instances[] | select( .Tags[] | . and .Key=="multicast" and (.Value | startswith("foo")) )' instances_multicast.json
          • from the selected instances, get only some information:
            • jq '.Reservations[].Instances[] | select( .Tags[] | . and .Key=="multicast" and (.Value | startswith("foo")) ) | [.InstanceId, .PrivateIpAddress, .PublicIpAddress, .Tags]' instances_multicast.json
      • Setup
        • Setup step 1: Create a subnet
          • Option 1: just create a subnet in an existing VPC
          • Option 2: Create new AWS VPC (vpc-xxxxxx) with a subnet and a route table to Internet:
            • Subnet
              • Name: Public subnet
              • IPv4 CIDR: 10.0.0.0/24
            • Cloudformation
              • {
                    "Resources": {
                        "MyVPC": {
                            "Type" : "AWS::EC2::VPC",
                            "Properties" : {
                                "CidrBlock": "10.0.0.0/16",
                                "EnableDnsSupport" : "true",
                                "EnableDnsHostnames" : "true",
                                "Tags" :[ { "Key" : "Name", "Value" : "my-vpc"} } ]
                            }
                        },

                        "MyInternetGateway" : {
                            "Type" : "AWS::EC2::InternetGateway"
                        },

                        "MyVPCGatewayAttachment" : {
                            "Type" : "AWS::EC2::VPCGatewayAttachment",
                            "Properties" : {
                                "InternetGatewayId" : {"Ref" : "MyInternetGateway"},
                                "VpcId" : {"Ref" : "MyVPC"}
                            }
                        },

                        "MyPublicSubnet" : {
                            "Type" : "AWS::EC2::Subnet",
                            "Properties" : {
                                "VpcId" : { "Ref" : "MyVPC" },
                                "CidrBlock" : "10.0.0.0/24",
                                "MapPublicIpOnLaunch" : "true",
                                "Tags" : [ { "Key" : "Name", "Value" : "my-subnet"} ]
                            }
                        },
                       
                        "PublicRouteTable" : {
                            "Type" : "AWS::EC2::RouteTable",
                            "Properties" : {
                                "VpcId" : {"Ref" : "MyVPC"}
                            }
                        },
                       
                        "PublicRoute" : {
                            "Type" : "AWS::EC2::Route",
                            "DependsOn" : "MyVPCGatewayAttachment",
                            "Properties"  : {
                                "RouteTableId" : {"Ref" : "PublicRouteTable"},
                                "DestinationCidrBlock" : "0.0.0.0/0",
                                "GatewayId" : {"Ref" : "MyInternetGateway"}
                            }
                        },
                       
                        "PublicSubnetRouteTableAssociation" : {
                            "Type" : "AWS::EC2::SubnetRouteTableAssociation",
                            "Properties" : {
                                "SubnetId" : {"Ref" : "MyPublicSubnet"},
                                "RouteTableId" : {"Ref" : "PublicRouteTable"}
                            }
                        }

                    }

                }
        • Setup step 2: Create AWS role with policy:
          • {
                "Version": "2012-10-17",
                "Statement": [
                    {
                        "Sid": "Stmt1414071732000",
                        "Effect": "Allow",
                        "Action": [
                            "ec2:DescribeInstances",
                            "ec2:DescribeTags",
                            "ec2:DescribeRegions"
                        ],
                        "Resource": [
                            "*"
                        ]
                    }
                ]
            }
          • Cloudformation:
            • ...
        • Setup step 3: Create AWS security group (sg-yyyyyy) in your VPC (vpc-xxxxxx)
          • Inbound Rules:
            • Type: Custom Protocol Rule
            • Protocol: GRE (47)
            • Port Range: All
            • Source: sg-yyyyyy
          • Cloudformation (to avoid a circular dependency, an AWS::EC2::SecurityGroupIngress resource must be created)
            •     "MyInboundRule" : {
                      "Type": "AWS::EC2::SecurityGroupIngress",
                      "Properties":{
                      "IpProtocol" : "47",
                      "SourceSecurityGroupId" : {"Ref" : "MySecurityGroup"},
                      "GroupId" : {"Ref" : "MySecurityGroup"}
                      }
                  },
                 
                  "MySecurityGroup" : {
                      "Type" : "AWS::EC2::SecurityGroup",
                      "Properties" : {
                          "GroupDescription" : "Enable ports 22 (ssh), GRE (47) (multicast)",
                          "VpcId" : {"Ref" : "MyVPC"},
                          "SecurityGroupIngress" : [
                              {
                                  "IpProtocol" : "tcp",
                                  "FromPort" : "22",
                                  "ToPort" : "22",
                                  "CidrIp" : "0.0.0.0/0"
                              }
                          ]
                      }
                  },
        • Setup step 4: Create several instances with this role and security group, each tagged with:
          • Name: multicast; Value: foo,172.16.0.7/24
          • Name: multicast; Value: foo,172.16.0.8/24
          • ...
      • Installation
        • Option 1:
          • Installation of Ruby script
            • CentOS
              • sudo yum install unzip bridge-utils ebtables curl ruby ruby-devel rubygem-nokogiri rubygem-daemons libxml2-devel
              • sudo gem install aws-sdk-v1 integration libxml-ruby
            • cd
            • wget https://s3.amazonaws.com/mcd-code/mcd-code-2014-07-11.zip
            • unzip mcd-code-2014-07-11.zip
            • cd mcd-code-2014-07-11
            • sudo chmod 755 *
            • sudo mkdir -p /opt/mcast
            • sudo cp -pr * /opt/mcast
            • sudo chown -R root:root /opt/mcast
          • Start Ruby script
            • temporarily
              • sudo ruby -d /opt/mcast/mcd
            • daemon
              • CentOS
                • mcd.service
                  • [Unit]
                    Description=Multicast daemon for AWS EC2 instances
                    After=syslog.target network.target cloud-init.service

                    [Service]
                    Type=simple
                    #PIDFile=/run/mcd.pid
                    ExecStartPre=/usr/local/bin/mcd_setup.sh foo 172.16.0.0/24
                    ExecStart=/opt/mcast/mcd
                    ExecStartPost=/usr/local/bin/mcd_setup_route.sh foo
                    ExecReload=/bin/kill -s HUP $MAINPID
                    ExecStop=/bin/kill -s QUIT $MAINPID

                    [Install]
                    WantedBy=multi-user.target
                • mcd_setup_route.sh
                  • #!/bin/bash

                    function print_help_and_exit {
                        cat <<EOF
                    Usage: `basename $0` multicast_name

                    Add route for multicast: mcbr-<multicast_name>

                    Examples:
                    - `basename $0` foo

                    EOF
                        exit 1
                    }

                    MIN_ARGS=1
                    MAX_ARGS=1
                    if (( $# < $MIN_ARGS )) || (( $# > $MAX_ARGS ))
                    then
                        print_help_and_exit
                    fi

                    # options
                    if ! params=$(getopt -o h --long help -n $0 -- "$@")
                    then
                        # invalid option
                        print_help_and_exit
                    fi
                    eval set -- ${params}

                    while true
                    do
                        case "$1" in
                            -h | --help ) print_help_and_exit;;
                            -- ) shift; break ;;
                            * ) break ;;
                        esac
                    done

                    # parameters
                    multicast_name=$1

                    # wait for bridge to exist
                    bridge_name="mcbr-${multicast_name}"

                    timeout=60
                    increment=5
                    t=0

                    while (( t < timeout )) && (brctl show ${bridge_name} 2>&1 1>/dev/null | grep -q "No such device")
                    do
                        echo "[`basename $0`] bridge ${bridge_name} is not available yet (${t}s/${timeout}s)"
                        (( t+=increment ))
                        sleep ${increment}
                    done

                    # add route for multicast
                    echo "[`basename $0`] adding route for multicast: mcbr-${multicast_name}"
                    route add -net 224.0.0.0/4 mcbr-${multicast_name}

                    exit 0
                • mcd_setup.sh
                  • #!/bin/bash

                    function print_help_and_exit {
                        cat <<EOF
                    Usage: `basename $0` multicast_name multicast_cidr
                    Set the AWS EC2 tag: "multicast", "<multicast_name>,<multicast_cidr>"
                    Address multicast_cidr has the same number as local ip address, but converted to specified subnet
                    E.g.: if local address is 10.1.2.3/24 and you specify 172.16.0.0/24, the multicast_cidr=172.16.0.3/24
                    IMPORTANT: role for this ec2 instance must include a policy with: "ec2:CreateTags"

                    Examples:
                    - `basename $0` foo 172.16.0.0/24
                    EOF
                        exit 1
                    }

                    MIN_ARGS=2
                    MAX_ARGS=2
                    if (( $# < $MIN_ARGS )) || (( $# > $MAX_ARGS ))
                    then
                        print_help_and_exit
                    fi

                    # options
                    if ! params=$(getopt -o h --long help -n $0 -- "$@")
                    then
                        # invalid option
                        print_help_and_exit
                    fi
                    eval set -- ${params}

                    while true
                    do
                        case "$1" in
                            -h | --help ) print_help_and_exit;;
                            -- ) shift; break ;;
                            * ) break ;;
                        esac
                    done

                    # parameters
                    multicast_name=$1
                    multicast_cidr=$2

                    function translate_ip {
                        input_cidr=$1
                        output_cidr=$2

                        # remove subnet
                        input_address=${input_cidr%/*}

                        # get network and prefix
                        eval $(ipcalc -np $output_cidr)
                        output_prefix=$PREFIX
                        output_network=$NETWORK

                        # calculate number of bytes (n)
                        let output_positions=${output_prefix}/8

                        # remove first n bytes
                        input_array=(${input_address//./ })
                        input_significative=${input_array[@]:${output_positions}}

                        # get first n bytes
                        output_array=(${output_network//./ })
                        output_significative=${output_array[@]:0:${output_positions}}
                       
                        # join all bytes
                        total_address_array=(${output_significative[@]} ${input_significative[@]})
                        total_address=$(IFS='.';echo "${total_address_array[*]}";IFS=$'')
                        total_cidr="${total_address}/${output_prefix}"
                        echo $total_cidr
                    }

                    # check whether ip command is available
                    if ! (which ip >/dev/null 2>&1)
                    then
                        echo "ERROR: command ip not found. Consider running this script as root or sudo."
                        exit 1
                    fi

                    # get own information
                    local_cidr=$(ip -o address | awk '$2 !~ /lo/ && $3 ~ /^inet$/ {print $4; exit;}')
                    #local_ipv4=$(curl http://169.254.169.254/latest/meta-data/local-ipv4/)

                    multicast_cidr=$(translate_ip $local_cidr $multicast_cidr)

                    echo multicast_cidr: $multicast_cidr

                    # eu-west-1c
                    if ! aws_subregion=$(curl -s -m 4 http://169.254.169.254/latest/meta-data/placement/availability-zone)
                    then
                        echo "no aws_subregion found. Are you sure that you are running this script on an AWS instance?"
                        exit 1
                    fi
                    # eu-west-1
                    aws_region=${aws_subregion: : -1}

                    instance_id=$(curl -s -m 4 http://169.254.169.254/latest/meta-data/instance-id)

                    # create a tag
                    aws ec2 create-tags --region ${aws_region} --resources $instance_id --tags Key="multicast",Value="${multicast_name}\,${multicast_cidr}"

                    exit 0
        • Option 2: Install bash script
          • Installation of mcd.sh
          • Start bash script
            • temporarily
            • daemon
              • CentOS
      • Check
        • Process
          • sudo ps -edalf | grep mcd
        • Logs
          • tail -f /var/log/messages | grep mcd
        • Created bridges
          • brctl show
        • Created GRE tunnels
          • ip link show
        • Routes
          • route -n
        • Members
          • netstat -g
        • Omping
          • from each instance:
            • omping 172.16.0.7 172.16.0.8 ...
          • if it does not work:
            • check if some gretap points to a non existing address
              • to get the remote address, check the logs in /var/log/messages when the gretap was created
      • Ús / Usage
        • routes
          • in each instance:
            • route add -net 224.0.0.0/4 mcbr-foo
        • application:
          • from one instance:
            • ffmpeg ...
          • from the other one:
            • ...
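        • example (an untested sketch; the multicast address, port and input file are placeholders, any address inside 224.0.0.0/4 routed through mcbr-foo behaves the same):
          • sender: ffmpeg -re -i input.mp4 -c copy -f mpegts "udp://239.1.1.1:1234?ttl=4"
          • receiver: ffplay "udp://239.1.1.1:1234"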

Big data and analytics

CLI

  • AWS Command Line Interface
  • Instal·lació / Installation (python)
  • Ús / Usage
    • aws --version ...
  • Problemes / Problems
    • ImportError: No module named history
      • the problem is that the combination of awscli (1.14.28) and botocore (1.6.0) installed from yum (awscli.noarch) does not work
      • Solucions / Solutions
        • Swap s3transfer packages (awscli-1.14.28-5):
          • yum swap python2-s3transfer python-s3transfer
        • Use pip
          • sudo pip install awscli
            • a pair of working packages is, e.g.: awscli==1.14.28, botocore==1.8.32
  • queries
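    • presumably the --query general option (client-side JMESPath filtering); a minimal sketch, with the output fields chosen as an example:
      • aws ec2 describe-instances --query 'Reservations[].Instances[].[InstanceId,State.Name]' --output table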
  • templates
    • --generate-cli-skeleton > skeleton.json
    • --cli-input-json file://skeleton.json
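    • example (a sketch using ec2 run-instances; any command supporting these options works the same way):
      • aws ec2 run-instances --generate-cli-skeleton > skeleton.json
      • edit skeleton.json with the desired values
      • aws ec2 run-instances --cli-input-json file://skeleton.json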
  • stdin/stdout
    • aws s3 cp s3://bucket/key - | bzip2 --best | aws s3 cp - s3://bucket/key.bz2
  • CLI reference
    • general options
      • --region eu-west-1
      • --profile myprofile
      • --output json
      • ...
    • aws autoscaling
      • launch configuration
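        • example (a sketch; the image id, key name and security group are placeholders):
          • aws autoscaling create-launch-configuration --launch-configuration-name lc_stateless_yyyymmdd_hhmm --image-id ami-xxxxxxxx --instance-type t2.micro --key-name my_keyname --security-groups sg-xxxxxxxx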
      • autoscaling group
        • aws autoscaling create-auto-scaling-group --auto-scaling-group-name grup_stateless --launch-configuration-name lc_stateless_yyyymmdd_hhmm --min-size 1 --max-size 2 --load-balancer-names lb-stateless
        • aws autoscaling update-auto-scaling-group --auto-scaling-group-name grup_stateless --launch-configuration-name lc_stateless_yyyymmdd_hhmm
        • number of instances inside an autoscaling group, given its name
          • result=$(aws --output json autoscaling describe-auto-scaling-groups --auto-scaling-group-names $asg_name)
            number_instances=$(echo $result | jq '.AutoScalingGroups[0].Instances | length')
        • get autoscaling group, given its tag Name=myname:
          • asg_info=$(aws --output json autoscaling describe-auto-scaling-groups)
            asg_name_tag_value="myname"
            asg=$(echo "$asg_info" | jq ".AutoScalingGroups[] | select( .Tags[] | . and .Key==\"Name\" and .Value==\"${asg_name_tag_value}\")")
        • aws autoscaling set-desired-capacity --auto-scaling-group-name $asg_name --desired-capacity 2
        • Instance protection
          • aws autoscaling set-instance-protection --instance-ids i-93633f9b --auto-scaling-group-name my-auto-scaling-group --protected-from-scale-in
        • given an instance id, get the autoscaling group name it belongs to:
          • aws ec2 describe-tags --filters "Name=resource-id,Values=$instance_id" "Name=key,Values=aws:autoscaling:groupName" | jq '.Tags[] | .Value'
      • get all launch configurations (with pagination)
        • aws_options="--profile my_profile --output json --region eu-west-1"
          next_token=""
          max_items=50
          page_size=100
          total_number=0
          total_elements=$(jq -n '[]')

          while [[ $next_token != "null" ]]
          do
              if [[ "$next_token" ]]
              then
                  lc_info=$(aws ${aws_options} autoscaling describe-launch-configurations --max-items $max_items --page-size $page_size --starting-token $next_token)
              else
                  lc_info=$(aws ${aws_options} autoscaling describe-launch-configurations --max-items $max_items --page-size $page_size)
              fi
              #echo $lc_info | jq '.'
              returned_number=$(echo $lc_info | jq  '.LaunchConfigurations | length' )
              returned_elements=$(echo $lc_info | jq  '.LaunchConfigurations')
              echo $returned_elements | jq '.'

              total_elements=$( echo $total_elements | jq ". += $returned_elements")
              echo "returned_number: $returned_number"
              total_number=$(( total_number + returned_number ))
              echo "total_number: $total_number"
              echo $lc_info | jq  '.LaunchConfigurations[] | ([.LaunchConfigurationName] | join (" "))'

              next_token=$(echo $lc_info | jq -r '.NextToken')
              echo "next_token: $next_token"
             
          done


          echo "---------------------------"
          echo $total_elements | jq '.[].LaunchConfigurationName'
    • aws cloudformation
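      • basic stack lifecycle (a sketch; my_stack and my_template.json are placeholders):
        • aws cloudformation create-stack --stack-name my_stack --template-body file://my_template.json
        • aws cloudformation describe-stacks --stack-name my_stack
        • aws cloudformation delete-stack --stack-name my_stack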
    • aws cloudfront
      • aws configure set preview.cloudfront true
      • list all distributions
        • aws cloudfront list-distributions --output json
      • get a specific distribution:
        • aws cloudfront get-distribution --id E1KBXTVP599T0A
      • get the config for a specific distribution
        • aws cloudfront get-distribution-config --id E1KBXTVP599T0A
      • update distribution
        • aws cloudfront get-distribution-config --id ${cloudfront_id} --output json > /tmp/${cloudfront_id}.json

          # get etag
          etag=$(jq -r '.ETag' /tmp/${cloudfront_id}.json)

          # modify /tmp/${cloudfront_id}.json
          ...


          aws cloudfront update-distribution --id ${cloudfront_id} --if-match $etag --distribution-config "$(jq -c '.DistributionConfig' /tmp/${cloudfront_id}.json)"
    • aws configure
      • boto3 credentials
      • generated files
        • ~/.aws/credentials (for all SDKs)
        • ~/.aws/config (only for CLI)
      • crearà / will create: ~/.aws/config
        • [default]
          output = json
          region = eu-west-1
          aws_access_key_id = xxx
          aws_secret_access_key = yyy

          [preview]
          cloudfront = true
      • i/and ~/.aws/credentials (? now included in ~/.aws/config)
        • [default]
          aws_access_key_id = xxx
          aws_secret_access_key = yyy
      • per a fer servir un altre perfil / to use another profile
        • aws --profile myprofile configure
        • will create ~/.aws/config
          • [profile myprofile]
            output = json
            region = eu-west-1
        • and ~/.aws/credentials
          • [myprofile]
            aws_access_key_id = xxx
            aws_secret_access_key = yyy
        • aws --profile myprofile ...
      • per a fer servir un altre fitxer de configuració / to use an alternate config file (e.g. /etc/aws/config):
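        • presumably via the AWS_CONFIG_FILE environment variable:
          • export AWS_CONFIG_FILE=/etc/aws/config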
    • aws efs
      • file system
        • name="fs-toto"
          creation_token=$(openssl rand -hex 10)
          response=$(aws efs create-file-system --creation-token ${creation_token} --tags Key=Name,Value=${name})
          file_system_id=$(echo ${response} | jq -r '.FileSystemId')
          echo ${file_system_id}
      • mount targets
        • create a security group for NFS (port 2049)
          • group_name="sgroup-nfs-toto"
            vpc_id="vpc-..."
            description="Enable port 2049 (nfs)"
            response=$(aws ec2 create-security-group --group-name ${group_name} --vpc-id ${vpc_id} --description "${description}")
            group_id=$(echo ${response} | jq -r '.GroupId')
            echo ${group_id}
          • protocol=tcp
            port=2049
            cidr="" # cidr of the subnet inside the vpc
            aws ec2 authorize-security-group-ingress --group-id ${group_id} --protocol ${protocol} --port ${port} --cidr "${cidr}"
        • create a mount target
          • subnet_id="subnet-..."
            response=$(aws efs create-mount-target --file-system-id ${file_system_id} --subnet-id ${subnet_id} --security-groups ${group_id})
            mount_target_id=$(echo ${response} | jq -r '.MountTargetId')
            echo ${mount_target_id}
        • get info about a mount target
          • response=$(aws efs describe-mount-targets --mount-target-id ${mount_target_id})
      • mount efs from an instance
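        • a sketch, assuming the instance is in eu-west-1, has nfs-utils installed and can reach the mount target on port 2049:
          • sudo mkdir -p /mnt/efs
          • sudo mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 ${file_system_id}.efs.eu-west-1.amazonaws.com:/ /mnt/efs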
    • aws ec2
      • instances
        • aws ec2 start-instances --instance-ids i-9b789ed8
        • aws ec2 describe-instances --filters Name=tag:Name,Values=ubuntu_1310
          • ...
            TAGS    Name    ubuntu_1310
          • get PublicIpAddress
            • aws --output json ec2 describe-instances --instance-ids i-9b789ed8 | jq -r '.Reservations[0].Instances[0].PublicIpAddress'
        • aws ec2 describe-instance-attribute --instance-id i-896b92c9 --attribute instanceType
        • aws ec2 stop-instances --instance-ids i-9b789ed8
        • aws ec2 terminate-instances --instance-ids i-9b789ed8
        • waiters
          • aws ec2 wait instance-running --instance-id i-896b92c9
          • aws ec2 wait instance-status-ok --instance-id i-896b92c9
          • aws ec2 wait instance-stopped --instance-id i-896b92c9
      • images
        • create an image:
          • instance_id=i-xxxxxxx
            image_prefix=image_u1404
            data=$(date '+%Y%m%d_%H%M')
            imatge_id=$(aws --output json ec2 create-image --instance-id ${instance_id} --name "${image_prefix}_${data}" --description "My description" | jq -r '.ImageId')
            echo "${imatge_id} ${image_prefix}_${data} (from ${instance_id})"
        • create an image but root volume (/dev/sda1) will be destroyed on termination (useful when this image will be used in a launch configuration of an autoscaling group)
          • instance_id=i-xxxxxxx
            image_prefix=image_u1404
            data=$(date '+%Y%m%d_%H%M')
            imatge_id=$(aws --output json ec2 create-image --instance-id ${instance_id} --name "${image_prefix}_${data}" --description "My description" --block-device-mappings "[{\"DeviceName\": \"/dev/sda1\",\"Ebs\":{\"VolumeType\":\"gp2\",\"DeleteOnTermination\":true}}]" | jq -r '.ImageId')
            echo "${imatge_id} ${image_prefix}_${data} (from ${instance_id})"
        • describe images:
          • get all own images
            • aws ec2 describe-images --owners self
            • aws_options="--profile my_profile --output json --region eu-west-1"
              ami_info=$(aws ${aws_options} ec2 describe-images --owners self)
              echo $ami_info | jq  '.Images | sort_by(.Name) | .[] | [.ImageId, .OwnerId, .Name] | join("  ")'
          • describe an image given its id
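            • e.g. (reusing an ami id from the other examples):
              • aws ec2 describe-images --image-ids ami-6be62b1c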
          • get an image by name:
            • aws ec2 describe-images --owners self --filters "Name=name,Values=my_image_name" --output json
          • using wildcards:
            • aws ec2 describe-images --owners self --filters "Name=name,Values=my_image_basename*" --output json
          • get image_id of an image with given its name:
            • aws ec2 describe-images --owners self --filters "Name=name,Values=my_image_name" --output json | awk '/ImageId/ {print $2}' | tr -d '",'
            • aws ec2 describe-images --owners self --filters "Name=name,Values=my_image_name" --output json | jq -r '.Images[].ImageId'
          • get a list of amis, sorted by creation date
            • aws ec2 describe-images --owners self --filters "Name=name,Values=my_image_basename*" --output json | jq -r '(.Images | sort_by(.CreationDate) | .[] | [.CreationDate, .ImageId] | join(" ") )'
          • get image_id of the most recent image:
            • aws ec2 describe-images --owners self --filters "Name=name,Values=my_image_basename*" --output json | jq -r '(.Images | sort_by(.CreationDate) | .[-1] | .ImageId )'
          • aws ec2 describe-image-attribute --image-id ami-6be62b1c --attribute description
        • waiters
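          • e.g. wait until a newly created AMI becomes available (a sketch reusing an ami id from the other examples):
            • aws ec2 wait image-available --image-ids ami-6be62b1c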
      • snapshots
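        • a couple of sketches (the volume id is a placeholder):
          • aws ec2 create-snapshot --volume-id vol-xxxxxxxx --description "My snapshot"
          • aws ec2 describe-snapshots --owner-ids self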
      • add tags to an:
        • instance:
          • aws ec2 create-tags --resources i-9b789ed8 --tags Key=Name,Value=ubuntu_1404
        • image:
          • aws ec2 create-tags --resources ami-6be62b1c --tags Key=Name,Value=image_ubuntu_1404
          • value="..."
            # escape commas and remove single quotes
            escaped_value=$(echo ${value//,/\\,} | tr -d "'")

            aws ec2 create-tags --resources ami-6be62b1c --tags Key=Name,Value="${escaped_value}"
      • get tags from an:
        • instance
          • aws ec2 describe-tags --filters "Name=resource-id,Values=i-1234567890abcdef8"
      • create an instance from an image:
        • aws ec2 run-instances --image-id ami-6be62b1c --security-groups launch-wizard-2 --count 1 --key-name my_keyname --placement AvailabilityZone='eu-west-1a',Tenancy='default' --instance-type t2.micro
        • get the instance_id:
          • response in json (parse with jq)
            • instance_description=$(aws --output json ec2 run-instances --image-id $image_id --security-groups $security_group_name --iam-instance-profile Name="role-nfs_server" --count 1 --key-name $key_name --placement AvailabilityZone=${availability_zone},Tenancy='default' --instance-type $instance_type --block-device-mappings 'DeviceName=/dev/sda1,Ebs={DeleteOnTermination=true,VolumeType=gp2}' )
            • instance_id=$(echo $instance_description | jq -r '.Instances[0].InstanceId')
          • response in text (parse with awk)
            • instance_id=$(aws --output text ec2 run-instances --image-id ami-6be62b1c --security-groups launch-wizard-2 --count 1 --key-name my_keyname --placement AvailabilityZone='eu-west-1a',Tenancy='default' --instance-type t2.micro | awk '/INSTANCES/ {print $8}')
            • aws ec2 describe-instances --instance-ids $instance_id
        • overwrite the delete on termination behaviour:
          • aws ec2 run-instances --image-id ami-6be62b1c ... --block-device-mappings 'DeviceName=/dev/sda1,Ebs={DeleteOnTermination=true,VolumeType=gp2}'
        • ...
          • dades d'usuari / user data:
            • when creating instance:
            • des de la instància / from ec2 instance:
              • curl http://169.254.169.254/latest/user-data/
            • ec2-run-user-data (ec2ubuntu) (already installed in EC2 Ubuntu ami)
        • assign a role to the instance
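          • either at launch time with --iam-instance-profile (see the run-instances example above), or on a running instance (a sketch; the profile name is a placeholder):
            • aws ec2 associate-iam-instance-profile --instance-id i-9b789ed8 --iam-instance-profile Name=my_instance_profile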
        • Problems:
          • Client.InvalidParameterCombination: Could not create volume with size 10GiB and iops 30 from snapshot 'snap-xxxxx'
            • AutoScaling - Client.InvalidParameterCombination
            • Re: Stabilization Error (Again)
            • "Iops" should not be there (?)
            • Workaround: create image from web interface instead
            • Solution: add --block-device-mappings option (following example is for 10GiB volume)
              • aws ec2 create-image --instance-id i-xxxxxxxx --name AMIName --block-device-mappings '[{"DeviceName":"/dev/sda1","Ebs":{"VolumeType":"gp2","DeleteOnTermination":"true","VolumeSize":10}}]'
      • volumes
        • create a volume
          • volume_description=$(aws ec2 create-volume --availability-zone $availability_zone --volume-type $volume_type --size $small_volume_size_gibytes)
          • volume_id=$(echo $volume_description | jq -r '.VolumeId')
        • wait for volume to be created
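          • e.g. (using the volume_id obtained above):
            • aws ec2 wait volume-available --volume-ids $volume_id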
        • tag a volume with a name
          • aws ec2 create-tags --resources $volume_id --tags Key=Name,Value=$volume_name
        • describe a volume with specified name:
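          • e.g. (same tag filter as in the availability zone example below):
            • aws ec2 describe-volumes --filters "Name=tag:Name,Values=$volume_name"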
        • get availability zone of a volume
          • aws_cli_options="--profile my_profile --output json"
            volume_name=my-volume-name
            volume_description=$(aws $aws_cli_options ec2 describe-volumes --filters "Name=tag:Name,Values=$volume_name")
            availability_zone=$(echo $volume_description | jq -r '.Volumes[0].AvailabilityZone')
        • attach a volume to an instance
          • aws ec2 attach-volume --volume-id $volume_id --instance-id $instance_id --device /dev/sd${volume_letter}
        • detach volume
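          • e.g. (unmount the filesystem on the instance first):
            • aws ec2 detach-volume --volume-id $volume_id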
        • list all volumes
          • response=$(aws --output json ec2 describe-volumes)

            while IFS= read -r; do
                volume=$REPLY
                volume_id=$(echo $volume | jq -r '.VolumeId')
                echo "---  VolumeId: $volume_id"

                #tags=$(echo $volume | jq -c -r '(.Tags | values | .[] | select(.Key == "Name") )')
                tags=$(echo $volume | jq -c -r '(.Tags | values | .[] )')
                # iterate over the tags line by line (robust if a tag value contains spaces)
                while IFS= read -r tag
                do
                    [ -z "$tag" ] && continue
                    tag_key=$(echo $tag | jq -r '.Key')
                    tag_value=$(echo $tag | jq -r '.Value')
                    echo "    $tag_key: $tag_value"
                done <<< "$tags"
            done < <(echo "$response" | jq -c -r '.[] | .[]')
    • aws elb
      • create a load balancer and associate to an instance:
        • aws elb create-load-balancer --load-balancer-name lb-stateful --listeners Protocol=HTTP,LoadBalancerPort=80,InstanceProtocol=HTTP,InstancePort=80 --availability-zones eu-west-1a eu-west-1b eu-west-1c --security-groups sg-ba33c2df
        • aws elb configure-health-check --load-balancer-name lb-stateful --health-check Target=TCP:80,Interval=30,Timeout=10,UnhealthyThreshold=2,HealthyThreshold=2
        • aws elb register-instances-with-load-balancer --load-balancer-name lb-stateful --instances i-9b789ed8
      • create a load balancer to be associated to an autoscaling group:
        • aws elb create-load-balancer --load-balancer-name lb-stateless --listeners Protocol=TCP,LoadBalancerPort=1935,InstanceProtocol=TCP,InstancePort=1935 Protocol=HTTP,LoadBalancerPort=8080,InstanceProtocol=HTTP,InstancePort=8080 --availability-zones eu-west-1a eu-west-1b eu-west-1c --security-groups sg-ba33c2df
        • aws elb configure-health-check --load-balancer-name lb-stateless --health-check Target=TCP:8080,Interval=30,Timeout=10,UnhealthyThreshold=2,HealthyThreshold=2
      • add an HTTPS listener with ARN of an uploaded IAM certificate
        • aws --region eu-west-1 elb create-load-balancer-listeners --load-balancer-name $load_balancer_name --listeners Protocol=HTTPS,LoadBalancerPort=443,InstanceProtocol=HTTPS,InstancePort=443,SSLCertificateId=$ARN
      • modify the certificate of an existing listener for a given port:
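        • a sketch, reusing $load_balancer_name from above and an $ARN obtained as in the iam section below:
          • aws elb set-load-balancer-listener-ssl-certificate --load-balancer-name $load_balancer_name --load-balancer-port 443 --ssl-certificate-id $ARN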
    • aws iam
      • certificats de servidor / server certificates
        • upload a certificate
          • obtained e.g. from Letsencrypt
            • letsencrypt_dirname=/etc/letsencrypt
              aws iam upload-server-certificate --server-certificate-name cert-${domain} \
                  --certificate-body file://${letsencrypt_dirname}/live/${domain}/cert.pem \
                  --private-key file://${letsencrypt_dirname}/live/${domain}/privkey.pem \
                  --certificate-chain file://${letsencrypt_dirname}/live/${domain}/chain.pem
          • self-signed, to be used in cloudfront:
            • openssl req -new -nodes -keyout www.toto.org.key -sha256 -x509 -days 365 -out www.toto.org.crt
              • Common Name: www.toto.org
            • aws iam upload-server-certificate \
                  --server-certificate-name cert-www.toto.org \
                  --certificate-body file://.../www.toto.org.crt \
                  --private-key file://.../www.toto.org.key \
                  --certificate-chain file://.../www.toto.org.crt \
                  --path /cloudfront/
        • get a list of server certificates
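          • e.g.:
            • aws iam list-server-certificates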
        • get ARN of a certificate (will be specified when adding a listener to a ELB)
          • ARN=$(aws --output json iam get-server-certificate --server-certificate-name cert-${domain} | jq -r '.ServerCertificate.ServerCertificateMetadata.Arn')
        • check if a certificate is available
          • ...
    • aws route53
      • Adding EC2 instances to Route53 (bash+boto)
      • aws route53 list-resource-record-sets --hosted-zone-id xxxxxx
      • aws route53 change-resource-record-sets --hosted-zone-id xxxxxx --change-batch file:///absolute_path_to/change_entry.json
        • change_entry.json (to modify record www.toto.org; e.g. TTL value)
          • {
              "Comment": "Modifying TTL to 55",
              "Changes": [
                {
                  "Action": "UPSERT",
                  "ResourceRecordSet": {
                    "Name": "www.toto.org.",
                    "Type": "A",
                    "TTL": 55,
                    "ResourceRecords": [
                      {
                        "Value": "xx.xx.xx.xx"
                      }
                    ]
                  }
                }
              ]
            }
      • Auto configuration of Route53 from EC2 at boot (Ubuntu Upstart)
        • previously, from any computer:
          • option 1 (preferred): create a role and assign it to the instance when launching it
            • ...
          • option 2: create a user
            • create a user and group that can only modify Route53 entries:
              • from web interface:
                • Grup / Group
                  • Group name
                    • grup_nomes_route53
                  • Policy
                    • {
                        "Version": "2012-10-17",
                        "Statement": [
                          {
                            "Effect": "Allow",
                            "Action": [
                              "route53:*"
                            ],
                            "Resource": [
                              "*"
                            ]
                          }
                        ]
                      }
              • alternatively, from aws cli:
                • ...
        • once logged into EC2 instance:
          • only if used option 2 (not using a role):
          • /usr/local/bin/route53.sh
            • #!/bin/bash

              # zone_id for mydomain.org
              ZONE_ID=$(cat zone_id.txt)
              ip_address=$(curl http://169.254.169.254/latest/meta-data/public-ipv4)
              name=www.mydomain.org
              tmp_file=/tmp/modify_record_set.json
              rm -f $tmp_file

              #aws route53 list-resource-record-sets --hosted-zone-id $ZONE_ID

              cat > $tmp_file <<EOF
              {
                "Comment": "Modifying ip address",
                "Changes": [
                  {
                    "Action": "UPSERT",
                    "ResourceRecordSet": {
                      "Name": "${name}.",
                      "Type": "A",
                      "TTL": 60,
                      "ResourceRecords": [
                        {
                          "Value": "$ip_address"
                        }
                      ]
                    }
                  }
                ]
              }
              EOF
              #source /opt/p27/bin/activate
              aws route53 change-resource-record-sets --hosted-zone-id $ZONE_ID --change-batch file://$tmp_file
              #deactivate
              exit 0
          • init script
            • CentOS
              • /etc/systemd/system/route53.service
                • [Unit]
                  Description=Description of my script
                  After=syslog.target network.target

                  [Service]
                  Type=oneshot
                  ExecStart=/usr/local/bin/route53.sh

                  [Install]
                  WantedBy=multi-user.target
              • sudo systemctl enable route53.service
              • sudo systemctl start route53.service
            • Ubuntu
              • /etc/init/route53.conf
                • description "route53 daemon"
                   
                  start on (filesystem and net-device-up IFACE=lo)
                  stop on runlevel [!2345]
                   
                  env DAEMON=/usr/local/bin/route53.sh
                  env PID=/var/run/route53.pid

                  env AWS_CONFIG_FILE=/home/ubuntu/.aws/credentials

                  exec $DAEMON
    • aws s3
      • Note: no need to create directories: they do not really exist in S3 (they are just part of the object key)
      • list all buckets
        • aws s3 ls
      • list all files in a "directory":
        • aws s3 ls --recursive s3://my_bucket/my_dir1/
      • copy a single file to S3:
        • aws s3 cp toto.txt s3://my_bucket/
      • recursively copy to S3:
        • aws s3 cp --recursive . s3://my_bucket/
      • recursively copy from S3:
        • aws s3 cp --recursive s3://my_bucket/ .
      • sync
        • aws s3 sync ... --exclude '*.png' --exclude 'log' ...
        • Problemes / Problems

Boto (Python)

  • Instal·lació / Installation
    • v3
      • pip install boto3
    • v2
      • pip install boto
      • Alternative: get it from git and install it:
        • cd ~/src
        • git clone https://github.com/boto/boto.git
        • cd boto
        • [source /opt/PYTHON27/bin/activate]
        • python setup.py install
  • Credencials / Credentials
    • aws configure
    • ~/.boto
      • [Credentials]
        aws_access_key_id = xxxx
        aws_secret_access_key = yyyy
    • Django
      • if the instance running Django has no IAM role and Django uses boto3 (e.g. django-storages), the credentials must be set in settings.py
      • settings.py
        • AWS_ACCESS_KEY_ID = '...'
          AWS_SECRET_ACCESS_KEY = '...'
    • Celery
      • on an instance running Celery / Django without an IAM role, if a process called by Celery uses boto3 (e.g. django-storages), the variables defined in Django settings.py are not available; they must be set explicitly in celery.conf (loaded as EnvironmentFile by celery.service)
      • celery.conf
        • AWS_ACCESS_KEY_ID = '...'
          AWS_SECRET_ACCESS_KEY = '...'
  • Docs
  • Problemes / Problems
  • Usage
    • Amazon EC2 Basics For Python Programmers
    • Difference in boto3 between resource, client, and session?



    • examples
      • default session
      • specific session
      • Amazon S3 Examples
      • Using an Amazon S3 Bucket as a Static Web Host
    • Session
      • stores configuration information (primarily credentials and selected region)
      • allows you to create service clients and resources
      • boto3 creates a default session for you when needed
      • creation
        • session = boto3.Session(profile_name='dev')
    • Resource
      • higher-level, object-oriented API
      • generated from resource description
      • uses identifiers and attributes
      • has actions (operations on resources)
      • exposes subresources and collections of AWS resources
      • does not provide 100% API coverage of AWS services
      • creation
        • s3_resource = boto3.resource('s3') ...
      • methods
        • s3_resource.create_bucket(Bucket='mybucket')
    • Client
      • low-level AWS service access
      • generated from AWS service description
      • exposes botocore client to the developer
      • typically maps 1:1 with the AWS service API
      • all AWS service operations are supported by clients
      • snake-cased method names (e.g. ListBuckets API => list_buckets method)
      • creation
        • from resource
          • s3_client = s3_resource.meta.client
        • from session
          • s3_client = boto3.client('s3')
          • s3_client = session.client('s3')
      • methods (Creating and Using Amazon S3 Buckets)
        • s3_client.list_buckets()
        • s3_client.create_bucket(Bucket='mybucket')
        • s3_client.upload_file()
        • ...
    • autoscaling
    • s3
    • cloudformation
      • boto.cloudformation
      • Examples with boto3:
        • List of all stacks using a paginator:
          • Listing more than 100 stacks using boto3
          • import boto3

            cloudformation_resource = boto3.resource('cloudformation', region_name='eu-west-1')
            client = cloudformation_resource.meta.client
            number_stacks = 0
                   
            paginator = client.get_paginator('list_stacks')
            #response_iterator = paginator.paginate(StackStatusFilter=['CREATE_COMPLETE'])
            response_iterator = paginator.paginate()
            for page in response_iterator:
                stacks = page['StackSummaries']
                for stack in stacks:
                    stack_name = (stack['StackName'])
                    stack_status = (stack['StackStatus'])
                    print('{} {} {}'.format(number_stacks, stack_name, stack_status))
                    number_stacks += 1
      • Examples with boto v2:
        • single EC2 instance
          • single_ec2.json
          • single_ec2.py
            • import boto.cloudformation
              from django.conf import settings
              ...

                  try:
                      conn = boto.cloudformation.connect_to_region( settings.AWS_DEFAULT_REGION )
                      stack = conn.create_stack(self.name,
                          template_body=template_body,
                          template_url=None,
                          parameters=[],
                          notification_arns=[],
                          disable_rollback=False,
                          timeout_in_minutes=None,
                          capabilities=None)
        • single EC2 entry with Route53
          • single_ec2_r53.json
          • single_ec2_r53.py
            •         try:
                          # connect to the cloud and create the stack
                         
                          # connect to AWS
                          conn = boto.cloudformation.connect_to_region( settings.AWS_DEFAULT_REGION )

                          # check if the stack already exists
                          existing_stacks = [s.stack_name for s in conn.describe_stacks()]
                          logger.debug("   Existing stacks: %s" % existing_stacks)
                          if self.name in existing_stacks:
                              logger.error("   Stack %s is already created" % self.name)
                              raise Exception("Stack %s is already created" % self.name)
                         
                          # create the stack
                          conn.create_stack(self.name,
                                            template_body=template_body,
                                            template_url=None,
                                            parameters=[
                                                        ('HostedZone','toto.org'),
                                                        ],
                                            notification_arns=[],
                                            disable_rollback=False,
                                            timeout_in_minutes=None,
                                            capabilities=None)
                         
                          # wait for COMPLETE
                          ready = False
                          while not ready:
                              stacks = conn.describe_stacks(self.name)
                              if len(stacks) == 1:
                                  stack = stacks[0]
                              else:
                                  raise Exception("Stack %s has not been created" % self.name)
                              logger.debug("   stack status: %s" % stack.stack_status)
                              # CREATE_COMPLETE, ROLLBACK_COMPLETE
                              ready = (string.find(stack.stack_status, 'COMPLETE')) != -1
                              time.sleep(5)
                         
                          # get output information
                          outputs = dict()
                          for output in stack.outputs:
                              outputs[output.key] = output.value
                         
                          logger.debug("   DomainName: %s" % outputs['DomainName'])
        • ...
          • my_file.py
            •             # connect to AWS
                          conn = boto.cloudformation.connect_to_region( settings.AWS_DEFAULT_REGION )
                         
                          stacks = conn.describe_stacks(stack_name)
                          if len(stacks) == 1:
                              stack = stacks[0]
                          else:
                              raise Exception("Stack %s does not exist" % stack_name)

                          # CREATE_COMPLETE, ROLLBACK_COMPLETE
                          ready = (string.find(stack.stack_status, 'CREATE_COMPLETE')) != -1
                  
                          if ready:       
                              # get parameters (conversion from ResultSet to dictionary)
                              parameters = {item.key:item.value for item in stack.parameters}
                             
                              # get output information (conversion from ResultSet to dictionary)
                              outputs = {item.key:item.value for item in stack.outputs}
    • cloudfront
      • Create an invalidation:
        •     try:
                  import boto3
              except Exception as e:
                  print 'ERROR: %s' % e

              profile_name = 'my_profile'
              session = boto3.Session(profile_name=profile_name)
              cloudfront_client = session.client('cloudfront')

              distribution_id = 'xxxxxx'
                
              # create invalidation
              import time
              response = cloudfront_client.create_invalidation(
                              DistributionId=distribution_id,
                              InvalidationBatch={
                                  'Paths': {
                                      'Quantity': 1,
                                      'Items': ['/*']
                                  },
                              'CallerReference': str(time.time())
                              }
                          )

              print response
    • autoscaling
      • Examples with boto3
        • Get ids of instances inside an autoscaling group with specified name (using JMESPath):
          • # get the list of instances in the autoscaling group
            client_autoscaling = boto3.client('autoscaling', region_name='eu-west-1')
            paginator = client_autoscaling.get_paginator('describe_auto_scaling_instances')
            page_iterator = paginator.paginate(
                PaginationConfig={'PageSize': 50}
            )
                       
            # ids for instances whose 'AutoScalingGroupName' == asg_name
            # http://boto3.readthedocs.io/en/latest/guide/paginators.html#filtering-results-with-jmespath
            filtered_instances = page_iterator.search(
                'AutoScalingInstances[?AutoScalingGroupName == `{}`]'.format(asg_name)
            )
            instance_ids = [ i['InstanceId'] for i in filtered_instances ]
        • Get all volumes in zones starting with 'eu-'
          • import boto3

            ec2_client = boto3.client('ec2', region_name='eu-west-1')
               
            paginator = ec2_client.get_paginator('describe_volumes')
            page_iterator = paginator.paginate(
                PaginationConfig={'PageSize': 50}
            )
                          
            zone_prefix = 'eu-'
            filtered_volumes = page_iterator.search(
                'Volumes[?starts_with(AvailabilityZone,`{}`)]'.format(zone_prefix)
            )
        • ... and size is greater than 80 (GiB):
          • zone_prefix = 'eu-'
            filtered_volumes = page_iterator.search(
                'Volumes[?(starts_with(AvailabilityZone,`{}`) && Size>`80`)]'.format(zone_prefix)
            )
        • Have a tag "Name" whose value starts with "my-":
          • prefix = 'my-'
            filtered_volumes = page_iterator.search(
                'Volumes[?(Tags[?Key==`Name`] | [?starts_with(Value,`{}`)])]'.format(prefix)
            )
    • ec2
      • An Introduction to boto’s EC2 interface
      • EC2 (API reference)
      • Amazon EC2 Deployment with Boto
      • volumes (boto3)
      • yourfile.py
        • conn = boto.ec2.connect_to_region("eu-west-1", aws_access_key_id=settings.AWS_ACCESS_KEY_ID, aws_secret_access_key=settings.AWS_SECRET_ACCESS_KEY)
          reservation = conn.run_instances( 'ami-...', security_groups=['launch-wizard'], min_count=1, max_count=1, key_name='parell_key', placement='eu-west-1a', tenancy='default', instance_type='t1.micro')
          instance = reservation.instances[0]
                     
          while instance.state != 'running':
              time.sleep(5)
              instance.update() # Updates Instance metadata
              print "Instance state: %s" % (instance.state)

          # add Name tag
          instance.add_tag("Name","your_instance_name")

          print "Instance ID: %s" % instance.id
          print "Instance IP address: %s" % instance.ip_address
        • # get all the reservations with a given Name tag:
          reservations = conn.get_all_instances(filters={'tag:Name': 'your_instance_name'})
          # get the first reservation
          reservation = reservations[0]
        • # get all the reservations with a given instance_id
          reservations = conn.get_all_instances(instance_ids=["i-27823367"])
          # get the first reservation
          reservation = reservations[0]
          instance = reservation.instances[0]
          # get the value of tag "Name"
          name = instance.tags["Name"]
        • # connection
          conn = boto.ec2.autoscale.connect_to_region(settings.AWS_REGION,
                                                      aws_access_key_id=settings.AWS_ACCESS_KEY_ID_EC2,
                                                      aws_secret_access_key=settings.AWS_SECRET_ACCESS_KEY_EC2)
                 
          # get autoscaling group
          asg = conn.get_all_groups(names='my_name')[0]
                 
          # get instances
          instance_ids = [i.instance_id for i in asg.instances]
          print " Instances ID: %s" % (instance_ids)
                 
          # shutdown instances
          asg.shutdown_instances()
                 
          # wait for all instances to be shutdown
          instances = True
          while instances:
              time.sleep(5)
              asg = conn.get_all_groups('my_name')[0]
              if not asg.instances:
                  instances = False
              else:
                  logger.debug(" still some instances in group %s"%self.name)
                 
          # remove group
          asg.delete()
  • zip

Mobile

CloudWatch

  • Log
    • Python
      • WatchTower
    • system logs
      • awslogs
      • Instal·lació / Installation
      • Configuració / Setup
        • Upload the CloudWatch Agent Configuration File to Systems Manager Parameter Store
        • /opt/aws/amazon-cloudwatch-agent/etc/common-config.toml
          • [credentials]
            [proxy]
            [region]
        • Manually Create or Edit the CloudWatch Agent Configuration File
          • /opt/aws/amazon-cloudwatch-agent/etc/amazon-cloudwatch-agent.json


            • agent
              • metrics_collection_interval
              • region
              • credentials
                • role_arn
              • debug
              • logfile
            • metrics
              • namespace
              • append_dimensions
              • aggregation_dimensions
              • endpoint_override
              • metrics_collected
                • collectd
                • cpu
                  • resources
                  • totalcpu
                  • measurement[]
                    • rename
                    • unit
                  • metrics_collection_interval
                  • append_dimensions
                • disk
                  • resources
                  • measurement[]
                    • rename
                    • unit
                  • ignore_file_system_types
                  • metrics_collection_interval
                  • append_dimensions
                • diskio
                  • resources
                  • measurement[]
                    • rename
                    • unit
                  • metrics_collection_interval
                  • append_dimensions
                • swap
                  • measurement[]
                    • rename
                    • unit
                  • metrics_collection_interval
                  • append_dimensions
                • mem
                  • measurement[]
                    • rename
                    • unit
                  • metrics_collection_interval
                  • append_dimensions
                • net
                  • ...
                • netstat
                  • ...
                • processes
                  • ...
                • procstat
                • statsd
              • force_flush_interval
              • credentials
                • role_arn
            • logs
              • logs_collected
                • files
                  • collect_list[]
                    • file_path
                    • log_group_name
                    • log_stream_name
                    • timezone
                    • multi_line_start_pattern
                    • encoding
                • windows_events
                  • collect_list[]
                    • event_name
                    • event_levels
                    • log_group_name
                    • log_stream_name
                    • event_format
                • log_stream_name
                • endpoint_override
                • force_flush_interval
                • credentials
                  • role_arn
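        • Load the configuration into the agent (a sketch, assuming the agent is already installed and the JSON file above is used directly):
          • sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl -a fetch-config -m ec2 -s -c file:/opt/aws/amazon-cloudwatch-agent/etc/amazon-cloudwatch-agent.json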
        • Merge several json files:
          • awslogs_merge_conf.sh
            • #!/bin/bash

              EXPECTED_ARGS=2
              if (( $# != $EXPECTED_ARGS ))
              then
                  cat <<EOF
              Usage: `basename $0` config_dir config_file

              Examples:
              - `basename $0` /opt/aws/amazon-cloudwatch-agent/etc/amazon-cloudwatch-agent.d /opt/aws/amazon-cloudwatch-agent/etc/amazon-cloudwatch-agent.json
              EOF
                  exit 1
              fi

              # parameters
              config_dir=$1
              config_file=$2

              jq -s '.[0].logs.logs_collected.files.collect_list = [.[].logs.logs_collected.files.collect_list | add] | .[0]' ${config_dir}/*.json >${config_file}

              exit 0

http://www.francescpinyol.cat/aws.html
Primera versió: / First version: 2.X.2015
Darrera modificació: 30 de setembre de 2020 / Last update: 30th September 2020
