How to fix a Django AWS EC2 Gunicorn ExecStart/ExecStop error?

I am trying to point my AWS Route 53 domain to my EC2 IPv4 public IP for my Django app, but I'm running into some gunicorn issues. The strange thing is that I get a successful nginx configuration test message, yet it still doesn't work. I've already created a record set on Route 53.

Error:
gunicorn.service: Service lacks both ExecStart= and ExecStop= setting. Refusing.

settings.py:

ALLOWED_HOSTS = ['175.228.35.250', 'myapp.com']

gunicorn.service:

[Unit]
Description=gunicorn daemon
After=network.target
[Service]
User=ubuntu
Group=www-data
WorkingDirectory=/home/ubuntu/my_app
ExecStart=/home/ubuntu/my_app/venv/bin/gunicorn --workers 3 --bind unix:/home/ubuntu/my_app/my_app.sock my_app.wsgi:application
[Install]
WantedBy=multi-user.target

Nginx config:

server {
  listen 80;
  server_name 175.228.35.250 my_app.com www.my_app.com;
  location = /favicon.ico { access_log off; log_not_found off; }
  location /static/ {
      root /home/ubuntu/my_app;
  }
  location / {
      include proxy_params;
      proxy_pass http://unix:/home/ubuntu/my_app/my_app.sock;
  }
}

The nginx test is successful, yet the app won't run:

ubuntu@ip-175-228-35-250:~$ sudo nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
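
For reference, this is roughly how I reload and inspect the service after editing the unit file (assuming it lives at /etc/systemd/system/gunicorn.service):

# reload unit files after editing, then restart and check the service
sudo systemctl daemon-reload
sudo systemctl restart gunicorn
sudo systemctl status gunicorn
# recent log output for the unit
sudo journalctl -u gunicorn --no-pager -n 50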

Dynamically change branches on AWS CodePipeline

I am looking for a good way to run parametrized (customized) builds in CodePipeline where the branch can be changed dynamically.

A little background on the problem: I need an on-demand environment that is started from a given branch. We already use a Bamboo CI server for part of the infrastructure, where this is easily achievable with customized builds, as it is in Jenkins.

So basically I need a way to trigger a build on AWS CodePipeline with the branch as a variable.
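
For illustration, this is roughly how I can start an execution today with the Node.js SDK (the pipeline name and region are placeholders); note that there is no parameter for the branch, which is exactly my problem, since the branch is fixed in the pipeline's Source action configuration:

const AWS = require('aws-sdk');
const codepipeline = new AWS.CodePipeline({ region: 'us-east-1' });

// Starts a run of the pipeline, but always against the branch configured
// in the Source action; there is no "branch" parameter to pass here.
codepipeline.startPipelineExecution({ name: 'my-pipeline' }, (err, data) => {
  if (err) console.error(err);
  else console.log('started execution:', data.pipelineExecutionId);
});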

Is there any guarantee that AWS SQS FIFO messages with different MessageGroupIds will not stay unprocessed for a long time?

The Problem:
If I add all my messages with the same MessageGroupId, then even if I have multiple processors that can each handle 10 messages, I will not be able to process more than 10 messages concurrently; in practice a single processor does all the work and the others receive nothing.

Solution I Tried:
Simply add a different MessageGroupId for each message.
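Roughly what that looks like with the Node.js SDK (queue URL, region and IDs are placeholders):

const AWS = require('aws-sdk');
const sqs = new AWS.SQS({ region: 'us-east-1' });

// Each message gets its own MessageGroupId so that different consumers
// can process different groups in parallel.
sqs.sendMessage({
  QueueUrl: 'https://sqs.us-east-1.amazonaws.com/123456789012/my-queue.fifo',
  MessageBody: JSON.stringify({ orderId: 42 }),
  MessageGroupId: 'msg-42',            // unique per message
  MessageDeduplicationId: 'msg-42-v1', // required unless content-based deduplication is enabled
}).promise()
  .then((data) => console.log('sent', data.MessageId))
  .catch((err) => console.error(err));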

Question:
In case of a spike of new messages (more than all the processors can handle), is there a risk that the processors keep processing new messages and abandon old ones simply because they have different MessageGroupIds, or is there some guarantee that older messages get priority for delivery?

SES on Node.js not sending the email although it returns a 'success' response

I tried sending an email using SES. The to address is the same as the from address, and it is verified in AWS SES.
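
For completeness, the SES client used below is created roughly like this (the region is a placeholder for whatever the real setup uses):

var AWS = require('aws-sdk');
var ses = new AWS.SES({ apiVersion: '2010-12-01', region: 'us-east-1' });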

        usersList = [{
            "Destination": {
                "ToAddresses": [
                    "someuserthatisverifiedinSES@abc.xyz"
                ]
            },
            "ReplacementTemplateData": `{ "name":"User1", "message": "Hell no" }`
        }];
        defaultTemplateData = `{ "name":"Student", "message": ` + req.body.message + ` }`;

        var params = {
            Destinations: usersList,
            Source: "someuserthatisverifiedinSES@abc.xyz", /* required */
            Template: `SomeTemplate`, /* required */
            DefaultTemplateData: defaultTemplateData
        };
        // Create the promise and SES service object
        var sendPromise = ses.sendBulkTemplatedEmail(params).promise();

        // Handle promise's fulfilled/rejected states
        return sendPromise.then(
            function (data) {
                console.log('sent successfully');
                console.log(data);
                return data;
            }).catch(
            function (err) {
                console.log('error happened off');
                console.error(err, err.stack);
                return err;
            });
    }

I’m getting a successful response to this code.

{ ResponseMetadata: { RequestId: '14ada23f-someID-5be700be687f' },
  Status: [ { Status: 'Success',
      MessageId: '010001619e8a79ed-someID-000000' } ] }

But the email is not being delivered.

Keep specific instances in an Auto Scaling group with CloudFormation on AWS

I have an Auto Scaling group in AWS so I can change the number of instances created by CloudFormation.
However, when I want to scale the stack down, it terminates random instances in the group.

I need to know whether it is possible to reduce the stack size in CloudFormation but leave specific instances running, using the scaling group.

      "WebServerScaleUpPolicy" : {
  "Type" : "AWS::AutoScaling::ScalingPolicy",
  "Properties" : {
    "AdjustmentType" : "ChangeInCapacity",
    "AutoScalingGroupName" : { "Ref" : "AutoScalingServerGroup" },
    "Cooldown" : "60",
    "ScalingAdjustment" : "1"
  }
},
"WebServerScaleDownPolicy" : {
  "Type" : "AWS::AutoScaling::ScalingPolicy",
  "Properties" : {
    "AdjustmentType" : "ChangeInCapacity",
    "AutoScalingGroupName" : { "Ref" : "AutoScalingServerGroup" },
    "Cooldown" : "60",
    "ScalingAdjustment" : "-1"
  }
}
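
The closest thing I have found so far is per-instance scale-in protection set through the Auto Scaling API; a rough sketch with the Node.js SDK (the group name, region and instance ID are placeholders), though I am not sure how this interacts with a CloudFormation-managed group:

const AWS = require('aws-sdk');
const autoscaling = new AWS.AutoScaling({ region: 'us-east-1' });

// Mark a specific instance as protected from scale-in, so the group
// should terminate other instances when the desired capacity is reduced.
autoscaling.setInstanceProtection({
  AutoScalingGroupName: 'AutoScalingServerGroup',
  InstanceIds: ['i-0123456789abcdef0'],
  ProtectedFromScaleIn: true,
}, (err, data) => {
  if (err) console.error(err);
  else console.log('instance protected from scale-in');
});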

Any help appreciated.

Getting net::ERR_CONNECTION_CLOSED when I send a POST request to AWS Lambda

I have a base64 image data URI which I am sending to an AWS Lambda function,
but I get net::ERR_CONNECTION_CLOSED when I send the request.
Here's the code from the client side:

fetch(PUBLIC_ENDPOINT, {
  method: 'POST',
  headers: {
    Authorization: this.state.imgData,
    domain: 'xxxx',
    'Content-Type': 'image/png',
  },
})
  .then((response) => response.json())
  .then((data) => {
    console.log('Message:', data);
    document.getElementById('message').textContent = '';
    document.getElementById('message').textContent = data.message;
  })
  .catch((e) => {
    console.log('error', e);
  });

And here's the code for the Lambda API endpoint:

// assumed setup: the handler uses the AWS SDK and an S3 client
const AWS = require('aws-sdk');
const s3 = new AWS.S3();

module.exports.hello = (event, context, callback) => {
  const imgData = event.headers.Authorization;
  const clientName = event.headers.domain;
  console.log('data is : ', imgData);
  console.log('clientName is : ', clientName);

  const bucketParams = {
    Bucket: clientName
  };

  // strip the data URI prefix and decode the base64 payload into a buffer
  const buf = Buffer.from(imgData.replace(/^data:image\/\w+;base64,/, ''), 'base64');

  const uploadParams = {
    Bucket: clientName,
    Key: 'xxxxxx',
    Body: buf,
    ContentEncoding: 'base64',
    ContentType: 'image/png',
    ACL: 'public-read'
  };

  // upload the decoded image to S3 (note: callback(null, response) below
  // runs before this upload completes)
  s3.putObject(uploadParams, function (err, data) {
    if (err) {
      console.log(err);
      console.log('Error uploading data: ', data);
    } else {
      console.log('successfully uploaded the image!');
    }
  });

  const response = {
    statusCode: 200,
    headers: {
      'Content-Type': 'application/json',
      /* Required for CORS support to work */
      'Access-Control-Allow-Origin': '*',
      /* Required for cookies, authorization headers with HTTPS */
      'Access-Control-Allow-Credentials': true,
      'Access-Control-Allow-Headers': '*'
    },
    body: JSON.stringify({
      message: 'Go Serverless v1.0! Your function executed successfully!',
      input: event,
    }),
  };

  return callback(null, response);
};

So, could you tell me where I am going wrong? Why is it giving me the error mentioned above?
I'd also like to add that the Network tab shows the response status as 200,
so how can I be getting a connection closed error?

How to migrate Elasticsearch data to an AWS Elasticsearch domain?

I have Elasticsearch 5.5 running on a server with some data indexed in it. I want to migrate this data to an AWS Elasticsearch cluster. How can I perform this migration? I understand that one way is to create a snapshot of the ES cluster, but I am not able to find any proper documentation for this.
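
From what I can piece together, the snapshot route means registering an S3 repository on the source cluster (this needs the repository-s3 plugin; the bucket name is a placeholder) and taking a manual snapshot, roughly like this:

# on the self-managed 5.5 cluster, with the repository-s3 plugin installed
curl -XPUT 'http://localhost:9200/_snapshot/migration-repo' -H 'Content-Type: application/json' -d '{
  "type": "s3",
  "settings": { "bucket": "my-es-snapshots", "region": "us-east-1" }
}'

# take a snapshot of all indices into that repository
curl -XPUT 'http://localhost:9200/_snapshot/migration-repo/snapshot-1?wait_for_completion=true'

But I cannot find clear documentation on how to register the same repository on the AWS Elasticsearch domain and restore from it (it apparently requires a signed request and an IAM role that the domain can assume).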