Friday, 29 May 2020

Building Your First Serverless Service With AWS Lambda Functions

Many developers are at least marginally familiar with AWS Lambda functions. They’re reasonably straightforward to set up, but the vast AWS landscape makes it hard to see the big picture. With so many different pieces, it can be daunting and frustratingly hard to see how they fit into a normal web application.

The Serverless framework is a huge help here. It streamlines the creation, deployment, and most significantly, the integration of Lambda functions into a web app. To be clear, it does much, much more than that, but these are the pieces I’ll be focusing on. Hopefully, this post piques your interest and encourages you to check out the many other things Serverless supports. If you’re completely new to Lambda you might first want to check out this AWS intro.

There’s no way I can cover the initial installation and setup better than the quick start guide, so start there to get up and running. Assuming you already have an AWS account, you might be up and running in 5–10 minutes; and if you don’t, the guide covers that as well.
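For reference, and assuming the npm-based setup the guide describes, it typically boils down to a couple of commands like these (the key and secret values are placeholders for your own AWS credentials):

npm install -g serverless

serverless config credentials --provider aws --key YOUR_AWS_KEY --secret YOUR_AWS_SECRET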

Your first Serverless service

Before we get to cool things like file uploads and S3 buckets, let’s create a basic Lambda function, connect it to an HTTP endpoint, and call it from an existing web app. The Lambda won’t do anything useful or interesting, but this will give us a nice opportunity to see how pleasant it is to work with Serverless.

First, let’s create our service. Open any new, or existing web app you might have (create-react-app is a great way to quickly spin up a new one) and find a place to create our services. For me, it’s my lambda folder. Whatever directory you choose, cd into it from terminal and run the following command:

sls create -t aws-nodejs --path hello-world

That creates a new directory called hello-world. Let’s crack it open and see what’s in there.
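The most interesting files are handler.js and the serverless config file. At the time of writing, the aws-nodejs template generates a handler roughly like this (yours may differ slightly depending on your Serverless version):

'use strict';

module.exports.hello = async event => {
  return {
    statusCode: 200,
    body: JSON.stringify(
      {
        message: 'Go Serverless v1.0! Your function executed successfully!',
        input: event,
      },
      null,
      2
    ),
  };
};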

That async function simply returns a message. We could hit sls deploy in our terminal right now and deploy that Lambda function, which could then be invoked. But before we do that, let’s make it callable over the web.

Working with AWS manually, we’d normally need to go into API Gateway, create an endpoint, then create a stage, and tell it to proxy to our Lambda. With Serverless, all we need is a little bit of config.

Still in the hello-world directory? Open the serverless.yaml file that was created in there.

The config file actually comes with boilerplate for the most common setups. Let’s uncomment the http entries, and add a more sensible path. Something like this:

functions:
  hello:
    handler: handler.hello
#   The following are a few example events you can configure
#   NOTE: Please make sure to change your handler code to work with those events
#   Check the event documentation for details
    events:
      - http:
          path: msg
          method: get

That’s it. Serverless does all the grunt work described above.

CORS configuration 

Ideally, we want to call this from front-end JavaScript code with the Fetch API, but that unfortunately means we need CORS to be configured. This section will walk you through that.

In the configuration above, add cors: true beneath the method entry, like this:

functions:
  hello:
    handler: handler.hello
    events:
      - http:
          path: msg
          method: get
          cors: true

That’s it! CORS is now configured on our API endpoint, allowing cross-origin communication.

CORS Lambda tweak

While our HTTP endpoint is configured for CORS, it’s up to our Lambda to return the right headers. That’s just how CORS works. Let’s automate that by heading back into handler.js, and adding this function:

const CorsResponse = obj => ({
  statusCode: 200,
  headers: {
    "Access-Control-Allow-Origin": "*",
    "Access-Control-Allow-Headers": "*",
    "Access-Control-Allow-Methods": "*"
  },
  body: JSON.stringify(obj)
});

Before returning from the Lambda, we’ll send the return value through that function. Here’s the entirety of handler.js with everything we’ve done up to this point:

'use strict';
const CorsResponse = obj => ({
  statusCode: 200,
  headers: {
    "Access-Control-Allow-Origin": "*",
    "Access-Control-Allow-Headers": "*",
    "Access-Control-Allow-Methods": "*"
  },
  body: JSON.stringify(obj)
});


module.exports.hello = async event => {
  return CorsResponse("HELLO, WORLD!");
};

Let’s run it. Type sls deploy into your terminal from the hello-world folder.

When that runs, we’ll have deployed our Lambda function to an HTTP endpoint that we can call via Fetch. But… where is it? We could crack open our AWS console, find the API that Serverless created for us in API Gateway, then find the Invoke URL. It would look something like this.

The AWS console showing the Settings tab which includes Cache Settings. Above that is a blue notice that contains the invoke URL.

Fortunately, there is an easier way: just type sls info into our terminal.

Just like that, we can see that our Lambda function is available at the following path:

https://6xpmc3g0ch.execute-api.us-east-1.amazonaws.com/dev/msg

Woot, now let’s call it!

Let’s open up a web app and try fetching it. Here’s what our Fetch call looks like:

fetch("https://6xpmc3g0ch.execute-api.us-east-1.amazonaws.com/dev/msg")
  .then(resp => resp.json())
  .then(resp => {
    console.log(resp);
  });

We should see our message in the dev console.

Console output showing Hello World.

Now that we’ve gotten our feet wet, let’s repeat this process. This time, though, let’s make a more interesting, useful service. Specifically, let’s make the canonical “resize an image” Lambda, but instead of being triggered by a new S3 bucket upload, let’s let the user upload an image directly to our Lambda. That’ll remove the need to bundle any kind of aws-sdk resources in our client-side bundle.

Building a useful Lambda

OK, from the start! This particular Lambda will take an image, resize it, then upload it to an S3 bucket. First, let’s create a new service. I’m calling it cover-art but it could certainly be anything else.

sls create -t aws-nodejs --path cover-art

As before, we’ll add a path to our HTTP endpoint (which in this case will be a POST, instead of GET, since we’re sending the file instead of receiving it) and enable CORS:

# Same as before
  events:
    - http:
        path: upload
        method: post
        cors: true

Next, let’s grant our Lambda access to whatever S3 buckets we’re going to use for the upload. Look in your YAML file: there should be an iamRoleStatements section that contains boilerplate code that’s been commented out. We can leverage some of that by uncommenting it. Here’s the config we’ll use to enable the S3 buckets we want:

iamRoleStatements:
  - Effect: "Allow"
    Action:
      - "s3:*"
    Resource: ["arn:aws:s3:::your-bucket-name/*"]

Note the /* on the end. We don’t list specific bucket names in isolation, but rather paths to resources; in this case, that’s any resources that happen to exist inside your-bucket-name.
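If you’d rather not hand out the full s3:* permission set, you could scope the actions down. Here’s a minimal sketch, assuming the Lambda only ever needs to write objects:

iamRoleStatements:
  - Effect: "Allow"
    Action:
      - "s3:PutObject"
    Resource: ["arn:aws:s3:::your-bucket-name/*"]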

Since we want to upload files directly to our Lambda, we need to make one more tweak. Specifically, we need to configure the API endpoint to accept multipart/form-data as a binary media type. Locate the provider section in the YAML file:

provider:
  name: aws
  runtime: nodejs12.x

…and modify it to:

provider:
  name: aws
  runtime: nodejs12.x
  apiGateway:
    binaryMediaTypes:
      - 'multipart/form-data'

For good measure, let’s give our function an intelligent name. Replace handler: handler.hello with handler: handler.upload, then change module.exports.hello to module.exports.upload in handler.js.

Now we get to write some code

First, let’s grab some helpers.

npm i jimp uuid lambda-multipart-parser

Wait, what’s Jimp? It’s the library I’m using to resize uploaded images. uuid will be used to create new, unique file names for the resized images before uploading them to S3. Oh, and lambda-multipart-parser? That’s for parsing the file info inside our Lambda.
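To give you a feel for what we’ll be working with, here’s a hedged sketch of the payload that lambda-multipart-parser’s parse() resolves to, based on its documented output shape (the handler name here is just illustrative):

const awsMultiPartParser = require("lambda-multipart-parser");

module.exports.inspect = async event => {
  const formPayload = await awsMultiPartParser.parse(event);
  // formPayload.files is an array of { filename, content (a Buffer), contentType, encoding, fieldname }
  // Any plain (non-file) form fields show up as top-level string properties, e.g. formPayload.xyz
  const { filename, contentType } = formPayload.files[0];
  return { statusCode: 200, body: JSON.stringify({ filename, contentType }) };
};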

Next, let’s make a convenience helper for S3 uploading:

const uploadToS3 = (fileName, body) => {
  const s3 = new S3({});
  const params = { Bucket: "your-bucket-name", Key: `/${fileName}`, Body: body };

  return new Promise(res => {
    s3.upload(params, function(err, data) {
      if (err) {
        // Surface the S3 error to the caller in a CORS-friendly response
        return res(CorsResponse({ error: true, message: err }));
      }
      // On success, hand back the public URL of the uploaded object
      res(CorsResponse({ 
        success: true, 
        url: `https://${params.Bucket}.s3.amazonaws.com/${params.Key}` 
      }));
    });
  });
};

Lastly, we’ll plug in some code that reads the uploaded file, resizes it with Jimp (if needed), and uploads the result to S3. The final result is below.

'use strict';
const AWS = require("aws-sdk");
const { S3 } = AWS;
const path = require("path");
const Jimp = require("jimp");
const uuid = require("uuid/v4");
const awsMultiPartParser = require("lambda-multipart-parser");


const CorsResponse = obj => ({
  statusCode: 200,
  headers: {
    "Access-Control-Allow-Origin": "*",
    "Access-Control-Allow-Headers": "*",
    "Access-Control-Allow-Methods": "*"
  },
  body: JSON.stringify(obj)
});


const uploadToS3 = (fileName, body) => {
  const s3 = new S3({});
  const params = { Bucket: "your-bucket-name", Key: `/${fileName}`, Body: body };
  return new Promise(res => {
    s3.upload(params, function(err, data) {
      if (err) {
        // Surface the S3 error to the caller in a CORS-friendly response
        return res(CorsResponse({ error: true, message: err }));
      }
      // On success, hand back the public URL of the uploaded object
      res(CorsResponse({ 
        success: true, 
        url: `https://${params.Bucket}.s3.amazonaws.com/${params.Key}` 
      }));
    });
  });
};


module.exports.upload = async event => {
  const formPayload = await awsMultiPartParser.parse(event);
  const MAX_WIDTH = 50;
  return new Promise(res => {
    Jimp.read(formPayload.files[0].content, function(err, image) {
      if (err || !image) {
        return res(CorsResponse({ error: true, message: err }));
      }
      // Keep the original extension, but give the file a unique name
      const newName = `${uuid()}${path.extname(formPayload.files[0].filename)}`;
      // Only shrink the image if it's wider than we want
      if (image.bitmap.width > MAX_WIDTH) {
        image.resize(MAX_WIDTH, Jimp.AUTO);
      }
      image.getBuffer(image.getMIME(), (err, body) => {
        if (err) {
          return res(CorsResponse({ error: true, message: err }));
        }
        return res(uploadToS3(newName, body));
      });
    });
  });
};

I’m sorry to dump so much code on you, but this being a post about AWS Lambda and serverless, I’d rather not belabor the grunt work within the serverless function. Of course, yours might look completely different if you’re using an image library other than Jimp.
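For instance, if you reached for the sharp library instead, the resize step might look roughly like this. To be clear, this is just a sketch under that assumption, not what this post uses; the 50-pixel width mirrors the MAX_WIDTH above:

// Hypothetical alternative using sharp instead of Jimp
const sharp = require("sharp");

const resizeIfNeeded = async (buffer, maxWidth = 50) => {
  const { width } = await sharp(buffer).metadata();
  if (!width || width <= maxWidth) {
    return buffer; // already narrow enough; upload as-is
  }
  return sharp(buffer).resize({ width: maxWidth }).toBuffer();
};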

Let’s run it by uploading a file from our client. I’m using the react-dropzone library, so my JSX looks like this:

<Dropzone
  onDrop={files => onDrop(files)}
  multiple={false}
>
  <div>Click or drag to upload a new cover</div>
</Dropzone>

The onDrop function looks like this:

const onDrop = files => {
  let request = new FormData();
  request.append("fileUploaded", files[0]);


  fetch("https://yb1ihnzpy8.execute-api.us-east-1.amazonaws.com/dev/upload", {
    method: "POST",
    mode: "cors",
    body: request
  })
  .then(resp => resp.json())
  .then(res => {
    if (res.error) {
      // handle errors
    } else {
      // success - woo hoo - update state as needed
    }
  });
};

And just like that, we can upload a file and see it appear in our S3 bucket! 

Screenshot of the AWS interface for buckets showing an uploaded file in a bucket that came from the Lambda function.

An optional detour: bundling

There’s one optional enhancement we could make to our setup. Right now, when we deploy our service, Serverless is zipping up the entire service folder and sending all of it to our Lambda. The content currently weighs in at 10MB, since all of our node_modules are getting dragged along for the ride. We can use a bundler to drastically reduce that size. Not only that, but a bundler will cut deploy time and data usage, and improve cold start performance. In other words, it’s a nice thing to have.

Fortunately for us, there’s a plugin that easily integrates webpack into the serverless build process. Let’s install it with:

npm i serverless-webpack --save-dev

…and add it via our YAML config file. We can drop this in at the very end:

# Same as before
plugins:
  - serverless-webpack

Naturally, we need a webpack.config.js file, so let’s add that to the mix:

const path = require("path");
module.exports = {
  entry: "./handler.js",
  output: {
    libraryTarget: 'commonjs2',
    path: path.join(__dirname, '.webpack'),
    filename: 'handler.js',
  },
  target: "node",
  mode: "production",
  externals: ["aws-sdk"],
  resolve: {
    mainFields: ["main"]
  }
};

Notice that we’re setting target: node so Node-specific assets are treated properly. Also note that you may need to set the output filename to handler.js. I’m also adding aws-sdk to the externals array so webpack doesn’t bundle it at all; instead, it leaves the call to const AWS = require("aws-sdk"); alone, allowing it to be resolved by our Lambda at runtime. This is OK since Lambdas already have the aws-sdk available implicitly, meaning there’s no need for us to send it over the wire. Finally, mainFields: ["main"] tells webpack to use a package’s main entry and ignore any ESM module fields. This is necessary to fix some issues with the Jimp library.

Now let’s re-deploy, and hopefully we’ll see webpack running.

Now our code is bundled nicely into a single file that’s 935K, which zips down further to a mere 337K. That’s a lot of savings!
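If you’re curious to verify those numbers yourself, sls package builds the same deployment artifact locally (into a .serverless directory by default) without actually deploying anything:

sls package

ls -lh .serverless/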

Odds and ends

If you’re wondering how you’d send other data to the Lambda, you’d append whatever you want to the FormData request object from before. For example:

request.append("xyz", "Hi there");

…and then read formPayload.xyz in the Lambda. This can be useful if you need to send a security token, or other file info.
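Here’s a tiny sketch of what that looks like on the Lambda side (xyz simply mirrors the field name appended above):

module.exports.upload = async event => {
  const formPayload = await awsMultiPartParser.parse(event);
  // Plain form fields arrive alongside files, as strings
  const xyz = formPayload.xyz; // "Hi there" from the client
  // ...validate a token, branch on metadata, etc., then continue as before
};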

If you’re wondering how you might configure env variables for your Lambda, you might have guessed by now that it’s as simple as adding some fields to your serverless.yaml file. It even supports reading the values from an external file (presumably not committed to git). This blog post by Philipp Müns covers it well.
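As a rough, hedged sketch (the variable names and the env.yml file here are placeholders, not part of this post’s setup), it might look something like this in your config:

provider:
  name: aws
  runtime: nodejs12.x
  environment:
    BUCKET_NAME: your-bucket-name
    # pulled from a local file that stays out of git
    UPLOAD_SECRET: ${file(./env.yml):UPLOAD_SECRET}

Those values then show up on process.env inside the Lambda, e.g. process.env.BUCKET_NAME.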

Wrapping up

Serverless is an incredible framework. I promise, we’ve barely scratched the surface. Hopefully this post has shown you its potential, and motivated you to check it out even further.

If you’re interested in learning more, I’d recommend the learning materials from David Wells, an engineer at Netlify and former member of the Serverless team, as well as the Serverless Handbook by Swizec Teller.
