October 21, 2019
In a previous post, Using AWS Lambda to save files to AWS S3 using Node.js, we covered using AWS Lambda to create functions that execute a task which would usually run on a server. Today we will cover the same process but use the popular Serverless framework.
The Serverless framework allows the creation, monitoring and deployment of serverless functions on pretty much all leading cloud providers from AWS to Google to Alibaba.
Due to the author's familiarity with AWS, we will be using AWS with Node.js. From here on we will refer to Serverless as sls, which is the command line shortcut name.
If you can write AWS Lambda functions, why would you want to use Serverless? Simply put, it cuts down the amount of time spent on configuration and bouncing between multiple screens within the AWS console. Permissions and API gateways are all handled within the sls configuration file. This will become clearer down the line.
Our test application will take in data from the user: a name, an email and a base64 image. The intent is to save the base64 image to AWS S3 and the user data to AWS DynamoDB. The API will check if the email is unique and refuse to save the data if the email has been used before. This is the basic setup for something like an ID card store. The example is trivial but complex enough to fully exercise Serverless.
As such we will need an AWS S3 bucket and an AWS DynamoDB table. Be sure to create both resources in the same region; here we will use us-east-1.
The bucket must be configured to be publicly accessible. This is best done by using a policy:
{
  "Version": "2012-10-17",
  "Id": "MakePublic",
  "Statement": [
    {
      "Sid": "MakePublic",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::<YOUR-BUCKET-NAME>/*"
    }
  ]
}
This will allow anyone to get objects (view our images).
Create an AWS DynamoDB table; the only requirement here is to use email as the Partition Key (to my SQL background this is our primary key).
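If you prefer scripting over the AWS console, a minimal sketch of creating such a table with the AWS SDK for Node.js could look like the following. The users table name and on-demand billing mode are assumptions, not requirements of this tutorial; pick whatever suits you.

// create-table.js - one-off script to create a DynamoDB table with email as the Partition Key
const AWS = require('aws-sdk');
const dynamodb = new AWS.DynamoDB({ region: 'us-east-1' });

dynamodb
  .createTable({
    TableName: 'users', // assumed name
    AttributeDefinitions: [{ AttributeName: 'email', AttributeType: 'S' }],
    KeySchema: [{ AttributeName: 'email', KeyType: 'HASH' }], // email as the Partition Key
    BillingMode: 'PAY_PER_REQUEST', // assumed on-demand billing
  })
  .promise()
  .then(res => console.log('Created table:', res.TableDescription.TableName))
  .catch(err => console.error('Failed to create table:', err));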
Now that we have created our resources on AWS, let’s continue to configure our local environment.
To follow this tutorial, you will need the following:
- the aws command line installed and configured with your admin rights (SLS still needs access to your AWS stack)
- sls from https://serverless.com/

When starting out creating serverless functions you will create two files: one will be your .js file for the function, and the other will be the serverless.yml.
Your JS file will mimic the same functional code you would write when creating AWS Lambda functions. The more important file is the serverless.yml, which will hold your basic function name, resources and API endpoints. Just like permissions with AWS, a lot of problems can be traced back to an improperly configured serverless.yml.
Note: YML is whitespace sensitive the way Python is. Pay attention to your indentation.
More on this as we write out our function.
Some helpful commands:
- sls deploy -v (with the verbose flag)
- sls logs -f <YOUR_FUNCTION_NAME> -t (helpful for testing when deploying on AWS)
- sls invoke local -f <YOUR_FUNCTION_NAME> -p mocks/<YOUR_TEST_JSON>.json (helpful to make sure your function works)

It is important to remember that incoming requests to our functions are JSON based, but the body is always a JSON string. Likewise, when we respond we need to JSON.stringify our body responses.
So requests coming into the API should be formatted like so:
{
  "body": "{\"name\": \"test\", \"email\": \"test@test.com\", \"data\": \"data:image/png;base64,XXX\"}"
}
This is helpful when creating our mock tests. Likewise, when responding it is important to return a statusCode and a JSON.stringify()-ed body message. The SLS hello world example, for instance, returns:
return {
  statusCode: 200,
  body: JSON.stringify(
    {
      message: 'Go Serverless v1.0! Your function executed successfully!',
      input: event,
    },
    null,
    2
  ),
};
Run the sls command; this will ask if you want to create a new application. Use the prompts to select AWS Node.js and name your project. You can use the Serverless account for monitoring, but it is not necessary for this example. Once complete you will have a named directory that holds a handler.js and a serverless.yml.
For now let's just execute this function: run sls deploy. Serverless will now create your function on AWS Lambda, but it will not create any endpoints. Run sls invoke local -f hello; this will locally execute your function, which is called hello by default. You should see the following:
{
"statusCode": 200,
"body": "{\n \"message\": \"Go Serverless v1.0! Your function executed successfully!\",\n \"input\": \"\"\n}"
}
Let's add an endpoint for our function. Open serverless.yml; there is a lot of information here about configuring the YML file. Most of it will be comments, but we will see our service name, our provider (AWS with the Node.js runtime) and finally our function and handler. Let's rename these.
We will rename handler.js to api.js and change the function name to save within that JS file. So module.exports.hello will now be module.exports.save, and the function name in the YML will now be:
functions:
  save:
    handler: api.save
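On the JS side, api.js now simply exports save instead of hello. A minimal stub, still returning the generated hello-world response, looks like this:

// api.js — renamed from handler.js, with the exported function renamed to save
module.exports.save = async event => {
  return {
    statusCode: 200,
    body: JSON.stringify(
      {
        message: 'Go Serverless v1.0! Your function executed successfully!',
        input: event,
      },
      null,
      2
    ),
  };
};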
Run sls invoke local -f save to check that everything is correct. If you get a Function "save" doesn't exist in this Service error, that means you forgot to rename the function and/or handler in the .js or .yml files.
To add an endpoint to our small API you will need to add it under your function, so update your YML with:
functions:
  save:
    handler: api.save
    events:
      - http:
          path: users/create
          method: get
The path is the URL endpoint and the method is the HTTP method you will be using. Run sls deploy again. You should see an endpoint created:
endpoints:
GET - https://<YOUR_ENDPOINT>.execute-api.us-east-1.amazonaws.com/dev/users/create
Call your API using curl -X GET https://<YOUR_ENDPOINT>.execute-api.us-east-1.amazonaws.com/dev/users/create. You will see a response; the hello world function returns a body with a message and the entire event as it came into the function.
Now let's try saving to S3. Eventually we want to save a base64 image there, but let's get a simple save working first before building the rest of the functionality. AWS uses IAM permissions to control access to resources on AWS, so let's add our S3 bucket. Under provider in the YML add the following:
provider:
  name: aws
  runtime: nodejs8.10
  region: ${self:custom.region}
  iamRoleStatements:
    - Effect: Allow
      Action:
        - s3:PutObject
        - s3:GetObjectAcl
        - s3:PutObjectAcl
      Resource: "arn:aws:s3:::${self:custom.bucket}/*"
The IAM statements translate to: allow putting objects, and getting and setting object ACLs (access rights). Pay attention to the resource arn:aws:s3:::${self:custom.bucket}/*; Serverless allows you to use variables. Add the following above the provider section:
custom:
  bucket: <YOUR-BUCKET-NAME>
  region: us-east-1
This is very much like ES6 template strings using $, so arn:aws:s3:::${self:custom.bucket}/* becomes arn:aws:s3:::<YOUR-BUCKET-NAME>/*. Likewise for the region variable.
Now that permissions are set up, let's use our resources. Since we are on AWS Lambda, we have access to the AWS SDK that is available within the Lambda runtime.
Let's add S3 to our api.js. We will need the AWS SDK, so add the following at the top:
const AWS = require('aws-sdk');
const S3 = new AWS.S3();
and add the following at the start of the handler function; this will just log out the event and parse the body into JSON:
console.log("EVENT: \n" + JSON.stringify(event, null, 2));
const data = JSON.parse(event.body);
Since we will be receiving data via POST, change the HTTP method in the serverless.yml from get to post.
For testing, what we will do now is receive an incoming message and save that message to a text file on S3. As such it's best to think about your incoming request. Before saving anything, let's just test echoing the incoming data for now:
curl -X POST -d '{"name": "test user","email": "test@email.com"}' https://<YOUR_ENDPOINT>.execute-api.us-east-1.amazonaws.com/dev/users/create
You should see the same event echoed back; however, the body now contains:
"body": "{\"name\": \"test user\",\"email\": \"test@email.com\"}",
So we are receiving data correctly.
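At this stage the generated handler still returns the default message along with the whole event. If you want the handler to echo back just the parsed payload while testing, a minimal sketch could look like the following (the received key is purely an illustrative name, not part of the tutorial's final code):

// api.js — temporary echo version of the handler
module.exports.save = async event => {
  // Log the raw event and parse the JSON string body
  console.log("EVENT: \n" + JSON.stringify(event, null, 2));
  const data = JSON.parse(event.body);

  // Echo the parsed payload straight back to the caller
  return {
    statusCode: 200,
    body: JSON.stringify({ received: data }, null, 2),
  };
};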
While developing this function we will need to test it regularly, and calling curl each time gets tedious. Let's create a mock test. Create a mock directory and add a JSON file (for example mock/test.json) that just contains:
{
  "body": "{\"email\": \"test@test.com\"}"
}
Now you can call sls invoke local -f save -p mock/test.json, which is a great way to work on the function instead of deploying and testing every time. You should see:
EVENT:
{
  "body": "{\"email\": \"test@test.com\"}"
}
{
  "statusCode": 200,
  "body": "{\n  \"message\": \"Go Serverless v1.0! Your function executed successfully!\",\n  \"input\": {\n    \"body\": \"{\\\"email\\\": \\\"test@test.com\\\"}\"\n  }\n}"
}
So far we are not saving anything, just echoing whatever body comes in. Let's now save.
The exported handler is an async function, and writing to S3 takes some time. As such you should use async / await; otherwise your function can execute correctly but fail to actually save. Place this code inside the handler to save to S3:
// Payload key is the final name
let s3payload = {
  Bucket: '<YOUR-BUCKET-NAME>',
  Key: data.email + '.json',
  Body: data.email,
};
// Try S3 save.
const s3Response = await S3.upload(s3payload).promise();
Finally, it's best to wrap the function body in a try / catch block to catch any errors. Below is the full code so far:
const AWS = require('aws-sdk');
const S3 = new AWS.S3();

module.exports.save = async event => {
  // Log incoming event
  console.log("EVENT: \n" + JSON.stringify(event, null, 2));
  // Parse event.body to JSON
  const data = JSON.parse(event.body);

  try {
    // Payload key is the final name
    let s3payload = {
      Bucket: '<YOUR-BUCKET-NAME>',
      Key: data.email + '.json',
      Body: data.email,
    };
    // Try S3 save.
    const s3Response = await S3.upload(s3payload).promise();
    return {
      statusCode: 200,
      body: JSON.stringify(
        {
          message: 'saved!',
          saved: s3Response
        },
        null,
        2
      ),
    };
  } catch (error) {
    return {
      statusCode: 500,
      body: JSON.stringify(
        {
          message: 'error!',
          error: error
        },
        null,
        2
      ),
    };
  }
};
Call sls invoke local -f save -p mock/test.json and you should see the following response:
EVENT:
{
  "body": "{\"email\": \"test@test.com\"}"
}
{
  "statusCode": 200,
  "body": "{\n  \"message\": \"saved!\",\n  \"saved\": {\n    \"ETag\": \"\\\"b642b4217b34b1e8d3bd915fc65c4452\\\"\",\n    \"Location\": \"https://<YOUR-BUCKET-NAME>.s3.amazonaws.com/test%40test.com.json\",\n    \"key\": \"test@test.com.json\",\n    \"Key\": \"test@test.com.json\",\n    \"Bucket\": \"<YOUR-BUCKET-NAME>\"\n  }\n}"
}
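The handler above only writes the email address out as a small text object. Since the eventual goal is to store the user's base64 image, here is a hedged sketch of what that upload could look like inside the same try / catch block, assuming the data field follows the data:image/png;base64,XXX format shown earlier (the .png key name and image/png content type are assumptions):

// Strip the data URI prefix and decode the remaining base64 into a Buffer
const base64Body = data.data.replace(/^data:image\/\w+;base64,/, '');

let imagePayload = {
  Bucket: '<YOUR-BUCKET-NAME>',
  Key: data.email + '.png', // assumed key name
  Body: Buffer.from(base64Body, 'base64'),
  ContentType: 'image/png', // assumed content type
};

// Upload the decoded image the same way as the text payload
const imageResponse = await S3.upload(imagePayload).promise();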
This concludes part one of this tutorial. Part 2 will focus on saving to DynamoDb.
Written by Farhad Agzamov, who lives and works in London building things. You can follow him on Twitter and check out his GitHub here.