Follow-up to “Skip Lambda…” — Why You Shouldn’t Skip It

In my previous article, “Skip Lambda, Save Data to DynamoDB Directly Using API Gateway; Process Later With Streams”, I described a technique that lets you skip using a Lambda function in an API and deliver data directly to DynamoDB. This follow-up details why, in many cases, you shouldn’t do that.

First, to be clear, it’s not that you should never do it (see the examples at the end). If you fully control your system, it’s a simpler, more efficient, and cheaper way to get data into DynamoDB (or any other AWS service you may need to proxy to). But there are good reasons not to do this, and I wanted to clarify that it shouldn’t be used simply because it’s technically possible.

I would say the two main reasons to add a Lambda are to provide security and data validation.

Security

Security depends on your situation. With the proxy technique, you still have security mechanisms available, such as direct Cognito user auth or API keys, so you may be covered here. However, what if there are multiple attributes in the request that must be evaluated in combination to ensure security? For example, say you use API keys, and the client also passes something like a group name or an application ID with the request, because you need it to go into the DynamoDB (or any database) record. Let’s call this a “second ID.” You would want to ensure that the API key being used corresponds to that second ID, which is not something the standard API Gateway authentication can do. A mismatch is most likely caused by a mistake on the client’s end, e.g. a typo or some error in their code means they submit the wrong second ID with their request. But because you’re only checking the API key, you now inject the record they sent with a bad second ID, or worse, a second ID that actually belongs to someone else (if you use a single database for all customers).

If you are authenticating via a Cognito user, this is less likely, as you’d be injecting the Cognito user’s ID as the main component of the record. But certainly with API keys, or in any case where some secondary ID or attribute has to coincide with that API key or user, this can be an issue.

Data Validation

Validating the incoming data is probably the bigger reason to employ a Lambda in the flow. While you can do some basic validation via VTL templates, you are extremely limited, and VTL really doesn’t allow the level of logic most would want when validating incoming data (especially if you are running a public API!). Additionally, while the VTL template can reject the data, it can’t be set up to provide any specifics on why it’s rejecting it.

With a Lambda, you can do things like enforce a JSON schema, ensure the values of various attributes fall within acceptable ranges or are valid values, etc. For example, say you had a health app tracking heart rate: while a value such as 1000 is perfectly legit for the DynamoDB numeric field type, it’s not a heart rate you’d want to allow.
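As a concrete illustration, here’s a minimal sketch of what that could look like in a Python Lambda handler, using the jsonschema package. The field names and the 20–250 range are hypothetical; adjust them to your own data.

```python
import json
from jsonschema import validate, ValidationError

# Hypothetical schema for a heart-rate reading. DynamoDB's numeric type
# would happily accept 1000, so we bound the value here instead.
HEART_RATE_SCHEMA = {
    "type": "object",
    "properties": {
        "userId": {"type": "string"},
        "heartRate": {"type": "number", "minimum": 20, "maximum": 250},
        "recordedAt": {"type": "string"},
    },
    "required": ["userId", "heartRate", "recordedAt"],
    "additionalProperties": False,
}

def handler(event, context):
    body = json.loads(event["body"])
    try:
        validate(instance=body, schema=HEART_RATE_SCHEMA)
    except ValidationError as err:
        # Unlike a VTL rejection, we can say exactly what was wrong.
        return {"statusCode": 400, "body": json.dumps({"error": err.message})}
    # ...write the validated item to DynamoDB here...
    return {"statusCode": 200, "body": json.dumps({"ok": True})}
```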

Additionally, dealing with mixtures of valid attributes is near impossible with VTL and/or required query parameters. E.g. you might have an app that allows passing either an address, or a latitude and longitude. It would be difficult to validate in VTL (and impossible with query parameter requirements) that the user provided one or the other, AND, when using location, that they specified both latitude and longitude.
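For what it’s worth, JSON Schema’s oneOf handles this “exactly one shape” requirement nicely in a Lambda. A sketch, with hypothetical field names:

```python
from jsonschema import validate, ValidationError

# Exactly one of the two shapes must match; additionalProperties keeps
# the branches mutually exclusive (address + latitude fails both).
LOCATION_SCHEMA = {
    "oneOf": [
        {   # shape 1: an address only
            "type": "object",
            "required": ["address"],
            "properties": {"address": {"type": "string"}},
            "additionalProperties": False,
        },
        {   # shape 2: both latitude AND longitude
            "type": "object",
            "required": ["latitude", "longitude"],
            "properties": {
                "latitude": {"type": "number", "minimum": -90, "maximum": 90},
                "longitude": {"type": "number", "minimum": -180, "maximum": 180},
            },
            "additionalProperties": False,
        },
    ]
}

validate({"latitude": 45.5, "longitude": -122.7}, LOCATION_SCHEMA)  # OK
validate({"address": "123 Main St"}, LOCATION_SCHEMA)               # OK
# {"latitude": 45.5} alone raises ValidationError (missing longitude)
```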

DynamoDB-Specific Aspects

When it comes to DynamoDB specifically, there is even more reason for validation. When using the proxy technique, and in particular with batch writes, you encounter a couple of problems.

BatchWriteItem doesn’t support ConditionExpressions. Thus, if you Put an item whose key is the same as an existing item’s, it’ll simply overwrite the existing item, and you’d never know. If instead you insert records via a Lambda, you can add ConditionExpressions and/or other checks (this does mean you’re no longer leveraging batch writes, but in many cases this data validation will be far more important).
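As a sketch of what that looks like from a Lambda (the table name and the pk key attribute are hypothetical):

```python
import boto3
from botocore.exceptions import ClientError

table = boto3.resource("dynamodb").Table("Readings")  # hypothetical table

def put_if_new(item):
    try:
        table.put_item(
            Item=item,
            # Fail the write instead of silently overwriting an existing item.
            ConditionExpression="attribute_not_exists(pk)",
        )
        return True
    except ClientError as err:
        if err.response["Error"]["Code"] == "ConditionalCheckFailedException":
            return False  # an item with this key already exists
        raise
```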

Further, if you have two items in the same batch with the same primary key, the entire batch will be rejected! See the bullet list of conditions that cause an entire batch to be rejected (about the sixth paragraph down) in the BatchWriteItem documentation.

One last aspect: with a Lambda, you are no longer constrained by the BatchWriteItem limit of 25 items per request. You’d still want to be careful about how long the inserts take, overall API response time, and so on, but, for example, it may now be easy to allow up to 100 items in a single request. More on this below.

Solutions

Adding a Lambda between API Gateway and DynamoDB (or other services) solves the above issues. You may also find that a Lambda authorizer (formerly known as a custom authorizer) fits, or even some combination of the two.

For example, for one API I’ve built, we use API keys, but instead of having API Gateway check them, we use a Lambda authorizer. This is because we have the “second ID” situation mentioned above, and need to ensure not only that the API key is valid, but that the other data passed with the request is acceptable for that API key (i.e. for that customer). You could also use API Gateway to verify just the API key (getting the benefit of its rate limiting and so on), then check the secondary data in your Lambda. However, I chose the Lambda authorizer because I can treat it like middleware, with a single authorizer covering multiple APIs/endpoints. Furthermore, API Gateway caches the result of a Lambda authorizer (based on the headers or token you use), so additional calls with the same auth are authenticated even more quickly (vs. requiring your Lambda to be invoked each time). We’ll see if we revisit this, given that it now puts the rate limiting on us (we don’t use rate limiting for this endpoint, so it’s currently OK); e.g. I may wind up changing this to use the API Gateway API key auth, and then just have a small shared library that handles the secondary data auth across any Lambdas that need it.
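To make the shape of this concrete, here’s a minimal sketch of a REQUEST-type Lambda authorizer doing the API key + second ID check. It assumes the second ID arrives as a header (REQUEST authorizers don’t see the request body), and the key-to-ID lookup shown inline would really live in a database or secrets store; all names here are hypothetical.

```python
# Hypothetical lookup of which "second ID" (e.g. application ID) each
# API key is allowed to submit; in practice this would be a DB/cache.
API_KEY_TO_APP_ID = {"key-abc123": "app-1", "key-def456": "app-2"}

def handler(event, context):
    headers = event.get("headers") or {}
    api_key = headers.get("x-api-key")
    app_id = headers.get("x-app-id")  # the "second ID"

    # Authorized only when the key exists AND maps to the submitted second ID.
    allowed = app_id is not None and API_KEY_TO_APP_ID.get(api_key) == app_id

    return {
        "principalId": app_id or "anonymous",
        "policyDocument": {
            "Version": "2012-10-17",
            "Statement": [{
                "Action": "execute-api:Invoke",
                "Effect": "Allow" if allowed else "Deny",
                # Wildcard so the cached policy covers all endpoints this
                # authorizer guards (see the caching sidenote below).
                "Resource": "arn:aws:execute-api:*:*:*/*/*/*",
            }],
        },
    }
```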

In the Lambda itself, we can now use JSON schema validation, validate any or all attributes in the data, and create more helpful error responses. Additionally, as mentioned above, ConditionExpressions can be used to help prevent overwriting an existing item, in a way that’s more of a DynamoDB best practice (vs., say, doing a query to see if the item already exists).

Finally, in regard to the BatchWriteItem 25-item limit: with a Lambda, you can still use BatchWriteItem if you can do all the validation necessary for your situation and know you won’t have duplicate keys, etc. You would just break your >25-item request into 25-item chunked BatchWriteItem requests from your Lambda. Thus, instead of doing 100 individual Puts, you could do 4 batches.
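With boto3 specifically, the chunking (and retrying of unprocessed items) can be delegated to the table’s batch_writer. A sketch, with a hypothetical table name and key attribute:

```python
import boto3

table = boto3.resource("dynamodb").Table("Readings")  # hypothetical table

def write_all(items):
    # batch_writer buffers puts and flushes them as 25-item BatchWriteItem
    # calls, retrying any unprocessed items. overwrite_by_pkeys de-duplicates
    # on the key attribute(s), avoiding the whole-batch rejection for
    # duplicate keys mentioned earlier.
    with table.batch_writer(overwrite_by_pkeys=["pk"]) as batch:
        for item in items:  # e.g. 100 validated items -> 4 batches
            batch.put_item(Item=item)
```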

Sidenote on Lambda authorizers and caching

One note regarding Lambda authorizer caching that can catch you out: Lambda authorizers return a policy document detailing which endpoints/resources are valid for that authentication. If you have APIs 1 and 2 (both authenticated with the same authorizer), and you only return API 1’s endpoint ARN in the policy document, a subsequent call to API 2 with the same auth information will actually get a 403 response. So beware when crafting the policy you return: use a wildcard for the ARN, or specify each endpoint’s ARN. See this Serverless Framework forum message for more.
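In ARN terms (the region, account, and API IDs below are placeholders), the difference looks like this:

```python
# Scoped to one method of API 1; a cached policy containing only this
# will 403 calls to API 2 made with the same auth information.
too_narrow = "arn:aws:execute-api:us-east-1:123456789012:api1id/prod/POST/items"

# Covers every API/stage/method in this account+region that the
# authorizer guards.
wildcard = "arn:aws:execute-api:us-east-1:123456789012:*/*/*/*"
```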

When Would I Use the Proxy Technique?

Given the above, you may be thinking it’d be rare to use the proxy technique. But I see at least two cases where it would still be useful.

Case 1: initial prototyping and development

Often you may have a project where you’re prototyping, or don’t yet have a great notion of exactly how your system will work, but you want to spin up an app and start learning. In this case, given it’d be a non-production app or non-public use case, and you control the incoming data, you could use the proxy technique to get an API set up super quickly. This would let you start figuring out your business case sooner, instead of spending extra time on a Lambda that may need to go through many iterations. Once you have it figured out, insert the Lambda and all the data validation and so on that you need.

Case 2: internal applications

The other case would be for internal applications, where it’s a non-public API and you tightly control the clients sending data to the API. Maybe it’s 100% contained inside your VPC, and/or the data validation issues above don’t come into play. Additionally, you may be able to do the data validation later (e.g. via the DynamoDB Streams processing discussed in the first article), and it’s OK for you to simply discard invalid items, or you have a way to fix them up, etc. In such a tightly controlled environment, if volume is very high, the cost savings may come into play as well.

In the end, the Lambda is a bit more work, but in most cases the flexibility and safety it provides will be of significant benefit vs. the proxy technique.

Final Notes

Finally, I’ll note that none of this precludes using DynamoDB Streams for post-processing. In my own use, I’m still doing all of that; I’ve simply inserted the Lambda to do data validation first. This is still a big advantage in my case, because the post-processing is compute/time intensive enough that I don’t want to slow down the response, and as long as the data has been captured, the client’s responsibility is complete.
