
The Complete Guide to Custom Authorizers with AWS Lambda and API Gateway

In this post, you'll learn about using API Gateway custom authorizers.

This post covers:

  • Background on custom authorizers and their benefits and downsides
  • Basic usage of custom authorizers
  • Caching with custom authorizers
  • Two patterns for using custom authorizers

I talk to a lot of people who are building REST APIs with AWS Lambda and API Gateway. These tools help them iterate quickly without having to worry about infrastructure.

For those building serverless applications with AWS Lambda and API Gateway, how to handle authorization is a common question.

Custom authorizers are a feature provided by API Gateway to separate your auth logic from the business logic in your function. This is particularly powerful when your application is split across many functions in multiple services. Authentication and authorization can happen in a separate function, allowing your other functions to focus on their core responsibilities.

In this post, you will learn the ins and outs of how custom authorizers work.

Let’s get started.

What are custom authorizers and when should I use them?

API Gateway custom authorizers are Lambda functions that are called before your main function to authenticate and/or authorize that the caller may proceed to your core function. When a custom authorizer runs, you may reject the request by indicating that it is unauthorized, or you may allow the request to continue to its requested resource.

Benefits of custom authorizers

One of the biggest reasons to use a custom authorizer is to centralize your auth logic in a single function rather than packaging it up as a library into each of your functions. If your auth logic changes in the future, you can simply redeploy a single function — your custom authorizer — rather than needing to redeploy each function that includes your auth library.

A second reason to use custom authorizers is to cache responses. Unless you’re using something like a JWT, your auth logic will need to make a remote call. This can add unneeded latency if you’re running this check within every function. By isolating the remote call in your custom authorizer, you will only need to pay the price once. Then you can cache the value for up to an hour.

Downsides to custom authorizers

Custom authorizers aren’t universally beneficial, and you should only use them if you really need them.

The biggest cost of a custom authorizer is that there is the added latency in your API Gateway calls. Most people are familiar with the cold start problem with AWS Lambda. Since your custom authorizer is a Lambda function, you could be paying this penalty twice — once on the custom authorizer, and once on your core function.

Even without the cold start issue, you’re still adding a network hop, with its associated processing time, to your request flow. The added latency can be mitigated with the caching strategies discussed below, but it’s something you should be aware of.

A second downside to custom authorizers is that every endpoint that uses a custom authorizer must include authorization information. This can be inflexible in certain situations. Imagine you have an API endpoint where unauthenticated users can request information about a particular resource. Further, you’d like to add additional, private information about the resource when the endpoint is called by the owner of the resource.

With custom authorizers, this is not available out-of-the-box. One workaround is to pass authentication information that indicates an unauthenticated user, such as an Authorization header of Bearer unauthenticated. However, this is an undesirable hack of the Authorization header. It would be preferable if this functionality were available directly in AWS API Gateway.

When should I use custom authorizers?

Having reviewed the benefits and downsides of custom authorizers, here are the main questions you should be asking when considering whether to use custom authorizers:

  • Do I have expensive auth logic that I would like to cache independently of my function? Using a custom authorizer allows you to cache auth information separately from your endpoints responses.
  • Do I have auth logic contained in multiple, separately-deployable units? If your auth logic is in only one function or in a group of functions that are deployed together, a custom authorizer might be overkill. If your auth logic is spread across multiple separate services, a custom authorizer might be preferable to avoid needing to redeploy all services when your auth logic changes.
  • Am I using API Gateway as a proxy to other AWS resources? You can use API Gateway as a proxy to call other AWS APIs directly, such as ingesting records into Kinesis. If this is the case, there is no core Lambda function where you could check auth. A custom authorizer is a great way to protect your proxy resource.

If you answered ‘Yes’ to at least one of the three questions above, custom authorizers might be a good fit for you. You should then ask yourself if you can handle the additional latency from adding a second Lambda function to your request flow.

Keep reading to learn about the basic usage of custom authorizers.

Basic usage

A custom authorizer is a Lambda function that you write. Because you are writing the function, you have significant flexibility on the logic in your authorizer. You can use your custom authorizer to verify a JWT, check SAML assertions, validate sessions stored in DynamoDB, or even hit an internal server for authentication information.

In this basic usage section, we’ll cover two topics:

  • the two types of custom authorizers, and
  • the responses in your custom authorizer.

Custom authorizer types

There are two types of custom authorizers: TOKEN and REQUEST.

Token authorizers are the most straightforward. You specify the name of a header, usually Authorization, that is used to authenticate your request. The value of this header is passed into your custom authorizer for it to validate.

The event object in your Lambda function for a token authorizer is small and simple:

{
    "type":"TOKEN",
    "authorizationToken":"allow",
    "methodArn":"arn:aws:execute-api:us-west-2:123456789012:ymy8tbxw7b/*/GET/"
}
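As a sketch, here's what a minimal Python token authorizer handling this event might look like. The validate_token helper is a hypothetical stand-in for your real verification logic (a JWT check, a session lookup, etc.):

```python
def validate_token(token):
    # Hypothetical stand-in for real verification logic
    # (JWT validation, a session lookup against a database, etc.).
    return token == "allow"

def handler(event, context):
    # TOKEN authorizers receive the configured header's value
    # in event["authorizationToken"].
    if not validate_token(event["authorizationToken"]):
        # Raising an error with the literal message "Unauthorized"
        # causes API Gateway to return a 401 to the caller.
        raise Exception("Unauthorized")
    # Allow the caller to invoke the requested method.
    return {
        "principalId": "my-username",
        "policyDocument": {
            "Version": "2012-10-17",
            "Statement": [{
                "Action": "execute-api:Invoke",
                "Effect": "Allow",
                "Resource": event["methodArn"],
            }],
        },
    }
```

The response shape returned here is covered in detail in the custom authorizer responses section.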

Request authorizers are more complex. Rather than simply passing along a header value to your authorizers, the request type will pass along most of the information about the request. This includes all header values, query string parameters, and other information.

Here’s an example event object in your custom authorizer for a request type, as provided by AWS:

{
    "type": "REQUEST",
    "methodArn": "arn:aws:execute-api:us-east-1:123456789012:s4x3opwd6i/test/GET/request",
    "resource": "/request",
    "path": "/request",
    "httpMethod": "GET",
    "headers": {
        "X-AMZ-Date": "20170718T062915Z",
        "Accept": "*/*",
        "HeaderAuth1": "headerValue1",
        "CloudFront-Viewer-Country": "US",
        "CloudFront-Forwarded-Proto": "https",
        "CloudFront-Is-Tablet-Viewer": "false",
        "CloudFront-Is-Mobile-Viewer": "false",
        "User-Agent": "...",
        "X-Forwarded-Proto": "https",
        "CloudFront-Is-SmartTV-Viewer": "false",
        "Host": "....execute-api.us-east-1.amazonaws.com",
        "Accept-Encoding": "gzip, deflate",
        "X-Forwarded-Port": "443",
        "X-Amzn-Trace-Id": "...",
        "Via": "...cloudfront.net (CloudFront)",
        "X-Amz-Cf-Id": "...",
        "X-Forwarded-For": "..., ...",
        "Postman-Token": "...",
        "cache-control": "no-cache",
        "CloudFront-Is-Desktop-Viewer": "true",
        "Content-Type": "application/x-www-form-urlencoded"
    },
    "queryStringParameters": {
        "QueryString1": "queryValue1"
    },
    "pathParameters": {},
    "stageVariables": {
        "StageVar1": "stageValue1"
    },
    "requestContext": {
        "path": "/request",
        "accountId": "123456789012",
        "resourceId": "05c7jb",
        "stage": "test",
        "requestId": "...",
        "identity": {
            "apiKey": "...",
            "sourceIp": "..."
        },
        "resourcePath": "/request",
        "httpMethod": "GET",
        "apiId": "s4x3opwd6i"
    }
}

As you can see, there’s quite a bit more information you can use to identify the caller and the resource they’re requesting. This can be helpful or it can be overkill — it depends on your situation.
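To sketch how a request authorizer might use this event, here's a Python handler that combines a header and a query string parameter in its decision. The expected values mirror the AWS sample event above and stand in for real checks:

```python
def handler(event, context):
    # REQUEST authorizers receive the full request, so the decision
    # can combine headers, query strings, and other request data.
    headers = event.get("headers") or {}
    params = event.get("queryStringParameters") or {}
    authorized = (
        headers.get("HeaderAuth1") == "headerValue1"
        and params.get("QueryString1") == "queryValue1"
    )
    if not authorized:
        # "Unauthorized" makes API Gateway return a 401.
        raise Exception("Unauthorized")
    return {
        "principalId": "my-username",
        "policyDocument": {
            "Version": "2012-10-17",
            "Statement": [{
                "Action": "execute-api:Invoke",
                "Effect": "Allow",
                "Resource": event["methodArn"],
            }],
        },
    }
```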

Choosing a request-type authorizer affects your caching strategy, as you’ll need to specify which element(s) of the request should be used for caching. This is discussed below in the section on choosing your cache key.

Custom authorizer responses

In your authorizer logic, you will need to do one of two things:

  1. Deny the request based on the provided identification, or
  2. Allow the request and let the request proceed to the backing resource.

If you want to deny the request, you can throw an error in your Lambda function to stop the request from proceeding further.

If you want to allow the request, you have more work to do. Your Lambda function will need to return an object with the following shape:

{
	"principalId": "my-username",
	"policyDocument": {
		"Version": "2012-10-17",
		"Statement": [
			{
				"Action": "execute-api:Invoke",
				"Effect": "Allow",
				"Resource": "arn:aws:execute-api:us-east-1:123456789012:qsxrty/test/GET/mydemoresource"
			}
		]
	},
	"context": {
		"org": "my-org",
		"role": "admin",
		"createdAt": "2019-01-03T12:15:42"
	}
}

Let’s walk through these elements one-by-one.

Principal Id

The principalId is a required property on your authorizer response. It represents the principal identifier for the caller. This may vary from application to application, but it could be a username, an email address, or a unique ID.

Policy Document

The policyDocument is another required property and the core of the authorizer response. You must return a valid IAM policy that allows access to the underlying API Gateway resource that the user is trying to access.

IAM policies are a can of worms in themselves, but you can use custom authorizers even if you understand only the basics. Essentially, you need to produce a policy that allows the caller to perform a specific action (execute-api:Invoke) and specifies the resource on which they are allowed to perform it.

"Statement": [
	{
		"Action": "execute-api:Invoke", // <-- What action they can take.
		"Effect": "Allow", // <-- Allow the action
		"Resource": "arn:aws:execute-api:us-east-1:123456789012:qsxrty/test/GET/mydemoresource" // <-- The resource on which they can perform this action
	}
]

If your custom authorizer is fronting a single API Gateway resource or you are not caching your authorizer responses, the resource you specify is straightforward. AWS provides the ARN of the method that the caller is requesting. You can access this ARN with the methodArn property on the event object in your Lambda function.

If your custom authorizer is fronting multiple resources and you’re caching your responses, the resource you specify is more complex. API Gateway caches the authorizer response for all backing resources for a particular token, so you will need a broader resource specification in your IAM policy. This is discussed further in the caching section.

Context

Finally, you can add arbitrary data to your authorizer response in the context object. The context object is an optional property. This context will be added to the event object in your backing Lambda function.

The context object is a helpful tool with custom authorizers. You can use it to enrich the request with information about the user from your authentication lookup. This can save you time in your backing function as you won’t need to make a remote call to hydrate your user with additional data.

Further, the context object is helpful when using custom authorizers for authentication rather than authorization. This allows you to keep your custom authorizers small and focused rather than overloaded with business logic. This pattern is discussed further below.
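Putting these elements together, a small helper can produce the full response shape. Here's a sketch in Python (the function name and signature are my own):

```python
def generate_policy(principal_id, effect, resource, context=None):
    # Build an authorizer response in the shape shown above.
    # effect is "Allow" or "Deny"; resource is one ARN or a list of ARNs.
    response = {
        "principalId": principal_id,
        "policyDocument": {
            "Version": "2012-10-17",
            "Statement": [{
                "Action": "execute-api:Invoke",
                "Effect": effect,
                "Resource": resource,
            }],
        },
    }
    if context is not None:
        # Context values are passed through to the backing function.
        response["context"] = context
    return response
```

In your authorizer, you might call it as generate_policy("my-username", "Allow", event["methodArn"], {"org": "my-org", "role": "admin"}).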

With these basics of custom authorizers in hand, let’s move on to the mechanics of caching with custom authorizers.

Caching your custom authorizers

API Gateway allows you to cache the response from your authorizer for a given user. This caching can lessen the performance hit from adding a second Lambda function in your request flow, and it can even speed up your requests if the usual authentication and user enrichment process is expensive.

This section will cover a few aspects of caching authorizers in API Gateway, including:

  • Choosing a cache key
  • Determining how long to cache
  • Caching across multiple functions

Choosing a cache key

When caching with API Gateway, you will need to choose a cache key. This is the way to identify a particular user in your custom authorizer for caching purposes.

For token-based authorizers, you don’t have to make a choice here. The header value that is used for your authorizer will be the cache key. Any requests that use the same header within your cache expiry will receive the cached result from your authorizer.

If you are using request-based authorizers, you will need to specify the parameter(s) that serve as your cache key. You can use one or more parameters for your cache key and can choose from HTTP headers or query string parameters, as well as stage variables and context from API Gateway.

Generally, you’ll want to use something like the Authorization header or an apiKey query string parameter to serve as the cache key for your request-based authorizers.
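As an illustration, the cache key and TTL are both set when you create the authorizer. Here's a sketch of the arguments you might pass to API Gateway's create_authorizer API (for example, via boto3's apigateway client); the authorizer name and ARNs are hypothetical:

```python
def cache_key_authorizer_config(rest_api_id, lambda_arn):
    # Hypothetical arguments for API Gateway's create_authorizer call.
    # For a REQUEST authorizer, the cache key is the identitySource:
    # here, the Authorization header plus an apiKey query string
    # parameter. Responses are cached for 300 seconds.
    return {
        "restApiId": rest_api_id,
        "name": "my-request-authorizer",
        "type": "REQUEST",
        "authorizerUri": (
            "arn:aws:apigateway:us-east-1:lambda:path/2015-03-31"
            "/functions/" + lambda_arn + "/invocations"
        ),
        "identitySource": (
            "method.request.header.Authorization,"
            "method.request.querystring.apiKey"
        ),
        "authorizerResultTtlInSeconds": 300,
    }

# Usage (not run here):
# boto3.client("apigateway").create_authorizer(**cache_key_authorizer_config(rest_api_id, lambda_arn))
```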

Determining how long to cache

You may cache an API Gateway authorizer’s policy response for up to an hour. This is a great way to get the benefits of custom authorizers without taking the performance hit on every request.

The cache expiry is on a per-key basis. Imagine your cache expiry is set for one hour. If your first user, Susan, makes her request at 1:00 PM, her policy will be cached until 2:00 PM. If your second user, Bob, makes his request at 1:15 PM, his policy will be cached until 2:15 PM.

As is typical with determining when to cache, you need to consider how your cached data could expire. A couple questions to ask are:

  • Will I want to invalidate individual tokens, and how quickly? If an authentication token can become invalid due to permission changes or account shut-offs, your cache expiry causes a delay in making that invalidation effective.
  • Will my authentication context change? If you’re using the pattern of using custom authorizers for authentication, your authorizer is mostly about fetching and injecting identity context into a request. If this information can change — such as by adding a user to a new team, or changing the role of a user — your Lambda functions could see stale data until your cache expires.

Unlike API Gateway response caching, you cannot invalidate a single cache entry for custom authorizers. This would be a useful addition to custom authorizers as you could invalidate cache entries in the two situations above, allowing you to set your cache TTL to longer values.

You do have the ability to flush all authorizer cache entries for your API Gateway. This is useful in the event of a large-scale breach where you need to invalidate all tokens across your application.

Caching across multiple functions

The last quirk to know about custom authorizer caching is about caching policy responses when you’re using a custom authorizer in front of multiple functions. I’ve seen this problem bite a number of users, so pay attention!

As mentioned in the custom authorizer responses section, your custom authorizer will need to return an IAM policy that allows the caller to invoke a particular resource in your API Gateway. It will look something like this:

{
	"principalId": "my-username",
	"policyDocument": {
		"Version": "2012-10-17",
		"Statement": [
			{
				"Action": "execute-api:Invoke",
				"Effect": "Allow",
				"Resource": "arn:aws:execute-api:us-east-1:123456789012:qsxrty/test/GET/mydemoresource"
			}
		]
	}
}

For a particular request, you can use the event.methodArn property in your authorizer function to return the ARN of the Resource to which you’re allowing access.

However, the policy result is cached across all requested method ARNs for which the custom authorizer is fronting. Let’s see how this plays out in an example.

Imagine your user creates a new resource by making a POST request to /mydemoresource. The value for the event.methodArn will be something like:

arn:aws:execute-api:us-east-1:123456789012:qsxrty/test/POST/mydemoresource

Now if your user tries to view her resources, she’ll make a GET request to /mydemoresource. The value for event.methodArn will be:

arn:aws:execute-api:us-east-1:123456789012:qsxrty/test/GET/mydemoresource

Unfortunately, if your custom authorizer returned the specific value for event.methodArn in the first call, this request will be rejected because the policy does not have permission to invoke the newly-requested resource!

There are two ways to deal with this problem.

The first requires more knowledge of your API structure. You could return a Resource value that is expansive enough to cover all of the resources that your authorizer is protecting. In our example, it could be something like:

{
	"principalId": "my-username",
	"policyDocument": {
		"Version": "2012-10-17",
		"Statement": [
			{
				"Action": "execute-api:Invoke",
				"Effect": "Allow",
				"Resource": [
					"arn:aws:execute-api:us-east-1:123456789012:qsxrty/test/GET/mydemoresource",
					"arn:aws:execute-api:us-east-1:123456789012:qsxrty/test/POST/mydemoresource"
				]
			}
		]
	}
}

The downside to this approach is that you need to know and specify by ARN all of the resources you’re protecting. This couples the authorizer and backing functions more than I’d like, as it requires a redeploy of the authorizer whenever you add additional resources that are protected by it.

The second approach is to simply return a wildcard for your Resource value:

{
	"principalId": "my-username",
	"policyDocument": {
		"Version": "2012-10-17",
		"Statement": [
			{
				"Action": "execute-api:Invoke",
				"Effect": "Allow",
				"Resource": "*"
			}
		]
	}
}

Normally, using wildcards in IAM policies is a bad idea. In this situation, it seems more controlled and, thus, acceptable. However, I’d love to know if this is a security hole. Please reach out if you know how this could be exploited.
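A middle ground between these two approaches is to derive a resource that wildcards the method and path while staying scoped to your API and stage. A sketch, assuming the standard methodArn format:

```python
def api_scoped_resource(method_arn):
    # methodArn looks like:
    #   arn:aws:execute-api:us-east-1:123456789012:qsxrty/test/GET/mydemoresource
    # Keep the API id and stage, wildcard the method and path.
    arn_base, stage, _method_and_path = method_arn.split("/", 2)
    return arn_base + "/" + stage + "/*"
```

This keeps the cached policy from granting anything outside the API the authorizer fronts, without coupling the authorizer to a list of known resources.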

Now that we understand the caching options and behavior with custom authorizers, let’s dig into the two patterns I see with custom authorizers.

Patterns for custom authorizer usage

There are two main patterns I see with custom authorizers. The first is a full authorization approach, while the second is more of an authentication + context-injection approach, leaving the authorization step to your backing Lambda function. Let’s review them in turn.

Full authorization

With the full authorization approach, you are removing any need for authorization logic from your backing Lambda functions. If the request makes it through your custom authorizer, your Lambda function can trust that it has full access to the requested resource.

This approach works well when your authorization decision is binary — either a user is authenticated and has access to everything in your API, or the request should be blocked.

This approach breaks down when you have more granular access needs. If you have different levels of access within your API, such as admins vs. users, you will need to pass more business-level logic within your custom authorizer. Similarly, if you have resource-based access limitations — only the owner of a resource can update or delete the resource — you’ll end up with a hairball of logic spread across your custom authorizer and your backing function.

Using custom authorizers for authentication

The second pattern is to use the custom authorizer to authenticate your user and inject context into the request while doing more granular authorization within the backing Lambda function. I see this pattern more often, and it fits well with decoupled, microservice architectures.

Let’s illustrate this by way of an example. Imagine you have a forum application where users belong to organizations and can have different levels of roles within the organization. In this example, we’ll simplify it to two roles, admin or member. Members are allowed to view and create posts within their organization’s forum, but only admins are allowed to delete posts within a forum.

When creating or viewing a forum post, you need to perform the following authorization steps:

  1. Check the given token to confirm this is a valid identity. If not, reject the request.
  2. Check the identity to see if the user belongs to the forum in which they are requesting to create or view a post. If not, reject the request.
  3. Allow the request.

In this pattern, step 1 would be done in our custom authorizer. If the identity is valid, the authorizer would use the context object in the response to add information such as the username of the user, the organization to which the user belongs, and the role of the user in the organization. The result from the authorizer might look as follows:

{
	"principalId": "my-username",
	"policyDocument": {
		"Version": "2012-10-17",
		"Statement": [
			{
				"Action": "execute-api:Invoke",
				"Effect": "Allow",
				"Resource": "arn:aws:execute-api:us-east-1:123456789012:qsxrty/test/GET/posts/123"
			}
		]
	},
	"context": {
		"username": "my-username",
		"org": "my-org",
		"role": "member",
		"createdAt": "2019-01-03T12:15:42"
	}
}

Step 2 would be handled in the backing Lambda function itself. The function would retrieve the requested post from the database, then validate that the user is in the proper organization to view the post. If not, the request would be rejected. If yes, the result would be returned to the caller.

The pattern is similar on requests to delete a post. After receiving the enriched context from the custom authorizer, our backing function would delete the requested post if the user belongs to the organization and has a role of admin. Otherwise, the request would be rejected.
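To sketch how the backing function for the delete endpoint might consume that context (assuming Lambda proxy integration, with the post's organization lookup stubbed out):

```python
def delete_post_handler(event, context):
    # With Lambda proxy integration, the authorizer's context object
    # is available under event["requestContext"]["authorizer"].
    auth = event["requestContext"]["authorizer"]
    post_org = "my-org"  # stub: in reality, look up the post's org
    if auth.get("org") != post_org:
        # User isn't in the post's organization.
        return {"statusCode": 403, "body": "Forbidden"}
    if auth.get("role") != "admin":
        # Only admins may delete posts.
        return {"statusCode": 403, "body": "Forbidden"}
    # ... delete the post here ...
    return {"statusCode": 204, "body": ""}
```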

This pattern is helpful in that it centralizes your authentication logic in a single place without requiring granular access control logic in your custom authorizer. However, it does mean you’ll need to add authorization into your backing functions as well.

Conclusion

Custom authorizers in API Gateway can be a handy tool to simplify your application’s auth logic or cache expensive auth operations. In this post, we learned about when you do and don’t want to use custom authorizers. We covered how custom authorizers work and the different types of authorizers. We reviewed how and when to cache your authorizer responses. Finally, we discussed two patterns for using custom authorizers.

This can be a tricky area, so I hope this guide helps. Please reach out with any questions you have on custom authorizers that this guide does not address.

Published 6 Feb 2019

Working for Serverless, Inc. Infrastructure & data engineer with expertise in AWS, data processing, and serverless technologies. Learning never stops.
Alex DeBrie on Twitter