|From Serverless Architectures on AWS by Peter Sbarski
This article takes a close look at the API Gateway. We’ll look at the fundamental activities that go into building an API and discuss features such as staging and versioning, as well as caching, logging, and throttling of requests.
Serverless architectures are versatile. You can use them to build an entire back-end or glue a few services together to solve a specific task. Building a proper back-end requires the development of an application programming interface (API) that sits between the client and back-end services. In AWS, the API Gateway is the key service that allows developers to create a RESTful API.
API Gateway as the interface
You can think of the API Gateway as an interface (figure 1) between back-end services (including Lambda) and client applications (web, mobile, or desktop).
Figure 1. The API Gateway is needed, particularly for web applications, to establish an interface for back-end services.
Your front-end application could communicate with services directly, but in many cases this isn't possible or desirable for security or privacy reasons. Some actions should be performed only from a back-end service. For example, sending an email to all users should be done via a Lambda function. You shouldn't do it from the front-end because it'd involve loading every user's email address into another user's browser. Serious security and privacy issues like this are a quick way to lose your customers. Don't trust the user's browser and don't perform any sensitive operations in it. The browser is also a bad environment for operations that may leave your system in an undesirable state. Have you seen those websites that say, "Do not close this window until the operation has finished"? Avoid building systems this brittle. Instead, run operations from a back-end Lambda function and update the UI when the operation is done.
An API Gateway is an example of technology that makes serverless applications easier to build and maintain than their traditional server-based counterparts. In a more traditional system, you might need to provision EC2 instances, configure load balancing using Elastic Load Balancing, and maintain software on each of the servers. The API Gateway removes the need to do all that. You can use it to define an API and connect it to services in minutes. The gateway scales up and down automatically and, in us-east-1, the cost is around $3.50 per million API calls received, which makes it affordable for many applications. Let's look at a few important features of the API Gateway in more detail.
Integration with AWS services
The Lambda function integration is one of four integration types the API Gateway supports. When you connect the API Gateway to a User Profile Lambda function, for example, your website can request information about a user from that function. The other three types are HTTP Proxy, AWS Service Proxy, and Mock Integration, which are briefly described here:
The HTTP Proxy can forward requests to other HTTP endpoints. Standard HTTP methods (HEAD, POST, PUT, GET, PATCH, DELETE, and OPTIONS) are supported. The HTTP Proxy is useful if you need to build an interface in front of a legacy API or transform/modify the request before it reaches the desired endpoint.
The AWS Service Proxy can call through to AWS services directly rather than through a Lambda function. Each method (for example, GET) is mapped to a specific action in a desired AWS service, such as adding an item to a DynamoDB table directly. It's a lot quicker to proxy straight to DynamoDB than to create a Lambda function that can write to a table. Service Proxy is a great option for basic use cases (such as list, add, and remove) and it works across a wide range of AWS services. In more advanced use cases (like those that need logic), you still need to write a function.
The Mock Integration option is used to generate a response from the API Gateway without having to integrate with another service. It’s used in cases such as when a preflight Cross-Origin Resource Sharing (CORS) request is issued and the response is predefined in the API Gateway.
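To make the Lambda integration concrete, here's a minimal sketch of a "User Profile" handler invoked through the API Gateway's Lambda proxy integration. The event and response shapes follow the proxy integration format; the `userId` path parameter and the in-memory `USERS` dict are hypothetical stand-ins for a real route and data store.

```python
import json

# Hypothetical stand-in for a real user store (e.g., a DynamoDB table).
USERS = {"42": {"id": "42", "name": "Ada Lovelace"}}

def handler(event, context):
    """Return a user's profile in the Lambda proxy integration response format."""
    user_id = (event.get("pathParameters") or {}).get("userId")
    user = USERS.get(user_id)
    if user is None:
        return {"statusCode": 404, "body": json.dumps({"error": "not found"})}
    return {"statusCode": 200, "body": json.dumps(user)}
```

The gateway maps the incoming HTTP request onto `event` (path parameters, query string, headers, body) and translates the returned dict back into an HTTP response.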
Caching, throttling, and logging
It wouldn’t be a useful service if the API Gateway didn’t have facilities for caching, throttling, encryption, and logging. Caching can reduce both latency and the load on the back end by returning results computed earlier. However, this performance increase is not free. The cost (charged per hour) depends on the size of the cache.
Throttling reduces the number of calls to the API using the token bucket algorithm. You can use it to restrict the number of invocations per second to prevent your back-end from being hammered with requests. Finally, logging allows CloudWatch to capture what is happening to the API. It can capture the full incoming request and outgoing response, and track information such as cache hits and misses.
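The token bucket algorithm mentioned above can be sketched in a few lines. This is an illustrative model, not the gateway's actual implementation: tokens refill at a steady rate up to a capacity (the burst limit), and each request consumes one token; a request that finds the bucket empty is throttled (the API Gateway returns HTTP 429 in that case).

```python
import time

class TokenBucket:
    """Illustrative token bucket: refills at `rate` tokens/second up to
    `capacity`; each allowed request consumes one token."""

    def __init__(self, rate, capacity):
        self.rate = rate            # steady-state requests per second
        self.capacity = capacity    # burst limit
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self):
        # Refill based on elapsed time, capped at capacity.
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True     # request passes through
        return False        # request is throttled
```

A bucket created with `TokenBucket(rate=1000, capacity=2000)` models a limit of 1,000 requests per second with bursts of up to 2,000.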
Staging and versioning
Staging (an environment for your API) and versioning are features that you've probably already used. You can have up to ten stages per API (and 60 APIs per account), and it's entirely up to you how to set them up. We prefer to create stages for development, UAT, and production environments, and sometimes we create stages for individual developers. Each stage can be configured separately and use stage variables to invoke different endpoints; this means you can configure different stages to invoke different Lambda functions or HTTP endpoints.
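Stage variables are referenced with the `${stageVariables.name}` syntax in an integration definition. The fragment below is a hedged sketch of how a stage variable might select a Lambda alias per stage; the region, account ID, function name, and `lambdaAlias` variable are placeholders.

```json
{
  "x-amazon-apigateway-integration": {
    "type": "aws_proxy",
    "httpMethod": "POST",
    "uri": "arn:aws:apigateway:us-east-1:lambda:path/2015-03-31/functions/arn:aws:lambda:us-east-1:123456789012:function:userProfile:${stageVariables.lambdaAlias}/invocations"
  }
}
```

Setting `lambdaAlias` to `dev` on one stage and `prod` on another makes each stage invoke a different version of the same function.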
Each time an API is deployed, a new version is created. You can go back to previous versions if you make a mistake, making rollbacks easy. Different stages can reference different versions of the API, making it flexible enough to support different versions of your application.
Configuring the API Gateway manually (using the AWS console) is fine during the learning process, but it isn't a sustainable or robust way to work in the long term. Luckily, you can script an entire API using Swagger, which is a popular format for defining APIs. Your existing API can be exported to Swagger, and Swagger definitions can be imported as new APIs.
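A minimal Swagger definition for the user-profile API might look like the sketch below, using the `x-amazon-apigateway-integration` extension to wire a method to a Lambda function. The title, path, account ID, and function name are illustrative placeholders.

```json
{
  "swagger": "2.0",
  "info": { "title": "user-api", "version": "1.0" },
  "paths": {
    "/users/{userId}": {
      "get": {
        "responses": { "200": { "description": "OK" } },
        "x-amazon-apigateway-integration": {
          "type": "aws_proxy",
          "httpMethod": "POST",
          "uri": "arn:aws:apigateway:us-east-1:lambda:path/2015-03-31/functions/arn:aws:lambda:us-east-1:123456789012:function:userProfile/invocations"
        }
      }
    }
  }
}
```

A definition like this can be checked into version control and imported to create (or update) the API, rather than clicking through the console.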
That’s all for this article.
For more, check out the entire book on liveBook here.