To handle the ever-increasing load, one of my requirements was to cache the authenticated REST APIs for faster processing and to reduce the load on the backend servers (Tomcat in this case).
We use a simple encrypted token passed in a header field, say X-AUTH-TOKEN.
Now, we have various APIs, such as user profile and addresses, which return data based on this token, and I don’t want them to hit our backend servers every time. Nor do I want to unnecessarily store this easily retrieved data on our Redis servers.
Instead, we can use NGINX to cache these requests. We have a very simple architecture where NGINX acts as a reverse proxy for Tomcat servers.
Here, every request is intercepted by NGINX, and appropriate requests are passed back to the Tomcat server. This is done using the proxy_pass directive of NGINX.
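As a rough sketch, a minimal reverse proxy with caching enabled might look like the following (the zone name `api_cache`, the cache path, the sizes, and the upstream address are illustrative assumptions, not our actual deployment values):

```nginx
# Define a cache zone on disk; the path, name, and sizes here are illustrative.
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=api_cache:10m
                 max_size=1g inactive=60m use_temp_path=off;

server {
    listen 80;
    server_name www.example.org;

    location / {
        proxy_cache api_cache;             # enable caching for this location
        proxy_cache_valid 200 10m;         # cache successful responses for 10 minutes
        proxy_pass http://127.0.0.1:8080;  # forward requests to the Tomcat server
    }
}
```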
This sets up the basic caching in NGINX as described here
Now comes the fun part: how do we cache authenticated requests? The key is to understand what cache key NGINX uses.
The default cache key that NGINX generates is an MD5 hash of the following NGINX variables: $scheme$proxy_host$request_uri.
For this sample configuration, the cache key for
http://www.example.org/profile is calculated as the MD5 hash of those variables concatenated together.
However, for token-based authenticated requests, the cached response for
http://www.example.org/profile will not be differentiated between users, because the response is generated based on the
X-AUTH-TOKEN field in the HTTP headers.
To solve this, we simply add the token field to the cache key via the proxy_cache_key directive. This ensures that a separate cache copy is created for each request with a different
X-AUTH-TOKEN header.
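Concretely, this can be done with a one-line override (assuming a cache zone named `api_cache` has already been defined with proxy_cache_path):

```nginx
location / {
    proxy_cache api_cache;  # assumes an existing proxy_cache_path zone named api_cache
    # Include the auth token in the key so each user gets a separate cache entry.
    proxy_cache_key "$http_x_auth_token$request_uri";
    proxy_pass http://127.0.0.1:8080;  # illustrative Tomcat upstream address
}
```

Note that $http_x_auth_token is how NGINX exposes the incoming X-AUTH-TOKEN request header as a variable.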
To verify this, you can expose the cache key in the response headers using the add_header directive:
add_header X-Cache-Key $http_x_auth_token$request_uri;