
Did you hear that the web’s fiber optic wires light up in different colors during the holiday season? Well, we are not certain that’s true, but without a doubt holiday shopping online is now more popular than ever.

Just to drive the point home, Practical Ecommerce predicts that online retail will increase by 13.9% in 2015.[1] If that’s not enough to motivate you to get your e-commerce API into tip-top shape, here are some stats that might convince you. According to data from Kissmetrics, 47% of people leave a website if it doesn’t load within two seconds, and 79% of people will not return to a website if they are initially dissatisfied with its performance.[2] And to bring it all the way home, 44% of dissatisfied visitors will also tell their friends about their bad experience.[3]

You should be sufficiently motivated at this point to take a close look at your API and make sure that it’s ready to meet the heavy demands of holiday shoppers. The configurations below are recommendations from SlashDB developers to help optimize the performance of your API during the holiday season.


Reconfiguring Your API to Optimize the User Experience

Our developers took the time to break down some useful NGINX configurations that will help prepare your API for holiday traffic. While many factors contribute to the overall performance of your API, improving the configuration of load balancing, caching, and rate limiting are three solid ways to strengthen your setup for holiday traffic. That’s why our developers are focusing on these three configurations to improve the API user experience.


Load Balancing

Setting up basic load balancing in NGINX is fairly simple. All you need to do is define an upstream block listing multiple running servers; these can be multiple application instances on the same machine or instances on other servers in your network. The code below shows a basic load balancing configuration in NGINX.


http {
    upstream slashdb_app {
        # send each request to the server with the fewest active connections
        least_conn;
        # this server receives three times the traffic of an unweighted one
        server 127.0.0.1:8001 weight=3;
        server 127.0.0.1:8002;
        # only used when the other servers are unavailable
        server 127.0.0.2:8001 backup;
    }
    server {
        listen 80;

        # prefix match: applies to every URI that starts with /db
        location /db {
            uwsgi_pass slashdb_app;
            include uwsgi_params;
        }
    }
}


Load Balancing Breakdown

The code above might seem a tad intimidating for those with little experience customizing NGINX, but it makes a lot of sense once you know what each directive does. Take a look below to get a better understanding of how the code works to improve your API.

upstream – assigns a name to a group of servers that will receive request traffic; referenced later in the uwsgi_pass directive

least_conn – one of several methods for distributing requests; other options include ip_hash, hash, and least_time (the last is available in NGINX Plus). If you don’t set a method yourself, requests are distributed among the servers in round-robin fashion. A sketch of an alternative method appears after this list.

weight – sets a server’s share of traffic; the number of requests a server receives is directly proportional to its weight. In the example above, the first server handles roughly three times as many requests as an unweighted one.

backup – flags the server as a last resort; requests are sent to it only when the other servers are unavailable.
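
For example, if your application keeps per-session state in memory, the ip_hash method pins each client IP to the same backend so repeat requests always hit the same instance. The sketch below is a minimal, hypothetical variation on the configuration above; the addresses and the max_fails/fail_timeout values are placeholders, not recommendations.

http {
    upstream slashdb_app {
        # route each client IP to the same backend server
        ip_hash;

        # placeholder servers: take one out of rotation for 30s after 3 failures
        server 127.0.0.1:8001 max_fails=3 fail_timeout=30s;
        server 127.0.0.1:8002 max_fails=3 fail_timeout=30s;
    }
}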

Caching

Another simple way to improve your API’s performance is to add caching. Before reaching for a dedicated solution like Varnish, consider NGINX itself: it serves static content very efficiently and makes an incredibly capable web cache when placed in front of an application server. The NGINX configuration below proxies a WSGI (Web Server Gateway Interface) application; the uwsgi_cache directives work much like their regular proxy_cache counterparts.


# the cache itself must be defined at the http level
uwsgi_cache_path /var/cache/slashdb levels=1:2 keys_zone=slashdb_zone:10m inactive=5m;

server {
    listen 80 default_server;
    server_name example.com www.example.com;

    # additional cache key segments from query parameters and HTTP headers
    set $api_auth $arg_app_id$arg_app_key$http_app_id$http_app_key;

    location /db {
        uwsgi_pass unix:/var/run/slashdb/slashdb.sock;
        include uwsgi_params;

        uwsgi_cache slashdb_zone;
        uwsgi_cache_key $host$request_uri$http_authorization$cookie_auth_tkt$api_auth;

        # cache freshness
        uwsgi_cache_valid 200 302 5m;
        uwsgi_cache_valid 301 1d;
        uwsgi_cache_valid any 0;

        # report the cache HIT/MISS/BYPASS status to the client
        add_header X-Proxy-Cache $upstream_cache_status;
    }
}


Cache Breakdown

The code above might seem a bit lengthy and complex, so we’ve broken it down and provided some explanations to help you feel more comfortable making these changes on your own. Getting this configuration right matters because some locations change more often than others; choosing the right caching times can dramatically reduce redundant calculations and database requests, and speed up your API.

location – the URL prefix to which the cache settings apply; you can specify different settings for different parts of your API (see the sketch after this list)

uwsgi_cache_path

/var/cache/slashdb is the path to the directory where cached files are saved.
levels=1:2 sets the sub-directory structure of the cache.
keys_zone=slashdb_zone:10m is shared memory (10 MB) that stores all of the active keys and cache metadata.
inactive=5m sets the cache lifetime: if cached data is not accessed within 5 minutes (5m), it is removed from the cache regardless of its freshness.

uwsgi_cache defines which zone (shared memory) to use for caching.

uwsgi_cache_key

$host$request_uri$http_authorization$cookie_auth_tkt$api_auth is the key used to differentiate cached files; the $api_auth segment ensures that each API user’s requests are cached separately.

uwsgi_cache_valid sets the caching time for different response codes.

200 302 5m responses with status code 200 or 302 are cached for 5 minutes (5m).
301 1d sets redirects to be cached for 1 day (1d).
any 0 disables caching for all other response codes.

add_header adds a header to the response; here, the X-Proxy-Cache header carries $upstream_cache_status, which tells you whether a response was a cache HIT, MISS, or BYPASS.
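
As a hypothetical illustration of per-location settings, the sketch below caches a rarely changing endpoint longer than a frequently changing one, and lets clients skip the cache with a standard Pragma: no-cache request header via uwsgi_cache_bypass. The /db/products and /db/carts paths are made-up examples, and slashdb_zone is assumed to be defined by uwsgi_cache_path as in the configuration above.

server {
    listen 80;

    # same additional cache key segments as in the example above
    set $api_auth $arg_app_id$arg_app_key$http_app_id$http_app_key;

    # hypothetical endpoint: product listings change rarely, cache them longer
    location /db/products {
        uwsgi_pass unix:/var/run/slashdb/slashdb.sock;
        include uwsgi_params;
        uwsgi_cache slashdb_zone;
        uwsgi_cache_key $host$request_uri$api_auth;
        uwsgi_cache_valid 200 30m;
    }

    # hypothetical endpoint: cart contents change constantly, keep copies brief
    location /db/carts {
        uwsgi_pass unix:/var/run/slashdb/slashdb.sock;
        include uwsgi_params;
        uwsgi_cache slashdb_zone;
        uwsgi_cache_key $host$request_uri$api_auth;
        uwsgi_cache_valid 200 30s;
        # skip the cache when the client sends "Pragma: no-cache"
        uwsgi_cache_bypass $http_pragma;
    }
}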


Rate Limiting

A very useful feature of NGINX is the ngx_http_limit_req_module, which limits the request rate using a ‘leaky bucket’ method: excess requests are delayed so that they are processed at a defined rate. The ‘leaky bucket’ method is useful because it smooths bursts into a steady stream of requests (in the same way a leaky bucket produces a steady drip, hence the name).

http {
    # track clients by IP address; allow a sustained rate of 10 requests/second
    limit_req_zone $binary_remote_addr zone=one:10m rate=10r/s;

    server {
        listen 80;

        location /db {
            # queue up to 20 requests above the rate; reject the rest
            limit_req zone=one burst=20;
        }
    }
}

Rate Limiting Breakdown

Rate limiting is a great way to keep your API responsive under a sudden spike in traffic. Take a look at our explanations below so that you understand the configuration changes and feel comfortable making these adjustments.


limit_req_zone

$binary_remote_addr in this example the client’s remote address is used as the key for tracking the request rate; the key can also be built from a combination of several variables.
zone=one:10m is the name and the size of the zone.
rate=10r/s is the permitted frequency of requests; here, 10 requests per second.

limit_req applies a zone to a location; a variation is sketched after this list.

zone=one references the zone (key and rate) defined by limit_req_zone.
burst=20 sets the maximum length of the queue of delayed (‘leaky bucket’) requests; the default is 0, meaning no requests are queued. In this example, up to 20 requests above the rate are delayed rather than rejected.
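
If you would rather not delay bursty clients at all, one hypothetical variation, sketched below, adds nodelay, which forwards requests within the burst immediately and rejects anything beyond it, and limit_req_status, which changes the rejection code from the default 503 to the more accurate 429 Too Many Requests.

http {
    limit_req_zone $binary_remote_addr zone=one:10m rate=10r/s;

    server {
        listen 80;

        location /db {
            # forward requests within the burst immediately instead of queuing them
            limit_req zone=one burst=20 nodelay;
            # answer rejected requests with 429 instead of the default 503
            limit_req_status 429;
        }
    }
}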


We hope these configurations will help you have a more relaxing and profitable fourth quarter. SlashDB wishes you a happy holiday season and a very happy high-traffic online shopping season.



  1. Armando Roggio, “4 Predictions for 2015 Holiday Shopping Season,” PracticalEcommerce.com, accessed October 23, 2015. http://www.practicalecommerce.com/articles/92465-4-Predictions-for-2015-Holiday-Shopping-Season.
  2. Sean Work, “How Loading Time Affects Your Bottom Line,” Kissmetrics.com, accessed October 23, 2015. https://blog.kissmetrics.com/loading-time/.
  3. Ibid.