The algorithm
The leaky bucket algorithm is a method of temporarily storing a variable number of requests and organizing them into a set-rate output of packets in an asynchronous transfer mode (ATM) network.
The leaky bucket is used to implement traffic policing and traffic shaping in Ethernet and cellular data networks. The algorithm can also be used on metered internet connections to avoid exceeding the monthly bandwidth allotment, thereby avoiding extra charges.
The algorithm works much like an actual leaky bucket holding water: the bucket takes in data and collects it up to a maximum capacity, and data is released from the bucket only at a set rate and packet size. When the bucket runs out of data, the leaking stops. If incoming data would overfill the bucket, the packet is considered non-conformant and is not added to the bucket; data is only added as space becomes available for conforming packets.
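To make this concrete, here is a minimal Python sketch of the leaky bucket used as a meter. The class, its parameters, and the drain logic are illustrative assumptions, not taken from any particular network implementation.

[sourcecode language="python"]
import time

# Minimal sketch of a leaky bucket used as a meter: packets fill the
# bucket, the bucket drains at a constant rate, and a packet that would
# overflow the bucket is non-conformant. All names and numbers here are
# illustrative.
class LeakyBucket:
    def __init__(self, capacity, leak_rate):
        self.capacity = capacity      # maximum amount the bucket can hold
        self.leak_rate = leak_rate    # units drained per second
        self.level = 0.0              # current contents of the bucket
        self.last = time.monotonic()

    def conforms(self, packet_size):
        # Drain the bucket for the elapsed time, then try to add the packet.
        now = time.monotonic()
        self.level = max(0.0, self.level - (now - self.last) * self.leak_rate)
        self.last = now
        if self.level + packet_size > self.capacity:
            return False              # would overflow: non-conformant packet
        self.level += packet_size
        return True
[/sourcecode]

A conforming packet is admitted and counted against the bucket's capacity; what happens to a non-conformant packet (dropped, delayed, or marked) is a policy decision.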
The leaky bucket algorithm can also detect both gradual and sudden increases in traffic by comparing the average and peak data rates against set acceptable levels.
In a broader context, the leaky bucket is an analogy for describing how inputs and outputs work in a wide variety of business and technology systems.
An explanation
A working example of an API
Amazon Marketplace Web Service (Amazon MWS) is an integrated web service API that helps Amazon sellers programmatically exchange data on listings, orders, payments, reports, and more. Data integration with Amazon enables a high degree of selling automation, which can help sellers grow their business. By using Amazon MWS, sellers can increase selling efficiency, reduce labor requirements, and improve response time to customers. There are no fees associated with Amazon MWS, but to use the API you must have an MWS-eligible seller account and register to use the service.
To use Amazon Marketplace Web Service (Amazon MWS) successfully, you need to understand throttling. Throttling is the process of limiting the number of requests you (or your authorized developer) can submit to a given operation in a given amount of time. A request might be an inventory feed submission or an order report request. Throttling protects the web service from being overwhelmed with requests and ensures that all authorized developers have access to it.
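From the client's side, being throttled usually means backing off and retrying. The sketch below shows that common pattern; call_mws and ThrottledError are hypothetical stand-ins, not real MWS client APIs.

[sourcecode language="python"]
import time

# Hypothetical back-off-and-retry loop around a throttled web service.
# call_mws and ThrottledError are placeholders for whatever client
# library and error type you actually use.
def call_with_backoff(request, max_retries=5):
    delay = 1.0
    for _ in range(max_retries):
        try:
            return call_mws(request)
        except ThrottledError:
            time.sleep(delay)  # give the quota time to restore
            delay *= 2         # exponential backoff
    raise RuntimeError('still throttled after {} retries'.format(max_retries))
[/sourcecode]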
Amazon MWS uses a variation of the leaky bucket algorithm to meter the web service and implement throttling. The algorithm is based on the analogy where a bucket has a hole in the bottom from which water leaks out at a constant rate. Water can be added to the bucket intermittently, but if too much water is added at once or if water is added at too high an average rate, the water will exceed the capacity of the bucket.
To apply this analogy to Amazon MWS, imagine that the bucket represents the maximum request quota: the maximum number of requests you can make at one time. The hole in the bucket represents the restore rate: the rate at which your capacity to make new requests is replenished. If you submit too many requests at once, the bucket overflows and, in the case of Amazon MWS, throttling occurs. Once you fill the bucket, it takes some time before you can add more water, since the water leaks out at a steady rate. So your ability to submit more requests after you have reached the maximum request quota is governed by the restore rate.
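As a rough illustration of the analogy, the snippet below simulates a maximum request quota of 5 and a restore rate of 1 request per second. These numbers are made up for the example; real MWS limits vary per operation.

[sourcecode language="python"]
import time

# Toy simulation of the bucket analogy: each request adds one unit of
# 'water'; water drains at the restore rate; a full bucket means the
# request is throttled. Illustrative numbers, not real MWS limits.
CAPACITY, RESTORE_RATE = 5, 1.0
level, last = 0.0, time.monotonic()

def submit_request():
    global level, last
    now = time.monotonic()
    level = max(0.0, level - (now - last) * RESTORE_RATE)  # quota restores
    last = now
    if level + 1 > CAPACITY:
        return False   # bucket full: throttled
    level += 1
    return True

for i in range(7):
    # The first 5 requests succeed immediately; the rest are throttled
    # until roughly a second per request has drained from the bucket.
    print('request', i, 'ok' if submit_request() else 'throttled')
[/sourcecode]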
A working example
A generic Python implementation of the leaky bucket algorithm as middleware can be found in the following repo, where I uploaded it. There is also a version of it on pip, so you can install it right away.
Regarding the naming scheme: I picture this as middleware that lets clients forget about the complexity of leaky buckets and stay in sync with each other. My current implementation uses Redis as the backend, but you can develop and plug in anything that works for you.
Redis is an open-source in-memory database project implementing a distributed, in-memory key-value store with optional durability. Redis supports different kinds of abstract data structures, such as strings, lists, maps, sets, sorted sets, hyperloglogs, bitmaps and spatial indexes. The project is mainly developed by Salvatore Sanfilippo and is currently sponsored by Redis Labs.
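To give a feel for what a Redis backend might look like, here is a sketch that keeps each bucket's level and timestamp in a Redis hash. The field names and logic are my assumptions for illustration, not the library's actual schema, and a production version would need an atomic update (for example, a Lua script) to avoid races between workers.

[sourcecode language="python"]
import time

from redis import Redis

r = Redis(host='localhost', port=6379, db=0)

def throttle_redis(key, rate, capacity, amount=1):
    # Assumed schema: one hash per bucket with 'level' and 'ts' fields.
    state = r.hgetall(key)  # empty dict on first use
    now = time.time()
    level = float(state.get(b'level', 0.0))
    ts = float(state.get(b'ts', now))
    level = max(0.0, level - (now - ts) * rate)  # leak since the last update
    if level + amount > capacity:
        return False                             # would overflow: throttle
    r.hset(key, mapping={'level': level + amount, 'ts': now})
    return True
[/sourcecode]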
Testing the Software
As you can see, I wrote some tests to verify that the logic of my implementation works. They cover some concrete examples, but the suite is definitely not exhaustive.
[sourcecode language="python"]
import random
import unittest

from redis import Redis
from redis.exceptions import ResponseError

# throttle and BaseStorage come from the middleware library described above.


class ThrottleTests(unittest.TestCase):

    def test_empty_bucket_initial_request(self):
        key = ''.join(random.choice('0123456789ABCDEF') for i in range(20))
        self.assertTrue(throttle(key=key, rate=1, capacity=5, storage=BaseStorage(), amount=3))

    def test_entering_3_values(self):
        key = ''.join(random.choice('0123456789ABCDEF') for i in range(20))
        self.assertTrue(throttle(key=key, rate=1, capacity=5, storage=BaseStorage(), amount=3))

    def test_entering_3_values_every_2_second(self):
        key = ''.join(random.choice('0123456789ABCDEF') for i in range(20))
        # Two immediate bursts of 3 against a capacity of 5: the second must be throttled.
        self.assertTrue(throttle(key=key, rate=1, capacity=5, storage=BaseStorage(), amount=3))
        self.assertFalse(throttle(key=key, rate=1, capacity=5, storage=BaseStorage(), amount=3))

    def test_with_wrong_client(self):
        key = ''.join(random.choice('0123456789ABCDEF') for i in range(20))
        # db=220 is outside Redis' default database range, so the client errors out.
        with self.assertRaises(ResponseError):
            throttle(key=key, rate=1, capacity=5, amount=3,
                     storage=BaseStorage(client=Redis(host='localhost', port=6379, db=220, password=None)))
[/sourcecode]
A possible solution to the MWS problem with lambdas and the generic middleware
MWS imposes very specific throttling limits per operation as well as per store. An implementation wrapped around a single API key may therefore query several different Amazon stores and call many different services in parallel, and each service has its own bucket.
In my implementation, you can simply encode each request to a different service with the service name and the store id. That way you can keep track of each set of requests independently. It also scales well, because all parallel workers share a common middleware (in our case, Redis), which keeps their bucket state consistent.
The idea is to reuse the solution developed in the previous section with Redis as the backend (or any similar in-memory, bucket-like implementation), as sketched below.
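As a sketch of that idea, each (store, service) pair gets its own bucket simply by encoding both into the key. The key format and the limits below are made-up examples following the throttle() signature from the tests above; real per-operation MWS limits differ.

[sourcecode language="python"]
# Hypothetical key scheme: one bucket per (store, service) pair, so each
# pair is metered independently. rate and capacity are illustrative.
def allow_request(store_id, service, storage):
    key = '{}:{}'.format(store_id, service)
    return throttle(key=key, rate=1, capacity=5, storage=storage, amount=1)

# Usage sketch:
# storage = BaseStorage()  # Redis-backed in the actual implementation
# if allow_request('STORE42', 'ListOrders', storage):
#     ...  # safe to call the MWS operation
[/sourcecode]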
Is it really worth it?
The question here is: is it really worth adding all this complexity to the client for something that is clearly the server's responsibility? My first response would always be: definitely not. But since that is how the service works, you have to make sure you comply with it.
Looking around, I found some more use cases for the library. The most interesting is scraping tools: a whole family of scripts and libraries that have to handle quotas carefully (otherwise they get banned).