tokenlimit
This section introduces its basic usage through the token-bucket limiter (tokenlimit).
Overall, the token bucket production logic works as follows:
- The user configures an average sending rate r, so one token is added to the bucket every 1/r seconds;
- The bucket holds at most b tokens; if the bucket is already full when a new token arrives, the token is discarded;
- When traffic arrives at rate v, tokens are taken from the bucket at rate v: traffic that obtains a token passes, while traffic that cannot obtain a token is rejected and the fuse (fallback) logic is executed (see the sketch after this list).
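To make these rules concrete, here is a minimal single-process sketch of the refill-and-consume arithmetic. This is illustrative only, not go-zero's implementation; the `bucket` type and its fields are made up for the example:

```go
package main

import (
	"fmt"
	"math"
	"time"
)

// bucket is a toy token bucket: rate is r (tokens per second),
// capacity is b (the maximum number of tokens the bucket can hold).
type bucket struct {
	rate     float64
	capacity float64
	tokens   float64
	last     time.Time
}

// allow refills the bucket based on the time elapsed since the last call,
// then tries to take n tokens. A request that gets no token is rejected.
func (b *bucket) allow(n float64, now time.Time) bool {
	elapsed := now.Sub(b.last).Seconds()
	// Tokens accumulate at rate r; anything beyond capacity b is discarded.
	b.tokens = math.Min(b.capacity, b.tokens+elapsed*b.rate)
	b.last = now
	if b.tokens < n {
		return false // no token available: reject and run the fallback logic
	}
	b.tokens -= n
	return true
}

func main() {
	b := &bucket{rate: 100, capacity: 200, tokens: 200, last: time.Now()}
	fmt.Println(b.allow(1, time.Now())) // true while tokens remain
}
```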
Let's take a look at the key logic, which is controlled by a Lua script:
```lua
-- Return whether the requested number of tokens can be acquired
local rate = tonumber(ARGV[1])
local capacity = tonumber(ARGV[2])
local now = tonumber(ARGV[3])
local requested = tonumber(ARGV[4])
-- fill_time: how long it takes to fill the token bucket
local fill_time = capacity/rate
local ttl = math.floor(fill_time*2)
-- Get the number of tokens remaining in the bucket
local last_tokens = tonumber(redis.call("get", KEYS[1]))
if last_tokens == nil then
    last_tokens = capacity
end
-- The time the bucket was last updated
local last_refreshed = tonumber(redis.call("get", KEYS[2]))
if last_refreshed == nil then
    last_refreshed = 0
end
local delta = math.max(0, now-last_refreshed)
-- Compute the newly produced tokens from the elapsed time since the last
-- update and the production rate, capped at the bucket capacity
local filled_tokens = math.min(capacity, last_tokens+(delta*rate))
local allowed = filled_tokens >= requested
local new_tokens = filled_tokens
if allowed then
    new_tokens = filled_tokens - requested
end
-- Store the new token count and the update time
redis.call("setex", KEYS[1], ttl, new_tokens)
redis.call("setex", KEYS[2], ttl, now)
return allowed
```
As can be seen from the above, the Lua script only operates on the tokens themselves, ensuring that tokens are produced and read correctly.
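One detail worth noting when calling such a script from Go: Redis converts a Lua `false` return value into a nil reply, so the client has to treat a nil reply as "not allowed". The sketch below shows one way to invoke a script like the one above with the go-redis client; it is not go-zero's own wrapper, the key names and parameters are illustrative, and the embedded script is a compacted, functionally equivalent copy of the one shown earlier:

```go
package main

import (
	"context"
	"errors"
	"fmt"
	"time"

	"github.com/redis/go-redis/v9"
)

// tokenLimitScript is a compacted version of the Lua script shown above.
const tokenLimitScript = `local rate = tonumber(ARGV[1])
local capacity = tonumber(ARGV[2])
local now = tonumber(ARGV[3])
local requested = tonumber(ARGV[4])
local ttl = math.floor(capacity/rate*2)
local last_tokens = tonumber(redis.call("get", KEYS[1])) or capacity
local last_refreshed = tonumber(redis.call("get", KEYS[2])) or 0
local filled = math.min(capacity, last_tokens + math.max(0, now-last_refreshed)*rate)
local allowed = filled >= requested
local new_tokens = filled
if allowed then new_tokens = filled - requested end
redis.call("setex", KEYS[1], ttl, new_tokens)
redis.call("setex", KEYS[2], ttl, now)
return allowed`

// allow runs the script atomically. Lua's false becomes a nil reply,
// so "not allowed" surfaces here as redis.Nil rather than a value.
func allow(ctx context.Context, rdb *redis.Client, key string, rate, burst, n int) (bool, error) {
	keys := []string{key + ":tokens", key + ":ts"} // illustrative key names
	res, err := rdb.Eval(ctx, tokenLimitScript, keys, rate, burst, time.Now().Unix(), n).Result()
	if errors.Is(err, redis.Nil) {
		return false, nil // script returned false: no token available
	}
	if err != nil {
		return false, err
	}
	return res == int64(1), nil // script returned true -> integer reply 1
}

func main() {
	rdb := redis.NewClient(&redis.Options{Addr: "localhost:6379"})
	ok, err := allow(context.Background(), rdb, "api:limit", 100, 200, 1)
	fmt.Println(ok, err)
}
```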
From the above flow, we can see that:
- There are multiple fallback mechanisms to ensure that rate limiting is always enforced.
- If the `redis limiter` fails, the in-process `rate limiter` covers it at the very least (see the sketch after this list).
- The `redis limiter` has a retry mechanism to keep it running as normally as possible.
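The sketch below illustrates the fallback idea in simplified form. It is not go-zero's actual code; `redisAllow` is a hypothetical stand-in for the Redis-backed check, and the in-process cover uses `golang.org/x/time/rate`:

```go
package main

import (
	"fmt"

	"golang.org/x/time/rate"
)

// fallbackLimiter prefers the Redis-backed limiter; if Redis errors out,
// it falls back to an in-process rate limiter so traffic is still limited.
type fallbackLimiter struct {
	redisAllow func() (bool, error) // hypothetical Redis-backed check
	local      *rate.Limiter        // in-process cover
}

func (l *fallbackLimiter) Allow() bool {
	ok, err := l.redisAllow()
	if err != nil {
		// The redis limiter failed; at least the in-process limiter covers it.
		return l.local.Allow()
	}
	return ok
}

func main() {
	l := &fallbackLimiter{
		redisAllow: func() (bool, error) { return false, fmt.Errorf("redis down") },
		local:      rate.NewLimiter(rate.Limit(100), 200), // 100 req/s, burst 200
	}
	fmt.Println(l.Allow()) // falls back to the in-process limiter
}
```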
The `tokenlimit` rate-limiting scheme in `go-zero` is suitable for absorbing instantaneous traffic spikes, since real-world requests do not arrive at a constant rate. The token bucket effectively pre-provisions requests: when a burst of real traffic arrives, the service is not knocked over instantly, and once traffic builds up to a certain level, consumption proceeds at the predetermined rate.
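For reference, here is a minimal usage sketch of tokenlimit, assuming the current go-zero API; the Redis address, key name, and rate/burst values are placeholders:

```go
package main

import (
	"fmt"

	"github.com/zeromicro/go-zero/core/limit"
	"github.com/zeromicro/go-zero/core/stores/redis"
)

func main() {
	store := redis.New("localhost:6379") // placeholder address
	// Roughly 100 tokens per second, with a bucket capacity (burst) of 200.
	l := limit.NewTokenLimiter(100, 200, store, "token-limit-example")

	if l.Allow() {
		fmt.Println("request allowed")
	} else {
		fmt.Println("request rejected, fuse/fallback logic runs")
	}
}
```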