Support Multiple PartitionedRateLimiter Per Endpoint #42691
Comments
This should be achievable through the CreateChained API in runtime - it allows you to pass in multiple PartitionedRateLimiter instances.
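For anyone landing here, a minimal sketch of what that looks like with the runtime API; the per-client-IP partitioning and the limits (10/second, 1000/hour) are illustrative, not taken from this thread:

```csharp
using System.Threading.RateLimiting;
using Microsoft.AspNetCore.RateLimiting;

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddRateLimiter(options =>
{
    // Partition by client IP; one limiter per time window.
    var perSecond = PartitionedRateLimiter.Create<HttpContext, string>(context =>
        RateLimitPartition.GetFixedWindowLimiter(
            context.Connection.RemoteIpAddress?.ToString() ?? "unknown",
            _ => new FixedWindowRateLimiterOptions
            {
                PermitLimit = 10,
                Window = TimeSpan.FromSeconds(1)
            }));

    var perHour = PartitionedRateLimiter.Create<HttpContext, string>(context =>
        RateLimitPartition.GetFixedWindowLimiter(
            context.Connection.RemoteIpAddress?.ToString() ?? "unknown",
            _ => new FixedWindowRateLimiterOptions
            {
                PermitLimit = 1000,
                Window = TimeSpan.FromHours(1)
            }));

    // CreateChained combines them into one limiter, but it can only be
    // installed here, on the global limiter.
    options.GlobalLimiter = PartitionedRateLimiter.CreateChained(perSecond, perHour);
});

var app = builder.Build();
app.UseRateLimiter();
app.MapGet("/", () => "Hello");
app.Run();
```

Note that CreateChained returns a single PartitionedRateLimiter&lt;HttpContext&gt;, which is why it slots into GlobalLimiter but not into a named endpoint policy - the distinction the rest of this thread turns on.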
Going to close this as I believe it's covered by the above comment - feel free to re-open if there's additional functionality in this request that I'm missing.
@wtgodbe I believe the issue should be re-opened. It seems CreateChained can only be used for the global limiter, not for an endpoint-specific policy.
I see, I think you're right that we don't currently support that (endpoint policies are RateLimitPartition based, not PartitionedRateLimiter based). We'll consider this for future passes.
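To make that distinction concrete, a hedged sketch of the two shapes involved (perSecond and perHour stand for any two PartitionedRateLimiter&lt;HttpContext&gt; instances, e.g. the ones from the sketch above):

```csharp
builder.Services.AddRateLimiter(options =>
{
    // GlobalLimiter accepts a full PartitionedRateLimiter<HttpContext>,
    // so a chained limiter fits here...
    options.GlobalLimiter = PartitionedRateLimiter.CreateChained(perSecond, perHour);

    // ...but a named endpoint policy is only a Func<HttpContext, RateLimitPartition<TKey>>,
    // i.e. it resolves to exactly one partition/limiter per request, so nothing
    // chained can be attached at the endpoint level today.
    options.AddPolicy("per-user", context =>
        RateLimitPartition.GetFixedWindowLimiter(
            context.User.Identity?.Name ?? "anonymous",
            _ => new FixedWindowRateLimiterOptions
            {
                PermitLimit = 5,
                Window = TimeSpan.FromSeconds(1)
            }));
});
```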
Triage: we're open to this but we'd like to see more use cases for this sort of pattern.
We've moved this issue to the Backlog milestone. This means that it is not going to be worked on for the coming release. We will reassess the backlog following the current release and consider this item at that time. To learn more about our issue management process and to have better expectations regarding different types of issues, you can read our Triage Process.
We have another request for this: #44907
I also want to rate limit an endpoint with one partition but multiple time spans (e.g. allow 3 requests per second but only 100 per hour). As far as I can tell this is currently not possible, but it feels like a very basic use case.
+100 for @BondarencoM. ATM it seems not possible to create a policy that calls PartitionedRateLimiter.CreateChained(). I also have the use case of creating a policy for an endpoint which e.g. allows 1 request per minute and 10 requests per hour. At the moment this does not seem to be easily achievable.
Greetings, I have a very similar requirement of adding a rate limiting policy that limits requests to 50 per minute while also not allowing more than 10 concurrent. At the moment this is only achievable using PartitionedRateLimiter.CreateChained() as you have already mentioned, but only for the global limiter. I have an API with several endpoints that need different limiting policies, so this is not the best option for me. Is there any progress on this request, or is there any other way to achieve my purpose? Thanks for your work and help!
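A minimal sketch of that workaround (50/minute plus at most 10 concurrent), assuming per-client-IP partitioning, which the comment does not specify, and omitting the usual WebApplication builder boilerplate. It can only be installed as the global limiter, which is exactly the limitation being raised:

```csharp
builder.Services.AddRateLimiter(options =>
{
    var perMinute = PartitionedRateLimiter.Create<HttpContext, string>(context =>
        RateLimitPartition.GetFixedWindowLimiter(
            context.Connection.RemoteIpAddress?.ToString() ?? "unknown",
            _ => new FixedWindowRateLimiterOptions
            {
                PermitLimit = 50,
                Window = TimeSpan.FromMinutes(1)
            }));

    var concurrency = PartitionedRateLimiter.Create<HttpContext, string>(context =>
        RateLimitPartition.GetConcurrencyLimiter(
            context.Connection.RemoteIpAddress?.ToString() ?? "unknown",
            _ => new ConcurrencyLimiterOptions
            {
                PermitLimit = 10,
                QueueProcessingOrder = QueueProcessingOrder.OldestFirst,
                QueueLimit = 0
            }));

    // Applies to every endpoint, not just the one that needs it.
    options.GlobalLimiter = PartitionedRateLimiter.CreateChained(perMinute, concurrency);
});
```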
I agree we should do this for 8. Right now, you apply a single rate limiting policy per endpoint.
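For context, applying a policy today looks roughly like this; the policy name is illustrative and assumed to be registered elsewhere via options.AddPolicy, in the usual minimal-hosting setup:

```csharp
// Today: one named policy per endpoint.
app.MapGet("/api/items", () => "...")
   .RequireRateLimiting("per-user");
```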
Thanks for contacting us. We're moving this issue to the .NET 8 Planning milestone for future evaluation / consideration.
I've just come at this from exactly the same direction as @BondarencoM.
I've spent the last hour frustrated looking for a solution for this problem as well. In my case, I have an endpoint that invokes another, 3rd party service, which has its own rate limiting. As such, I would have liked to create a policy on that single endpoint that mirrors the 3rd party service's own limits.
I'd like to +1 on this one. We'd like to migrate away from the https://github.com/stefanprodan/AspNetCoreRateLimit project and some of our policies will be expressed by a chain of fixed window validators, but they should be a custom policy.
We face the same challenge that @stukalin describes.
+1 for this. I tried to apply multiple policies to an endpoint but only the last one applied works.
+1 We tried to implement a token bucket policy on top of a fixed window policy but hit a wall.
Unfortunately it did not make it into .NET 8 :(
+1 Has anyone been able to find a solution to this which doesn't involve using the AspNetCoreRateLimit NuGet package?
Personally planning on building my own and following this guide https://developer.redis.com/develop/dotnet/aspnetcore/rate-limiting/middleware which has the added benefit (aside from the ability to do distributed rate limiting) of sending all rate limit rules that apply to a single endpoint over to Redis in a single request.
Nice! Thanks for sharing. I ended up using the AspNetCoreRateLimit package in the end.
Spent a day so you don't have to. Here's the Redis Lua script (co-written by ChatGPT, of course) that implements the sliding window rate limit algorithm with support for multiple rules per invocation AND support for returning the remaining_tokens, so you can power the X-Rate-Limit-Remaining header in your web API.
@mkArtakMSFT Any plans to address this issue for .NET 10?
Any progress?
+1
Is there an existing issue for this?
Is your feature request related to a problem? Please describe the problem.
Right now the RateLimiting middleware has two PartitionedRateLimiter instances: a global limiter and an endpoint limiter. This means each endpoint can have at most two levels of rate limiting, and the global limiter is also constrained because it must be the same for all endpoints. So I cannot limit my endpoints based on more than two partitions. As an example, I need to limit first based on the request IP, then on the current user Id, and then based on the current endpoint.
Let's say I need to limit to 10 requests per second per IP, no matter which endpoint, and also limit to 5 requests per second per user Id.
Also, I can't have different windows and limits based on one partition. Again, let's say I need to limit to 10 requests per second per IP and also limit to 40 requests per minute per IP.
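To illustrate the gap, a small sketch of what the request amounts to. The policy names are hypothetical and assumed to be registered via options.AddPolicy, and, as one commenter above found, attaching more than one policy today means only the last one takes effect rather than all of them:

```csharp
// What this issue asks for: several independent limits on one endpoint.
// Today the policies do not stack - only the last attached policy applies.
app.MapGet("/api/items", () => "...")
   .RequireRateLimiting("per-ip-10-per-second")
   .RequireRateLimiting("per-user-5-per-second");   // replaces, rather than adds to, the previous one
```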
Describe the solution you'd like
I'm suggesting to create a PartitionedRateLimiter&lt;HttpContext&gt; per policy and keep a Dictionary&lt;string, PartitionedRateLimiter&lt;HttpContext&gt;&gt; in which the policy name is the key. Endpoints could have multiple IRequireRateLimitMetadata entries, and each of them would be a policy with its own PartitionedRateLimiter. The middleware would always call Acquire on each of these limiters, whether IsAcquired is true or false, and only limit the request if one of them has IsAcquired = false. This way, parts of the limiter, such as the check based on user Id, could be shared between multiple endpoints too.
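A rough sketch of what the proposed middleware step might look like under this design; nothing here is an existing ASP.NET Core API, and the helper name and parameters are hypothetical:

```csharp
using System.Threading.RateLimiting;

// Hypothetical sketch only: every policy attached to the endpoint resolves to its
// own PartitionedRateLimiter<HttpContext>, looked up by policy name; the request
// is limited only if at least one acquire fails (IsAcquired == false).
static async Task<bool> TryAcquireAllAsync(
    HttpContext httpContext,
    IEnumerable<string> endpointPolicies,                                   // every policy on the endpoint
    IReadOnlyDictionary<string, PartitionedRateLimiter<HttpContext>> limitersByPolicy)
{
    var leases = new List<RateLimitLease>();
    foreach (var policyName in endpointPolicies)
    {
        // Always acquire from each limiter, as the proposal describes.
        leases.Add(await limitersByPolicy[policyName].AcquireAsync(httpContext));
    }

    // The middleware would reject the request (e.g. with 429) when this returns false.
    return leases.All(lease => lease.IsAcquired);
}
```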
Additional context
cc @wtgodbe @BrennanConroy @halter