Question

Our organisation is in the process of migrating to Apigee.

What we want to achieve is the following scenario. Can you please advise me on how to do it, perhaps by using SpikeArrest, Quota and ConcurrentRatelimit together?

I would like to ask for a practical example proxy configuration, as guidelines on how to link the policies to achieve the desired result.

I have been through the documentation, and it is somewhat thin and has gaps, e.g. as to what is distributed and what is not:

http://apigee.com/docs/api-services/content/shield-apis-using-spikearrest
http://apigee.com/docs/api-services/content/rate-limit-api-traffic-using-quota#identifying-apps
http://apigee.com/docs/api-services/content/throttle-backend-connections-using-concurrentratelimit

An example gap is my previous question about SpikeArrest, which arose as a result of configuring SpikeArrest and not getting the expected behaviour, because the documentation does not specify that SpikeArrest counters are not distributed: Apigee SpikeArrest Sync Across MessageProcessors (MPs)

These guys were also caught out by the same scenario: Apigee - SpikeArrest behavior

Scenario and Desired Result:

In our organisation we have 6 MessageProcessors (MPs), and I assume they work in a strictly round-robin manner.

We have the following backend Api - Api-1.

Api-1 has consumers from within our organisation alongside its Apigee consumers. We want to prevent Api-1 from getting hammered and going down. Let's say it has been load-tested to take a maximum of 50 requests per second. We have calculated that through Apigee we want to limit it to a maximum of 30 requests per second, as the other 20 requests per second of capacity is reserved for users from within our organisation (mainly our own other products) that do not go through Apigee.

From the DeveloperApps consuming Api-1 through Apigee, we have identified 4 Apps/Clients which have the highest spikes in consumption. Out of the 30ps rate allocated to Apigee, we would like to allocate 5ps to each of these 4 high-consumption DevApps/Clients and have the remaining 10ps shared amongst the other DevApps/Clients.
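For illustration, one way to sketch this split is two SpikeArrest policies attached via flow conditions: one for the 4 identified apps (using an Identifier so each gets its own counter) and one shared bucket for everyone else. The policy names and the use of the `client_id` variable (populated by a preceding VerifyAPIKey/OAuth policy) are assumptions, and these per-second rates ignore the per-MP division problem discussed later:

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<!-- Attached only when client_id matches one of the 4 high-volume apps.
     The Identifier keeps a separate 5ps counter per client_id value. -->
<SpikeArrest async="false" continueOnError="false" enabled="true" name="Spike-Arrest-High-Volume">
    <Identifier ref="client_id"/>
    <Rate>5ps</Rate>
</SpikeArrest>

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<!-- Attached for all remaining apps: no Identifier, so a single
     shared 10ps counter across them. -->
<SpikeArrest async="false" continueOnError="false" enabled="true" name="Spike-Arrest-Default">
    <Rate>10ps</Rate>
</SpikeArrest>

The routing between the two would be done with Step Conditions in the proxy flow (e.g. a condition on `client_id` listing the 4 app keys); the exact condition syntax is left out here.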

The main problem I keep hitting at the TargetEndpoint is the one described here, since the SpikeArrest policy is not distributed across MessageProcessors: Apigee SpikeArrest Sync Across MessageProcessors (MPs)

How can we get around it and achieve the desired scenario?

Here are examples of what I tried in order to achieve the desired behaviour:

TargetEndpoint of a proxy:

ConcurrentRatelimit:

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<ConcurrentRatelimit async="true" continueOnError="false" enabled="true" name="Concurrent-Rate-Limit-1">
    <DisplayName>Concurrent Rate Limit 1</DisplayName>
    <AllowConnections count="1" ttl="5"/>
    <Distributed>true</Distributed>
    <TargetIdentifier name="default"></TargetIdentifier>
</ConcurrentRatelimit>

SpikeArrest:

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<SpikeArrest async="true" continueOnError="false" enabled="true" name="Spike-Arrest-2">
    <DisplayName>Spike Arrest 2</DisplayName>
    <FaultRules/>
    <Properties/>
    <Identifier ref="request.header.some-header-name"/>
    <MessageWeight ref="request.header.weight"/>
    <Rate>30ps</Rate>
</SpikeArrest>

Also, what happens if the organisation adds or removes MPs without my knowledge? Will this completely scramble the throttling?

Much appreciated!

Thanks!


The solution

The problem with synchronizing SpikeArrest across MPs is that you're (in theory) already under severe load, and creating additional chatter between MPs under high load could impact the very tool you're using to protect against high load.

Therefore, for per-second restrictions your only option is to do the math and divide the rate by the number of MPs. For minutes and above, you can use the Quota policy with no identifier and with distributed and synchronous turned on.
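Applying that to the numbers in the question: with a 30ps overall budget and 6 MPs, each MP would be configured for 30 / 6 = 5ps. A minimal sketch of both approaches follows (policy names are illustrative, and the per-MP rate obviously breaks if MPs are added or removed, which is exactly the fragility the question points out):

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<!-- SpikeArrest counts per MP, so configure total_rate / MP_count:
     30ps across 6 MPs => 5ps on each MP. -->
<SpikeArrest async="false" continueOnError="false" enabled="true" name="Spike-Arrest-Per-MP">
    <Rate>5ps</Rate>
</SpikeArrest>

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<!-- For minute-level limits and above, Quota can count across MPs.
     1800 per minute approximates 30 requests per second on average,
     though it does not prevent bursts within the minute. -->
<Quota async="false" continueOnError="false" enabled="true" name="Quota-Distributed" type="calendar">
    <Allow count="1800"/>
    <Interval>1</Interval>
    <TimeUnit>minute</TimeUnit>
    <StartTime>2016-01-01 00:00:00</StartTime>
    <Distributed>true</Distributed>
    <Synchronous>true</Synchronous>
</Quota>

Note the trade-off: the distributed Quota gives an accurate aggregate count but only at minute granularity and with synchronization overhead, while the divided SpikeArrest stays cheap but depends on a stable MP count.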

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow