SQS doesn’t just have limits; it actively wants you to hit them so it can tell you to ask for more.
Let’s see SQS in action. Imagine you’ve got a fleet of workers processing messages. They’re all hitting an SQS queue, and you’re seeing those "slow" messages start to pile up. Here’s a basic producer-consumer setup.
Producer (Python):
```python
import boto3
import time

sqs = boto3.client('sqs', region_name='us-east-1')
queue_url = 'https://sqs.us-east-1.amazonaws.com/123456789012/my-processing-queue'

for i in range(1000):
    message_body = f'Message {i+1}'
    sqs.send_message(
        QueueUrl=queue_url,
        MessageBody=message_body,
        MessageAttributes={
            'processing_step': {
                'DataType': 'String',
                'StringValue': 'initial'
            }
        }
    )
    if (i + 1) % 100 == 0:
        print(f"Sent {i+1} messages...")
    time.sleep(0.01)  # Simulate some delay between sends

print("Finished sending messages.")
```
Consumer (Python):
```python
import boto3
import time

sqs = boto3.client('sqs', region_name='us-east-1')
queue_url = 'https://sqs.us-east-1.amazonaws.com/123456789012/my-processing-queue'

while True:
    try:
        response = sqs.receive_message(
            QueueUrl=queue_url,
            MaxNumberOfMessages=10,  # Fetch up to 10 messages at once
            WaitTimeSeconds=20,      # Long polling
            VisibilityTimeout=30     # Make messages invisible for 30 seconds
        )
        if 'Messages' in response:
            for message in response['Messages']:
                print(f"Received message: {message['Body']}")
                # Simulate processing
                time.sleep(0.5)
                sqs.delete_message(
                    QueueUrl=queue_url,
                    ReceiptHandle=message['ReceiptHandle']
                )
            print(f"Processed {len(response['Messages'])} messages.")
        else:
            print("No messages received. Waiting...")
    except Exception as e:
        print(f"Error: {e}")
        time.sleep(5)  # Wait before retrying
```
When this producer runs fast, and you have many consumers also running fast, you’ll eventually hit a wall. Where that wall sits depends on the queue type. Standard queues support a nearly unlimited number of API calls per second by default, so the ceiling you hit there is usually a hot partition or an account-level quota rather than the queue itself. FIFO queues have hard numbers: roughly 300 API calls per second per action (send, receive, or delete), which batching stretches to about 3,000 messages per second, in exchange for ordering and exactly-once-processing guarantees. High-throughput FIFO mode raises those ceilings further, with region-dependent maximums.
The core problem is that SQS partitions your queue internally, and each partition has its own throughput limit. When you send or receive messages, SQS routes the request to a specific partition, so a hot partition can throttle you even while aggregate capacity sits idle. For FIFO queues the partition key is visible to you: every message with the same `MessageGroupId` is kept in order on the same partition, which means a single busy message group caps your throughput no matter how many consumers you add.
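For FIFO queues you can steer this partitioning yourself: throughput scales with the number of distinct `MessageGroupId` values, while ordering is preserved within each group. A minimal sketch of that idea (the helper names and the group count of 16 are my own choices; it assumes a FIFO queue with `ContentBasedDeduplication` enabled, so `MessageDeduplicationId` can be omitted):

```python
import hashlib

def message_group_for(key: str, num_groups: int = 16) -> str:
    """Deterministically map an entity key (e.g. an order id) to one of
    num_groups FIFO message groups. Same key -> same group (per-key
    ordering preserved); different keys spread across groups (parallelism)."""
    digest = hashlib.md5(key.encode("utf-8")).hexdigest()
    return f"group-{int(digest, 16) % num_groups}"

def send_ordered(sqs, queue_url: str, key: str, body: str) -> None:
    # Assumes a FIFO queue with ContentBasedDeduplication enabled,
    # so no explicit MessageDeduplicationId is needed.
    sqs.send_message(
        QueueUrl=queue_url,
        MessageBody=body,
        MessageGroupId=message_group_for(key),
    )
```

Sixteen groups means up to sixteen partitions can drain in parallel; pick the key so that messages that must stay ordered share it, and nothing else does.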
How to Scale Past Default Quotas
The primary way to increase SQS throughput is by requesting a quota increase from AWS. This isn’t a "click a button" thing; it’s a formal request.
- **Understand Your Current Usage:**
  - **Check CloudWatch Metrics:** Look at `NumberOfMessagesSent` and `NumberOfMessagesReceived` for your queue, and pay particular attention to `ApproximateAgeOfOldestMessage`. If that metric starts climbing, you’re likely hitting a bottleneck.
  - **Identify the Bottleneck:** Are you hitting send limits or receive limits? If `NumberOfMessagesSent` is consistently high and producers are being throttled (FIFO queues return explicit throttling errors; on standard queues `send_message` calls may simply slow down or fail intermittently), it’s a send bottleneck. If `NumberOfMessagesReceived` is high and consumers are waiting, or `ApproximateAgeOfOldestMessage` is growing, it’s a receive bottleneck.
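That CloudWatch check can be scripted. A sketch, assuming configured AWS credentials; the namespace, metric, and dimension names are the standard SQS ones, while `backlog_growing` and its 60-second threshold are illustrative heuristics of mine:

```python
from datetime import datetime, timedelta, timezone

def backlog_growing(datapoints, threshold_s=60.0):
    """Heuristic: the queue is falling behind if the newest sample of
    ApproximateAgeOfOldestMessage exceeds threshold_s and is not shrinking."""
    if not datapoints:
        return False
    ordered = sorted(datapoints, key=lambda d: d["Timestamp"])
    return ordered[-1]["Maximum"] > threshold_s and ordered[-1]["Maximum"] >= ordered[0]["Maximum"]

def fetch_oldest_message_age(queue_name, minutes=30):
    """Pull the last `minutes` of ApproximateAgeOfOldestMessage samples.
    Note the dimension uses the queue *name*, not its URL."""
    import boto3  # assumes credentials/region are configured
    cw = boto3.client("cloudwatch")
    now = datetime.now(timezone.utc)
    return cw.get_metric_statistics(
        Namespace="AWS/SQS",
        MetricName="ApproximateAgeOfOldestMessage",
        Dimensions=[{"Name": "QueueName", "Value": queue_name}],
        StartTime=now - timedelta(minutes=minutes),
        EndTime=now,
        Period=300,                 # 5-minute buckets
        Statistics=["Maximum"],
    )["Datapoints"]
```

Wiring `backlog_growing(fetch_oldest_message_age("my-processing-queue"))` into an alarm or a dashboard gives you the "are we falling behind?" answer without clicking through the console.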
- **Request a Quota Increase:**
  - Go to the AWS Service Quotas console.
  - Navigate to Amazon Simple Queue Service (SQS).
  - You’ll see the SQS quotas, including per-queue and per-region API rate limits; the console shows each quota’s current value and whether it’s adjustable.
  - Select the quota you want to increase and click "Request quota increase."
  - **Be Specific:** In the request form, provide:
    - **Desired Value:** Base it on your measured peak traffic plus headroom, and check the quota’s current and maximum adjustable values in the console first rather than guessing; the ceilings vary by region and queue type.
    - **Use Case:** Explain why you need the increase, e.g. "We are experiencing high message processing volume and need to scale our application to handle 5,000 messages per second for our standard queue."
    - **Region:** Specify the AWS region.
- **Why it works:** AWS reviews these requests against your account history and the available capacity in the region, and grants the increase when the use case is valid. The limits are typically per-region or per-queue, and raising them lets SQS provision more resources or expand internal partitioning for your queue.
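The same request can be made programmatically through the Service Quotas API, which is handy if you manage many accounts. A sketch, assuming the caller has Service Quotas IAM permissions; the quota is looked up by a name fragment at runtime rather than hard-coding a quota code, since the exact names and codes vary and the fixture names below are illustrative:

```python
def find_quota(quotas, name_fragment):
    """Pick the first adjustable quota whose name contains name_fragment
    (case-insensitive)."""
    for q in quotas:
        if q.get("Adjustable") and name_fragment.lower() in q["QuotaName"].lower():
            return q
    return None

def request_increase(name_fragment, desired_value):
    import boto3  # assumes credentials and Service Quotas permissions
    sq = boto3.client("service-quotas")
    quotas = sq.list_service_quotas(ServiceCode="sqs")["Quotas"]
    quota = find_quota(quotas, name_fragment)
    if quota is None:
        raise RuntimeError(f"No adjustable SQS quota matching {name_fragment!r}")
    sq.request_service_quota_increase(
        ServiceCode="sqs",
        QuotaCode=quota["QuotaCode"],
        DesiredValue=desired_value,
    )
```

Non-adjustable quotas are filtered out up front, so the request only ever targets something AWS can actually raise.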
- **Optimize Your Application (Before/After a Quota Increase):**
  - **Batching:**
    - **Send:** Use `SendMessageBatch` to send up to 10 messages in a single API call. This dramatically reduces the number of API requests and raises your effective send rate.

      ```python
      # Example SendMessageBatch: up to 10 messages in one request
      entries = []
      for i in range(10):
          entries.append({
              'Id': str(i),
              'MessageBody': f'Batch Message {i+1}',
              'MessageAttributes': {'batch': {'DataType': 'String', 'StringValue': 'true'}}
          })
      sqs.send_message_batch(QueueUrl=queue_url, Entries=entries)
      ```

    - **Receive:** Use `ReceiveMessage` with `MaxNumberOfMessages` set to 10. This is just as important on the consuming side.

      ```python
      # Example ReceiveMessage fetching up to 10 messages at once
      response = sqs.receive_message(
          QueueUrl=queue_url,
          MaxNumberOfMessages=10,
          WaitTimeSeconds=20,
          VisibilityTimeout=30
      )
      ```

    - **Why it works:** Batching reduces per-call overhead, and SQS quotas count API requests, not messages: sending 10 messages via `SendMessageBatch` consumes one request instead of ten, and receiving 10 messages with `MaxNumberOfMessages=10` likewise consumes a single request.
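One wrinkle the batch example glosses over: `SendMessageBatch` can partially fail, reporting problem entries in the response’s `Failed` list while the rest succeed. A sketch of batching with retry of just the failed entries (the helper names are mine):

```python
def chunk(items, size=10):
    """SendMessageBatch accepts at most 10 entries per call."""
    return [items[i:i + size] for i in range(0, len(items), size)]

def send_all(sqs, queue_url, bodies, max_retries=3):
    """Send every body via SendMessageBatch, retrying the entries SQS
    reports in the response's 'Failed' list (batch calls can partially fail)."""
    for batch in chunk(list(bodies)):
        entries = [{"Id": str(i), "MessageBody": b} for i, b in enumerate(batch)]
        for _ in range(max_retries):
            resp = sqs.send_message_batch(QueueUrl=queue_url, Entries=entries)
            failed_ids = {f["Id"] for f in resp.get("Failed", [])}
            if not failed_ids:
                break
            entries = [e for e in entries if e["Id"] in failed_ids]  # retry only failures
```

Retrying only the failed subset keeps your request count down, which is the whole point of batching in the first place.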
  - **Long Polling:**
    - Always set `WaitTimeSeconds` (up to 20 seconds) in `ReceiveMessage`. This reduces the number of empty `ReceiveMessage` calls, saving request quota and cost.
    - **Why it works:** Instead of hammering the queue with short polls, your consumer tells SQS to hold the request open for up to 20 seconds until a message arrives or the timeout expires. This significantly cuts down on empty `ReceiveMessage` responses, which still count against your request quota.
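Back-of-the-envelope arithmetic shows why this matters even for idle queues (the consumer counts and intervals below are illustrative, not AWS pricing figures):

```python
def monthly_requests(poll_interval_s, consumers):
    """ReceiveMessage calls issued per month by idle consumers that each
    poll once every poll_interval_s seconds."""
    seconds_per_month = 30 * 24 * 3600  # 2,592,000 seconds
    return int(seconds_per_month / poll_interval_s) * consumers

# 10 idle consumers: short polling every 100 ms vs. long polling at 20 s.
short = monthly_requests(0.1, 10)      # 259,200,000 requests/month
long_poll = monthly_requests(20, 10)   # 1,296,000 requests/month
```

That is a 200x difference in request volume for exactly the same work, all of it billed and all of it counted against your quota.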
  - **Visibility Timeout:**
    - Set `VisibilityTimeout` in `ReceiveMessage` to be longer than your average message processing time.
    - **Why it works:** When a consumer receives a message, it becomes invisible to other consumers for the duration of the `VisibilityTimeout`. If the consumer successfully processes and deletes the message, it’s gone. If it fails or the timeout expires first, the message becomes visible again and can be picked up by another consumer. Set it too low and interrupted processing leads to duplicate delivery; set it too high and failed messages wait longer before being reprocessed.
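When processing time is unpredictable, a common pattern is a heartbeat that keeps extending the timeout while work is still running; the `ChangeMessageVisibility` API is real, while the helper names and the half-the-timeout renewal interval below are my own sketch:

```python
import threading

def heartbeat_interval(visibility_timeout):
    """Renew well before expiry; half the timeout is a common rule of thumb."""
    return max(1, visibility_timeout // 2)

def process_with_heartbeat(sqs, queue_url, message, handler, visibility_timeout=30):
    """Run handler(message) while a background thread keeps extending the
    message's visibility, so a slow message isn't redelivered mid-flight."""
    stop = threading.Event()

    def extend():
        while not stop.wait(heartbeat_interval(visibility_timeout)):
            sqs.change_message_visibility(
                QueueUrl=queue_url,
                ReceiptHandle=message["ReceiptHandle"],
                VisibilityTimeout=visibility_timeout,  # resets the clock
            )

    t = threading.Thread(target=extend, daemon=True)
    t.start()
    try:
        handler(message)
        sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=message["ReceiptHandle"])
    finally:
        stop.set()
        t.join()
```

This lets you keep the base timeout short (so genuinely crashed consumers release messages quickly) without risking duplicate delivery for the occasional slow message.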
  - **Deduplication (FIFO Queues):**
    - For FIFO queues, use `MessageDeduplicationId` or content-based deduplication to avoid duplicate messages during retries or when batching.
    - **Why it works:** FIFO queues have stricter throughput limits. Efficiently handling potential duplicates (which can occur during network glitches or retries) ensures you’re not wasting request capacity on messages that have already been processed or are already in flight.
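An explicit `MessageDeduplicationId` can be derived from the message body, which is essentially what content-based deduplication does server-side. A sketch (the helper names are mine):

```python
import hashlib

def dedup_id(body):
    """Content-derived deduplication id: identical bodies sent within
    SQS's five-minute deduplication window are accepted only once."""
    return hashlib.sha256(body.encode("utf-8")).hexdigest()

def send_fifo(sqs, queue_url, body, group_id):
    # Passing the id explicitly; alternatively enable
    # ContentBasedDeduplication on the queue and omit it.
    sqs.send_message(
        QueueUrl=queue_url,
        MessageBody=body,
        MessageGroupId=group_id,
        MessageDeduplicationId=dedup_id(body),
    )
```

Because the id is deterministic, a retried `send_fifo` call after a network glitch produces the same id and the queue quietly drops the duplicate.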
After requesting and receiving your quota increase, you’ll see the new limits reflected in the Service Quotas console. The next hurdle you’ll likely encounter is managing the complexity of distributed tracing across your message-driven architecture.