This Terraform module elegantly bundles an SQS queue, a dead-letter queue (DLQ), and their associated IAM policies into a single, manageable unit.

Let’s see this in action. Imagine you’re spinning up a new microservice that needs a reliable way to process messages. You’ll want a main queue for your primary messages and a DLQ to catch anything that fails processing after a few retries.

Here’s a simplified Terraform configuration using the module:

module "my_processing_queue" {
  source  = "terraform-aws-modules/sqs/aws"
  version = "3.1.0" # Pin a specific, recent version

  name = "my-service-processing-queue"

  # Configure the main queue
  fifo_queue                  = false
  content_based_deduplication = false
  visibility_timeout          = 300 # 5 minutes

  # Configure the DLQ
  enable_dlq             = true
  dlq_name               = "my-service-processing-dlq"
  dlq_max_receive_count  = 5
  dlq_visibility_timeout = 600 # 10 minutes for DLQ investigation

  # IAM policy to allow a specific Lambda function's execution role to send messages
  # In a real scenario, this would be more granular
  policy_statements = [
    {
      actions = [
        "sqs:SendMessage",
        "sqs:SendMessageBatch"
      ]
      principals = [
        {
          type        = "AWS"
          identifiers = ["arn:aws:iam::123456789012:role/my-lambda-processor-role"]
        }
      ]
      resources = [
        "arn:aws:sqs:us-east-1:123456789012:my-service-processing-queue"
      ]
    }
  ]

  tags = {
    Environment = "production"
    Project     = "MyMicroservice"
  }
}

When you run terraform apply with this configuration, Terraform will provision:

  1. A standard SQS queue: my-service-processing-queue. This is where your application will send messages for processing.
  2. A Dead-Letter Queue (DLQ): my-service-processing-dlq. This queue is specifically configured to receive messages that fail processing on the main queue after dlq_max_receive_count (5 in this case) attempts.
  3. An IAM policy: This policy is attached to the main queue and grants the Lambda function's execution role permission to send messages to it. The DLQ is also configured to accept messages redriven from the main queue.

The power of this module lies in its ability to manage these related resources atomically. You define your message processing pipeline’s ingress point and its failure handling mechanism in one place. This simplifies dependencies and reduces the chance of misconfiguration. For instance, the module automatically sets the redrive_policy on the main queue, linking it to the DLQ and specifying the maxReceiveCount.

The redrive_policy is a crucial SQS feature that dictates when messages should be moved to a DLQ. It’s a JSON document attached to the source queue. It looks something like this:

{
  "deadLetterTargetArn": "arn:aws:sqs:us-east-1:123456789012:my-service-processing-dlq",
  "maxReceiveCount": 5
}

The module constructs this redrive_policy for you based on the enable_dlq, dlq_name, and dlq_max_receive_count variables. This eliminates the need for you to manually define and manage this JSON policy, which is a common source of errors when setting up SQS queues with DLQs.
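To make the module's work concrete, here is a rough equivalent built from raw aws_sqs_queue resources. This is a sketch of roughly what the module renders for you, not its actual internals; the resource names are illustrative:

```hcl
# Sketch: the DLQ must exist before the main queue can reference its ARN
resource "aws_sqs_queue" "dlq" {
  name                       = "my-service-processing-dlq"
  visibility_timeout_seconds = 600
}

resource "aws_sqs_queue" "main" {
  name                       = "my-service-processing-queue"
  visibility_timeout_seconds = 300

  # The same redrive policy shown above, built with jsonencode()
  redrive_policy = jsonencode({
    deadLetterTargetArn = aws_sqs_queue.dlq.arn
    maxReceiveCount     = 5
  })
}
```

Note how Terraform's implicit dependency on aws_sqs_queue.dlq.arn guarantees the DLQ is created first, which is exactly the ordering the module manages on your behalf.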

The policy_statements variable allows you to define IAM policies directly within the module. This is incredibly convenient for granting permissions to producers (like Lambda functions, EC2 instances, or other AWS services) to send messages to your queue. You can specify actions (e.g., sqs:SendMessage), principals (who is allowed to perform the action), and resources (which queue the action applies to). The module handles the creation of the QueuePolicy resource for you.
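Under the hood, a queue policy like the one above boils down to an aws_sqs_queue_policy resource. A hedged sketch, assuming a queue resource named aws_sqs_queue.main and a producer role ARN standing in for your real identity:

```hcl
# Sketch of the resource-based policy the module generates from policy_statements
resource "aws_sqs_queue_policy" "producer" {
  queue_url = aws_sqs_queue.main.id

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Principal = { AWS = "arn:aws:iam::123456789012:role/my-lambda-processor-role" }
      Action    = ["sqs:SendMessage", "sqs:SendMessageBatch"]
      Resource  = aws_sqs_queue.main.arn
    }]
  })
}
```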

Beyond the basic setup, you can fine-tune queue behavior using parameters like receive_wait_time_seconds for long polling, message_retention_seconds to control how long messages stay in the queue, and kms_master_key_id for server-side encryption.
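Assuming the module passes these variables straight through to the underlying queue resource (variable names here are illustrative), the tuning might look like:

```hcl
module "my_processing_queue" {
  # ... settings from the example above ...

  receive_wait_time_seconds = 20       # long polling (max 20s) reduces empty receives
  message_retention_seconds = 1209600  # keep messages up to 14 days, the SQS maximum
  kms_master_key_id         = "alias/aws/sqs" # enable server-side encryption
}
```

Long polling in particular is nearly always worth enabling: it lowers your ReceiveMessage API costs and reduces the number of empty responses your consumers have to handle.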

A subtle but powerful aspect of this module is how it handles queue naming and ARNs. When you specify a name for the queue, the module constructs the full ARN. This ARN is then automatically used within the redrive_policy and can be referenced in the policy_statements for producers. This cross-referencing is handled implicitly, making your Terraform code cleaner and less prone to ARN-related errors.

If you were to omit the policy_statements (and the function's execution role granted no SQS permissions of its own), your Lambda function would fail with an AccessDeniedException when trying to send messages to the queue.
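As an alternative to the queue's resource-based policy, you can grant send rights on the producer's own IAM role. A sketch, where the role resource and the module's queue_arn output name are assumptions:

```hcl
# Sketch: identity-based permission on the producer's execution role
resource "aws_iam_role_policy" "allow_send" {
  name = "allow-sqs-send"
  role = aws_iam_role.lambda_exec.id # hypothetical Lambda execution role

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect   = "Allow"
      Action   = ["sqs:SendMessage", "sqs:SendMessageBatch"]
      Resource = module.my_processing_queue.queue_arn # output name is an assumption
    }]
  })
}
```

For same-account access, either the identity-based policy or the queue policy is sufficient; cross-account producers need the queue policy.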

The next logical step after setting up your queue and its producers is to configure consumers, such as Lambda functions or EC2 instances, to poll messages from the queue and implement robust error handling within your processing logic.
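For a Lambda consumer, that wiring is typically an event source mapping. A sketch, again assuming the module exposes a queue_arn output and that a processor function exists:

```hcl
# Sketch: poll the queue with a Lambda consumer
resource "aws_lambda_event_source_mapping" "consumer" {
  event_source_arn        = module.my_processing_queue.queue_arn # output name is an assumption
  function_name           = aws_lambda_function.processor.arn    # hypothetical function
  batch_size              = 10
  function_response_types = ["ReportBatchItemFailures"] # retry only the failed messages in a batch
}
```

Enabling partial batch responses means a single bad message doesn't force the whole batch back onto the queue, which keeps your dlq_max_receive_count semantics meaningful per message.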

Want structured learning?

Take the full SQS course →