
A dead-letter queue (DLQ) is where SQS sends messages that cannot be delivered or processed. If, for example, there's a bug in the worker code, you can configure SQS to send such problematic messages to a dead-letter queue, where you can inspect them in isolation and work out what went wrong. Once we've found the problem in the worker, fixed the bug, and deployed a new version, we want to send all the messages from the DLQ back to the original input queue, so they can be processed by the updated worker. There's no way to do this in SQS directly, so we've written a script to do it for us.

The automation script moves SQS messages between queues in a way that avoids impacting other services that use the same queues. This post also provides CDK code to create test SQS queues, and a Python script to generate a batch of messages.

READ MESSAGES FROM SQS QUEUE PYTHON CODE

The script below moves all the messages from one SQS queue to another:

```python
"""Move all the messages from one SQS queue to another.

Usage: redrive_sqs_queue.py --src-url=<src-queue-url> --dst-url=<dst-queue-url> --max-msg=<n>
"""
import argparse

import boto3

parser = argparse.ArgumentParser(
    description="Move all the messages from one SQS queue to another.")
parser.add_argument("-s", "--src-url", required=True,
                    help="Queue to read messages from")
parser.add_argument("-d", "--dst-url", required=True,
                    help="Queue to move messages to")
parser.add_argument("-m", "--max-msg", required=False, type=int, default=0,
                    help="Max number of messages to process, no limit if not "
                         "specified. Use this to limit the messages processed, "
                         "to avoid picking up messages pending for others")
parser.add_argument("-r", "--region", required=True,
                    help="Region of the SQS queues; src and dest queues are "
                         "assumed to be in the same region")


def get_messages_from_queue(sqs_client, queue_url, max_nr_msg=0):
    """Generates messages from an SQS queue.

    Note: this continues to generate messages until the queue is empty.
    Every message on the queue will be deleted.

    :param sqs_client: boto3 SQS client to connect to AWS SQS
    :param queue_url: URL of the SQS queue to read.
    :param max_nr_msg: stop after this many messages (0 means no limit).
    """
    nr_msg = 0
    while True:
        resp = sqs_client.receive_message(
            QueueUrl=queue_url,
            AttributeNames=["All"],
            MaxNumberOfMessages=10,
        )
        messages = resp.get("Messages", [])
        if not messages:
            return
        for msg in messages:
            yield msg
            # Once the generator resumes, the consumer has handled the
            # message, so it is safe to delete it from the source queue.
            sqs_client.delete_message(
                QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])
            nr_msg += 1
            if max_nr_msg and nr_msg >= max_nr_msg:
                return


if __name__ == "__main__":
    args = parser.parse_args()
    sqs_client = boto3.client("sqs", region_name=args.region)
    # Read (and delete) from the source queue, re-send to the destination.
    # Only the message body is copied here; attributes could be forwarded too.
    for message in get_messages_from_queue(sqs_client, args.src_url,
                                           max_nr_msg=args.max_msg):
        sqs_client.send_message(QueueUrl=args.dst_url,
                                MessageBody=message["Body"])
```
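To exercise the script, you need a pair of queues to move messages between. The post mentions CDK code for creating them; the snippet below is a minimal sketch of such a stack, assuming CDK v2 for Python, with illustrative stack and queue names rather than the post's originals:

```python
# Sketch: an input queue wired to a dead-letter queue for testing.
# Assumes CDK v2 for Python; the names here are illustrative.
import aws_cdk as cdk
from aws_cdk import aws_sqs as sqs


class RedriveTestStack(cdk.Stack):
    def __init__(self, scope, construct_id, **kwargs):
        super().__init__(scope, construct_id, **kwargs)

        # Messages that fail processing too many times land here.
        dlq = sqs.Queue(self, "TestDlq")

        # Input queue: after 3 failed receives, SQS moves a message to the DLQ.
        sqs.Queue(
            self,
            "TestInputQueue",
            dead_letter_queue=sqs.DeadLetterQueue(max_receive_count=3, queue=dlq),
        )


app = cdk.App()
RedriveTestStack(app, "RedriveTestStack")
app.synth()
```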

Generate 100 Messages to Dead-letter Queue For Testing
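To put 100 test messages on the dead-letter queue, a short boto3 loop is enough. This is a minimal sketch; the flags and the JSON message bodies are illustrative, not from the original post (note that send_message_batch accepts at most 10 entries per call):

```python
# Sketch: seed an SQS queue with 100 test messages, 10 per batch.
# The flags and message bodies are illustrative, not from the post.
import argparse
import json

import boto3

parser = argparse.ArgumentParser(
    description="Send 100 test messages to an SQS queue.")
parser.add_argument("--queue-url", required=True, help="URL of the queue to fill")
parser.add_argument("--region", required=True, help="Region of the queue")
args = parser.parse_args()

sqs_client = boto3.client("sqs", region_name=args.region)
for batch_nr in range(10):
    entries = [
        {"Id": str(i), "MessageBody": json.dumps({"test_msg_nr": batch_nr * 10 + i})}
        for i in range(10)
    ]
    # send_message_batch allows at most 10 entries per call.
    sqs_client.send_message_batch(QueueUrl=args.queue_url, Entries=entries)
```

With the queues deployed and the dead-letter queue holding the test messages, redriving them is a single invocation of the script above: point --src-url at the dead-letter queue and --dst-url at the original input queue.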
