Searching the AWS SES global suppression list for a specific email when you have more than 5,000 suppressed emails is cumbersome. Hopefully Amazon fixes this soon by providing a search option on the suppression list page.
Until then, I’ve resorted to creating my own custom solution, as outlined below. Hopefully this helps someone implement something similar on their own site.
Lambda Python Function to Create an S3 File Containing All Emails on the List
You’ll have to page through the list_suppressed_destinations response using its NextToken and write all the emails to a CSV file. Then you can upload the whole list to S3.
import boto3
import csv
import os

def lambda_handler(event, context):
    # #########################################################
    # page through the suppression list and write a csv file
    # #########################################################
    client = boto3.client('sesv2')
    nToken = "start"
    totalEmails = 0
    listArgs = {}

    with open("/tmp/csv_file.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["email", "reason", "ts"])

        while nToken != "":
            # only pass NextToken after the first page
            if nToken != "start":
                listArgs['NextToken'] = nToken
            response = client.list_suppressed_destinations(**listArgs)
            suppressedDestinations = response['SuppressedDestinationSummaries']
            totalEmails += len(suppressedDestinations)
            for s in suppressedDestinations:
                writer.writerow([
                    s['EmailAddress'],
                    s['Reason'],
                    s['LastUpdateTime'].strftime("%Y-%m-%d %H:%M:%S.000")
                ])
            if 'NextToken' in response:
                nToken = response['NextToken']
                print("fetching " + str(totalEmails))
            else:
                nToken = ""

    print(str(totalEmails) + " total emails on suppression list")

    # #########################################################
    # upload the csv to s3
    # #########################################################
    s3 = boto3.client(
        service_name='s3',
        region_name='us-east-1',
        aws_access_key_id=os.environ['AWS_PYTHON_CONNECTOR_KEY'],
        aws_secret_access_key=os.environ['AWS_PYTHON_CONNECTOR_SECRET']
    )

    bucketName = '<your-bucket-name>'
    fname = '<your-file-path>'

    # upload the csv file (upload_file returns None on success)
    print("uploading file: " + fname)
    s3.upload_file('/tmp/csv_file.csv', bucketName, fname)

if __name__ == "__main__":
    lambda_handler({}, None)
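One note on permissions: as written, the SES client uses the Lambda’s execution role, so that role needs ses:ListSuppressedDestinations, while the S3 upload uses the access key from the AWS_PYTHON_CONNECTOR_* environment variables, which needs s3:PutObject on the bucket. If the Lambda runs in the same account as the bucket, you can drop the explicit credentials and let the execution role handle both.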
Download the CSV and search in Excel
Now that you have all the emails in a CSV on S3, you can navigate to the file in the S3 console and download it, then search for an email in Excel (or whatever desktop application you prefer).
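If you’d rather skip the console, a minimal boto3 sketch like the one below does the same thing: it downloads the CSV and scans it for an address. The bucket, key, and target address are placeholders for whatever your Lambda uploaded.

import boto3
import csv

# placeholders - match these to what the Lambda uploaded
bucket = '<your-bucket-name>'
key = '<your-file-path>'
target = 'someone@example.com'

# download the csv locally
s3 = boto3.client('s3')
s3.download_file(bucket, key, 'suppression_list.csv')

# scan the csv for the target email
with open('suppression_list.csv', newline='') as f:
    for row in csv.DictReader(f):
        if row['email'].lower() == target.lower():
            print(row['email'], row['reason'], row['ts'])
            break
    else:
        print(target + " not found on suppression list")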
Create an Athena Table to Search the File
I found it useful to create a table in Athena so that I could write SQL queries to find suppressed emails and also join against other data in Athena. Point the table’s LOCATION at the S3 folder that contains the CSV (not the file itself), and use skip.header.line.count so Athena ignores the header row the Lambda writes.
CREATE EXTERNAL TABLE `suppressionlist`(
  `email` string,
  `reason` string,
  `ts` timestamp)
ROW FORMAT DELIMITED
  FIELDS TERMINATED BY ','
STORED AS INPUTFORMAT
  'org.apache.hadoop.mapred.TextInputFormat'
OUTPUTFORMAT
  'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat'
LOCATION
  's3://<bucket-name>/<folder-containing-the-file>/'
TBLPROPERTIES ('skip.header.line.count'='1')
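Once the table exists, finding a single address or joining against other data is plain SQL. The queries below are just a sketch: the table and columns match the DDL above, but myapp.users is a made-up table included only to show the join.

-- look up a single suppressed address
SELECT email, reason, ts
FROM suppressionlist
WHERE email = 'someone@example.com';

-- join against another table (myapp.users is hypothetical)
SELECT u.user_id, s.reason, s.ts
FROM myapp.users u
JOIN suppressionlist s ON s.email = u.email;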
Hopefully one of these techniques helps! Let me know in the comments.
-Sean