Using Amazon SES with HP Scan-to-Email Printer

Overview

We have just replaced our aging Canon MX885 multi-function printer with a new colour laser printer, an HP M281fdw multi-function. One of its scanning options is “Scan to Email”, something I’ve found really useful in my current client’s office.

What’s the issue then?

The issue I found when setting up this functionality is that the printer needs to be configured to relay through an SMTP server to send the messages on to me. This is a pain, and I didn’t want to stand up a simple SMTP server here just so I could use this feature.

My next thought was to try using smtp.gmail.com. Even with the usual port and authenticating with my email credentials this still didn’t work properly, so I figured why not make use of Amazon’s Simple Email Service (SES). It’s simple, reliable and you can relay through it.

Setting it up

The following steps should get you working - the key sticking point is making sure that you’ve verified the email address you’re configuring the scans to be sent to.

Verifying the Email Address

First things first: log into the AWS Console and navigate to Simple Email Service.

Under Identity Management select Email Addresses then click the Verify a New Email Address button.

This will bring up the dialog where you can specify the email address you want to verify.

Verify an Email Address dialog

Once you’ve submitted this, you’ll receive an email at the specified address, which you validate by clicking the link it contains. You should then see the email address verified as below;

Verified Email Address
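
If you’d rather script this step than click through the console, the same verification request can be made with boto3. This is just a sketch - the region and email address are placeholders for your own values;

import boto3

# Sketch only: region and address are placeholders - adjust for your account.
ses = boto3.client('ses', region_name='eu-west-1')
ses.verify_email_identity(EmailAddress='you@example.com')
# AWS then sends the verification email containing the confirmation link.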

Creating the SMTP credentials

The next thing you need to do is create some credentials to relay through the SMTP server with. Clicking on SMTP Settings under the Email Sending section will show you the details - as below;

SMTP Settings

Click on the Create My SMTP Credentials button and either accept the suggested IAM user name or change it to something more appropriate.

SES IAM User

Clicking Create will generate the new credentials for the IAM user which you can download and make a note of.

IAM User Credentials
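
Before pointing the printer at SES, it’s worth a quick check that the new credentials actually relay mail. A minimal sketch from Python might look like this - the SMTP endpoint depends on your SES region, and the credentials and addresses below are placeholders;

import smtplib
from email.message import EmailMessage

# Sketch only: endpoint, credentials and addresses are placeholders.
SMTP_HOST = 'email-smtp.eu-west-1.amazonaws.com'
SMTP_PORT = 587

msg = EmailMessage()
msg['Subject'] = 'SES relay test'
msg['From'] = 'verified-sender@example.com'    # must be verified in SES
msg['To'] = 'verified-recipient@example.com'   # also needs verifying while in the SES sandbox
msg.set_content('Test message relayed through Amazon SES.')

with smtplib.SMTP(SMTP_HOST, SMTP_PORT) as smtp:
    smtp.starttls()
    smtp.login('SES_SMTP_USERNAME', 'SES_SMTP_PASSWORD')
    smtp.send_message(msg)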

Configuring the Printer

Finally, we can update the configuration in the printer. In my case, the screen looks something like this.

MFP Email Config Screen

The last step is to do a test scan and make sure it gets relayed through SES to your email address.

Creating a Kerberos Keytab file with ktutil

NOTE: Creating a keytab file is easy enough, but I have to refresh my memory each time, so I thought I would document it in a blog post.

Assumptions

I’m assuming that anyone doing this has their /etc/krb5.conf in order and that it isn’t going to get in the way.

One thing you will want to know from this file is what your permitted and default enctypes and your realm are. In my case I’m going to use aes256-cts-hmac-sha1-96 and my realm is DPE.INTERNAL.
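
For reference, those settings live in the [libdefaults] section of /etc/krb5.conf. A hypothetical excerpt (values adjusted to match my realm and enctype) might look like;

[libdefaults]
    default_realm = DPE.INTERNAL
    default_tkt_enctypes = aes256-cts-hmac-sha1-96
    default_tgs_enctypes = aes256-cts-hmac-sha1-96
    permitted_enctypes = aes256-cts-hmac-sha1-96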

Creating the keytab file

To create the keytab file you’re going to need ktutil (along with a number of the other k* client commands).

RHEL/Centos

sudo yum install krb5-workstation

Ubuntu

sudo apt-get install krb5-user

Now that you have the required programs installed, you can create your keytab file using ktutil.

ktutil

This will present you with a prompt where you can add the entries to the keytab file;

add_entry -password -p user@DPE.INTERNAL -k 1 -e aes256-cts-hmac-sha1-96
Password for user@DPE.INTERNAL: <enter password here>

write_kt user.keytab
quit

Breaking this down, we are saying that we want to add an entry to the keytab using a password for authentication.

The -p is the principal that we will be logging in as when using the resulting keytab file.

The -k refers to the Key Version Number, which in some situations isn’t really needed and is ignored (in a Windows environment, for example). You can get the current key version number (kvno) by using the kvno command;

kvno user@DPE.INTERNAL
user@DPE.INTERNAL: kvno = 1

The -e refers to the enctype mentioned earlier. This needs to be one of those permitted in your krb5.conf file so that you’re using an accepted and appropriate encryption type.

Testing the Key

We can now test that the keytab can be used to log in successfully;

kinit -kt user.keytab user@DPE.INTERNAL

This should exit cleanly, and we can then check that we’ve got a ticket using klist;

klist

Ticket cache: FILE:/tmp/krb5cc_1000
Default principal: user@DPE.INTERNAL

Valid Starting           Expires                Service principal
01/23/2019 14:27:28      01/24/2019 00:27:28    user@DPE.INTERNAL

To clear out the ticket, you can use kdestroy. This will remove all current authentications.

Creating a simple Dockerised Flask App

This post covers the basic steps required to create a simple dockerised Flask app: a REST(ish) service that can be run as a Docker container.

The App

Rather than go with the obvious “Hello, World!” type example, I decided I’d try to do something just a touch more interesting and create a REST(ish) resource that returns a response with the status code that was passed in the path of the request. This might be useful for test frameworks where you want to validate some code’s reaction to a given response status code, or similar.

I’m using Flask to create a quick and dirty solution, mostly to keep things simple. Firstly, the requirements.txt file has just one requirement;

flask

requirements.txt

This will get us the Flask package to use in our simple REST(ish) service, which is essentially this; (forgive the inline status_codes dict)

import json
from flask import Flask, Response 

status_codes = {
        "100": "Continue",
        "101": "Switching Protocols",
        "102": "Processing",
        "103": "Early Hints",
        "200": "OK",
        "201": "Created",
        "202": "Accepted",
        "203": "Non-Authoritative Information",
        "204": "No Content",
        "205": "Reset Content",
        "206": "Partial Content",
        "207": "Multi-Status",
        "208": "Already Reported",
        "226": "IM Used",
        "300": "Multiple Choices",
        "301": "Moved Permanently",
        "302": "Found",
        "303": "See Other",
        "304": "Not Modified",
        "305": "Use Proxy",
        "307": "Temporary Redirect",
        "308": "Permanent Redirect",
        "400": "Bad Request",
        "401": "Unauthorized",
        "402": "Payment Required",
        "403": "Forbidden",
        "404": "Not Found",
        "405": "Method Not Allowed",
        "406": "Not Acceptable",
        "407": "Proxy Authentication Required",
        "408": "Request Timeout",
        "409": "Conflict",
        "410": "Gone",
        "411": "Length Required",
        "412": "Precondition Failed",
        "413": "Payload Too Large",
        "414": "URI Too Long",
        "415": "Unsupported Media Type",
        "416": "Range Not Satisfiable",
        "417": "Expectation Failed",
        "421": "Misdirected Request",
        "422": "Unprocessable Entity",
        "423": "Locked",
        "424": "Failed Dependency",
        "425": "Too Early",
        "426": "Upgrade Required",
        "428": "Precondition Required",
        "429": "Too Many Requests",
        "431": "Request Header Fields Too Large",
        "451": "Unavailable For Legal Reasons",
        "500": "Internal Server Error",
        "501": "Not Implemented",
        "502": "Bad Gateway",
        "503": "Service Unavailable",
        "504": "Gateway Timeout",
        "505": "HTTP Version Not Supported",
        "506": "Variant Also Negotiates",
        "507": "Insufficient Storage",
        "508": "Loop Detected",
        "510": "Not Extended",
        "511": "Network Authentication Required"
}

app = Flask(__name__)

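# This route echoes the requested status code: the path segment (e.g. /405)
# becomes the response status, and the body is the matching reason phrase.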
@app.route('/<code>', methods=['GET', 'POST', 'HEAD', 'PUT'])
def status_code(code):
    message = status_codes.get(code, "Unknown Status Code")
    return Response(status=int(code), response=message)


if __name__ == '__main__':
    app.run(debug=True, host='0.0.0.0')

app.py

You can test the code by running python app.py, which will launch the app on port 5000. A quick test might be;

curl -v http://localhost:5000/405

All being well, this will give you a response of

*   Trying ::1...
* TCP_NODELAY set
* Connection failed
* connect to ::1 port 5000 failed: Connection refused
*   Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to localhost (127.0.0.1) port 5000 (#0)
> GET /405 HTTP/1.1
> Host: localhost:5000
> User-Agent: curl/7.54.0
> Accept: */*
>
* HTTP 1.0, assume close after body
< HTTP/1.0 405 METHOD NOT ALLOWED
< Content-Type: text/html; charset=utf-8
< Content-Length: 18
< Server: Werkzeug/0.14.1 Python/3.7.2
< Date: Sat, 19 Jan 2019 16:26:34 GMT
<
* Closing connection 0
Method Not Allowed%

NOTE: the response status is the code that we passed in - HTTP/1.0 405 METHOD NOT ALLOWED.

Running it as a Docker container

Installing Docker

First, you’re going to need Docker on your machine. The best approach is to download Docker Desktop for your particular platform.

Creating the Dockerfile

Dockerfiles require a base image to start from; for a lightweight Python container we can derive ours from the Alpine image, a minimal Docker image that is only around 5MB in size. You can learn more about Alpine here.

The Dockerfile below is all that we’re going to need. It assumes the basic file structure of the project is similar to the tree below;

.
├── Dockerfile
├── app
│   ├── __init__.py
│   └── app.py
└── requirements.txt

We’ve covered app.py and requirements.txt, and __init__.py is just an empty file. All that’s left is the Dockerfile;

FROM python:alpine

EXPOSE 5000

# Copy over the application
WORKDIR /app
COPY . /app

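# Install the Python dependencies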
RUN python3 -m pip install -r requirements.txt

# Start the application
CMD ["python3", "app/app.py"]

Dockerfile

Breaking this down we’re saying that our image is

  • going to be based on the python:alpine image.
  • going to expose something on port 5000 (in this case the app)
  • going to use /app as its working directory
  • going to copy the contents of the current directory (the build context) to the /app folder in the image
  • going to install the requirements as specified in requirements.txt

Finally, we end with the CMD which specifies what will happen when the container starts. In this case, we’re going to be starting the Flask app.

Building the Docker image

We need to build the image to be able to use it. This is assuming you’ve installed and started Docker on your machine.

To build the image we use the docker build command.

docker build . -t httpcodes:latest

This will give an output showing the steps that are performed while building the image;

Sending build context to Docker daemon  11.78kB
Step 1/6 : FROM python:alpine
 ---> 1a8edcb29ce4
Step 2/6 : EXPOSE 5000
 ---> Using cache
 ---> 63588eaed844
Step 3/6 : WORKDIR /app
 ---> Using cache
 ---> feb03f342d39
Step 4/6 : COPY . /app
 ---> Using cache
 ---> fe2d365303a5
Step 5/6 : RUN python3 -m pip install -r requirements.txt
 ---> Using cache
 ---> b3ecdb9890ad
Step 6/6 : CMD ["python3", "app/app.py"]
 ---> Using cache
 ---> 1616f252e49d
Successfully built 1616f252e49d
Successfully tagged httpcodes:latest

We can now run the image

docker run -d -p 80:5000 httpcodes

This command is telling Docker to start a container in the background (-d) based on the httpcodes image (inferring latest because no version was specified) and to forward a port from the host (your machine) to port 5000 on the container. In this case, we’re saying route all traffic that comes to http://localhost:80 to port 5000 on the container.

Testing the endpoint

As before, we can test the endpoint to make sure it behaves as we expect.

curl -v http://localhost/405

All being well, this will give you a response of

*   Trying ::1...
* TCP_NODELAY set
* Connection failed
* connect to ::1 port 80 failed: Connection refused
*   Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to localhost (127.0.0.1) port 80 (#0)
> GET /405 HTTP/1.1
> Host: localhost
> User-Agent: curl/7.54.0
> Accept: */*
>
* HTTP 1.0, assume close after body
< HTTP/1.0 405 METHOD NOT ALLOWED
< Content-Type: text/html; charset=utf-8
< Content-Length: 18
< Server: Werkzeug/0.14.1 Python/3.7.2
< Date: Sat, 19 Jan 2019 16:26:34 GMT
<
* Closing connection 0
Method Not Allowed%

Adding retry logic to urllib3 Python code

In this post I’m going to cover the basics of implementing retry logic using urllib3.

There is probably a solid argument for saying “why aren’t you just using requests?”; as it happens, requests uses urllib3 and its Retry functionality.

For the purposes of this post, let’s imagine that we have a REST service where one of the resources is particularly popular, or flaky, and is throwing the occasional HTTP 503.

Our initial code might look something like;

import logging
import urllib3

logger = logging.getLogger(__name__)

http = urllib3.PoolManager()
r = http.request('GET', 'http://www.myflakyendpoint.com/dicey')
if r.status == 200:
    logger.info('That was lucky')

We have one chance to get it right. Yes, some convoluted while loop against the status code could be used, but that’s ugly.

Another option available to us is to make use of urllib3.util.Retry and get our request to retry a specified number of times.

import logging
import urllib3
from urllib3.util import Retry
from urllib3.exceptions import MaxRetryError

logger = logging.getLogger(__name__)

http = urllib3.PoolManager()
retry = Retry(3, raise_on_status=True, status_forcelist=range(500, 600))

try:
    r = http.request('GET', 'http://www.myflakyendpoint.com/dicey', retries=retry)
except MaxRetryError as m_err:
    logger.error('Failed due to {}'.format(m_err.reason))

In this code we’ve created a Retry object telling it to retry a total of 3 times and raise an exception if all retries are exhausted. The status_forcelist is the set of HTTP status codes that will be considered failures.

Some other interesting arguments for the Retry object are;

  • total: The total number of retries that are allowed. Trumps the combined figure of connect and read.
  • read: How many read retries are allowed.
  • connect: How many connect errors are allowed.
  • redirect: How many redirects to allow. This is handy to prevent redirect loops.
  • method_whitelist: Which methods may be retried. By default only idempotent methods are allowed, ruling out POST.
  • backoff_factor: How much to increase the back-off between attempts (see the docs for more info).
  • raise_on_status: Whether to return the failed status or raise an exception once retries are exhausted.
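
As a rough sketch of how a few of these fit together, the snippet below retries GETs and POSTs with an exponential back-off between attempts. The endpoint is a placeholder, and note that newer urllib3 releases rename method_whitelist to allowed_methods;

import urllib3
from urllib3.util import Retry
from urllib3.exceptions import MaxRetryError

# Sketch only: the URL is a placeholder and the numbers are illustrative.
retry = Retry(
    total=5,                           # give up after 5 attempts in total
    backoff_factor=0.5,                # grow the delay between attempts exponentially
    status_forcelist=[502, 503, 504],  # which status codes count as failures
    method_whitelist=['GET', 'POST'],  # retry GETs and POSTs (the default excludes POST)
    raise_on_status=True
)

http = urllib3.PoolManager()

try:
    r = http.request('GET', 'http://www.myflakyendpoint.com/dicey', retries=retry)
except MaxRetryError as m_err:
    print('Gave up after retries: {}'.format(m_err.reason))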

For more information, see the urllib3 documentation

Refreshing AWS credentials in Python

In a recent post I covered using RefreshingAWSCredentials within the .NET AWS SDK to solve an issue with the way my current organisation has configured Single Sign-On (SSO) and temporary credentials.

Essentially, the solution involves a background process updating a credential file and then using a time-limited AWSCredential object to refresh the credentials.

Next…

The next issue to surface was satisfying the same requirement, but for the Python-based component of the 3rd party solution.

Refreshing Credential File

In this case, on a Red Hat instance, there is a cron job executing a Python script which handles the SSO process and writes the updated credentials and session token to a file that can be used by the 3rd party component.

Refreshing the Credentials in code

The existing code creates a session and then creates the required resources. This works fine for the first hour, until the temporary credentials expire.

import boto3

session = boto3.Session()
queues['incoming'] = session.resource('sqs', region).get_queue_by_name(QueueName='incoming_queue')

There is only a small amount of work needed to make this refresh against the externally updated credential file. For this we’ll make use of RefreshableCredentials from botocore.credentials.

from botocore.credentials import RefreshableCredentials
from botocore.session import get_session
from configparser import ConfigParser
from datetime import datetime, timedelta, timezone

def refresh_external_credentials():
    config = ConfigParser()
    config.read(credential_file_path)
    profile = config[profile_name]
    expiry = (datetime.now(timezone.utc) + timedelta(minutes=refresh_minutes))
    return {
        "access_key": profile.get('aws_access_key_id'),
        "secret_key": profile.get('aws_secret_access_key'),
        "token": profile.get('aws_session_token'),
        "expiry_time": expiry.isoformat()
    }

There are a few config entries here.

  • credential_file_path is the location of the credential file that is being externally updated (an example layout is sketched below)
  • profile_name is the profile in the credential file that you want to use
  • refresh_minutes is how long the returned credentials are treated as valid before the refresh_external_credentials() function is called again
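
For illustration, the credential file the cron job maintains is just a standard AWS-style INI profile. A hypothetical example, with the profile name and all values as placeholders, might be;

[thirdparty]
aws_access_key_id = <access key>
aws_secret_access_key = <secret key>
aws_session_token = <session token>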

We now need to create the credential object for a session which will then be able to auto refresh.

session_credentials = RefreshableCredentials.create_from_metadata(
    metadata=refresh_external_credentials(),
    refresh_using=refresh_external_credentials,
    method='sts-assume-role'
)

Going back to the original code, the new session_credentials can be plugged in to give a long-lived application on top of the temporary tokens.

import boto3
from botocore.session import get_session

# ideally taken from config
region = 'eu-west-1'
incoming_queue_name = 'incoming_queue'

session = get_session()
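# Attach the refreshable credentials to the underlying botocore session
# (note: _credentials is an internal attribute, so this relies on botocore internals)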
session._credentials = session_credentials
autorefresh_session = boto3.Session(botocore_session=session)

queues['incoming'] = autorefresh_session.resource('sqs', region).get_queue_by_name(QueueName=incoming_queue_name)