• JWT Authentication with Salesforce

    Salesforce, for those of you who might have been living under a rock, is a customer relationship management tool. I work with developers who write code for Salesforce. We follow the usual CI/CD route, but have been dependent on third-party tools to authenticate to a Salesforce Org. This is OK, but not ideal, so I set about authenticating to Salesforce with JWT. The official documentation sets out the following steps:

    1) Create an OpenSSL certificate

    2) Create an app within the Salesforce Org that you will authenticate to

    3) Manage Permissions for the Managed App

    4) Authenticate using the SSL private key

    Create an OpenSSL certificate

    You will need the command line tool openssl installed - on a *nix based operating system you can check by typing which openssl. If the command returns a path rather than an error, you are set. If you get an error, you will need to install the openssl package.

    To create an SSL certificate, use the following commands:

    mkdir jwt
    cd jwt

    Generate a passphrase-protected private key, then strip the passphrase and store the result in a file called server.key:

    openssl genrsa -des3 -passout pass:SomePassword -out server.pass.key 2048
    openssl rsa -passin pass:SomePassword -in server.pass.key -out server.key

    Generate a certificate signing request (.csr file):

    openssl req -new -key server.key -out server.csr

    Generate a self-signed digital certificate from the server request (.csr file) and private key (server.key):

        openssl x509 -req -sha256 -days 365 -in server.csr -signkey server.key -out server.crt

    Connected App in the Salesforce Org

    Create a Connected App

    This connected app is where the SSL certificate is held, and is what allows you to connect. Go to Setup and type ‘App’ in the Quick Find box, then click the ‘App Manager’ link.

    Click the ‘New Connected App’ button in the top right-hand corner. Give your app a name in the ‘Connected App Name’ field. You must provide an email address.

    Under ‘API (Enable OAuth Settings)’, tick the ‘Enable OAuth Settings’ checkbox. A dialogue box will appear. Add ‘Access and manage your data’, ‘Full Access’, ‘Perform Requests on your behalf at any time (refresh_token, offline_access)’ and ‘Provide access to your data via the Web (web)’ to the Selected OAuth Scopes. If you need more permissions, add them here.

    Also tick the ‘Use Digital Signatures’ checkbox. A ‘Browse File’ button will appear; use it to upload the server.crt file that was created in the SSL stage.

    In the ‘Callback URL’ field, enter the callback URL. If you are connecting to a production org, the test part of the URL should be replaced with login.

    Click the save button.

    On the newly created app page you will see a Consumer Key - this is often called the client_id in the JWT specification. You will need it later on, so make a note of it.


    Manage Profiles and Permission Sets for the Connected App

    The page will load, with your changes saved, but you still need to add information to this app to allow JWT to authenticate.

    Click ‘Manage’ at the top of the connected app, then ‘Edit Policies’.

    In the ‘OAuth Policies’ section, choose ‘Admin approved users are pre-authorized’. A dialogue warning box will appear. Accept the warning. Click save at the bottom of the page.


    Scroll down and click ‘Manage Profiles’. Choose a profile that suits the needs of your connection. Then, click on ‘Manage Permission Sets’ and choose a permission set to use with this profile (or create one if necessary).

    Authenticate using the SSL private key

    Now that you have the app ready to accept connections, you need to authenticate initially using a browser, or an API tool like Postman or Advanced Rest Client. For simplicity, we are going to use the browser.

    Then, type the following at the command line:

    sfdx auth:jwt:grant --clientid=3MVG9SHET737DDSDB4lkkcuR.z3NkG98GEIq5h9hcF.YBNJ.PkDOEDE66785AODEDEa78TvyzcJ \
    --jwtkeyfile=./server.key \
    --username=<your-pre-authorised-username> \
    --instanceurl=https://test.salesforce.com

    Replace the username placeholder with a user covered by the profile or permission set you assigned, and use https://login.salesforce.com as the instance URL for a production org.

    You will then be authenticated to your organisation and will be able to run further commands. This method allows CI/CD pipelines to authenticate themselves.
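    Under the hood, the CLI builds a short-lived JWT assertion from the Consumer Key, the username and the login URL, signs it with server.key (RS256), and exchanges it for an access token. The sketch below shows how the unsigned header.payload part is assembled; the function names are my own, not sfdx internals, and the signing and token exchange are only indicated in comments.

    ```python
    import base64
    import json
    import time

    def b64url(data: bytes) -> str:
        """Base64url-encode without padding, as the JWT spec requires."""
        return base64.urlsafe_b64encode(data).rstrip(b"=").decode("ascii")

    def build_signing_input(client_id: str, username: str, audience: str,
                            lifetime: int = 180) -> str:
        """Build the header.payload portion of the JWT assertion.

        client_id is the Consumer Key from the connected app; audience is
        https://test.salesforce.com (or https://login.salesforce.com for
        a production org).
        """
        header = {"alg": "RS256"}
        claims = {
            "iss": client_id,
            "sub": username,
            "aud": audience,
            "exp": int(time.time()) + lifetime,
        }
        return (b64url(json.dumps(header).encode()) + "." +
                b64url(json.dumps(claims).encode()))

    # The final assertion is signing_input + "." + b64url(signature), where the
    # signature is an RSA-SHA256 signature made with server.key. The assertion
    # is then POSTed to <audience>/services/oauth2/token with
    # grant_type=urn:ietf:params:oauth:grant-type:jwt-bearer.
    ```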

  • Tiny Tiny RSS

    Tiny Tiny RSS is a simple syndication application. I’ve been using RSS readers for a long time - I just don’t have time to scour the web checking whether a website has added new content. I started with Google Reader a long time ago (as did many of us) and was sad when it went away. So I moved to Newsblur, which suited me and handled my feed reading for another year or so. It also has (or had - I haven’t checked recently) an open source model, but at the price I was paying there was no need to host it myself; it was doing a fine job. Then the price went up. For some reason I felt it was too much. I was probably hasty, but also ready for a change.


    I switched my feeds to The Old Reader, an obvious attempt to recreate the Google Reader experience.

    Recently, I installed Tiny Tiny RSS on a Raspberry Pi. There was a package already, but it attempted to install Apache; I was already running Nginx and didn’t want the extra dependency, so I downloaded the source code instead, put it in a directory served by the webserver, and everything worked instantly. I had already created a database and user/password combination in MariaDB, so the setup was simple. At this point, I imported an .opml file that I already had. I then read about running the update script: I started a tmux session and issued the command /usr/bin/php /path/to/tt-rss/update.php --feeds --quiet, so the feeds update automatically.

    You can add feeds through a simple dialogue, and the resulting reading view is really good to look at.


    There is also an app for your phone to hook up to your server, so you can read your articles on the go.

  • Verify Azure Blob Storage Automatically

    This blog post outlines a problem I had whereby there were lots of files in Azure storage and I wanted to check that they had been uploaded correctly.


    Do you want to verify lots of files that you have uploaded to Azure Blob Storage? Look no further.

    So, I had to upload a lot of media assets for work in a hurry, as a server was being shut down. We had Azure, and since these were static files it seemed like a good solution. In total there were roughly 200,000 files. I wanted to md5sum them at each stage. I did that for the huge 5 GB zip file I was given, and asked the colleague who provided it to do the same. The sums matched. I could do this on all my machines, but not once the zip file was unzipped.


    So, I wrote a script[0]. Doing the local verification was fairly easy.

    Doing the Azure side was not so easy. Azure stores the md5sum in an unusual way, and lots of people have written about this. Most of my research returned this kind of post: no one seemed to share my huge-number-of-files problem, just the mismatched md5sum format. I tried reverse engineering the problem, but found it tricky. Then I hit upon gold-dust:

    import binascii
    remote_md5 = binascii.hexlify(b)  # b is the raw digest from the blob properties (see below)

    This turned the md5sum stored in Azure into something that could be compared (and was the same as the output of the typical md5sum command that you would run locally).
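    To see why this works: Azure stores the raw 16-byte digest, while md5sum prints its hex encoding, and hexlify converts one into the other. A tiny self-contained demonstration (using a dummy payload rather than a real blob):

    ```python
    import binascii
    import hashlib

    # Azure keeps the raw 16-byte digest; md5sum prints the hex encoding.
    raw_digest = hashlib.md5(b"hello world").digest()
    remote_md5 = binascii.hexlify(raw_digest).decode()

    print(remote_md5 == hashlib.md5(b"hello world").hexdigest())  # True
    ```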

    How to handle Azure blobs individually with the Python Azure SDK

    Get a BlobServiceClient:

    from azure.storage.blob import BlobServiceClient

    blob_service_client = BlobServiceClient.from_connection_string(connection_str)

    The connection string is usually picked up from an environment variable that you have set up locally (and is provided nicely in the Azure console).

    Then, get a container client:

    container = blob_service_client.get_container_client(container=container_name)

    Then, with the container, you can list all the blobs:

    blob_list = container.list_blobs()

    Once you have a list of blobs, you can iterate through them and then get a blob client:

    for blob in blob_list:
        blob_client = blob_service_client.get_blob_client(container=container_name, blob=blob)

    and then get the blob properties and the content MD5 from them:

    a = blob_client.get_blob_properties()
    b = a.content_settings.content_md5

    Once I had that, I could use my binascii.hexlify() magic and write everything out to a file.
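    Putting the two halves together, a minimal comparison helper might look like the sketch below. The function names are mine; the content_md5 argument is the raw digest that the Azure SDK calls above return, and the chunked read keeps large media files out of memory.

    ```python
    import binascii
    import hashlib

    def local_md5_hex(path: str, chunk_size: int = 1 << 20) -> str:
        """MD5 of a local file in md5sum-style hex, read in chunks."""
        digest = hashlib.md5()
        with open(path, "rb") as handle:
            for chunk in iter(lambda: handle.read(chunk_size), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def remote_md5_hex(content_md5: bytes) -> str:
        """Convert the raw digest from blob properties into md5sum-style hex."""
        return binascii.hexlify(content_md5).decode()

    def verify(path: str, content_md5: bytes) -> bool:
        """True when the local file matches the digest stored in Azure."""
        return local_md5_hex(path) == remote_md5_hex(content_md5)
    ```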

    This file only really solves my problem, but please feel free to run with it and make adaptations. I’m interested in any pull requests that improve it. It does need less ‘hard-coding’.


  • Apache Rewrite Mod

    mod_rewrite is a powerful module that Apache can utilize. It is a way of rewriting URLs, modifying the request that Apache receives. This could be a moved document, or enforcing SSL (rewriting a URL from http to https).

    This is a complex subject, which cannot be covered fully here; for further information, refer to the full documentation. Rewrite rules can live in a .htaccess file, the main configuration file, or preferably a <Directory> stanza. mod_rewrite uses Perl-compatible regular expressions as its pattern matching engine, which gives nearly unlimited flexibility in matching URLs.

    Enable Rewrite Engine

    To enable mod_rewrite you should include the following in your Apache configuration:

    RewriteEngine on

    A restart of Apache will be required to load the engine.

    Declaring a Rewrite Rule

    To declare a rule you will have something similar to the code below:

    RewriteRule ^/old.html$ new.html [R]

    This will redirect a request that Apache receives for the old.html page to the new.html page. The ^ indicates that it must be the initial part of the request, and the $ indicates that it is the end of the request. If /subdir/old.html is requested, it will fail this pattern because the request does not begin with /old.html. This is in line with regular expression pattern matching.

    These rules can be embedded within a particular directory, using a <Directory> stanza:

    <Directory /var/www/html/subdirectory>
      RewriteEngine on
      RewriteRule "^old.html$" "new.html"
    </Directory>

    The above rule only applies to the directory named subdirectory.

    Rewrite Flags

    At the end of each RewriteRule is a set of flags that determines what should be done - these are enclosed in a set of square brackets. One of the most common is [R] which is a redirect, carried out at the browser level (issued by the webserver).
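    As an example of flags in practice, the SSL enforcement mentioned in the introduction can be expressed as a condition plus a redirect, made permanent with R=301 and marked as the last rule to process with L:

    ```apache
    RewriteEngine on
    # If the request did not arrive over HTTPS, redirect it permanently.
    RewriteCond %{HTTPS} off
    RewriteRule ^(.*)$ https://%{HTTP_HOST}/$1 [R=301,L]
    ```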

    A full list of flags is documented in the Apache mod_rewrite documentation.

    Regular Expressions and mod_rewrite

    Character  Meaning                                          Example
    .          Matches any single character                     c.t matches cat
    +          Repeats the previous match one or more times     a+ matches a, aa, aaa
    *          Repeats the previous match zero or more times    a* matches the same as a+ but will also match an empty string
    ?          Makes the match optional                         colou?r will match color and colour
    \          Escapes the next character                       \. will match a . (dot) and not any single character as explained above
    ^          An anchor, matching the beginning of the string  ^a will match a string that begins with a
    $          The other anchor, matching the end of a string   a$ will match a string that ends with a
    ( )        Groups several characters into a single unit, and captures a match for use in a backreference   (ab)+ matches abababab - the + applies to the group
    [ ]        A character class - matches one of the characters   c[uoa]t matches cut, cot, cat
    [^ ]       A negative character class - matches any character not listed   c[^/]t matches cat or c=t but not c/t
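    To illustrate grouping and backreferences from the table, a rule like the following (the paths here are hypothetical) captures a numeric id with ( ) and reuses it in the substitution as $1:

    ```apache
    # /product/42 is internally rewritten to /item.php?id=42
    RewriteRule "^product/([0-9]+)$" "/item.php?id=$1" [L]
    ```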

    mod_rewrite Log Level for < v2.4

    mod_rewrite writes to the usual Apache log files.

    For versions before Apache 2.4, the directive RewriteLogLevel sets the amount of logging written, ranging in value from 0-9, with 0 being no logging and 9 being the most verbose. Entries appear as ‘pass through’ lines in the log file.

    mod_rewrite Log Level for v2.4

    With the current version of Apache (2.4), the older methods of controlling mod_rewrite logging have been replaced by a new per-module logging method:

    LogLevel alert rewrite:trace3
  • Bike-packing preparation

    Due to covid-19, we are all currently in lock-down. I was planning to go bike-packing this spring, but cannot get out. That didn’t stop me from having some fun in the garden with a tarpaulin.

subscribe via RSS