LambdaAccessDenied error in AWS Load Balancer – Solution

Permission handling in ELB and Lambda is somewhat magical: some of the tools autoprovision permissions behind the scenes, and sometimes they mess it up.

I had a Lambda that I was invoking from a load balancer and it simply did not work. The only hint was “LambdaAccessDenied” in the ALB logs.

I had everything configured correctly. I had added a Lambda permission allowing the entire load balancer service to invoke my function. I had the proper target groups. I had even let AWS SAM autoprovision the IAM roles. The Lambda function itself was firing correctly; I had logs showing that it was executing.

But I kept getting “502 Bad Gateway” from the load balancer and the logs kept showing LambdaAccessDenied.

I removed all the custom stuff I had created. I removed the alias. I removed and reprovisioned the entire Lambda function. I removed and recreated the target group.

Eventually I removed the target group and the permission I had created, and provisioned an “Application Load Balancer” trigger from the Lambda console instead. This created a new target group and a new resource-based policy under Permissions, and suddenly everything started working, even though the new entries looked exactly the same as the ones I had created.
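For reference, the resource-based policy statement that the console-created trigger adds looks roughly like this (the Sid, function ARN, account ID, and target group ARN below are placeholders, not values from my setup). If you add the permission by hand, one thing worth comparing is the SourceArn condition, which scopes the permission to one specific target group:

```json
{
  "Sid": "AllowELBInvoke",
  "Effect": "Allow",
  "Principal": { "Service": "elasticloadbalancing.amazonaws.com" },
  "Action": "lambda:InvokeFunction",
  "Resource": "arn:aws:lambda:us-east-1:123456789012:function:my-function",
  "Condition": {
    "ArnLike": {
      "AWS:SourceArn": "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/my-tg/0123456789abcdef"
    }
  }
}
```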

Since there are only five entries on Google that even mention this error message, I figured you might want to save some time and learn from my experience.

j j j

How to backup and restore an Easy-RSA certificate authority

Easy-RSA is great, but the documentation doesn’t cover backup and restore in much detail, so this is a quick write-up on the topic.

If you want to back up your entire CA, save your easyrsa3/pki directory. You can simply restore this pki directory in a new install of easy-rsa and you will be back in business.
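As a sketch of that backup-and-restore round trip, here is the whole thing with a throwaway directory standing in for a real easy-rsa install (the file contents are obviously fake):

```shell
# Throwaway directory standing in for a real easy-rsa install
mkdir -p demo/easyrsa3/pki/private
echo "fake cert" > demo/easyrsa3/pki/ca.crt
echo "fake key" > demo/easyrsa3/pki/private/ca.key

# Back up the whole pki directory
tar czf pki-backup.tar.gz -C demo/easyrsa3 pki

# Restore it into a fresh install
mkdir -p restored/easyrsa3
tar xzf pki-backup.tar.gz -C restored/easyrsa3
```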

If you don’t want to back up your issued certificates (for example because you only use your CA for VPN authentication, where you only need the certificate serials for revocation, and those live in pki/index.txt), then you only need to save the following four files:


These files don’t ever change, so you don’t need to back them up frequently.

When you want to restore your easy-rsa install, first create a skeleton pki directory with the easy-rsa init-pki command, then put the four files from above back in their original places.

easy-rsa will still complain about other missing files and directories, but it doesn’t expect any data in those, so we can simply create empty files and directories to fix this:

touch easy-rsa/easyrsa3/pki/serial
touch easy-rsa/easyrsa3/pki/index.txt
touch easy-rsa/easyrsa3/pki/index.txt.attr
mkdir easy-rsa/easyrsa3/pki/certs_by_serial

So if you see errors like:

Easy-RSA error:

Missing expected CA file: serial (perhaps you need to run build-ca?)

Then run the empty file creation commands above.

If you have any questions, your best bet is to reach me on twitter at imreFitos

j j j

ELTE stunnel setup for Mac in 2021

ELTE is a great university but they don’t support Apple products well. If you are an ELTE student, use a Mac, and are trying to access ELTE resources from home during the lockdown, this is the tutorial you need.

You have to have a Caesar or IIG username and password for this to work.

Step 1: install the Homebrew package manager

  • Click on Applications -> Utilities -> Terminal
  • Copy the following line into the Terminal window (this is one single line):
/bin/bash -c "$(curl -fsSL"
  • When it asks you for your password, enter your computer’s password.

Please note: this can take 10-20 minutes to complete.

Step 2: install the stunnel package using Homebrew

  • in the same Terminal window, type the following line:
brew install stunnel

Step 3: put the ELTE stunnel.conf file in the stunnel directory

The following lines are the configuration for stunnel. You need to save them into a file on your computer called /usr/local/etc/stunnel/stunnel.conf

foreground = yes
socket = l:TCP_NODELAY=1
socket = r:TCP_NODELAY=1
accept = 8080
connect =
client = yes

Step 4: Start up stunnel

brew services start stunnel

This will make sure that stunnel will always be running on your computer, even after rebooting.

Step 5: Configure your computer to go through ELTE for web browsing

  • Go to Apple Icon -> System Preferences -> Network
  • Click on the “Advanced” button in the bottom right corner
  • Click on the “Proxies” tab on the top row
  • Select “Web Proxy (HTTP)”
  • Add under Web Proxy Server
  • Add 8080 after the colon symbol
  • Enable the “Proxy server requires password” option
  • Enter your Caesar/IIG username and password
  • ALSO repeat this under “Secure Web Proxy (HTTPS)”

This is it! Your web browsers will start going through ELTE with all their traffic.

To test, start up a browser, and google the following phrase “what is my ip address”. If you did everything right, the IP address Google will report back will start with 157.181.

Step 6: Turn off the ELTE browser redirect when you don’t need it

The setup above will send all your web browsing through ELTE, including YouTube and Netflix traffic, so it will be slow for you and problematic for them. It’s better to turn it off when you don’t need it.

  • Go to Apple Icon -> System Preferences -> Network
  • Click on the “Advanced” button in the bottom right corner
  • Click on the “Proxies” tab on the top row
  • UNselect “Web Proxy (HTTP)”
  • UNselect “Secure Web Proxy (HTTPS)”

That’s it, you are all set.


j j j

How to monitor and alert on the Sidekiq Retry Queue

Sidekiq is the most popular queue processing service for Ruby on Rails. It has many brilliant features; one of them is automatically retrying failed jobs, to account for intermittent problems.

The retry system is automatic: by default, Sidekiq retries a job 25 times before putting it on the Dead Job Queue. The retry delay grows exponentially – by the 25th retry a job will have spent about three weeks in the Retry Queue!
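You can sanity-check that three-week figure yourself: Sidekiq's default retry delay is approximately count^4 + 15 seconds, plus a random jitter that this sketch ignores:

```shell
# Sum the approximate delays for retries 0 through 24
total=0
for count in $(seq 0 24); do
  total=$((total + count ** 4 + 15))
done
echo "$((total / 86400)) days"   # roughly 20 days, i.e. about three weeks
```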

Of course, generally everybody has an alerting system for failed jobs. But the Sidekiq retry logic works well and most errors are transient, so people grow complacent and start ignoring the messages about failed jobs.

This works well until it doesn’t. That was the point when I started looking into ways to properly monitor the Sidekiq Retry Queue.

I had the following questions:

  • How to alert on jobs that have failed too many times for comfort?
  • How to alert if a deluge of jobs fail?
  • How to make sure the alerts we send are actionable?
  • How to check if the alerting system is operational?

I took some time during Christmas and wrote a single-file Ruby app that queries a Sidekiq server’s Retry Queue and sends alerts to a Slack channel when a single job keeps failing repeatedly; if it finds a lot of failing jobs, it tallies them up into easily readable Slack messages.

This is how it looks in Slack:

PRODUCTION ALARM: 2 NameOfTheImportantJobs on the Important queue have failed X+ times

The app remembers the previous state of the queue, so you only get messages when the queue’s state changes.

To check if the alerting system works, I wrote a second script that simply sends a daily report to the Slack channel. If you don’t see the daily report, chances are your alert system has stopped working.

This is how the daily report looks in Slack:

Daily report on production sidekiq retries:
ImportantQueue: 2 NameOfTheImportantJobs are retried

I recommend running them from cron.
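If your setup is similar, the cron entries might look something like this (the script names and paths here are hypothetical):

```
# check the Retry Queue every 10 minutes, send the daily report at 9:00
*/10 * * * * /usr/local/bin/sidekiq_retry_alert.rb
0 9 * * * /usr/local/bin/sidekiq_daily_report.rb
```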

I hope this helps!


j j j

How to edit an existing Certificate Revocation List

How can one edit a Certificate Revocation List aka CRL? If you use openssl or easy-rsa to manage client certificates, they already have the tools built in to generate a CRL based on the certificates that exist in your PKI.

What if you don’t have all the original PKI files? Fortunately easy-rsa is simpler under the hood than it looks. All you need is the original CA key and certificate: you can dump the contents of the existing CRL back into the easy-rsa format, edit the human-readable file of certificates to revoke, and generate an updated CRL.

The details: easy-rsa only really cares about the existence of pki/ca.crt and pki/private/ca.key. It will complain about missing directories and files, but feel free to create them as empty files and directories.

A CRL is a list of serial numbers of certificates, with the entire file signed by the CA, and saved in X509 format.

To add a certificate to the CRL, you don’t need the original key, you don’t need the certificate either, only the serial number of the certificate.

You can print the serial number of a certificate using this openssl command: openssl x509 -noout -serial -in CERTIFICATEFILE.crt

easy-rsa keeps the tally of the certificates it manages in the human-readable pki/index.txt file. It’s a list of certificate serial numbers, their expiration dates, and their status (Valid, Expired, Revoked).

If you don’t have this file any more, it’s fine. The following command takes all the serials from an existing CRL file and prints them in the easy-rsa index.txt format:

openssl crl -in DOWNLOADED-CRL.pem -noout -text | grep "Serial Number:" | awk ' { print "R\t200330000000Z\t200330000000Z\t" $NF "\tunknown\t" } '

You can save this output in pki/index.txt.

The format is pretty simple, it’s tab-separated. The fields are:

– status (R for revoked)
– expiration datetime in ‘YYMMDDhhmmssZ’ format
– revocation datetime in ‘YYMMDDhhmmssZ’ format
– serial number
– name of file, interestingly it’s kept as ‘unknown’
– Subject Name of certificate, but it can be left empty

Now you have recreated your index.txt and you also know what data is in it. If you want to add a new certificate to revoke, add another line and enter the information above.
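As a sketch, a new revocation record (with a made-up serial number) can be appended like this:

```shell
# Append one revoked-certificate record in the tab-separated format above.
# The serial number here is a made-up example.
serial="ABCDEF0123456789"
printf 'R\t200330000000Z\t200330000000Z\t%s\tunknown\t\n' "$serial" >> index.txt
```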

When you are satisfied, run ./easyrsa gen-crl and it will create an updated pki/crl.pem file containing both your existing and your newly revoked certificates.

If you use certificate based VPN systems like Amazon AWS VPC Client VPN, this can save your hide. HTH

j j j

How to print your AWS access key in Ruby – Solution

Want to see what AWS credentials your Ruby code defaults to? Here you go:

require 'aws-sdk-core'

# resolve the SDK's default credential provider chain
credentials = Aws::CredentialProviderChain.new.resolve
pp credentials.access_key_id
pp credentials.secret_access_key

If you want to see what credential your command line aws cli uses, the following command will show you:

aws sts get-caller-identity

j j j

Problem and Solution: Bundle install failed with fatal: Could not parse object

If you specify a gem with a github url and branch in your Gemfile, you can occasionally run into the following problem:

fatal: Could not parse object '261964fdec8051e5d55f85e9074ed77be555e8a5'.
Git error: command `git reset --hard 261964fdec8051e5d55f85e9074ed77be555e8a5` in directory
has failed.
If this error persists you could try removing the cache directory

You will scratch your head because the branch is definitely available on github, so what gives?

Removing the cache directory doesn’t change the outcome either.

Solution: The root of the problem is that Bundler saves the last commit ID of the branch in Gemfile.lock, and the next time you run bundle install it will try to pull that same commit ID.

If the repo owner has removed the last used commit, say by rebasing or squash-merging it, git won’t be able to pull it any more and gives up, blaming the local cache directory instead of the local Gemfile.lock.

To solve this, run ‘bundle update’, which ignores the contents of Gemfile.lock and refreshes it.

j j j

Diversity Hiring and the Startup Manager

If you are reading this, you are most likely in a managerial position at a small company, and you are thinking about diversity and what you can do about it. You are already doing the work of multiple people and you are realizing you are the bottleneck in the company. You decide to hire someone.

You are smart, scrappy and capable and you know a lot of people who are a lot like you – you speak the same language, you have similar backgrounds and experiences. You are also trying to make a risky business succeed so you are trying to make very safe decisions in hiring.

In fact, you only know a few token people in your industry that are visibly different from you – women, minorities, people with disabilities. They are very smart, successful and you have no way of hiring them – they are so good they are out of your league.

Your conscience tells you to support diversity, so you set out to cast a wider net for your next candidate: you advertise on Monster and Dice and Indeed and LinkedIn, and resumes start pouring in.

Interestingly the candidates that have minority-sounding names are not the most qualified candidates – and you can’t risk the success of your company on someone who cannot hit the ground running.

The ones that pass muster look sufficiently different from you that you are afraid to crack a joke in front of them – and then you reject them based on “culture fit” or “potential communication issues”.

At the end you lament the fact that there are no minorities worth hiring in the pipeline and hire the people who resemble you again.

Congratulations – you are the reason there is no diversity in the workplace.

If you are willing to accept this, here is how you can address your issues:

1. YOUR FEAR OF INSUFFICIENT KNOWLEDGE: Think about your own attributes instead of your accomplishments, then try to find the attributes in your next employee. You know you have a good base understanding of technology, and you could pick up new technologies fast – so look for people who demonstrate a good understanding of the basics and who show that they can understand new concepts, instead of past accomplishments.

2. YOUR FEAR OF INSUFFICIENT SELF-LEADERSHIP: Regardless of how egalitarian you think you are, remember that you have almost complete control over your employees’ livelihood – you get to decide whether they can afford to pay rent next month. They are not going to have the same “ownership” in the company you do. But they will try their best to meet your expectations – if your expectations are very clear.

3. YOUR FEAR OF WASTED TIME: You, the hiring manager, need to put in the time to support and build an employee. Even if you hire your exact copy, they need communication, feedback, support, guidance, direction, without which they will flounder. So dedicate serious time to support your employees so they can flourish under you.

4. YOUR FEAR OF SPECIAL TREATMENT: Don’t try to make everybody equal. Different people have different needs, and situations change – children, illness, family issues. Commit to supporting special needs as they arise, and your employees will also support your changing needs.

5. YOUR FEAR OF CONFLICTS: Conflicts are normal and common in human life. Competing priorities, missed communication, time constraints should not be considered surprises. Discuss hard things in private, and focus on getting a workable outcome. We are all people first.

j j j

Tutorial: How to create the smallest possible Ubuntu Docker image with apache, nginx, python, php, java or anything else you want in it

You must have read the tutorials that start with “docker pull ubuntu:14.04”, continue with apt-get update, and after a couple of apt-get installs end up with a docker image larger than a gigabyte.

There is a much leaner way – using Ubuntu as the host operating system and pulling in only the binaries and libraries that you will use.

The results are impressive: my last apache/python image was 1.6GB, but using the following method I ended up with a 0.3GB image.

The trick is that you can copy executables into a Docker image as long as you also copy the system libraries they depend on. The libraries an executable depends on can be listed with the ldd command:

$ ldd /usr/sbin/apache2
	=> /lib/x86_64-linux-gnu/
	=> /usr/lib/x86_64-linux-gnu/
	=> /usr/lib/x86_64-linux-gnu/
	=> /lib/x86_64-linux-gnu/
	=> /lib/x86_64-linux-gnu/
	=> /lib/x86_64-linux-gnu/
	=> /lib/x86_64-linux-gnu/
	=> /lib/x86_64-linux-gnu/
	=> /lib/x86_64-linux-gnu/

Apart from linux-vdso, a virtual library that the kernel emulates, all other libraries are actual files that can be copied into the Docker image; linux-vdso will be taken care of by the kernel.


You have to build your image on an Ubuntu host. I still run Ubuntu 14.04 LTS. Make sure you keep it up to date with apt-get update and apt-get upgrade!

Install your apache/mod_wsgi/python server on your host, if it works correctly on its own it will most likely work correctly in your container as well.

NOTE: remember to stop your webserver on your host before starting up your docker container – you don’t want the host’s webserver to conflict with the container’s webserver!

We will use a 5MB Docker image as a base; it supports Ubuntu’s glibc and conveniently has busybox and an entire init system built in: busybox:glibc

Then we will create a directory and copy all our files and dependent libraries into it from our host system.

Finally we run docker build to create our new image.

1. Create the build directory and build config

Create a directory for your new build where your Dockerfile and all other files will reside.

touch Dockerfile
mkdir root

Your Dockerfile will be simple:

FROM busybox:glibc
COPY root /
# these are from /etc/apache2/envvars
ENV APACHE_LOG_DIR /var/log/apache2
ENV APACHE_LOCK_DIR /var/run/apache2
ENV APACHE_PID_FILE=/var/run/apache2/
CMD /usr/sbin/apache2 -D FOREGROUND

2. Copy the executables and config files into build directory

First the executable:

mkdir -p root/usr/sbin
cp -a /usr/sbin/apache2 root/usr/sbin/

Then the loadable modules:

mkdir -p root/usr/lib/apache2
cp -a /usr/lib/apache2/modules root/usr/lib/apache2/

Then the configuration files:

mkdir -p root/etc
cp -a /etc/mime.types root/etc/
cp -a /etc/apache2 root/etc/

Then the html directory:

mkdir -p root/var/www
cp -a /var/www/html root/var/www/

3. Copy the library dependencies into build directory

This is the section that makes people stay away from hand-building Docker images: library files look arcane at first, but they are pretty straightforward.

First, a short script to find and copy the dependencies of all executables in the build directory:

    mkdir -p root/lib
    for i in `find root -type f -executable | xargs ldd | grep -v "linux-vdso" | grep "=>" | awk ' { print $3 } '`; do
        cp -a $i* root/lib/
    done
Apache has some loadable modules that are not executable but they still pull in other libraries. A slight modification of the above script pulls those libraries in as well:

    for i in `find root/usr/lib/apache2/modules/ -type f | xargs ldd | grep -v "linux-vdso" | grep "=>" | awk ' { print $3 } '`; do
        cp -a $i* root/lib/
    done

Then make sure we copied the actual libraries, not just the symlinks pointing to them:

    for i in `find root/lib -type l`; do
        if [ ! -e "$i" ]; then
            missing=`readlink $i`
            cp `find /lib -name $missing` root/lib/
        fi
    done

And finally, copy the one library that somehow still slipped through:

    cp -a /lib/x86_64-linux-gnu/ root/lib/

4. Add a few missing directories

mkdir -p root/var/log/apache2
mkdir -p root/var/run/apache2

5. Build the image

This is the easiest part:

    docker build --rm --no-cache -t tiny-apache:latest .

6. Test the image

We run the image interactively to see all error messages and use net=host to skip having to specify port mapping. Of course you can specify port mapping if you prefer.

    docker run -ti --net=host -P tiny-apache:latest

The resulting apache Docker image is 21 megabytes. The equivalent ubuntu image is 233 megabytes.

Where to go from here

I use these instructions to build and debug mysql, nginx, redis, elasticsearch and other docker images.

I prefer to combine programs that depend on each other in the same container, for example I run nginx, gunicorn, celery and cron in one container. For this I use the busybox runit init system and I start runsvdir as the main command that starts everything else.

For logging I simply map my host’s syslog socket /dev/log into /dev/log inside the container as a volume: -v /dev/log:/dev/log

If you have any questions, ping me on twitter: imreFitos

j j j

Node NPM install fails with “Error: Cannot find module” – Solution

There are 187 thousand results on Google for this npm install error, “Error: Cannot find module”, and pretty much all responses say the same thing: “delete your entire node installation.”

You might have an error like this:

> node install.js

throw err;

Error: Cannot find module 'readable-stream'
at Function.Module._resolveFilename (module.js:326:15)
at Function.Module._load (module.js:277:25)
at Module.require (module.js:354:17)
at require (internal/module.js:12:17)
at Object.<anonymous> (/usr/lib/node_modules/phantomjs-prebuilt/node_modules/extract-zip/node_modules/concat-stream/index.js:1:78)
at Module._compile (module.js:410:26)
at Object.Module._extensions..js (module.js:417:10)
at Module.load (module.js:344:32)
at Function.Module._load (module.js:301:12)
at Module.require (module.js:354:17)
npm ERR! Linux 3.13.0-32-generic
npm ERR! argv "/usr/bin/nodejs" "/usr/bin/npm" "-g" "install" "[email protected]"
npm ERR! node v4.3.0
npm ERR! npm v2.14.12

npm ERR! [email protected] install: `node install.js`
npm ERR! Exit status 1
npm ERR!
npm ERR! Failed at the [email protected] install script 'node install.js'.
npm ERR! This is most likely a problem with the phantomjs-prebuilt package,
npm ERR! not with npm itself.

I took the time to actually troubleshoot the error and found that it comes down to file and directory permissions – npm can install the dependent modules as root, change the permissions, and then be unable to open them again!

Solution: You can fix the issue by making the directories and files in /usr/lib/node_modules readable by everybody on your system:

find /usr/lib/node_modules -type d | xargs chmod go+rx
find /usr/lib/node_modules -type f | xargs chmod go+r
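To see what the fix does, here is the same pair of commands run against a throwaway directory tree standing in for /usr/lib/node_modules:

```shell
# Recreate the broken state: root-only permissions on a fake module tree
mkdir -p nm/concat-stream
echo '{}' > nm/concat-stream/package.json
chmod 700 nm nm/concat-stream
chmod 600 nm/concat-stream/package.json

# The fix: make directories and files world-readable again
find nm -type d | xargs chmod go+rx
find nm -type f | xargs chmod go+r
```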

j j j