Let's Talk AWS Lambda

[Originally posted on LinkedIn in March 2025. It eventually became a talk at the AWS User Group Bonn called “Local Development with AWS Lambda”.]
For some silly reason, I started digging into AWS Lambda while working on a personal project. As a matter of practice, I had started writing almost all of my operations code in Go, moving on from Python. The reasons for that are numerous, but one main reason is avoiding runtime interpretation and JIT compilation of the code.
For running a Lambda locally, there’s a ton of less-than-clear documentation and half-assed blog posts with recommendations, most of which focus on Node/TypeScript. These all tend to include the AWS Lambda Runtime Interface Emulator (RIE), and force its invocation or inclusion without going into why it’s necessary, or even what the RIE does internally.
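For reference, RIE exposes a plain HTTP endpoint for invocations (the `2015-03-31/functions/function/invocations` path from the RIE documentation). Here’s a minimal Go sketch of invoking a locally running function through it; the `localhost:9000` mapping is an assumption about how you’ve published the container’s port.

```go
package main

import (
	"bytes"
	"fmt"
	"io"
	"net/http"
)

// rieInvokeURL builds the Runtime Interface Emulator's invocation URL.
// The path is the one documented for RIE; host and port depend on how
// the container's port is mapped locally.
func rieInvokeURL(host string, port int) string {
	return fmt.Sprintf("http://%s:%d/2015-03-31/functions/function/invocations", host, port)
}

// invokeLocal POSTs a JSON payload to the local function and returns the
// raw response body.
func invokeLocal(url string, payload []byte) (string, error) {
	resp, err := http.Post(url, "application/json", bytes.NewReader(payload))
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	return string(body), err
}

func main() {
	// Assumes the container maps RIE's internal port 8080 to localhost:9000.
	url := rieInvokeURL("localhost", 9000)
	out, err := invokeLocal(url, []byte(`{}`))
	if err != nil {
		fmt.Println("invoke failed (is the container running?):", err)
		return
	}
	fmt.Println(out)
}
```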
AWS provides a bunch of runtime containers over at gallery.ecr.aws; the good news here is you can pick your Lambda runtime of choice. AWS is not exactly open about what packages, etc. are dumped in, and the repository just has compressed tar files and the Dockerfile to build the image. Outside of the lambda/provided image, all of these mask the runtime interface, simplifying the appearance of the environment.
Digging into RIE revealed the curious things that happen when you run, build, and test a Lambda function locally.
RIE doesn’t reflect the real Lambda execution environment. In AWS, the default memory limit is 128MB per function, and it can be raised if you need more CPU, since CPU is allocated in proportion to the RAM assigned to the function. But in a local container invocation, the limit is set to a portion of the system’s total RAM (3008MB on my laptop), and that is what RIE passes to the app, regardless of what the limits are for the container itself.
The runtime container masks the invocation of RIE, which means you may not realize what environment variables are being passed to the application in the container. There’s a handful of details passed to the function, which RIE provides either from the container’s execution environment or through its configuration.
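A quick way to surface those details is to dump the documented Lambda runtime variables from inside the function. This sketch checks the ones listed in the AWS runtime environment documentation; under RIE most of them are filled in with emulator defaults:

```go
package main

import (
	"fmt"
	"os"
)

// lambdaEnvVars are environment variables the Lambda runtime documents as
// being set for the function; RIE supplies defaults for most of them.
var lambdaEnvVars = []string{
	"AWS_LAMBDA_FUNCTION_NAME",
	"AWS_LAMBDA_FUNCTION_MEMORY_SIZE",
	"AWS_LAMBDA_FUNCTION_VERSION",
	"AWS_LAMBDA_RUNTIME_API",
	"AWS_LAMBDA_LOG_GROUP_NAME",
	"AWS_LAMBDA_LOG_STREAM_NAME",
	"AWS_REGION",
	"_HANDLER",
}

func main() {
	for _, k := range lambdaEnvVars {
		v, ok := os.LookupEnv(k)
		if !ok {
			v = "(unset)"
		}
		fmt.Printf("%s=%s\n", k, v)
	}
}
```

Comparing this output between a local RIE run and a deployed function makes the gaps between the two environments obvious.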
There is no benefit to running a function outside a container. AWS will put your function in a runtime container anyway. You’re better off optimizing the code and simplifying the deployment by building the container locally in your CI/CD process and pushing it to AWS Elastic Container Registry, rather than compiling a binary and pushing it to AWS through another method.
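For a Go function, that container build can be a short multi-stage Dockerfile on top of the lambda/provided image mentioned above. A minimal sketch, assuming the binary is named `bootstrap` as the provided runtime convention expects (the Go version and paths here are illustrative):

```dockerfile
# Build stage: compile a static Go binary named "bootstrap",
# the name the provided runtime looks for.
FROM golang:1.22 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 GOOS=linux go build -o bootstrap .

# Runtime stage: the bare "provided" image, which does not mask
# the runtime interface.
FROM public.ecr.aws/lambda/provided:al2023
COPY --from=build /src/bootstrap /var/runtime/bootstrap
ENTRYPOINT ["/var/runtime/bootstrap"]
```

Build and push this image to ECR from CI/CD, and the same artifact you tested locally is the one Lambda runs.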
Since CPU, memory, and execution time are the billed aspects of a Lambda function, invocation-to-response speed has to be fast, even if the function persists and handles multiple events in series. Pre-compiled bytecode or machine code is the only real way to get performant Lambda functions; any interpreted or JIT-compiled language is going to hurt performance significantly and, in turn, run up a larger bill.