Solving Vendor Lock-In And Other Issues In Serverless

Posted on March 03, 2019

Vendor Lock-In

One of the biggest things you will hear about when it comes to serverless is vendor lock-in.

This is basically the concern that you will become too tightly tied to a specific cloud vendor to ever leave. And the concern is real.

While the Serverless Framework lets you write code for various clouds, it really doesn't do enough to protect you from lock-in.

Once you get a taste of what you can accomplish with serverless, you'll start reaching for functions that are triggered by queues, database updates, and more.
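To give a flavor of what I mean, here's a sketch of how those triggers get declared in a Serverless Framework `serverless.yml` (the function names, handler paths, and ARNs are made up for illustration):

```yaml
functions:
  processJobs:
    handler: handler.processJobs
    events:
      - sqs:                       # fire on messages from an SQS queue
          arn: arn:aws:sqs:us-east-1:000000000000:jobs-queue
  onUserChange:
    handler: handler.onUserChange
    events:
      - stream:                    # fire on DynamoDB table updates
          type: dynamodb
          arn: arn:aws:dynamodb:us-east-1:000000000000:table/users/stream/2019-01-01T00:00:00.000
```

Notice that both event sources are AWS-specific resources. The moment you wire one up, you're deeper in.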

There really isn't much you can do to avoid picking a vendor unless you are willing to pick an open-source framework like OpenFaaS and write all of your code there.

For me, I've basically settled on AWS. They have great pricing and a good free tier. Most bootstrapped engineering teams of fewer than ten people are tied to everything they write anyway.

There is simply too much business logic to write to consider using something like a Cloudflare load balancer to route traffic between multiple providers.


Example: Cloudflare Workers vs. Lambda

For example, let's consider this article. We can see that Cloudflare Workers are capable of much faster cold starts than AWS Lambda.

That's cool. Perhaps we're writing a user-facing app that needs super-fast response times 100% of the time.

Well, we could write this logic with the Serverless Framework and have it deploy these client-facing endpoints to Cloudflare.

(If you’re super cool, you could even write them in Rust.)

But what if we start to need a queue worker? Well, we'll probably want to go with AWS SQS. Which means we'll need to use some Lambdas, because Cloudflare Workers can't consume from AWS SQS.
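A queue worker in Go ends up looking something like the sketch below. In real code you'd use the `events.SQSEvent` type from the aws-lambda-go package and register `handle` with the Lambda runtime; here I've inlined a minimal struct with just the fields we touch, so the example is self-contained:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// sqsEvent mirrors only the fields of an SQS Lambda event that we use;
// the real aws-lambda-go events.SQSEvent carries many more.
type sqsEvent struct {
	Records []struct {
		MessageId string `json:"messageId"`
		Body      string `json:"body"`
	} `json:"Records"`
}

// handle is the worker logic the Lambda runtime would invoke per batch.
// It returns the message bodies it processed.
func handle(raw []byte) ([]string, error) {
	var ev sqsEvent
	if err := json.Unmarshal(raw, &ev); err != nil {
		return nil, err
	}
	bodies := make([]string, 0, len(ev.Records))
	for _, r := range ev.Records {
		bodies = append(bodies, r.Body)
	}
	return bodies, nil
}

func main() {
	sample := []byte(`{"Records":[{"messageId":"1","body":"job-a"},{"messageId":"2","body":"job-b"}]}`)
	bodies, err := handle(sample)
	if err != nil {
		panic(err)
	}
	fmt.Println(bodies) // prints [job-a job-b]
}
```

The event shape itself is AWS's, which is exactly the lock-in point: this handler is meaningless outside Lambda.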

You can start to get pigeonholed like this really quickly.

I don't think it's really a bad thing for 90% of business use cases. But it is something you should consider.

Running Locally

Another quirk I ran into with the Serverless Framework while trying to avoid vendor lock-in was dealing with running code locally.

The Serverless Offline plugin works great with Python and Node.js.

But, it doesn’t work with Go. :(

You can use the AWS SAM framework to develop Go locally. But then you are committing to a cloud provider again.
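One vendor-neutral workaround I've leaned on (my own hack, not part of any framework) is keeping the business logic in plain functions and wrapping them in a stock `net/http` server for local runs. The `greet` handler below is a made-up example; in real local dev you'd call `http.ListenAndServe(":3000", newMux())` instead of the throwaway test server used here to show the round trip:

```go
package main

import (
	"encoding/json"
	"fmt"
	"io"
	"net/http"
	"net/http/httptest"
)

type response struct {
	Message string `json:"message"`
}

// greet holds the business logic; the same function can be called
// from a Lambda handler in production and from net/http locally.
func greet(name string) response {
	return response{Message: "hello, " + name}
}

// newMux wires the logic into ordinary HTTP routes for local dev.
func newMux() *http.ServeMux {
	mux := http.NewServeMux()
	mux.HandleFunc("/greet", func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Content-Type", "application/json")
		json.NewEncoder(w).Encode(greet(r.URL.Query().Get("name")))
	})
	return mux
}

func main() {
	// Spin up a throwaway server and hit it once, just to demo the flow.
	srv := httptest.NewServer(newMux())
	defer srv.Close()
	resp, err := http.Get(srv.URL + "/greet?name=world")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Print(string(body)) // prints {"message":"hello, world"}
}
```

It's not a real Lambda emulator, but it keeps the fast edit-run loop without tying local dev to SAM.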


Environment Variables

Environment variables are another interesting problem to solve with the current state of serverless.

I'm currently using variables from the AWS SSM Parameter Store. And then I've built a script that detects whether I'm in local or live mode. Depending on the environment, it loads the correct variable.

But once again, you can't really get away from AWS as a vendor here. Cloudflare (for example) doesn't really have built-in env variable support unless you want to commit your env variables to your repo. And, to me, that defeats the point.


For continuous integration, I've been using a simple Makefile for Go and just npm scripts for Node.js. These are run after a webhook from GitHub is fired.
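The Makefile is nothing fancy; the sketch below shows the general shape (target names, paths, and the stage name are my own, not the actual file):

```make
# Build the Go binary for the Lambda Linux runtime, run tests,
# then deploy with the Serverless Framework CLI.
build:
	GOOS=linux GOARCH=amd64 go build -o bin/handler .

test: build
	go test ./...

deploy: test
	npx serverless deploy --stage prod
```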


Database Migrations & ORM

Here's a quick win for those looking to avoid vendor lock-in.

I was really impressed with the AWS Appsync product.

But, I was able to find Prisma instead.

It replaces the need for a traditional ORM and lets you quickly build highly performant, structured data models with code.

I’m using the Go version with RDS & DynamoDB. It’s great.
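For a flavor of what "data models with code" looks like, here's an illustrative Prisma datamodel; the types and fields are made up, not from my actual schema:

```graphql
type User {
  id: ID! @id
  email: String! @unique
  posts: [Post!]!
}

type Post {
  id: ID! @id
  title: String!
  author: User!
}
```

Because the model lives in your repo rather than in a cloud console, swapping the underlying database (or vendor) is far less painful.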