There’s nothing like building something to put your creative problem-solving skills to work. I’ve had what I thought was a simple question about serverless integration with on-premises applications. It isn’t a stretch to imagine a modern app developer wanting to create serverless functions in AWS Lambda or Azure Functions that reach back to an application in the private data center. But how do we manage network security between a serverless platform and server-centric applications?
We recently connected the CTO Advisor Hybrid Infrastructure (CTOAHI) to Oracle Cloud Infrastructure for our upcoming study of the various VMware cloud services. We now have a 10Gbps port that can point at virtually any of the major cloud providers. Check out my latest lightboard video describing the project.
The activity reminded me of a question I’ve had for some time: how do you protect a service living in the data center from a rogue AWS Lambda function? Lambda is serverless, meaning there isn’t a headend IP address we can place in our firewall to allow or deny specific applications running in Lambda functions.
Think of the CTOAHI needing to integrate a set of Lambda functions to extend the use of a security video application. The application collects and stores video data in a SQL database. To preserve the chain of custody, the frontend application logs every access to the security footage.
We want to create a set of Lambda functions that query the application for a specific type of video metadata. Once the footage is identified, another Lambda function copies the data to an S3 object store. Once the data lands in the object store, a third Lambda function kicks off to re-encode the video for viewing on a mobile device. Finally, one last function copies the footage back to the security application.
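The four steps above can be sketched in Python. This is purely illustrative: the function names, event shape, and bucket name are my own inventions, and in a real deployment each step would be a separate Lambda function chained by S3 events or Step Functions rather than direct calls.

```python
def query_metadata(event):
    # Step 1: ask the security app for clips matching the requested type.
    # In practice this would be a REST call to the on-prem application.
    return [c["id"] for c in event["catalog"] if c["type"] == event["clip_type"]]

def copy_to_s3(clip_id):
    # Step 2: copy the footage into an S3 object store.
    # In AWS this would be a boto3 put_object call; stubbed here.
    return f"s3://security-footage/{clip_id}"

def reencode_for_mobile(s3_key):
    # Step 3: triggered when the object lands in S3; produce a mobile rendition.
    return s3_key + ".mobile.mp4"

def copy_back(encoded_key):
    # Step 4: return the mobile rendition to the security application.
    return {"status": "stored", "object": encoded_key}

def handler(event, context=None):
    # Drive all four steps for each matching clip.
    results = []
    for clip_id in query_metadata(event):
        key = copy_to_s3(clip_id)
        results.append(copy_back(reencode_for_mobile(key)))
    return results
```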
How does our network security team distinguish one Lambda function from any other function in our AWS VPC? From an IP address perspective, one function looks like any other.
Enter Service Mesh
I’ve long understood the idea of a service mesh. You have two or more applications that need to communicate. Application One makes a call to an Istio-based service mesh. Istio looks up Application Two and checks the security policy to ensure the two applications are allowed to communicate. If they are, Istio establishes the connection, and the two applications communicate over the network. HashiCorp presented this and other scenarios for its Consul solution during its Cloud Field Day 8 presentation.
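The key point is that the mesh authorizes connections by service identity rather than IP address. Here is a toy model of that allow/deny lookup (Consul calls these rules "intentions"); the policy table and service names are invented for illustration and are not Consul's actual API.

```python
# Invented policy table: (source service, destination service) -> decision.
INTENTIONS = {
    ("app-one", "app-two"): "allow",
    ("lambda-worker", "security-video-app"): "allow",
}

def connection_allowed(source, destination, default="deny"):
    # The mesh identifies workloads by service name, not IP address,
    # then consults the policy table; unknown pairs fall back to deny,
    # which is how a rogue function gets stopped.
    return INTENTIONS.get((source, destination), default) == "allow"
```

Because the default is deny, a function that never registered an identity with the mesh cannot reach the protected service, regardless of what IP it happens to use.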
But let’s get back to our Lambda example. A function makes a call to the security video app. Let’s pause here: the legacy application doesn’t have an HTTPS-based interface. How do you make a REST call from the Lambda function?
Consul can provide this necessary capability, but we can also bring in another Cloud Field Day 6 presenter, Solo.io. Solo.io’s Gloo, built on Envoy, is an application gateway that translates between legacy apps and service meshes such as Consul. Gloo makes the HTTPS calls to the legacy application and registers with Consul as the service for accessing the security application.
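To make that registration concrete, here is a sketch of a Consul service definition that presents the Gloo gateway to the mesh as the front door for the security application. The service name, port, tag, and health-check URL are placeholders, not the actual CTOAHI configuration.

```json
{
  "service": {
    "name": "security-video-app",
    "port": 8443,
    "tags": ["gloo-gateway"],
    "check": {
      "http": "https://gloo.internal.example:8443/healthz",
      "interval": "10s"
    }
  }
}
```

With this in place, mesh clients resolve "security-video-app" through Consul and land on Gloo, which handles the HTTPS-to-legacy translation behind the scenes.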
The end-to-end solution would place a Consul instance at the edge of our AWS VPC and another at the network edge in our data center. Gloo would sit inside the walls of the data center, fronting the security application.
What’s the catch?
Nothing is ever perfect. I’m not the first to ask about Lambda and Consul integration. Here’s a GitHub thread that goes into detail about the considerations: https://github.com/hashicorp/consul/issues/6540
Overall, I’m reasonably confident that a service mesh is the way to solve this challenge of centrally enforcing security between an on-premises resource and a headless or serverless workload such as Lambda.
I’d love to hear any real-world feedback on how customers are addressing this challenge. Feel free to reach out to us via the contact us page.