Introduction
AWS Systems Manager Session Manager allows you to establish a shell session to your EC2 instances and Fargate containers even when these resources don't have a public IP address. With EC2 instance port forwarding, you can also redirect any port on your remote instance to a local port on your client, allowing you to interact with applications running on your private EC2 instances. A common use case is accessing a web application running on your instance from your browser.
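For example, a port forwarding session can be started from the AWS CLI like this (the instance ID and ports below are placeholders):
# Forward port 8080 on the remote instance to port 8080 on the client
aws ssm start-session \
  --target i-0123456789abcdef0 \
  --document-name AWS-StartPortForwardingSession \
  --parameters '{"portNumber":["8080"],"localPortNumber":["8080"]}'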
However, Session Manager sessions are limited to a single resource — one EC2 instance or one Fargate container. So, it is not possible to use Session Manager alone to create an ingress point allowing access to all resources within your private VPC.
In this article, I will show how you can combine Session Manager with OpenVPN to allow a secure network path from your client to all resources within your private VPC.
Design
The diagram below illustrates the design of our solution.

We will launch an EC2 instance in a private subnet which will act as our OpenVPN server. We will then establish a Session Manager port forwarding session between our client and this EC2 instance. Then, using an OpenVPN client, we will tunnel to the OpenVPN server over the Session Manager session. With our VPN connection in place, we can then access all private applications in our VPC.
The configuration of the OpenVPN server will be done with the script at github.com/Angristan/OpenVPN-install. Because Session Manager does not support UDP, our OpenVPN server will be configured in TCP mode.
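For reference, TCP mode comes down to only a few lines in the generated server configuration. A sketch of the relevant settings (the Angristan script generates the full file; the values here are illustrative):
# Excerpt of /etc/openvpn/server.conf
port 1194
proto tcp
dev tun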
Prerequisites
To use Session Manager from the AWS CLI you need to install the Session Manager Plugin.
Install the OpenVPN client.
Install the Terraform CLI.
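You can verify each prerequisite from a terminal. Running the plugin with no arguments simply confirms the install; note that the openvpn binary may not be on your PATH if you installed a GUI client:
session-manager-plugin
openvpn --version
terraform -version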
Terraform
The solution can be deployed via Terraform:
git clone https://gitlab.com/aw5academy/terraform/openvpn-ssm.git
cd openvpn-ssm
terraform init
terraform apply
This Terraform code will provision the required VPC and Session Manager resources. The OpenVPN server is not deployed at this stage; that comes later.

Make a note of the output alb-dns. This is the DNS record for a sample application load balancer deployed to the private subnets. If you try to access it, you will not be able to connect.
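You can confirm this from the command line; a request from outside the VPC should time out (using the alb-dns output from the apply):
curl -m 5 "http://$(terraform output -raw alb-dns)"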

This is expected because, as we can see from the load balancer settings, this is an internal load balancer, meaning it can only be accessed from resources within the VPC.

Session Manager Preferences
The Session Manager preferences can't be configured via Terraform, so we must set them manually with the following steps (a sketch of the resulting settings follows the list):
- Login to the AWS console;
- Open the Systems Manager service;
- Click on ‘Session Manager’ under ‘Node Management’;
- Click on the ‘Preferences’ tab;
- Click ‘Edit’;
- Enable KMS Encryption and point to the alias/session-manager key;
- Enable session logging to the S3 bucket ssm-session-logs... with encryption enabled;
- Enable session logging to the CloudWatch log group /aws/ssm/session-logs with encryption enabled;
- Save the changes.
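Behind the scenes, these preferences are stored in an SSM document named SSM-SessionManagerRunShell. A sketch of roughly what the steps above produce (key names follow AWS's documented schema; values mirror the settings above, and the bucket name is truncated here as in this guide):
{
  "schemaVersion": "1.0",
  "description": "Document to hold regional settings for Session Manager",
  "sessionType": "Standard_Stream",
  "inputs": {
    "kmsKeyId": "alias/session-manager",
    "s3BucketName": "ssm-session-logs...",
    "s3EncryptionEnabled": true,
    "cloudWatchLogGroupName": "/aws/ssm/session-logs",
    "cloudWatchEncryptionEnabled": true
  }
}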
Start VPN Script
With our infrastructure deployed via Terraform we can now launch our OpenVPN server. The script provided at start-vpn.sh can be used to do this. It performs the following steps (sketched in code after the list):
- Obtains the launch template for the OpenVPN instance;
- Starts the EC2 instance;
- Waits for the instance to be ready for Session Manager sessions;
- Waits for the instance to complete its user_data, which is where the OpenVPN server is installed and configured;
- Downloads the OpenVPN client config file generated by the server;
- Starts a port forwarding Session Manager session;
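A condensed sketch of that flow (the launch template name and config bucket are assumptions; the real logic lives in start-vpn.sh):
# Launch the OpenVPN instance from its launch template (name assumed)
INSTANCE_ID=$(aws ec2 run-instances \
  --launch-template LaunchTemplateName=openvpn-server \
  --query 'Instances[0].InstanceId' --output text)
# Wait until the instance registers with Session Manager
until aws ssm describe-instance-information \
  --filters "Key=InstanceIds,Values=$INSTANCE_ID" \
  --query 'InstanceInformationList[0].PingStatus' --output text 2>/dev/null | grep -q Online; do
  sleep 10
done
# Download the client config generated during user_data (bucket name assumed)
aws s3 cp "s3://<your-config-bucket>/ssm.ovpn" .
# Forward local TCP port 1194 to the OpenVPN server
aws ssm start-session --target "$INSTANCE_ID" \
  --document-name AWS-StartPortForwardingSession \
  --parameters '{"portNumber":["1194"],"localPortNumber":["1194"]}'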
Let’s try this script now by running:
bash start-vpn.sh

Our Session Manager session is up and awaiting connections.
OpenVPN Client
Now we need to configure our OpenVPN client. In the previous step, an ssm.ovpn file was downloaded from S3. Make a note of the location of this file. Next, launch the OpenVPN client and select to import a profile by file.

Navigate to the location of the ssm.ovpn file.

Now click on Connect.

Our tunnel is now in place.
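Alternatively, if you prefer a command-line client over the GUI, the same profile can be used directly with the openvpn binary (root is needed to create the tun device):
sudo openvpn --config ssm.ovpn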
If the VPN fails to connect on Windows Subsystem for Linux try restarting WSL by running the following from a command prompt:
wsl --shutdown
Then rerun bash start-vpn.sh and try to connect from your OpenVPN client again.
Testing
Now we can test our solution by trying to access the load balancer we failed to connect to earlier. Try it again in your browser:

Success!
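You can also verify from the command line; the request that timed out earlier should now return an HTTP status code:
curl -s -o /dev/null -w "%{http_code}\n" "http://$(terraform output -raw alb-dns)"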
Important Points
This solution was created more for fun than as a realistic real-world option. Its performance has not been thoroughly tested. Indeed, we are using TCP instead of UDP because Session Manager only supports TCP. TCP is known to be sub-optimal for VPN traffic and can suffer from a phenomenon known as TCP meltdown.
However, the security of this configuration is quite strong. The communication between the client and AWS is both HTTPS and KMS encrypted. Also, no customer-managed network ingress is required, so your VPC can be entirely private.
You may find this useful in a small development team environment. But for corporate settings, consider AWS Client VPN instead.
Cleanup
To clean up the resources created in this guide, first terminate any running OpenVPN instances with:
aws ec2 terminate-instances --region us-east-1 --instance-ids $(aws ec2 describe-instances \
  --filters "Name=tag:Name,Values=openvpn-server" \
  --query "Reservations[*].Instances[*].[InstanceId]" \
  --region us-east-1 --output text | xargs)
Then, from the root of your checkout of the Terraform code run:
terraform init
terraform destroy