Recently, I was working on a side project that relies on webhooks from third-party providers, like GitHub, being picked up by my service through a simple HTTP endpoint. Testing this configuration proved to be harder than I initially thought. Of course, I could have just emulated the expected payloads, but being able to perform real actions in the other service and watch the results flow in would be even better.
This is the point where most people pull out ngrok: it's a great tool to create a real endpoint and tunnel traffic back to your local machine, and it just works.
But, as with most things, I wanted to do it a bit differently. I'm always running a few compute instances for various tasks, be it the newsletter backend or other purposes. With that already in place, I'd only need to point a domain (or a set of domains, using a wildcard record) to a particular instance and then somehow forward the incoming traffic to my machine.
Let's start with how I tunnel application traffic, then continue with some additional configuration I added along the way to make the setup better and more secure.
🧲 Forwarding Local Applications using SSH
As I was already using SSH to connect to my instances, I thought there should be a way to leverage this connection to expose the application running on my machine to anyone on the instance. Luckily, this can be done easily by running the following:
ssh -N -R "8080:localhost:3000" user@example.com
The -N flag tells SSH not to execute a remote command (we're using this session for tunneling only), while -R specifies that all connections to port 8080 on the remote instance should be forwarded to our local machine, more specifically to port 3000, which our application is listening on.
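By default, a dropped connection silently kills the tunnel. One way to make this more resilient is a host alias in ~/.ssh/config; this is a sketch assuming OpenSSH, and the alias name, hostname, and user are placeholders matching the command above:

```
# ~/.ssh/config — hypothetical alias for the tunnel instance
Host tunnel
    HostName example.com
    User user
    RemoteForward 8080 localhost:3000
    ExitOnForwardFailure yes      # exit instead of silently losing the forward
    ServerAliveInterval 30        # probe the connection every 30 seconds...
    ServerAliveCountMax 3         # ...and give up after three missed replies
```

With that in place, `ssh -N tunnel` opens the same tunnel, and a supervisor (or a simple retry loop) can restart it whenever it exits.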
To listen on privileged ports, such as 80 (HTTP) and 443 (HTTPS), you'd have to log in as root. That's one of the reasons I decided to add another layer to handle incoming connections, terminate TLS, and act as a reverse proxy that sends traffic to the tunnel port and, from there, to my machine.
🔑 Terminating TLS using Caddy
Caddy is a powerful web server written in Go and maintained by a very active community, and one of its core principles is encrypting traffic with TLS. Built on tooling that automates certificate provisioning, it can serve HTTPS for any domain you point at it, which was perfect for my use case.
Especially when serving multiple applications from a single host, you need a layer that decides which application should receive and process each request, and that's where Caddy comes in. Since I wanted to move TLS handling out of my existing services anyway, this was a great moment to switch to Caddy for receiving traffic and use its reverse-proxy feature to forward it.
My Caddyfile looks similar to the following:
# We'll forward all incoming HTTPS traffic to port 8080,
# which SSH is listening on and tunneling back
webhooks.mydomain {
    reverse_proxy localhost:8080
}
When Caddy starts up, it provisions and retrieves the necessary certificates for your sites; then it's ready to receive traffic.
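Since a wildcard record can point a whole set of domains at the instance, the same pattern extends to multiple sites. A hypothetical sketch (the second subdomain and backend port are made up for illustration):

```
webhooks.mydomain {
    reverse_proxy localhost:8080
}

# Hypothetical second service on another tunnel (or local) port
api.mydomain {
    reverse_proxy localhost:8081
}
```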
When using Cloudflare for DDoS protection and other services, make sure to set your TLS encryption mode to "Full" or "Full (strict)". This matters because the "Flexible" option makes Cloudflare send all traffic to your origin over plain HTTP; Caddy will redirect those requests to HTTPS, which results in a redirect loop, sending your request into the void. During initial setup, you might temporarily have to use "Flexible" though, so it can involve some experimentation.
And with this minimal solution, I'm able to receive webhooks, and all kinds of requests in general, from any source connected to the internet and forward them to my machine. All traffic is fully encrypted, yet my applications never have to deal with certificates.
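For a quick end-to-end check without waiting on a real provider, you could send a signed test delivery yourself, similar to how GitHub signs webhook payloads with the X-Hub-Signature-256 header. A hedged sketch, where the endpoint, secret, and payload are all made-up examples:

```shell
#!/bin/sh
# Hypothetical test delivery: sign a payload the way GitHub does
# (HMAC-SHA256 over the raw body, hex-encoded) and POST it.
SECRET="my-webhook-secret"          # assumption: the secret configured in GitHub
PAYLOAD='{"action":"opened"}'

# openssl prints "SHA2-256(stdin)= <hex>"; keep only the hex digest
SIG=$(printf '%s' "$PAYLOAD" | openssl dgst -sha256 -hmac "$SECRET" | awk '{print $2}')

curl -X POST "https://webhooks.mydomain" \
  -H "Content-Type: application/json" \
  -H "X-Hub-Signature-256: sha256=$SIG" \
  -d "$PAYLOAD"
```

If the request shows up in your local application's logs, the whole chain — DNS, Caddy, the SSH tunnel — is working.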
Sure, if I didn't have spare compute instances running, or had less time for such experiments, I'd probably have reached for ngrok or something similar as well, but it happened to be this way, so that's the result!
Thanks for reading! If you've got any questions, suggestions, or feedback in general, don't hesitate to reach out on Twitter or by mail.