The same friend who posed me the cool problem with puppeteer and promises gave me a new puzzler this week. It's Christmas! This one was whether I had any experience using Apache as a reverse proxy that can handle downtime. So, concepts:
> In computer networks such as the internet, a reverse proxy is a common type of proxy server that is accessible from the public network. Large websites and content delivery networks use reverse proxies – together with other techniques – to balance the load between internal servers.
When you go to a URL (e.g. www.google.com), the address looks like a single machine that every browser connects to. But that's far too much work for a single server to... serve... so a reverse proxy is a machine that sits in front of a bank of other machines. All it does is farm the work out to the back-end nodes, so the work of many is hidden behind a single face-man. This also protects the back-end machines from direct access or attack. Additionally, if one or more of the back-end services crash, the service as a whole can keep running by failing over to the functioning servers and removing the broken ones from the pool, so the proxy won't even look at them.
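In Apache terms, the simplest version of that face-man is a couple of `ProxyPass` lines. A minimal sketch, assuming a single back-end node reachable as `service` on the proxy's network:

```apache
# mod_proxy and mod_proxy_http must be enabled (a2enmod proxy proxy_http)
<VirtualHost *:80>
    # Pass the original Host header through to the back end
    ProxyPreserveHost On

    # Forward every request to the back-end node, and rewrite
    # Location/redirect headers in responses to point back at the proxy
    ProxyPass        "/" "http://service:80/"
    ProxyPassReverse "/" "http://service:80/"
</VirtualHost>
```

No load balancing or failover yet — this just hides one machine behind another. The pool comes next.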
So the problem statement was:
"that if the first route gives a 500, it should failover to the second route"
To test, I went to Docker and spun up three services:
- The Proxy. ubuntu/apache2. The front man
- The Service. ubuntu/apache2+php. The processor
- The Fail-over. ubuntu/apache2. The error handler
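The three containers can be wired together with a compose file along these lines — a sketch, with service names and the mounted config path being my assumptions, not a definitive setup:

```yaml
# docker-compose.yml (hypothetical sketch)
services:
  proxy:
    image: ubuntu/apache2
    ports:
      - "8080:80"    # the only port exposed to the outside world
    volumes:
      # Assumed: the reverse-proxy config is mounted into conf-enabled
      - ./proxy.conf:/etc/apache2/conf-enabled/proxy.conf:ro

  service:
    image: ubuntu/apache2    # with PHP installed on top

  failover:
    image: ubuntu/apache2    # serves only error.html
```

Only the proxy publishes a port; `service` and `failover` are reachable solely over the internal Docker network, which is exactly the "protects the back-end machines from direct access" property described above.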
200 OK is the HTTP status code for "things are ship shape", so in normal operation the proxy just serves the content from the service.
Then. Something Breaks. We've set up a monitor that checks the service every 5 seconds, and it's just received a 502 (Bad Gateway) error.
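That 5-second monitor can be expressed with Apache's mod_proxy_hcheck, which polls balancer members independently of live traffic. A sketch, assuming the back end is named `service` on the Docker network:

```apache
# mod_proxy_hcheck must be enabled (a2enmod proxy_hcheck)
<Proxy "balancer://backend">
    # Actively poll this member with a GET every 5 seconds;
    # a failed check marks the member as unhealthy
    BalancerMember "http://service:80" hcmethod=GET hcinterval=5
</Proxy>
```

Without a health check, Apache only notices a dead member when a real request fails; with one, broken members are pulled from the pool before a user ever hits them.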
Now we have an error state, and the ProxySet failonstatus rule is triggered. So we'll use the "+H" (hot standby) branch rather than the OK branch (where a status code starts with 2, 3, or 4). On that server, the only thing we have is an error.html to alert the user that something's gone belly up with the processing server.
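Putting the pieces together, the failover behaviour lives in mod_proxy_balancer: `failonstatus` kicks a member into the error state when it returns a listed status code, and `status=+H` marks the error server as a hot standby that only receives traffic when every regular member is down. A sketch under the same assumed hostnames:

```apache
# mod_proxy_balancer and lbmethod_byrequests must be enabled
<Proxy "balancer://backend">
    # The real processing server
    BalancerMember "http://service:80"

    # +H = hot standby: only used while all regular members are in error
    BalancerMember "http://failover:80" status=+H

    # A 500 response from a member immediately puts it into the error state
    ProxySet failonstatus=500
</Proxy>

ProxyPass        "/" "balancer://backend/"
ProxyPassReverse "/" "balancer://backend/"
```

Combined with the health check above, members that recover are put back into service automatically, which is what makes the next step work.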
The health check keeps going, though, and once the service is restored, the proxy goes back to routing requests through the original setup.
I've built this Docker setup for testing and zipped it up for you to try yourself. Let me know if there's something I've missed, or if there are better ways.