Showing posts for tag "reverse-proxies"

A Simpler Load-Balancing Setup With HAProxy

Feb 5, 2021, 3:32 PM

...where by "simpler" I mean relative to the setup I detailed six years ago.

For a good long time now, I've had a reverse-proxy + load balancer setup that uses nginx for the main front end and HAProxy as an intermediary to do the actual load balancing. The reason I set it up this way was that I was constrained by two limitations:

  • nginx's built-in load balancing didn't do the sticky sessions I needed, and without them server-side-state frameworks like XPages break
  • HAProxy didn't do HTTPS

In the intervening half-decade, things have improved. I haven't checked on nginx's load balancing, but HAProxy sprouted splendid HTTPS capabilities. So, for the new servers I've been setting up, I decided to take a swing at it with HAProxy alone.

Disclaimer

Before I go any further, I should point out that this is only a viable solution because I would otherwise be using nginx solely as the HTTPS front end. In other cases, I've used it to host files directly, run CGI scripts, etc., and it'd be best to keep it around if you want to do similar things.

Basic SSL Config

The "global" section of haproxy.cfg contains settings for your TLS ciphers and related parameters, and Mozilla's config generator is your friend here. Today, I ended up with this (slightly tweaked to generate dhparams locally):

global
	#
	# SNIP: a bunch of default stuff
	#
	crt-base /etc/ssl/private

	# See: https://ssl-config.mozilla.org/#server=haproxy&version=2.0.13&config=intermediate&openssl=1.1.1d&guideline=5.6
	ssl-default-bind-ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384
	ssl-default-bind-ciphersuites TLS_AES_128_GCM_SHA256:TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256
	ssl-default-bind-options prefer-client-ciphers no-sslv3 no-tlsv10 no-tlsv11 no-tls-tickets

	ssl-default-server-ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384
	ssl-default-server-ciphersuites TLS_AES_128_GCM_SHA256:TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256
	ssl-default-server-options no-sslv3 no-tlsv10 no-tlsv11 no-tls-tickets
	
	# sudo openssl dhparam -out /etc/haproxy/dhparams.pem 2048
	ssl-dh-param-file /etc/haproxy/dhparams.pem

Though arcane, that's fairly standard stuff for TLS configuration.

Frontend Config

Years ago, my original config put everything in a listen block, but it's properly split up into frontend and backend now. The frontend block is pretty simple:

frontend frontend1
	bind *:80
	bind *:443 ssl crt star.clientdomain1.com.pem crt star.clientdomain2.com.pem alpn h2,http/1.1
	http-request redirect scheme https unless { ssl_fc }
	default_backend domino

HAProxy's configuration file is almost painfully terse, but at least this part ends up readable enough. I bind to ports 80 and 443 on all IP addresses, and then provide multiple certificate files to be picked based on SNI. Conveniently, HAProxy does a nice job of just picking the right one, and you don't have to explicitly match them up with incoming host names.

One oddity here is the particular format of those ".pem" files. HAProxy expects the actual certificate, its chain, and the private key to all be concatenated together in one file. This is as opposed to nginx, where the cert-plus-chain and the private key are two separate files, or Apache's traditional split into distinct cert, chain, and key files. It's also very explicitly not a PKCS#12 file, which is the more-common way to package a key in with the certs: there's no encryption and no password assigned for this.
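If your certificate authority hands you those pieces separately, assembling the combined file is just concatenation. A minimal sketch, with hypothetical file names, following the cert-then-chain-then-key order described above:

cat star.clientdomain1.com.crt chain.crt star.clientdomain1.com.key > /etc/ssl/private/star.clientdomain1.com.pem
chmod 600 /etc/ssl/private/star.clientdomain1.com.pem

Since the private key sits in there unencrypted, it's worth making sure the file is readable only by root, which is what the chmod is for.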

Additionally, I just put the base names for the files there because they're in /etc/ssl/private, as configured by crt-base in the global section.

Back to the rest of the configuration: the http-request line does the work of auto-redirecting from HTTP to HTTPS. Again, very terse, and it uses the ssl_fc sample fetch to check whether the incoming connection is already SSL.

Finally, default_backend domino ties in to the next section.

Backend Config

The backend configuration is the meat of it:

backend domino
	balance roundrobin
	cookie backend insert httponly secure
	option httpchk HEAD /names.nsf?login HTTP/1.0
	http-request add-header $WSRA %[src]
	http-request add-header $WSRH %[src]
	http-request add-header X-ConnectorHeaders-Secret 12345
	
	# "cookie d*" = set and use a cookie to tie to the backend
	# "check" = I don't know, but I assume it checks something
	# "ssl" = Connect to the backend with SSL
	# "verify none" = Don't bother with SSL verification checks
	# "sni ssl_fc_sni" = Use the incoming SNI hint when connecting to the backend
	server domino-1 domino-1.client.com:443 cookie d1 check ssl verify none sni ssl_fc_sni
	server domino-2 domino-2.client.com:443 cookie d2 check ssl verify none sni ssl_fc_sni

The balance roundrobin and cookie ... lines tell HAProxy to cycle through the backends for incoming connections, but to stick a client to a specific backend server based on the value of the backend cookie, if present, and to set that cookie in the response otherwise. That covers our sticky sessions.
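In practice, that means the first response a client receives carries a marker cookie naming its assigned backend - roughly this shape, though the exact attributes depend on your HAProxy version and options:

Set-Cookie: backend=d1; HttpOnly; Secure

On subsequent requests, HAProxy sees that cookie and sends the client back to domino-1 instead of continuing the round-robin rotation.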

The next line, option httpchk HEAD /names.nsf?login HTTP/1.0, tells HAProxy how to check the health of the servers. This should be something very inexpensive that's also a reliable way to tell if the server is working. I went with asking for headers for the default login page - something all Domino servers (with session auth) will have and which doesn't risk running application code like / might.
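As a sanity check, you can make the same sort of request by hand with curl against one of the backends from the config above - the -I makes it a HEAD request, and the -k skips certificate verification, just like the proxy does:

curl -sk -I 'https://domino-1.client.com/names.nsf?login'

A healthy server should come back with a 2xx or 3xx status, which is what HAProxy's checks consider passing by default.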

The next three lines are my beloved Domino connector headers, plus the shared secret from my locking-down DSAPI filter (I mean, it's not the actual shared secret, but that's where it goes). Note that I don't need to include $WSSN to denote the requested Host value, since HAProxy passes that along by default.

Finally, there are the actual backend configuration lines. Because the load balancer is communicating with Domino via SSL, I tell it to do so and to not bother validating the certificates. Additionally, I tell it to pass along the incoming SNI hint to Domino, which, since Domino finally supports SNI, routes the request to the correct web site on the Domino side.

If you were instead to connect to the Domino servers via plain HTTP, you could snip the ssl, verify none, and sni bits off those server lines and add http-request add-header $WSIS True above, so Domino still knows the original connection was secure.
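As a sketch of that variant - same hypothetical host names as above, and assuming Domino's HTTP task is listening on port 80:

backend domino
	balance roundrobin
	cookie backend insert httponly secure
	option httpchk HEAD /names.nsf?login HTTP/1.0
	# Tell Domino that the original client connection was HTTPS
	http-request add-header $WSIS True
	http-request add-header $WSRA %[src]
	http-request add-header $WSRH %[src]
	http-request add-header X-ConnectorHeaders-Secret 12345
	server domino-1 domino-1.client.com:80 cookie d1 check
	server domino-2 domino-2.client.com:80 cookie d2 check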

Conclusion

I haven't actually put this into production yet, so the details may change, but I'm thoroughly pleased that I can simplify the configuration a good deal. I've found learning how to configure HAProxy a little less pleasant than learning nginx, but part of that is just picking up the terminology and how to navigate the documentation - it's all there; it's just a little arcane.

A Partially-Successful Venture Into Improving Reverse Proxies With Domino

Jan 30, 2021, 4:00 PM

I've long been an advocate for Domino's HTTPEnableConnectorHeaders notes.ini setting. That's the one that lets you pass some WebSphere-derived headers like $WSRA and (particularly dangerously) $WSRU to Domino and have them fully override the normal values for the incoming host, user, and more.

I'm still a big fan of it, but it has always come with the irritating absolute requirement that Domino not be publicly accessible, lest any schmoe come along and pretend to be any user on your server. That's fine and all, but sometimes it's useful to have Domino be accessible without the proxy, such as for troubleshooting. What I really want is selective enabling of this feature based on source IP. So I set out this weekend to try to implement that.

The Core Idea

The mechanism for doing the lowest-level manipulation you can with HTTP on Domino is to use DSAPI, the specialized API for the web server task specifically. Unfortunately, working with DSAPI means writing a native library with C ABI entrypoints for a handful of known functions. I don't enjoy writing C (though I respect it thoroughly), but I've done it before, so I set out making a project plan.

My goal was to support the X-Forwarded-* headers that are common among reverse proxies, allowing administrators to use this common idiom instead of the $WS* headers, with the side effect of removing the "act as this user" part from the equation. I figured I'd make up a notes.ini property that took a comma-separated set of IP patterns, look for requests coming in from those hosts, and promote the X-Forwarded-* headers in them to their equivalent real properties.
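For reference, that idiom amounts to the proxy tacking headers like these onto each forwarded request - expressed here in HAProxy terms purely as an illustration:

http-request set-header X-Forwarded-For %[src]
http-request set-header X-Forwarded-Proto https if { ssl_fc }
http-request set-header X-Forwarded-Host %[req.hdr(Host)]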

Initial Side Trip Into Rust

Since C in the hands of a dilettante such as myself has a tendency to result in crashed servers and memory bugs, I figured this might be a good opportunity to learn Rust, the programming language that sprang out of Mozilla-the-organization, focuses heavily on safety, and has some mechanisms for low-level C interoperability.

I figured this would let me learn the language, have a safer program, and avoid having to bring in third-party C libraries for e.g. regex matching.

I made some progress on this, getting to the point where I was able to actually get the filter compiled and loaded properly. However, I ended up throwing up my hands and shelving it for now when I got lost in the weeds of how to call function pointers contained in pointers to C structs. I'm sure it's possible, but I figured I shouldn't keep spending all my time working out fiddly details before I even knew if what I wanted to do was possible.

Back to C

So, hat in hand, I returned to C. Fortunately, I realized that fnmatch(3) would suit my pattern-matching needs just fine, so I didn't actually need to worry about regexes after all.

Jaw set and brimming with confidence, I set out writing the filter. I cast my mind back to my old AP Computer Science days to remember how to do a linked list to allow for an arbitrary number of patterns, got my filter registered, and then set about intercepting requests.

However... it looks like I can't actually do what I want. The core trouble is this: while I can intercept the request fairly early and add or override headers, it seems that the server/CGI variables - like REMOTE_HOST - are read-only. So, while I could certainly identify the remote host as a legal proxy server and then read the X-Forwarded-For header, I can't do anything with that information. Curses.

My next thought was that I could stick with the $WS* headers on the proxy side, but then use a DSAPI filter to remove them when they're being sent from an unauthorized host. Unfortunately, I was stymied there too: the acceptance of those headers happens before any of the DSAPI events fire. So, by the time I can see the headers, I don't have any record of the true originating host - only the one sent in via the $WSRA header.

The Next-Best Route

Since I can't undo the $WS* processing, I'd have to find a way to identify the remote proxy other than its address. I decided that, while not ideal, I could at least do this with a shared secret. So that's what I've settled on for now: you can specify a value in the HTTPConnectorHeadersSecret notes.ini property and then the filter will verify that $WS*-containing requests also include the header X-ConnectorHeaders-Secret with that value. When it finds a request with a connector header but without a matching secret, it unceremoniously drops the request into processing oblivion, resulting in a 404.
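Concretely, the arrangement looks something like this, with "12345" standing in for a real secret. On the Domino server's notes.ini:

HTTPEnableConnectorHeaders=1
HTTPConnectorHeadersSecret=12345

And then the proxy adds the matching header to every request it forwards - for example, in an HAProxy backend:

http-request add-header X-ConnectorHeaders-Secret 12345

Any request that arrives bearing $WS* headers but lacking that secret gets the aforementioned 404 treatment.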

I'm not as happy with this as I would have been with my original plan, but I figure it should work. Or, at least, I figure it will work well enough that it's worth mulling over for a while to decide if I want to deploy it anywhere. In the meantime, it's up on GitHub for anyone curious.

Additionally, kindly go vote for the X-Forwarded-For ideas on the Domino Ideas portal, since this should really be built in.

Seriously, Though: Reverse Proxies

Sep 16, 2015, 5:28 PM

So, Domino administrators: what are your feelings about SSL lately? Do they include, perhaps, stress? It's "oh crap, my servers are broken" season again, and this time the culprit is a change in Apple's operating systems. Fortunately, in this case, the problem isn't as severe as an outright security vulnerability like POODLE and, better still, there is a definitive statement from IBM indicating that they are going to bring their security stack up to snuff almost in time.

But this isn't the first time we've been in this position, nor will it be the last. The focus on cracking and hardening TLS, particularly in the context of HTTPS, is not going to let up any time soon, nor will the basic movement towards encryption everywhere. So I would like to reiterate my stance: Domino is not suitable for direct external exposure via HTTP. The other protocols are problematic as well, but HTTP is the big one and, fortunately, the easiest to solve.

Whenever I've made this exhortation, part of the response I get is that administrators "should not" have to take this step. That Domino should be fully modern in its security stack or, at least, that IBM should handle this problem for them in one way or another. Or that one of Domino's traditional strengths is its all-in-one nature, with a single easy installation that takes care of everything, and that installing a separate web server is a complicated step that administrators shouldn't have to take.

Well... tough.

The promise of an integrated server system that takes care of everything is a great one, but it's always been extremely difficult to achieve, even for a platform firing on all cylinders. No matter the ideal, Domino does not perform at this level, and I still maintain that it should not need to. Outside of Domino and PHP, the application server is not generally expected to also be a full-fledged front-end web server, for exactly this sort of reason. Domino's job with respect to the web is to generate and serve up HTML, JSON, and other content; it's something else's job to make sure that content leaves your company's network securely.

If you still maintain that this should be Domino's job due to how much you pay for licensing, then that's a conversation between you and your IBM sales rep. I, though, am entirely fine with a paid-for app server not covering this ground, and that's in large part because the products that do perform this task are superb and often open-source.

These other products – nginx, Apache, HAProxy, and so forth – are made for this job. This flurry of SSL/TLS features and bugs you've been hearing about? These are all implemented or fixed in dedicated products, sometimes years before they come to your attention. And when new problems crop up, they're fixed and talked about immediately across the web, with guides for what to do appearing as soon as the problem arises.

Is it easier to continue using Domino HTTP directly than to set up a reverse proxy? Sure! Well, sort of, when there's not an active disaster to mitigate. And, much like how keeping an XPages (or other web) app up to spec and working on all target devices is more complicated than maintaining a legacy Notes app, sometimes that's just how the world goes. Deciding that it's complexity you don't want, or that your company's policy doesn't allow for an additional server, is not a tenable stance. Unless you're Apple, your company's policy will not bend the arc of the industry.

So, I implore you, at least give this kind of setup a real look and a trial run. I think you'll find that the basic setup is not dramatically more complicated than just Domino alone and will also open the door to new non-security features like improving page load speeds on the fly. If you want, with eyes open, to maintain an externally-facing Domino HTTP stack, that's fine, but I'll see you when the next security apocalypse comes around.