
Category: Computers

Ending an Era of OpenBSD: Or a Brief History of my Firewalls

For something approaching 20 years, I’ve used OpenBSD to firewall my network from the internet and provide basic network services (DHCP, DNS, NTP, VPN, etc.). Just recently I’ve decided to retire OpenBSD and standalone computers from firewall duty in favor of something smaller, lower power, and easier to manage and upgrade.

I’ve been steadily moving towards smaller and lower-power systems for as long as I’ve been running OpenBSD-based firewalls. My first machines were nothing more than mid-tower desktops that I had upgraded away from. In 2000-2003 I made my first move towards something more specialized, when I switched from old towers to purpose-built micro-ATX pizza-box style machines, though still with standard Athlon XP CPUs and parts.

Lua String Compare Performance Testing (Nginx-Lua)

In another article I wrote about my ongoing attempt to move the firewall functionality of my server’s WordPress security plugin out of PHP and into the embedded Lua environment in Nginx. While I’m certainly nowhere near the scale where the C10K problem is a real issue for me, I still do my best to ensure that I’m doing things as efficiently as possible.

In my last post, I looked at the performance difference between doing no firewalling at all (just building the page in WordPress and serving it) and using the embedded Lua environment to do basic application firewalling tests.

In that article, I saw approximately a 425 microsecond latency impact from the Lua processing compared to just building the page. Of course, that was still roughly two orders of magnitude faster than doing the same work in PHP.

A large part of the actual processing being done is looking for various strings in the myriad of data that’s pushed along as part of each request: things like known bad user agents, key fragments used in SQL injection attacks, and the like.

Lua and Nginx both offer options for searching strings. On the Lua side, there’s the built-in string.find() (Lua 5.1 docs) and its associated functions. On the Nginx-Lua side of things there’s ngx.re.find() (lua-nginx-module docs), which calls into Nginx’s regex engine.
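
To make the comparison concrete, here’s a minimal sketch of both calls checking a request header for a bad substring; the header choice and the “sqlmap” string are illustrative assumptions on my part, not my actual rules. This runs inside any of Nginx-Lua’s *_by_lua* handlers:

-- Plain Lua: passing true as the fourth argument ("plain") makes
-- string.find do a literal substring search instead of a pattern match.
local ua = ngx.var.http_user_agent or ""
local s = string.find(ua, "sqlmap", 1, true)

-- Nginx regex engine: the "jo" options enable PCRE JIT ("j") and
-- regex caching ("o") where available.
local from = ngx.re.find(ua, "sqlmap", "jo")

Both return nil when there’s no match, so either result can feed directly into a block/allow decision.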

I’ve done a significant amount of digging trying to find performance information about both of these methods, and I haven’t been able to find any. So I sat down and did my own testing.
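
For the curious, the general shape of the harness is below. This is a sketch of the approach rather than my exact test code; the haystack, needle, and iteration count are made-up values. It’s meant to run inside a content_by_lua_block:

-- ngx.now() is cached per event-loop iteration, so ngx.update_time()
-- is needed to get a fresh timestamp before each reading.
local haystack = string.rep("some innocuous request data ", 32)
local needle = "union select"
local iterations = 100000

ngx.update_time()
local t0 = ngx.now()
for i = 1, iterations do
    string.find(haystack, needle, 1, true)   -- plain substring search
end
ngx.update_time()
local lua_time = ngx.now() - t0

ngx.update_time()
t0 = ngx.now()
for i = 1, iterations do
    ngx.re.find(haystack, needle, "jo")      -- PCRE with JIT + caching
end
ngx.update_time()
local re_time = ngx.now() - t0

ngx.say("string.find: ", lua_time, "s  ngx.re.find: ", re_time, "s")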

Nginx-Lua Module: Access Control Performance Testing

I’ve been playing with the Lua engine in Nginx for a while. My primary intent is to offload most, if not all, of my WordPress security stuff from the PHP environment to something that potentially won’t use as much in the way of resources. The first question I need to answer before I can reasonably consider doing this is what kind of overhead doing extended processing in Nginx-Lua imposes in terms of performance.

To put some perspective on this, I’ve been running the WordPress security plugin Wordfence for a while now. When I compare my production server (which has Wordfence enabled) with my development server (which doesn’t have Wordfence installed, but is otherwise running the same plugins and code base), I see on average a 10–20 ms increase in page rendering times, and nearly 20 additional database queries per page.

The overhead from Wordfence isn’t creating a performance problem per se; however, shaving even 15 ms off a 50–60 ms page render time is an appreciable improvement. Additionally, fewer resources consumed by a bad actor means more resources available for actual users.

In any event, the question here is how much performance overhead the Nginx-Lua module carries for doing some reasonable processing.
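
To give a sense of what “reasonable processing” means here, the sketch below shows the general shape of an access-phase handler in the Nginx config; the specific user agent and SQL fragment are illustrative assumptions, not my actual rule set:

# nginx.conf, inside a server or location block
access_by_lua_block {
    local ua = ngx.var.http_user_agent or ""
    local uri = ngx.var.request_uri or ""

    -- Reject an illustrative known-bad user agent (literal match).
    if string.find(ua, "sqlmap", 1, true) then
        return ngx.exit(ngx.HTTP_FORBIDDEN)
    end

    -- Reject an illustrative SQL injection fragment in the URI
    -- ("i" = case-insensitive, "j" = PCRE JIT, "o" = compile once).
    if ngx.re.find(uri, [[union\s+select]], "ijo") then
        return ngx.exit(ngx.HTTP_FORBIDDEN)
    end
}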

Hidden “Features”

There’s a trend in modern computing that I don’t understand; hiding features and interactions. Actually, it goes beyond just hiding features to making it difficult to discover or understand what features are available or what is causing things to happen. And honestly, I’m getting kind of sick of it.

Take this gem in Windows 10.

I just upgraded to the Anniversary Edition, build 1607, but this may apply to earlier builds as well.

The biggest outward change for me with AE is that I can no longer disable the lock screen with a group policy. Given that, I decided that if I can’t get rid of it, I might as well customize it a bit.

One of the options you can set on the lock screen is the image. The current choices are Microsoft’s stream of images, a picture of your own choosing, or a slide show of your own images. I had set a picture, but I thought that a slide show would be kind of interesting. After all, I have a number of my own images that I wouldn’t mind seeing there randomly.

Only there’s a big hidden catch. If you turn the slideshow on for the lock screen, your displays still turn off after N minutes, but the computer also locks and returns to the lock screen. At least that’s what it was doing to me.

Edit: There are advanced configuration options for the slideshow on a separate screen, which you get to by clicking a not-very-link-like-looking text link (this flat UI thing is really starting to be more of a pain than it’s worth, honestly). In there, there’s an option to turn off using the lock screen instead of turning off the displays. Though as long as the slideshow is being used, the computer will lock when it turns off the displays and you’ll have to re-enter your password.

Setting up OpenVPN with Certificates

I did this a couple of years ago, with certificates that had a 1 year expiry date. Then my certs expired, and I’d forgotten what to do. So I figured it out again, and this time I’m writing it down.

There are two ways to set up client auth in OpenVPN: a shared secret or TLS certificates. TLS certificates are the preferred way if you can manage them, as they make it possible to revoke access for a single device without having to change the shared secret on every other device.

To do this you need to set up a certificate authority and sign and issue your own certificates. Most OpenVPN guides tell you how to do this using OpenSSL and its associated long, cryptic commands. I like my method better.
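
However the certificates get issued, the OpenVPN server config that consumes them looks roughly the same. A minimal sketch of the relevant directives, with example file names:

# Server side: certificate-based client auth (file names are examples)
ca ca.crt                # CA certificate that signed the client certs
cert server.crt          # this server's certificate
key server.key           # this server's private key
dh dh.pem                # Diffie-Hellman parameters
crl-verify crl.pem       # optional: certificate revocation list
remote-cert-tls client   # require peers to present a client certificate

The crl-verify line is what makes the revocation story work: revoke one client’s certificate, regenerate the CRL, and that device alone loses access.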

Testing TLS Cipher Performance

As part of my investigation of TLS performance, I decided to benchmark various ciphers and hashing algorithms on my dev server. My dev machine is a Xeon E3-1220 v2 with 8GB of RAM. For these tests I set the CPU governor to performance to ensure I wasn’t seeing effects from SpeedStep throttling the CPU up or down.
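
The raw numbers come from OpenSSL’s built-in benchmark. A sketch of the sort of invocations involved, assuming the stock openssl binary and the cpupower utility; my actual runs covered a longer list of algorithms:

# Pin the CPU governor first so SpeedStep doesn't skew the numbers
sudo cpupower frequency-set -g performance

# Bulk cipher throughput via the EVP interface (uses AES-NI if present)
openssl speed -evp aes-128-cbc
openssl speed -evp aes-256-cbc

# Hashing algorithms
openssl speed sha256
openssl speed sha512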

The short of it is that I was seeing significantly higher baseline CPU load after enabling H2 on my VPS than I expected: up from 0.5% to 2-3%. AWS t2.micro instances are burstable configurations designed to operate at a baseline CPU load of 10%. Going from <1% to ~3% was pretty significant. Not a deal killer, but with no change in traffic, that increase in compute load would dramatically decrease the headroom I had to grow before I’d have to consider a higher-tier instance.

I appear to have resolved the production problem by applying a simple principle: encryption strength is proportional to computational complexity, so if there’s a lot of computational load, turning down the encryption strength may improve performance. What I didn’t do was much in the way of actual controlled testing to see whether my premise was reliable.

HTTP/2, Encryption, AES, and Load

I’ve been working slowly towards moving to HTTP/2 over the past couple of months. Why? Mostly because it’s the new shiny and it’s supported by Nginx. Partly because H2 reduces network connections through built-in multiplexing, which improves the efficiency of my server and potentially the experience of visitors loading multiple resources.

Part of HTTP/2, at least as a de facto requirement, is TLS encryption. The standard allows for unencrypted transfers, but none of the browsers that implement HTTP/2 support the unencrypted mode, so there is functionally no unencrypted mode. Given that, phase one of moving to HTTP/2 was getting TLS certificates and getting TLS up and running.

One of the major counterarguments to TLS everywhere was that it adds compute overhead. Of course, pretty much every discussion I saw on the topic had the proponents shouting down the opponents, claiming it was only a tiny percentage; hardly anything to worry about.

The reality is that the overhead of TLS can be a tiny percent, or it can be not so tiny. What it is all depends on your configuration.

Phase 2 of my plan to roll out HTTP/2 for my sites was to slowly move lower-traffic stuff over, see how it affected my server loads, and then move the heavier-traffic sites once I knew what kind of CPU loads to expect.

I switched the last major portion of my site over to HTTP/2 Tuesday night, and Wednesday afternoon I jumped into the AWS control panel to see how my CPU load was doing.

What I saw was a rather continuous downward slump in CPU credits and a noticeably higher CPU load.

A 3-4% baseline CPU load on a t2.micro AWS instance isn’t a total deal breaker. Eventually the CPU credit pool would level out with 80-100 in reserve, and I wouldn’t be in much danger of running out unless I got seriously overwhelmed with traffic. But I’d rather not be halfway in the hole if I can avoid it, especially if there’s something relatively easy to optimize, like turning down the encryption strength.

There’s a principle about encryption that many, including myself, seem to forget: encryption isn’t about creating an impenetrable barrier, it’s about delaying the adversary until the information is no longer useful, or until the work required is greater than the value of the information that can be gained.

A corollary to that is the more valuable the target, the more resources the attacker will be willing to devote to attacking it.

So back to TLS on my VPS.

When I initially set up my TLS configuration, I followed a couple of guides for strong TLS. The first cipher suites I used were from this one. I also tried the Mozilla Foundation’s SSL config generator’s modern recommendation.

Both of these guides produce cipher suites that strongly favor AES256 for the bulk cipher.

The easy assumption, the naive assumption, is that if there’s an AES256 option, you should use it; it must be the best option. Big numbers == more secure!

The thing is, AES128 was designed to be really resistant to being broken. In fact, as far as I’m aware, there is no practical attack of merit on any form of AES. At best, there have been some attacks that can reduce the complexity of breaking AES128 from 2^128 to ~2^126. However, 2^126 complexity is still unimaginably hard to break with current technology.

But again, the question still stands, how valuable is what’s being protected?

And more to the point, is having a constant 5% CPU load after enabling HTTP/2, with the bulk of the cipher suites being AES256 based, necessary for my (or your) needs?

And this was ultimately my argument against TLS everywhere: some things just aren’t sensitive enough to require much, if any, security or authorization. When that’s the case, even a trivial increase in overhead becomes a much bigger concern to balance. Going from 0.05% to 3-4% CPU load dramatically reduces the potential number of hits per second you can support.

Dropping my TLS config back to only using AES128 cipher suites put my CPU load right back where it was before I switched everything over to HTTP/2. Given that performance is relevant to me and none of the content here is really sensitive, I don’t see a compelling reason to run a stronger cipher suite right now or for the foreseeable future.
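
In Nginx terms, the change amounts to narrowing the ssl_ciphers directive to AES128 suites, something along these lines (a sketch, not a verbatim copy of my production config):

ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_prefer_server_ciphers on;
# ECDHE for forward secrecy; AES128-GCM preferred, CBC suites as fallback
ssl_ciphers ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES128-SHA;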

And it’s not like I’m in poor company using AES128 either: Google uses AES128, the Mozilla Foundation uses AES128, IRS.gov uses AES128, lots of banks use AES128, and Apple uses AES128. I don’t think it’s unreasonable to consider AES128 secure enough for a tech blog or a photography blog.

My point here, in a rather convoluted way, is simply this: if you’re moving to HTTPS or HTTP/2 and you’re in a position where CPU load is relevant, don’t go overboard on your TLS settings unless your site or API really warrants it.


Rebuilding the Windows System Reserved Partition

I’ve been googling this like crazy for the past couple of days trying to find out how to rebuild a Windows 7 System Reserved partition. So let’s start with the back story of why I needed to do this.

A few weeks ago I upgraded my Samsung 840 EVO to a new 850 EVO, and installed the 840 EVO in another computer here. In both cases I used Samsung’s Migration Tool to copy the old drives to the new drives. In both cases, at least so far as I can tell, Samsung’s tool renamed the 100MB System Reserved partition to “Data” and filled it until it had 5MB free. For whatever reason, the file filling the partition is completely invisible.

The problem with this is that while the system works just fine, Windows’ image backup utility (wbadmin) requires at least 50MB free on any volume smaller than 500MB. In the default configuration from a Windows install there’s about 70MB free, and everything backs up just fine. However, with only 5MB free, the volume shadow copy can’t be made, and the backup errors out. Failing backups were the symptom that clued me in that the System Reserved partition was messed up again.

In any event, I’ve tried a couple of ways to recover from this situation without resorting to the most oft-given advice, format and reinstall (advice I find simply appalling in almost every situation where it’s given). The first time I had this problem, I just repartitioned the disk so that I had more space on the System Reserved partition and it would back up. It also helped that I needed >300MB anyway, since I was going to try to convert the computer to boot off UEFI instead of the legacy BIOS.

Bash: Watching Aliases

If you’re trying to watch the output of an alias, you need to make watch itself an alias, with a trailing space, as well.

For example, if you have an alias like say:

alias zfslist='zfs list -o name,volsize,used,available,referenced,compressratio,mountpoint'

And want to watch the output over time as if you ran:

user@host:~/$ watch zfslist

Then you need to set up watch as an alias to watch, as follows.

alias watch='watch '

The space after watch is necessary; bash only checks the word following an alias for further alias expansion when the alias’s value ends in a space.

Writing on the iPad

When it comes to writing, I have a love-hate relationship with just about every product I’ve ever used on my iPad. None of them is really the one-stop solution that I want.

For me, the gold standard is MS Word. Why am I not using Word for everything, you may ask? Because Microsoft hasn’t released a version of it for the iPad, and more and more I’m working on my iPad, even if the experience is mildly frustrating at the best of times.

I like Word in large part because you style things semantically, not visually, even if it doesn’t seem that way. That is, you define text as a Heading 1, and then tell Word how Heading 1s should look. Ultimately this translates very nicely to eventual publication on my sites via WordPress: H1s become H1s and the website styling gets applied, with no serious translation or rewriting involved.

Working with this train of thought, I first started with Apple’s Pages. In a way, Pages does a lot of what I want in a word processor. The behind-the-scenes things were great: Pages could write out a Word .doc file and save it to a WebDAV server. Since I already run a Linux development server, and Apache supports WebDAV pretty much out of the box, it was a no-brainer to turn WebDAV on and share a folder over both WebDAV (for the iPad) and CIFS (for my Windows workstations). Easy-peasy, and data was flowing from iPad to workstation for final editing and publication.

Unfortunately, the experience with Pages was less than seamless for me. It worked, but there were enough minor hassles that I couldn’t get past them; Pages to Word to WordPress wasn’t quite the way I wanted to work.

Enter WriteRoom.

When I bought it, I almost immediately had buyer’s remorse. The thought, “What did I just waste $5 of my money on?” was pretty prominent in my head. Now that I’ve been using it for a couple of weeks, I actually like it.

WriteRoom is not without its flaws, and man, are they gaping holes (in my opinion at least). But since WriteRoom isn’t a fancy rich text editor, there’s not a whole lot of room for incompatibilities when moving data from WriteRoom’s text files to something else. Moreover, the UI is clean, and it plays well enough with a Bluetooth keyboard, which is a fundamental requirement if you’re actually trying to write on an iPad.

That said, WriteRoom falls down in three major ways.

First, there’s no way to resize the width of the document; the writing area remains the same width regardless of whether the iPad is vertical or horizontal. When writing in 18 pt, which is quite comfortable to work at, the page is only 53 characters wide, even though there’s a good inch of margin on either side of the writing area with the iPad horizontal. While this may be ideal for reading, I find it somewhat less than ideal when I’m writing. That said, I do appreciate the ability to change the font size and line spacing, even if things feel a bit claustrophobic.

Second, there’s no way to reorganize documents after you’ve created them. Well, there is, but it’s one of those workarounds that only an idiot would consider acceptable (copy the content from one doc, paste it into a new doc, and delete the old doc). The inability to sort things is rather frustrating when you have to move more than one thing around.

Finally, WriteRoom will only sync via Dropbox and iTunes, or you can email yourself the file.

I’m sure there are plenty of people who don’t have a problem with that; I’m not one of them. To me, Dropbox is a liability, not a feature. It’s yet another service I have to have an account with, yet another unique password to manage, and yet another place my email address or other personal information can be lifted or lost from.

Unfortunately, from what I’ve been able to determine from the WriteRoom feature-request discussion group, the author would rather focus on features for WriteRoom than write a WebDAV library for it. As much as I don’t want to fault him for not wanting to add WebDAV support, I do; I also fault Apple for not including WebDAV client support as a basic part of their iOS APIs.

In the end, perhaps the most amusing thing about all of this is that I’m ultimately publishing most of what I write to a WordPress blog, and there’s a native iOS WordPress app. So why am I not using it for the writing?

Well, in truth, I do a little. However, the WordPress app is occasionally broken when using an external keyboard. Apparently there’s a bug in either their code or the iOS virtual keyboard driver/screen resizing that causes the keyboard’s advanced formatting bar (which I don’t really need or want anyway) to cover the last line of text if the on-screen keyboard wasn’t docked at the bottom of the screen. Since I almost always use the on-screen keyboard split, the glitch almost always bites me. It’s a big enough PITA to fix that it’s just easier to write in WriteRoom and copy it over when I’m done.

In the end, WriteRoom is a pretty good piece of software, and I certainly like it better than Pages for writing. It’s made my iPad a little more productive, though I keep finding that although the iPad shows little glints of genius in terms of what it could be, the design and software continue to fall just slightly short of making it a really good productivity tool.