Tag: HowTo

Resources for pfSense, Private Internet Access, Netflix, and Hulu

You've probably heard by now that the US Congress just repealed Obama-era regulations preventing Internet service providers from selling their users' browsing data to advertisers. I'll probably talk more about that in future posts. For now, I'm going to focus on a specific set of steps I've taken to prevent my ISP (Cox) from seeing what sites I visit.

I use a VPN called Private Internet Access, and a hardware firewall running pfSense. If that sentence looked like gibberish to you, then the rest of this post is probably not going to help you. I plan on writing a post in the future that explains some more basic steps that people who aren't IT professionals can take to protect their privacy, but this is not that kind of post.

So, for those of you who are IT professionals (or at least comfortable building your own router), it probably won't surprise you that streaming sites like Netflix and Hulu block VPNs.

One solution to this is to use a VPN that gives you a dedicated IP (I hear good things about NordVPN but I haven't used it myself); Netflix and Hulu are less likely to see that you're using a VPN if they don't see a bunch of connections coming from the same IP address. But there are problems with this approach:

  • It costs more.
  • You're giving up a big chunk of the anonymity that you're (presumably) using a VPN for in the first place; your ISP won't be able to monitor what sites you're visiting, but websites will have an easier time tracking you if nobody outside your household is using your IP.
  • There's still no guarantee that Netflix and Hulu won't figure out that you're on a VPN and block your IP, because even a dedicated IP comes out of an address block known to belong to a VPN provider.

So I opted, instead, to set up some firewall rules to allow Netflix and Hulu to bypass the VPN.

The downside to this approach is obvious: Cox can see me connecting to Netflix and Hulu, and also Amazon (because Netflix uses AWS). However, this information is probably of limited value to Cox; yes, they know that I use three extremely popular websites, when I connect to them, and how much data I upload and download, but that's it; Netflix, Hulu, and Amazon all force HTTPS, so while Cox can see the IPs, it can't see the specific pages I'm going to, what videos I'm watching, etc. In my estimation, letting Cox see that I'm connecting to those sites is an acceptable tradeoff for not letting Cox see any other sites I'm connecting to.

There are a number of guides on how to get this set up, but here are the three that helped me the most:

OpenVPN Step-by-Step Setup for pfsense -- This is the first step; it'll help you route all your traffic through Private Internet Access. (Other VPNs -- at least, ones that use OpenVPN -- are probably pretty similar.)

Hulu Traffic -- Setting up Hulu to bypass the VPN is an easy and straightforward process; you just need to add an alias for a set of FQDNs and then create a rule routing connections to that alias to WAN instead of OpenVPN.

Netflix to WAN not OPT1 -- Netflix is trickier than Hulu, partly because (as mentioned above) it uses AWS and partly because the list of IPs associated with AWS and Netflix is large and subject to change. So in this case, instead of just a list of FQDNs, you'll want to set up a couple of rules in pfBlockerNG to automatically download, and periodically update, lists of those IPs.

That's it. Keep in mind that a VPN isn't a silver bullet, and there are still other steps you'll want to take to protect your privacy. I plan on covering some of them in future posts.

How I Created and Self-Published My eBook

I wrote Old Tom and the Old Tome in Scrivener. I converted it to an EPUB with Sigil. I tested it using Calibre, FBReader, Nook, Kobo, and Google Play Books. Then I published it on Smashwords and Kindle Direct.

Why a short story?

This one is probably obvious, but just in case it isn't: I started with a short story because when you want to learn a new skill, you want to start small. I didn't want to write something novel-length and then run into a bunch of problems.

A short story's the perfect length to start with. Old Tom and the Old Tome clocks in at around 3,000 words, split into four sections (cover, copyright, story, About the Author). It has a great structure for learning the ropes.

Of course, you don't have to go the fiction route. In fact, it occurs to me that this blog post would actually convert quite nicely into a short eBook. Hm, food for thought.

Scrivener

I checked out Scrivener because Charles Stross swears by it. It's basically an IDE for writing books; it's quite simply the most advanced and mature piece of software there is for the purpose.

There's a Linux version, but it's abandonware. For a GNU/Linux user such as myself, this is something of a double-edged sword: on the plus side, I get Scrivener for free, where Mac and Windows users have to pay $40 for it; on the minus side, if a system upgrade ever causes it to stop working, I'm SOL. If Scrivener stops working on my system, there's not going to be a fix, I'll be locked into a platform I can no longer use. I could try and see if the Windows version will run under WINE, but there's no guarantee of that.

The good news is that Scrivener saves its files in standard formats, so if it ever stops working I'll still be able to access my content in other programs. The bad news is that it saves its individual files with names like 3.rtf and 3_synopsis.txt.

So Scrivener's pretty great, and I'll probably stick with it for a little while even though there are no more updates on the horizon for my OS -- but there's a definite downside to using the Linux version. (And if they decided the Linux version wasn't going to bring in enough profit to justify maintaining it, what happens if they decide the same thing for the Windows version someday, maybe leave it as a Mac-only product?)

Getting Started

Scrivener's got a great tutorial to run through its functionality; start there.

When you're done with the tutorial and ready to get to work on your book, I recommend using the Novel template, even if you're not writing a novel, because it automatically includes Front Matter and Back Matter sections; the Short Story template does not.

Scrivener's got your standard MS-Word-style tools for formatting your work. I didn't use them. Since I was targeting a digital-only release with no print version, I wrote my story in Markdown, which converts trivially to HTML without HTML's verbosity.

Output Formats

Since I went the Markdown route, I found that the best option for output at compile time was Plain Text (.txt). The most vexing thing about the output was the limited choices under the "Text Separator" option -- the thing that goes between different sections. What I wanted was a linebreak, followed by ***, followed by another linebreak. Scrivener doesn't offer that; your options are Single Return, Empty Line, Page Break, and Custom. Under Custom you can put ***, but there doesn't seem to be any way to put a linebreak on either side of it. So the best option I found was to use Custom with ***, then manually edit the text file it put out and add a linebreak on either side of each separator.
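That manual cleanup step is easy to script, for what it's worth. Here's a minimal sketch, assuming the compiled file uses a bare *** line as the separator (run it once on the output; it doesn't check for separators that are already padded):

```python
import re

def pad_separators(text):
    """Surround each bare *** separator line with blank lines."""
    # (?m) makes ^ and $ match at line boundaries, so only lines
    # consisting of exactly *** get padded.
    return re.sub(r"(?m)^\*\*\*$", "\n***\n", text)
```

Feed it the whole compiled .txt as a string and write the result back out.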

If you plan on making an EPUB file, you'll probably want to keep all the "smart quotes" and other symbols that Scrivener adds to your text file. However, if you want to distribute the Markdown file in plain text and want it to be readable in Chrome, you'll need to remove all the pretty-print characters, because Chrome won't render them correctly in a plain-text file (though it'll do it just fine in a properly-formatted HTML file). You'll also want to use the .txt extension rather than .md or .markdown if you want the file to display in Firefox (instead of prompting a download).

You've got different options for converting from Markdown to HTML. Pandoc is a versatile command-line tool for converting between all sorts of different formats, but I don't like the way it converts from Markdown to HTML; not enough linebreaks or tabs for my tastes. There are probably command-line flags to customize those output settings, but I didn't find them when I glanced through the man page.

I thought Scrivener's Multimarkdown to Web Page (.html) compile option worked pretty well, although the version I used (1.9 for Linux) has a bug where none of the checkboxes to remove fancy characters works: you're getting smartquotes whether you want them or not. You also don't want to use *** as your section separator, because Scrivener reads it as an italicized asterisk (an asterisk in between two other asterisks, get it?) instead of an HR. Similarly, it reads --- as an indicator that the previous line of text is an h2.

So your best bet for a section break is something like

</p><hr/><p>

or

<div class="break">*</div>

(Actually, you don't want to use HR's at all in an EPUB, for reasons I'll get to later, but if you want to distribute an HTML version of your book, it's fine to use them in that version.)

Sigil

Sigil is an excellent, very straightforward tool for editing the EPUB format. I recommend you grab the Sigil User Guide, go through the Tutorial section, and do what it tells you -- even the stuff that generates seemingly ugly code. For example, if you use Sigil's Add Cover tool, you wind up with code that looks like this:

<svg xmlns="http://www.w3.org/2000/svg" height="100%" preserveAspectRatio="xMidYMid meet" version="1.1" viewBox="0 0 900 1350" width="100%" xmlns:xlink="http://www.w3.org/1999/xlink">
  <image width="900" height="1350" xlink:href="../Images/cover.jpg"/>
</svg>

If you're like me, looking at that makes you wince. And your instinct will be to replace it with something simple, like this:

<img src="../Images/cover.jpg" alt="Cover" />

But don't do that. Removing the <svg> tag, or even removing those ugly-ass inline styling attributes, will prevent the cover from displaying correctly as a thumbnail in readers.

(If there is a way to clean up that ugly <svg> tag and still have the thumbnail display correctly, please let me know; I'd love to hear it.)

Now, Sigil is for the EPUB2 format. It doesn't support any of the newfangled fancy features of EPUB3, and neither do most readers at this point. You're going to want to keep your styles simple. In fact, here's the entire CSS file from Old Tom and the Old Tome:

img {
  max-width: 100%;
}

h1 {
  text-align: left;
  text-indent: 0;
  font-size: 200%;
}

.noindent {
  text-indent: 0;
}

.break {
  margin: 1em 0;
  text-align: center;
}

Oh, and that last class, .break? That's there because some readers ignore <hr/> tags. FBReader on Android, for example, will not display an HR. No matter how I tried formatting it, it wouldn't render. Not as a thin line, not even as a margin. If you use an <hr/> tag in your EPUB file, FBReader will act as if it isn't there.

So I wound up cribbing a style I saw in Tor's EPUB version of The Bloodline Feud by Charles Stross:

<div class="break">*</div>

where, as noted in the above CSS, the .break class centers the text and puts a 1em margin above and below it.

(Some readers won't respect even that sort of simple styling; Okular parses the margin above and below the * but ignores the text-align: center style. Keep this in mind when you're building an EPUB file: keep the styles simple, and remember that some readers will straight-up ignore them anyway.)

(Also: this should go without saying, but while it's fine to look through other eBooks for formatting suggestions, and lifting a few lines' worth of obvious styling is no problem, don't do anything foolish like grab an entire CSS file unless it's from a source that explicitly allows it. Even then it might not be a good idea; formatting that works in somebody else's book may not work in yours.)

Testing

Once my EPUB was done, I tested it in a number of different readers for a number of different platforms at a number of different resolutions. There are a lot of e-readers out there, and their standards compliance is inconsistent -- much more so than the browser market, where there are essentially only three families of rendering engines.

If you're used to using an exhaustive, precise set of CSS resets for cross-browser consistency, you probably expect to use something similar for e-readers. Put that thought out of your head; you're not going to find them. The best you're going to get are a few loose guidelines.

Consistency across different e-readers just isn't attainable in the way that it is across different web browsers. Don't make that a goal, and don't expect it to happen. You're not looking for your eBook to display perfectly in every reader; you're just looking for it to be good enough in a few of the most-used readers.

For example, I found that the margins the Nook reader put around my story were fine on a tablet, but I thought they were too much on a phone. If I'd wanted, I could have futzed around with media queries and seen if that was possible to fix -- but I decided no, it was Good Enough; it wasn't worth the effort of trying to fix it just for that one use-case.

Smashwords

Smashwords has a useful FAQ on self-publishing, and also provides a free EPUB download called the Smashwords Style Guide.

If you already know HTML, here's what I can tell you about the Smashwords Style Guide: read the FAQ at the beginning, then skip to Step 21: Front Matter. Because it turns out that Steps 1-20 are about how to try and make Microsoft Word output clean HTML and CSS. If you already know how to write HTML and CSS yourself, there is of course absolutely no fucking reason why you would ever want to use Word to write your HTML and CSS for you.

It's probably a good idea to read the rest of the guide from Step 21 through the end, but most of it's pretty simple stuff. To tell the truth, there are exactly two modifications I made to the EPUB for the Smashwords edition: I added the phrase "Smashwords edition" to the copyright page, and I put ### at the end of the story (before the back matter). That's it.

For all the time the guide spends telling you how easy it is to fuck up and submit a file that will fail validation, I experienced none of that. My EPUB validated immediately, and it was approved for Smashwords Premium the next day (though Smashwords says it usually takes 1-2 weeks; the quick turnaround may have been a function of how short my short story is).

Description

Most of the forms you fill out on the Smashwords Publish page are well-documented and/or self-explanatory. The Long Description and Short Description fields are exceptions; it's probably not entirely clear, at a glance, where your listing will show the short description and where it will show the long one. So here's how they work:

On Smashwords, your book's listing shows the short description, followed by a link that says "More". When you click "More", the long description appears underneath the short description.

[Screenshots: Smashwords listing, default and expanded]

Kobo and iBooks don't appear to use the short description at all. Your book's listing will show the first few lines of your long description, followed by an arrow (on Kobo) or a "More..." link (on iBooks), which you can click to expand to show the entire description.

[Screenshots: Kobo listing, default and expanded]
[Screenshots: iBooks listing, default and expanded]
(Aside: why the fuck does it do this? Look at all that whitespace. What's the point of hiding the text?)

Inktera shows the long description, followed by an HR, followed by the short description.

[Screenshot: Inktera listing]

And Scribd just shows the long description.

[Screenshot: Scribd listing]

Lastly, Blio doesn't show either description of my book. Clearly this is a problem and I should probably talk to tech support about it.

As you might expect, the different ways these sites use the two descriptions create a bit of a conundrum: how do you write a short description that serves as the primary description on one site, and a long description that's the primary description on four other sites, without the two looking pointless and redundant when they're shown side-by-side?

I haven't come up with a good solution for this in the case of Old Tom yet.

Amazon

It turns out the Amazon conversion is really easy. I just set up an account at kdp.amazon.com, filled out the forms, uploaded the cover and the EPUB file, and Amazon's automatic conversion software switched it over to Kindle format with no trouble at all. Amazon's even got a really nice online reader that lets you check how your file will look in the Kindle Reader on various devices (Kindle Fire HD, iPhone, iPad, Android phone, Android tablet, etc.).

I only hit one speed bump when I submitted to Amazon: after a few hours, I got an E-Mail back saying that the book was freely available online (because of course it is; I've posted it in multiple places, including this site). Amazon required me to go back through and reaffirm that I am the copyright holder of the book -- which meant just going through the exact same forms I'd already filled out and clicking the Submit button again. It was a little bit annoying, but not time-consuming and mostly painless, and the book appeared for download on Amazon shortly after.

And that's it.

The hardest part of self-publishing an eBook was finding the time, figuring out what resources to use, and learning the EPUB format. And now that I know what resources to use and understand the EPUB format, it doesn't take nearly as much time. For my next book, I'll be able to spend a lot more time writing and a lot less time formatting. Hopefully this blog post has helped you so that you can do the same.

E-Mails and Passwords

So the other day I decided it was past time to reset all my passwords.

I'm pretty good about password hygiene. I've been using a password locker for years, with a unique, randomly-generated* password for every account I use. But I'll admit that, like most of us, I don't do as good a job of password rotation as I might. That's probably because I've managed to amass over 150 different accounts across different sites, and resetting 150 different passwords is a giant pain in the ass.

(I'm thinking that, from here on in, I should pick a subset of passwords to reset every month, so I never wind up having to reset all 150 at once again. It would also help me to clear out the cruft and not keep logins for sites that no longer exist, or which I'm never going to use again, or where I can't even find the damn login page anymore.)
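That rotate-a-subset-each-month idea is simple enough to sketch; here's one hypothetical way to split an account list into monthly batches (the batch math is mine, not any password manager's feature):

```python
def rotation_batches(accounts, months=12):
    """Split accounts into roughly equal batches, one per month."""
    size = -(-len(accounts) // months)  # ceiling division
    return [accounts[i:i + size] for i in range(0, len(accounts), size)]
```

With ~150 accounts and 12 months, each batch works out to a dozen or so password resets per month instead of one giant slog.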

There was one more reason I decided now was a good time to do a mass update: I've got two E-Mail addresses that have turned into spam holes. As I've mentioned previously, I'm currently looking for work and getting inundated with job spam; unfortunately I went and put my primary E-Mail address at the top of my resume, which in hindsight was a mistake. Never post your personal E-Mail in any public place; always use a throwaway.

Which I do, most of the time -- and that's created a second problem: I've been signing up for websites with the same E-Mail address for 15 years, and also used to use it in my whois information. (I've since switched to dedicated E-Mail addresses that I use only for domain registration.) So now, that E-Mail has turned into a huge spam hole; it's currently got over 500 messages in its Junk folder, and that's with a filter that deletes anything that's been in there longer than a week. My spam filters are well-trained, but unfortunately they only run on the client side, not the server side, so any time Thunderbird isn't running on my desktop, my spam doesn't get filtered. (If I'm out of the house, I can tell if the network's gone down, because I start getting a bunch of spam in my inbox on my phone.)

So now I've gone and created two new E-Mail addresses: one that's just for E-Mails about jobs, and another as my new all-purpose signing-up-for-things address.

My hope is that the companies hammering my primary E-Mail address with job notifications will eventually switch to the new, jobs-only E-Mail address, and I'll get my personal E-Mail address back to normal. And that I can quit using the Spam Hole address entirely and switch all my accounts over to the new address. Which hopefully shouldn't get as spam-filled as the old one since I haven't published it in a publicly-accessible place like whois.

Anyway, some things to take into account with E-Mail and passwords:

  • Don't use your personal E-Mail address for anything but personal communication. Don't give it to anyone you don't know.
  • Keep at least one secondary E-Mail address that you can abandon if it gets compromised or filled up with spam. It's not necessarily a bad idea to have several -- in my case, I've got one for accounts at various sites, several that I use as contacts for web domains, and one that's just for communication about jobs.
  • Use a password locker. 1Password, KeePass, and LastPass are all pretty highly regarded, but they're just three of the many available options.
  • Remember all the different devices you'll be using these passwords on.
    • I'm using a Linux desktop, an OSX desktop, a Windows desktop, and an Android phone; that means I need to pick a password locker that will run on all those different OS's.
    • And have some way of keeping the data synced across them.
    • And don't forget that, even with a password locker, chances are that at some point you'll end up having to type some of these passwords manually, on a screen keyboard. Adding brackets and carets and other symbols to your password will make it more secure, but you're going to want to weigh that against the hassle of having to dive three levels deep into your screen keyboard just to type those symbols. It may be worth it if it's the password for, say, your bank account, but it's definitely not worth it for your Gmail login.
  • Of course, you need a master password to access all those other passwords, and you should choose a good one. There's no point in picking a bunch of unique, strong passwords if you protect them with a shitty, insecure password. There are ways to come up with a password that's secure but easy to remember:
    • The "correct horse battery staple" method of creating a passphrase of four random words is a good one, but there are caveats:
      • You have to make sure they're actually random words -- words that don't have anything to do with each other. Edward Snowden's example, "MargaretThatcheris110%SEXY.", is not actually very secure. It follows correct English sentence structure; "MargaretThatcher" and "110%" are each effectively one word, since they're commonly-used phrases; the word "SEXY" is common as fuck in passwords; and mixed case and punctuation don't make your password significantly more secure if you capitalize the beginnings of words (or entire words) and end sentences with periods, question marks, or exclamation points. Basically, if you pick the words in your passphrase yourself, they're not random enough; use a computer to pick the words for you.
      • And this method unfortunately doesn't work very well on a screen keyboard. Unless you know of a screen keyboard that autocompletes words inside a password prompt but won't remember those words or their sequence. I think this would be a very good idea for screen keyboards to implement, but I don't know of any that do it.
    • There are programs and sites that generate pronounceable passwords -- something like "ahx2Boh8" or "ireeQuaico". Sequences of letters (and possibly numbers) that are gibberish but can be pronounced, which makes them easy to remember -- a little less secure than a password that doesn't follow such a rule, but a lot more secure than a dictionary word. But read reviews before you use one of these services -- you want to make sure that the passwords it generates are sufficiently random to be secure, and that it's reputable and can be trusted not to snoop on you and send that master password off to some third party. It's best to pick one that generates multiple passwords at once; if you pick one from a list it's harder for a third party to know which one you chose.
  • Of course, any password is memorable if you type it enough times.
  • And no password is going to protect you from a targeted attack by a sufficiently dedicated and resourceful attacker -- if somebody's after something you've got, he can probably find somebody in tech support for your ISP, or your registrar, or your hosting provider, or your phone company, or some company you've bought something from, somewhere, who can be tricked into giving him access to your account. Or maybe he'll exploit a zero-day vulnerability. Or maybe one of the sites you've got an account with will be compromised and they'll get everybody's account information. Password security isn't about protecting yourself against a targeted attack. It's about making yourself a bigger hassle to go after than the guy sitting next to you, like the old joke about "I don't have to outrun the tiger, I just have to outrun you." And it's about minimizing the amount of damage somebody can do if he does compromise one of your accounts.
  • And speaking of social engineering, security questions are deliberate vulnerabilities, and they are bullshit. Never answer a security question truthfully. If security questions are optional, do not fill them out. If they are not optional and a site forces you to add a security question, your best bet is to generate a pseudorandom answer (remember you may have to read it over the phone, so a pronounceable password or "correct horse battery staple"-style phrase would be a good idea here, though you could always just use letters and numbers too -- knowing the phonetic alphabet helps) and store it in your password locker alongside your username and password.
  • You know what else is stupid? Password strength indicators. I once used one (it was Plesk's) that rejected K"Nb\:uO`) as weak but accepted P@55w0rd as strong. You can generally ignore password strength indicators, unless they reject your password outright and make you come up with a new one.
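Both generation methods described above -- random-word passphrases and pronounceable gibberish -- can be sketched with Python's secrets module, which is designed for exactly this kind of cryptographic randomness. The wordlist here is a tiny stand-in; a real passphrase should draw from a list of thousands of words (e.g. a diceware list), and the consonant-vowel alternation is just one simple pronounceability rule:

```python
import secrets

# Tiny stand-in wordlist; use a large diceware-style list in practice.
WORDS = ["correct", "horse", "battery", "staple", "orbit", "velvet",
         "canyon", "lantern", "pigeon", "mosaic", "trombone", "glacier"]

def passphrase(n=4, wordlist=WORDS):
    """Pick n words uniformly at random -- the computer chooses, not you."""
    return " ".join(secrets.choice(wordlist) for _ in range(n))

def pronounceable(length=10):
    """Alternate consonants and vowels for gibberish you can say aloud."""
    consonants = "bcdfghjklmnpqrstvwxz"
    vowels = "aeiou"
    return "".join(
        secrets.choice(consonants if i % 2 == 0 else vowels)
        for i in range(length)
    )
```

The same pronounceable() output also works fine as a fake security-question answer you can read over the phone.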

* For the purposes of this discussion, I will be using the words "random" and "pseudorandom" interchangeably, because the difference between the two things is beyond the scope of this post.

Getting Sprint LTE to Work on CyanogenMod 12

Update 2015-10-12: My new advice for getting Sprint data to work on a Nexus 5 phone running CyanogenMod 12 is "Don't bother." I never did get it working right, and had to reboot at least once a day to get it working. I've since reverted back to KitKat. Original post follows, but if you want my advice it's "Stick with CM11."


First, let's get one thing out of the way: if you're using a custom Android ROM on your phone (or any device that can receive text messages), you're going to want to make sure it's up to date. There's a vulnerability in an Android component called Stagefright that is potentially devastating; it allows an attacker to gain control by doing nothing more than sending a text message, and there are now attacks in the wild.

If you've got the stock firmware on your phone, and your phone is relatively recent, you should get the patch to fix this vulnerability automatically. (If, for example, your phone is running Lollipop, either because it came with it or automatically updated to it, you're probably good.)

But if you're running a custom ROM and don't have automatic updates enabled, you're going to want to check on whether you're running a current version that includes the Stagefright fix.

I'm a CyanogenMod user. If you're using the latest version of CyanogenMod 11.0, 12.0, or 12.1, then you've got the Stagefright fix.

I recently took the opportunity to upgrade my phone to the latest 11.x series to get the fix. And I figured while I was at it, why not upgrade to 12.1 and see if it's any good?

So I installed CyanogenMod 12.1, and everything looked like it was working fine at first -- when I was using it in my own house, on my wifi network. It wasn't until a day or two later that I realized my Sprint data connection wasn't working.

It took rather more searching than it should have, but it turns out there's an easy solution (albeit an annoying one if you've already got your phone set up the way you want it, because it involves wiping it to factory again).

mjs2011 at XDA Developers links to a sprint.zip file assembled by somebody named Motcher41, and gives these instructions for use:

The fix should be flashed during initial installation, so:

  1. Flash ROM
  2. Gapps
  3. SU (if necessary)
  4. Sprint APN Fix zip

I can confirm that you don't need to worry about setting up root before sprint.zip; it's fine if you do it afterward (my recovery, for example, sets up su right before reboot). However, I can confirm that you need to install sprint.zip after Gapps and before your first boot; if you install it before Gapps or after your first boot then it won't work.

Update 2015-09-30: After a few days my data connection quit working again. I rebooted to recovery, reinstalled sprint.zip, and it started working again. So never mind about not working if you install it after you've already booted the ROM; it will still work just as well. Unfortunately, "just as well" appears to mean "just for a few days"; I'm not sure what happened that changed my settings to make it stop working, but if I figure it out I'll update this post again.

You may notice that the linked thread is old (it's from November 2013) and was written in reference to pre-11.0 versions of CyanogenMod. However, I can confirm that it applies to the 12.x series too. This issue appears to be a regression; CM fixed it in version 11 but then broke it again in version 12.

So if you're a Sprint customer and you just installed CyanogenMod 12 on your phone and then discovered Sprint data was no longer working, this is what you're gonna wanna do to fix it.

Web Design Got Complicated

It's probably not surprising that rebuilding my website has gotten me thinking about web development.

The first six years I ran this site, I did it all by hand -- my own HTML, my own CSS, no scripting languages. I thought that CMS software was for pussies.

But ultimately, plain old HTML just doesn't scale. I conceded that when I started using b2evolution for my blog back in '06, and it's truer now than it was then.

You can poke around some of the old sections of the site a bit, the ones that haven't been updated significantly since the turn of the century -- KateStory's a good one, or the Features page (though I'd like to get at least the Features page up to date sooner rather than later, and maybe the KateStory one too, so maybe there'll be people reading this post well after those pages shed their 1990s style) -- and they get the job done. Breadcrumb navigation at the bottom of every section leads you back to either the parent page or the main index.

But Jesus, you can only manually copy and paste "Back to Features / Back to Index" so many times.

And maintaining a years-long blog archive without a CMS to automate it for you? It gets old.

So, you want some automation? You're going to need a scripting language. That usually means PHP for server-side, and JavaScript for client-side.

I got to thinking the other day -- man, it's weird that you need extra toolsets to perform such common tasks as, say, reusing a navigation bar. It's weird that there's not some way just to write up a navigation bar and then write code, in HTML, no scripting required, to embed that common HTML block on the current page.

I thought this was a pretty smart observation.

For about three seconds.

At which point I realized I had just described fucking frames.

Course, the biggest problem with frames is that they weren't exactly what I'm describing. I'm talking about just an HTML snippet in some secondary file that you call from a primary file -- like an include in PHP.
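For comparison, the include mechanism I mean is trivial to sketch outside PHP. This hypothetical Python snippet (names and placeholder syntax are made up for illustration) splices a shared nav snippet into a page template, which is all I'm asking the HTML spec to let a plain page do:

```python
def render(page_template, snippets):
    """Replace each {{name}} placeholder with the matching HTML snippet."""
    for name, html in snippets.items():
        page_template = page_template.replace("{{%s}}" % name, html)
    return page_template
```

The snippet being spliced in is just a fragment -- no `<html>`, `<head>`, or `<body>` of its own -- which is exactly what frames got wrong.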

That's not what frames were. Frames were complete fucking HTML pages -- <html>, <head>, <body> (or, more likely, <HTML>, <HEAD>, <BODY>, because in the old days we wrote HTML tags in all-caps) -- which is, most times, downright stupid and wasteful, and was much more so in the days of 14.4 dialup. Even worse than the load time was the logistics -- if you used frames to build a website with a header, a footer, and a sidebar, you'd have a total of five separate web pages -- a content area, the three other sections, and some kind of main page that all of them were embedded into. This was a fucking nightmare for linking, both for the developer (who had to remember to set the target attribute on every single link, lest the page load in the navigation bar instead of the content area) and the end user (because the URL in the location bar would be the container page that called all the other pages, not the content page the user was currently looking at).
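What I mean looks more like this -- a minimal sketch, with hypothetical file names, not anything from this site's actual code:

```php
<?php
// page.php -- any page on the site (file names here are made up).
// nav.php holds just the shared snippet (a <ul> of links -- NOT a
// complete HTML document with its own <html>/<head>/<body>), and
// one line splices it into this page's output, server-side:
include 'nav.php';
?>
```

Unlike a frame, the snippet gets merged into the page before it ever reaches the browser, so there's one URL, one document, and no target-attribute bookkeeping.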

In a way, it's kinda weird that nobody's gone back to that well and tried to do it again, but do it right this time. Update the HTML spec to allow an HTML file to call a reusable snippet of HTML from another file, one that isn't a complete page.

Given all the concessions HTML5 has made to the modern Web, it's surprising that hasn't happened, even given how long it takes for a spec to be approved. We've got a <nav> tag, which is nice and all, but who the hell uses a <nav> tag without calling some kind of scripting language that automates code reuse? There really aren't that damn many reasons to use the <nav> tag for code that isn't going to be reused on multiple pages throughout a site.

And I dunno -- I'm sure somebody's brought this up; maybe it's on the agenda as a consideration for HTML6.

Which is another thing, really: the people making the decisions on the specs do not want the same things I want.

I liked XHTML. (In fact, lest this whole thing come off as a curmudgeonly damn-kids-get-off-my-lawn diatribe against new technologies and standards, I'd like to note that I was using XHTML Strict back when you pretty much had to be using a beta version of Phoenix -- before it was Firebird, before it was Firefox -- for it to render correctly.) I thought it was the future. I wish XHTML2 had taken off. HTML5 feels ugly and inconsistent by comparison, and, as legitimately goddamn useful as it is to be able to put something like data-reveal aria-hidden="true" in the middle of a tag's attributes, it always feels dirty somehow.

But I digress.

Point is, in 2006, I moved the blog beyond plain old HTML and CSS, adding two more elements: a MySQL database to actually store all the shit, and a PHP CMS (originally b2evolution, later switched to WordPress).

And then came smartphones.

We live in a world now where every website has to be designed for multiple layouts at multiple resolutions. You wanna try doing that without using an existing library as a base? Try it for a few days. I guarantee you will no longer want that.

I think my resistance to picking up new libraries is that every time you do it, you cede a measure of control for the sake of convenience. I don't like ceding control. I like my website to do what the fuck I tell it to, not what some piece of software thinks I want it to.

I've spent the last decade arguing with blogging software to get it to quit doing stupid shit like turn my straight quotes into "smart" quotes and my double-hyphens into dashes. Just the other day, I built a page in WordPress and discovered that it replaced all my HTML comments with fucking empty paragraphs. Why would I want that? Why would anyone want that?! And that's after I put all the remove_filter code in my functions.php.
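For the record, the functions.php incantation in question looks something like this -- these are the standard WordPress filter hooks, though this is roughly what I have, not a verbatim copy:

```php
<?php
// functions.php: tell WordPress to keep its hands off my markup.
// wptexturize is the filter that produces "smart" quotes and
// auto-converted dashes; wpautop is the one that wraps everything
// in <p> tags automatically.
remove_filter('the_content', 'wptexturize');
remove_filter('the_content', 'wpautop');
remove_filter('the_excerpt', 'wptexturize');
remove_filter('the_excerpt', 'wpautop');
```

And, as the business with the HTML comments demonstrates, even with these in place WordPress still finds ways to mangle your markup.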

And that's the thing: WordPress isn't built for guys like me. Guys like me use it, extensively (it is the world's most popular CMS), because it automates a bunch of shit that we'd rather not have to deal with ourselves and because when we're done we can hand it off to end users so they can update their own site.

But I still write these posts in HTML. I want to define my own paragraph breaks, my own code tags, the difference between an <em> and a <cite> even though they look the same to an end user.

(And okay, I still use <em> and <strong> over <i> and <b>; there's really no explaining that except as a ridiculous affectation. I recently learned Markdown and used it to write a short story -- I'll come back to that at a later date -- and I could see switching to that. HTML really is too damn verbose.)

...and that was another lengthy digression.

So. Mobile design.

Bootstrap is the most commonly used toolkit for responsive websites. I've used it, it works well, but it's not my favorite idiom, and I've decided I prefer Zurb Foundation. So that's what I used to build the new site layout.

Except, of course, then you've got to get two dueling design kits to play nice with each other. Square the circle between WordPress and Foundation.

I started to build the new theme from scratch, and I'm glad I was only a few hours into that project when I discovered JointsWP, because that would have been one hell of a project.

JointsWP is poorly documented but has proven pretty easy to pick up anyway.

So. I've gone from HTML and CSS to HTML, CSS, and WordPress (HTML/CSS/PHP/MySQL), to HTML, CSS, WordPress, Foundation (HTML/SCSS/JavaScript, importing libraries including jQuery), and JointsWP (ditto plus PHP). And on top of that I'm using Git for version tracking, Gulp to process the SCSS, and Bower to download all the other scripts and toolkits I need and keep them updated.

So, going with Foundation (or Bootstrap, or whatever) as a standard toolkit, you get somebody else's codebase to start from. That comes with some elements that are a necessary evil (I hate fucking CSS resets, and think writing p { margin: 0; } is an abomination in the sight of God and Nature -- but if it means I can assume my site will look more or less correct in Mobile Safari without having to go out and buy an iPhone, then I guess I'll take it), and others that are actually pretty great -- I find SCSS to be really exciting, a huge saver of time and tedium, and it's hard to go back to vanilla CSS now that I've used it.

Course, with increasing complexity, you still hit those things that don't quite work right. One example I've found is that Foundation sets your placeholder text (the gray letters that appear in an "empty" input field) too light to be legible, and does not have a simple definition in _settings.scss to let you darken it. I've found a mixin that allows you to create such a definition pretty simply, but for some reason JointsWP doesn't like it (or maybe Gulp doesn't). So until I get around to finding a fix, the text stays light, and I'll just have to trust that you the user will be able to determine that the input field under the phrase "Search for:" and to the left of the big blue button that says "Search" is a search box.

I've also got loads of optimization still to do; part of that's going to mean figuring out what parts of Foundation's CSS and JS I'm not actually using and cutting them out of the calls, and part of it's probably going to mean minification.

Minification is one of those things I resisted for a while but have come around on. It can be a real hassle for debugging, not being able to view a stylesheet or script in full, and it may not be practical just to save a few kilobytes (or a few dozen, rarely a few hundred) -- but on the other hand, well, it's not so different from compiling source code to binary; the end result is still that you take something human-readable and turn it into something much less human-readable.

And of course now that I'm using a preprocessor, my CSS file isn't my real source code anyway; it's already the result of taking my code, feeding it through an interpreter, and outputting something that is not my code. If you want to look at the stylesheet for this site, you want to look at the SCSS file anyway (it's on GitHub), not the CSS file. And if I'm already telling people "Look at the SCSS file, not the CSS file," then what's the harm in minifying the CSS file and making it harder for people to read?

For now -- prior to removing unnecessary code calls and minifying everything -- I feel like the site design's a lot more bloated than it needs to be. And even once I slim it down, there are going to be some compromises that go against my sensibilities -- for example, when you loaded this page, you loaded two separate navigation systems, the desktop version (top navigation and sidebar) and the mobile version (just a sidebar, which contains many of the same elements as the topnav and sidebar from the desktop version but is not exactly the same), even though you can only see one of them. That redundancy makes me wince a little bit, but ultimately I think it's the best and simplest way of doing it. Sometimes, good design does require some redundancy.

All that to say -- man, there have been a lot of changes to web design in the last twenty years. And while there are trends I really don't like (if I never have to build another slideshow it'll be too soon; gradients are usually dumb and pointless; and the trend of making visited links the same color as unvisited ones feels like a step backward into 1995), there are also a lot that I've eventually warmed up to, or at least accepted as something I've gotta deal with.

Anyway. Welcome to the new corporate-sellout.com.

And one more thing about the site before I go: it's probably worth noting that this site is different from the other sites I build, because it's mine. Its primary audience is me. I like having an audience, but frankly I'm always a little (pleasantly) surprised whenever anyone actually tells me they enjoyed something I put on this site.

Because this site isn't one of my professional sites. I didn't build it for a client. It's not my portfolio site, which I built to attract clients. This one? It's for me. As should be clear from this rambling, 2200-word stream-of-consciousness post about the technical ins and outs of web design, as it applies specifically to me and to this site.

Frankly I'm always surprised when anyone actually reads anything like this.

Ubuntu on an Old Mac Pro

So I spent the past few days trying to get Ubuntu Studio installed on my 2006-era Mac Pro 1,1. I can't speak for other Macs specifically, but here are some details you're going to want to know if you engage in that undertaking:

  • The Mac Pro 1,1 won't boot Linux from a USB stick.
  • It also won't boot it from a dual-layer DVD. Double-check and make sure you're not using dual-layer.
  • The LTS releases of Ubuntu (such as 14.04) have images that are specifically labeled "amd64+mac". Use those. Otherwise you might wind up stuck on an unresponsive "Select CD-ROM Boot Type" prompt.
  • You may or may not need to install rEFInd to help you boot from a Linux disc. If your disc isn't showing up when you hold the Option key at boot, give rEFInd a shot.
  • There's a useful guide at Ubuntu Community Help called Installing on a Mac Pro - Cylinder (Late 2013). As the title implies, it's not written for the older-model Mac Pros, but most of what it says is still applicable. (But it tells you not to use the Mac-specific ISO files. Don't listen to that part; you should use those on the 1,1 model.)

Getting Rid of Firefox Error Beeps in OpenSUSE

This is, more than anything, a Note to Self for the next time I do a clean install of OpenSUSE and can't figure out how to make Firefox stop making incredibly loud error beeps every single time I press Ctrl-F and type a search term that it can't find.

Disabling audio notifications in KDE's Configure Desktop has no effect on Firefox, because it uses GNOME's audio event system, not KDE's. (Not sure why it also completely ignores current system volume and always blasts the error beep so friggin' loudly; I'm sure there's a configuration setting for that somewhere.)

The program for controlling GNOME/GTK audio events is called pavucontrol (PulseAudio Volume Control).

Run that, go to the Playback tab, and click the Mute icon next to System Sounds.

This is really the sort of thing OpenSUSE should fix, seeing as its default configuration is a KDE desktop with Firefox as the default browser. It would be nice for YaST to have some kind of integrated audio notifications configuration tool, or, at the very least, to document pavucontrol better.

Trying to Fix OpenSUSE

After spending my Saturday banging my head against the wall trying to get my OpenSUSE installation working again, I spent my Sunday just reinstalling the damn thing -- aware the whole time that the result might be exactly the same thing happening next time I run an update.

I went to the effort to get OpenSUSE up and running again because I quite like it. All that shit I griped about yesterday on how difficult it is to find configuration options in Mint? Simply not the case in OpenSUSE. It's true that OpenSUSE has two separate control panels too, like Cinnamon does, and that one is for interface configuration and the other is for system configuration -- but both of them are a whole lot more comprehensive than what Cinnamon's got, and it's way easier to find what you're looking for. And OpenSUSE's package management is simply the best I've ever seen -- it doesn't have quite as comprehensive a selection as Debian/Ubuntu/Mint/et al, but it's pretty close, and -- perhaps most importantly of all -- it doesn't just give you an error when there's a dependency issue, it gives you a list of choices on what to do about it.

It's also got smooth-as-hell one-click package installation, though in Mint's defense, it supports that now too, and setting up RSSOwl was a breeze -- the only program I've set up in Mint that wasn't in the default repos, and one that was a monumental fucking hassle to set up in OpenSUSE.

Anyhow, I got OpenSUSE back up and running. Eventually. The first problem was that when I burned the 12.3 disc and tried to boot it, I got my old friend the frozen "Select CD-ROM Boot Type" prompt.

You know what's a bad sign? When you plug an error message into a search engine and the third match is your own fucking blog. On the plus side, Thad From Four and a Half Months Ago told me how I got around this the last time: I stopped fucking around with the install DVD and tried the LiveDVD instead.

Then I made a mistake -- but it turned out not to matter. I forgot to set NoScript to allow JS on the 12.3 download page. And so I couldn't see any downloads except the main installer. The LiveDVD's right on the page, and so's the Rescue CD, but I couldn't see the damn things.

I poked through the Wiki and wound up stumbling onto the KDE Reloaded LiveDVD instead. Now, on the plus side, contrary to the "11.3" number and "Last Modified 10-Aug-2010" note on that page, the LiveDVD is current as of January of this year. On the minus side, it's kind of a damn mess, it leaves you with a weird hybrid of 12.1, 12.2, and Factory repos, and, well, it wouldn't have been my first choice if I'd been a little more awake and alert and noticed the damn NoScript notification.

But I found out later that the LiveDVD and the Rescue DVD both lock up too, so I would have wound up trying the KDE Remix eventually anyway. And it did work, sort of. And I found out some good things and bad things about restoring a broken OpenSUSE installation.

The good part: if you've got /home on a separate partition, OpenSUSE will use it without formatting it; all your settings will be preserved. I backed it up just to be safe, but I didn't need to; it was completely untouched.

The bad part: I found out the hard way that YaST's backup feature doesn't back up your repo list, which makes it pretty much goddamn worthless if you have a lot of software from third-party repos. Which, y'know, is the only damn reason I backed up my packages in the first place. Reinstalling packages from the default repos is time-consuming, but it's trivial. What I was worried about was going through the hassle of installing stuff like that outdated version of xulrunner that I need to get RSSOwl to run. So I guess I know for next time that I need to back up my list of repos separately.

Anyway, I got OpenSUSE working for a day or so.

And then, the first time I ran mplayer, X dumped me to a console, and now it won't restart.

So I'm sorta back where I started, except getting an entirely different set of errors, which near as I can tell aren't related to any nVidia driver conflict like last time.

I like OpenSUSE, but I may be fucking done with it. I would really rather not reinstall it again.

I don't know for sure where I'm going from here. But I do know I'm going to start a KDE install on Mint.

For Future Reference

For the next time I get locked out of X after an nVidia upgrade:

The OpenSUSE package for nVidia drivers for a GTX570 is x11-video-nvidiaG03.

The OpenSUSE package for the nVidia kernel module for a GTX570 is nvidia-gfxg03-kmp-default.

Zelda 2: The One That Fucks Up Alphabetized Lists

Yep, got the bug from Jeremy Parish's excellent Anatomy of Zelda 2 series. I've started replaying Zelda 2.

Jeremy commented on the general unfairness of the game and said that he's using savestates. I'm using authentic NES hardware, but I do have a Game Genie.

When I played through the game as a kid, I only used one Game Genie cheat code: infinite lives. It's amazing how much it does for the game's balance to eliminate the outmoded concept of a limited number of lives (a holdover from the arcade age, of course). Frankly it's odd, in hindsight, that Zelda 2 went with the old "3 lives and then Game Over" trope, given that the original game didn't. I mean, sure, 2's a side-scrolling platformer, but Metroid was too, and it didn't bother with giving you a limited number of lives either.

So I resolved to take a crack at Zelda 2 on my NES, once again cheating a bit against its unfair difficulty with the use of the infinite lives code.

And when I went to look it up, I found, via Game Winners, two more codes that weren't in the official Game Genie book and which serve to mitigate the game's lopsided leveling system. So here are the three codes I'm using:

  • Link has unlimited lives: SZKGKXVK
  • Do not lose all experience when leveling: SZVOUNSE
  • Do not lose experience when hit by enemies: SXESIKSE

I think that, on the whole, those three codes go a long way to balancing out the difficulty of Zelda 2 and allowing its strengths to shine. It is a solid game.