You've probably heard by now that the US Congress just repealed Obama-era regulations preventing Internet service providers from selling their users' browsing data to advertisers. I'll probably talk more about that in future posts. For now, I'm going to focus on a specific set of steps I've taken to prevent my ISP (Cox) from seeing what sites I visit.
I use a VPN called Private Internet Access, and a hardware firewall running pfSense. If that sentence looked like gibberish to you, then the rest of this post is probably not going to help you. I plan on writing a post in the future that explains some more basic steps that people who aren't IT professionals can take to protect their privacy, but this is not that kind of post.
So, for those of you who are IT professionals (or at least comfortable building your own router), it probably won't surprise you that streaming sites like Netflix and Hulu block VPNs.
One solution to this is to use a VPN that gives you a dedicated IP (I hear good things about NordVPN, but I haven't used it myself); Netflix and Hulu are less likely to flag you as a VPN user if they don't see a bunch of unrelated connections coming from the same IP address. But there are problems with this approach:
It costs more.
You're giving up a big chunk of the anonymity that you're (presumably) using a VPN for in the first place; your ISP won't be able to monitor what sites you're visiting, but websites will have an easier time tracking you if nobody outside your household is using your IP.
There's still no guarantee that Netflix and Hulu won't figure out that you're on a VPN and block your IP, because VPN providers' addresses are allocated in known blocks -- and even a dedicated IP falls inside one of them.
So I opted, instead, to set up some firewall rules to allow Netflix and Hulu to bypass the VPN.
The downside to this approach is obvious: Cox can see me connecting to Netflix and Hulu, and also Amazon (because Netflix uses AWS). However, this information is probably of limited value to Cox; yes, they know that I use three extremely popular websites, when I connect to them, and how much data I upload and download, but that's it; Netflix, Hulu, and Amazon all force HTTPS, so while Cox can see the IPs, it can't see the specific pages I'm going to, what videos I'm watching, etc. In my estimation, letting Cox see that I'm connecting to those sites is an acceptable tradeoff for not letting Cox see any other sites I'm connecting to.
There are a number of guides on how to get this set up, but here are the three that helped me the most:
OpenVPN Step-by-Step Setup for pfsense -- This is the first step; it'll help you route all your traffic through Private Internet Access. (Other VPNs -- at least, ones that use OpenVPN -- are probably pretty similar.)
Hulu Traffic -- Setting up Hulu to bypass the VPN is an easy and straightforward process; you just need to add an alias for a set of FQDNs and then create a rule routing connections to that alias to WAN instead of OpenVPN.
Netflix to WAN not OPT1 -- Netflix is trickier than Hulu, partly because (as mentioned above) it uses AWS and partly because the list of IPs associated with AWS and Netflix is large and subject to change. So in this case, instead of just a list of FQDNs, you'll want to set up a couple of rules in pfBlockerNG to automatically download, and periodically update, lists of those IPs.
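The heart of both guides is the same idea: turn a set of hostnames (or downloaded IP lists) into a firewall alias, then route matches out the WAN instead of the VPN. As a rough illustration of the first half -- not pfSense code, just a sketch of what the alias resolver is doing; the hostnames you'd feed it come from the guides above -- here's the idea in Python:

```python
import socket

def resolve_alias(fqdns):
    """Resolve a list of FQDNs to the set of IPv4 addresses a
    firewall alias would contain (pfSense re-resolves these
    periodically, which is why FQDN aliases work at all)."""
    ips = set()
    for name in fqdns:
        try:
            for info in socket.getaddrinfo(name, None, socket.AF_INET):
                ips.add(info[4][0])
        except socket.gaierror:
            pass  # skip names that don't resolve right now
    return sorted(ips)

# Hypothetical usage -- the real Hulu/Netflix lists come from the
# guides linked above, not from this made-up example:
# resolve_alias(["example-cdn.hulu.com", "example.nflxvideo.net"])
```

The pfBlockerNG approach in the Netflix guide is the same thing at scale: instead of resolving a handful of names, it downloads published AWS/Netflix IP lists on a schedule and keeps the alias current.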
That's it. Keep in mind that a VPN isn't a silver bullet, and there are still other steps you'll want to take to protect your privacy. I plan on covering some of them in future posts.
I've always advocated being kind to tech support people. They have a tough job, it's not their fault you have a problem, and they spend all day dealing with abuse from people who act like it is their fault.
Well, yesterday, for the first time in my life, I cursed out a phone support rep. I'm not proud of it, but in my defense, I'd been talking to support for 90 minutes by that point, and the last 30 of that had been a conversation where this tier-2 rep talked in circles, blamed me for problems with their server, repeatedly said she couldn't help me, refused to listen to my explanations of the problem, and acted like a condescending ass.
Seriously, this is the worst tech support experience I have ever had. Beating out the previous record-holder, the guy who told me that my burned-out power supply wasn't really burned-out, I was probably experiencing a software issue. After I told him there were burn marks on the power connector.
At least that one was funny. The conversation I had with Cox yesterday wasn't funny, just infuriating.
Here's what happened: on Monday evening, when I tried to send an E-Mail, I started getting this error:
An error occurred while sending mail: The mail server sent an incorrect greeting:
fed1rmimpo306.cox.net cox connection refused from [my IP address].
I tried unplugging the modem to see if I'd get a new IP assigned. No luck. I tried turning the computer off and then on again. No luck. I tried sending mail from other devices. Same result.
So on Tuesday afternoon, I pulled up Cox's live support chat to ask for some help.
The rep eventually told me he'd escalate, and that the issue should be fixed within 24 hours.
Just shy of 27 hours later, I pulled up Cox's live support chat again, to ask what the problem was.
The rep -- a different one this time -- quoted me this feedback from the ticket:
Good afternoon, the log below shows the username can send on our servers. This may be a software, device or network issue. Please review the notes and contact the customer.
In other words, they'd tested the wrong thing. The mail server was rejecting my connection, based on my IP address, before my mail client sent my username and password. And Cox's solution to this was...to confirm that my username and password were working.
I explained this to the rep, over the course of 75 excruciating minutes. I demonstrated by disconnecting my phone from my wifi network and sending an E-Mail while connected to my wireless carrier. It worked when I connected to Cox's SMTP server over LTE; the same mail app on the same phone failed when connected to my wifi.
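The distinction matters: a rejection in the server's greeting happens before the client ever sends a username or password, so testing the credentials proves nothing. Anyone can check which case they're in with a few lines of Python (a sketch; point it at your own mail server):

```python
import socket

def smtp_greeting(host, port=25, timeout=5):
    """Open a TCP connection to an SMTP server and return its banner.
    Both a refused connection and a rejecting banner happen before
    any username or password is ever transmitted."""
    try:
        with socket.create_connection((host, port), timeout=timeout) as s:
            return s.recv(1024).decode(errors="replace").strip()
    except ConnectionRefusedError:
        return "connection refused (pre-auth: credentials never sent)"

# e.g. smtp_greeting("smtp.cox.net") -- a banner starting with 220 means
# the server accepted your IP; a refusal or a "connection refused from
# [your IP]" banner means it rejected you before authentication.
```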
I explained that the mail server was blocking connections from my IP address, and that they needed to either make it stop blocking my IP address or assign me a different IP address.
The rep told me that was impossible, that residential accounts use DHCP, which assigns IP addresses at random.
I told him that I know what DHCP is, and that I wasn't asking for a static IP address, I was just asking for someone to revoke my DHCP lease and assign my modem a new IP address from the DHCP pool.
He told me that the only way to get a new IP address is to disconnect your modem for 24 hours.
I told him that was unacceptable, and I asked if there was anyone else I could talk to.
He gave me a number to call.
The person who answered the phone said she'd escalate to a tier-2 tech. I said, pointedly, that I did not understand why nobody had thought to do that in the preceding 75 minutes.
As it turns out, tier-2 techs are worse than tier-1 techs. Tier-1 techs at least know that they don't know everything, and are willing to ask for help from people who know more than they do. Tier-2 techs think they do know everything, will not ask for help from someone who knows more than they do, and certainly will not listen to a customer who knows more than they do.
Well, probably not all of them. But that was sure as hell my experience with the tier-2 tech I got stuck with.
First, she had the sheer gall to tell me my modem wasn't connected to the Internet.
I told her I could connect to websites, I could receive E-Mail, and that the error message on sending mail was not a timeout, it was a Connection Refused. I added that I was doing this from a computer that was connected to my router by a cable, that I had not accidentally jumped on somebody else's wifi.
She would have none of it. She insisted, "We can't see your connection here, so you're not connected." Repeatedly. When I told her that I was clearly connected to the Internet, she just kept telling me that no, I wasn't.
Finally she told me to bypass my router and plug my desktop directly into my modem. I told her that this wouldn't fix anything, because this was happening from multiple devices that all had Internet access. She got huffy and standoffish and told me she couldn't help me if I wasn't willing to do what she asked.
So I did it. I climbed back behind my computer, traced the cable to the router, and swapped it with the one coming from the modem.
Absolutely nothing changed. Except that she said, "Oh. You're running a Linux computer? We don't support Linux."
I responded, "The operating system I am using is not relevant to whether your server is accepting connections from my IP address."
But some reps aren't interested in helping. They're only interested in finding an excuse for why they don't have to help you.
I asked her if there was any way she could determine why my IP was being blocked. I noted that it seemed to be on some sort of blacklist.
She asked if I'd checked whether it was on any public blacklist. I responded that I had: it had an expired listing on SORBS from 2013 -- well before it was my IP address; I've only lived in this house since 2014 -- but I hadn't found it on any other blacklist, and a SORBS listing from over two years ago should not result in my suddenly losing the ability to connect to SMTP two days ago.
She said that if I was on a blacklist, those were handled by third parties and it was my responsibility to get de-listed. I explained that I did not see my IP on any currently-active blacklists, and asked if she could look up what was causing the rejection. She said she couldn't.
I asked if she could reset my IP. She said that the only way to do it would be to shut down my modem for 25 hours. (Already I had somehow lost another hour!)
I told her that was unacceptable, and asked how I could get it reset remotely.
She told me that was impossible, that residential accounts use DHCP, which assigns IP addresses at random, and that the only way to get a new DHCP address is to disconnect your modem for 25 hours.
I told her that it is not impossible, that the same router that provides DHCP leases is capable of revoking them, and that I needed somebody to do that for me.
We went round and round like this for a while.
At one point, she said, "We can't do that; it's done automatically."
I responded that anything a computer does automatically can also be done manually, and that there is certainly someone in Cox who has the account access to log into the router that is assigning IP addresses and revoke a lease.
She started to explain DHCP to me again -- it was about the fifth time at this point -- and I snapped.
I shouted, "I know how DHCP works; I ran an ISP, for fuck's sake!"
I feel kinda bad about that.
I finally got pushed over to a supervisor -- another twenty minutes on hold -- who tried to tell me that Cox can't help me because they don't support third-party programs like what I'm using, and that if I could send messages from webmail, that's what I should do.
I said, "Are you seriously telling me that Cox does not support sending E-Mail from phones or tablets?"
The supervisor backed off that claim and said that she didn't really understand the technical stuff, that she could send me back to tier 2.
I responded that it had been two hours and I didn't think it was in anyone's best interest for me to continue the conversation -- but if I decided to call back tomorrow, what could I do to get some actual service?
She said to ask for tier 2 again, and this time ask for a manager.
I'm debating whether I really want to deal with that kind of aggravation, or if I'd be happier just abandoning the Cox E-Mail address that I've been using for fifteen fucking years.
Incidentally, Cox just jacked its prices up by $7 a month. Why is it that every time the cost goes up, the quality of service goes down? I remember the first time they hiked my bill, they dropped Usenet service.
That was in 2009. Since then my bill's gone up $27. My service sucks; several times a day my connection just stops working and I have to restart the modem.
And of course I can't switch to another ISP, because there isn't one available at my address. My "choices", such as they are, are as follows:
Pay $74 a month for Cox
Steal wifi from a neighbor who's paying for Cox
See how far I can get using only my phone's data plan for Internet access
I'm pretty much fucked, like most Americans are on broadband access.
And the hell of it is, even if there were another provider available, all the alternatives seem to be even worse.
I mean, Christ, at least I don't have Time Warner or Comcast.
This one is probably obvious, but just in case it isn't: I started with a short story because when you want to learn a new skill, you want to start small. I didn't want to write something novel-length and then run into a bunch of problems.
A short story's the perfect length to start with. Old Tom and the Old Tome clocks in at around 3,000 words, split into four separate sections (cover, copyright, story, About the Author). It has a great structure for learning the ropes.
Of course, you don't have to go the fiction route. In fact, it occurs to me that this blog post would actually convert quite nicely into a short eBook. Hm, food for thought.
I checked out Scrivener because Charles Stross swears by it. It's basically an IDE for writing books; it's quite simply the most advanced and mature piece of software there is for the purpose.
There's a Linux version, but it's abandonware. For a GNU/Linux user such as myself, this is something of a double-edged sword: on the plus side, I get Scrivener for free, where Mac and Windows users have to pay $40 for it; on the minus side, if a system upgrade ever breaks it, there's not going to be a fix -- I'll be locked into a platform I can no longer use. I could try running the Windows version under WINE, but there's no guarantee that will work.
The good news is that Scrivener saves its files in standard formats, so if it ever stops working I'll still be able to access my content in other programs. The bad news is that it saves its individual files with names like 3.rtf and 3_synopsis.txt.
So Scrivener's pretty great, and I'll probably stick with it for a little while even though there are no more updates on the horizon for my OS -- but there's a definite downside to using the Linux version. (And if they decided the Linux version wasn't going to bring in enough profit to justify maintaining it, what happens if they decide the same thing for the Windows version someday, maybe leave it as a Mac-only product?)
Scrivener's got a great tutorial to run through its functionality; start there.
When you're done with the tutorial and ready to get to work on your book, I recommend using the Novel template, even if you're not writing a novel, because it automatically includes Front Matter and Back Matter sections; the Short Story template does not.
Scrivener's got your standard MS Word-style tools for formatting your work. I didn't use them. Since I was targeting a digital-only release with no print version, I wrote my story in Markdown, which converts trivially to HTML but isn't as verbose as HTML.
Since I went the Markdown route, I found that the best option for output at compile time was Plain Text (.txt). The most vexing thing about the output was the limited choices under the "Text Separator" option -- the thing that goes between different sections. What I wanted was a linebreak, followed by ***, followed by another linebreak. Scrivener doesn't offer that; your options are Single Return, Empty Line, Page Break, and Custom. Under Custom you can put ***, but there doesn't seem to be any way to put a linebreak on either side of it. So the best option was to use Custom with ***, then manually edit the compiled text file and add a linebreak on either side of each separator.
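That manual edit is easy to script if you'd rather not do it by hand every compile. A minimal sketch that pads every bare *** line with a blank line on either side:

```python
def pad_separators(text, sep="***"):
    """Surround every bare separator line with blank lines --
    the thing Scrivener's Custom text separator won't do."""
    out = []
    for line in text.splitlines():
        if line.strip() == sep:
            out.extend(["", sep, ""])
        else:
            out.append(line)
    return "\n".join(out)

# Hypothetical usage on the compiled output:
# with open("old-tom.txt") as f:
#     fixed = pad_separators(f.read())
```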
If you plan on making an EPUB file, you'll probably want to keep all the "smart quotes" and other symbols that Scrivener adds to your text file. However, if you want to distribute the Markdown file in plain text and want it to be readable in Chrome, you'll need to remove all the pretty-print characters, because Chrome won't render them correctly in a plain-text file (though it'll do it just fine in a properly-formatted HTML file). You'll also want to use the .txt extension rather than .md or .markdown if you want the file to display in Firefox (instead of prompting a download).
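Stripping the pretty-print characters is a simple search-and-replace job. A sketch of the substitutions I mean (curly quotes, em-dash, and ellipsis back to plain ASCII):

```python
SMART = {
    "\u201c": '"', "\u201d": '"',   # curly double quotes
    "\u2018": "'", "\u2019": "'",   # curly single quotes
    "\u2014": "--",                 # em-dash
    "\u2026": "...",                # ellipsis
}

def unsmarten(text):
    """Replace pretty-print characters with plain ASCII so the
    .txt renders correctly in Chrome as a plain-text file."""
    for fancy, plain in SMART.items():
        text = text.replace(fancy, plain)
    return text
```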
You've got different options for converting from Markdown to HTML. Pandoc is a versatile command-line tool for converting between all sorts of different formats, but I don't like the way it converts from Markdown to HTML; not enough linebreaks or tabs for my tastes. There are probably command-line flags to customize those output settings, but I didn't find them when I glanced through the man page.
I thought Scrivener's Multimarkdown to Web Page (.html) compile option worked pretty well, although the version I used (1.9 for Linux) has a bug where none of the checkboxes for removing fancy characters actually works: you're getting smartquotes whether you want them or not. You also don't want to use *** as your section separator, because Scrivener reads it as an italicized asterisk (an asterisk in-between two other asterisks, get it?) instead of an HR. Similarly, it reads --- as an indicator that the previous line of text is an h2.
So your best bet for a section break is something like
(Actually, you don't want to use HR's at all in an EPUB, for reasons I'll get to later, but if you want to distribute an HTML version of your book, it's fine to use them in that version.)
Sigil is an excellent, very straightforward tool for editing the EPUB format. I recommend you grab the Sigil User Guide, go through the Tutorial section, and do what it tells you -- even the stuff that generates seemingly ugly code. For example, if you use Sigil's Add Cover tool, you wind up with code that looks like this:
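Sigil's Add Cover output looks something like this (a representative sketch, not the exact markup; the viewBox and image numbers are placeholders for your cover's actual pixel dimensions):

```html
<div style="text-align: center; padding: 0pt; margin: 0pt;">
  <svg xmlns="http://www.w3.org/2000/svg" height="100%"
       preserveAspectRatio="xMidYMid meet" version="1.1"
       viewBox="0 0 1600 2560" width="100%"
       xmlns:xlink="http://www.w3.org/1999/xlink">
    <image width="1600" height="2560" xlink:href="../Images/cover.jpg"/>
  </svg>
</div>
```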
If you're like me, looking at that makes you wince. And your instinct will be to replace it with something simple, like this:
<img src="../Images/cover.jpg" alt="Cover" />
But don't do that. Removing the <svg> tag, or even removing those ugly-ass inline styling attributes, will prevent the cover from displaying correctly as a thumbnail in readers.
(If there is a way to clean up that ugly <svg> tag and still have the thumbnail display correctly, please let me know; I'd love to hear it.)
Now, Sigil is for the EPUB2 format. It doesn't support any of the newfangled fancy features of EPUB3, and neither do most readers at this point. You're going to want to keep your styles simple. In fact, here's the entire CSS file from Old Tom and the Old Tome:
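A minimal sketch of that stylesheet -- the .break rule is the one described below (centered text, 1em margins); treat it as representative rather than verbatim:

```css
/* Section-break paragraph: centered, with breathing room above and below. */
.break {
    text-align: center;
    margin-top: 1em;
    margin-bottom: 1em;
}
```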
Oh, and that last class, .break? That's there because some readers ignore <hr/> tags. FBReader on Android, for example, will not display an HR. No matter how I tried formatting it, it wouldn't render. Not as a thin line, not even as a margin. If you use an <hr/> tag in your EPUB file, FBReader will act as if it isn't there.
So I wound up cribbing a style I saw in Tor's EPUB version of The Bloodline Feud by Charles Stross:
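The cribbed markup amounts to a centered asterisk standing in for an HR -- a sketch of the pattern, not Tor's exact code:

```html
<p class="break">*</p>
```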
where, as noted in the above CSS, the .break class centers the text and puts a 1em margin above and below it.
(Some readers won't respect even that sort of simple styling; Okular, for example, honors the margin above and below the * but ignores the text-align: center style. Keep this in mind when you're building an EPUB file: keep the styles simple, and remember that some readers will straight-up ignore them anyway.)
(Also: this should go without saying, but while it's fine to look through other eBooks for formatting ideas, and lifting a few lines' worth of obvious styling is no problem, don't do anything foolish like grab an entire CSS file unless it's from a source that explicitly allows it. Even then, it might not be a good idea; formatting that works in somebody else's book may not work in yours.)
Once my EPUB was done, I tested it in a number of different readers for a number of different platforms at a number of different resolutions. There are a lot of e-readers out there, and their standards compliance is inconsistent -- much more so than in the browser market, where there are essentially only three families of rendering engines.
If you're used to using an exhaustive, precise set of CSS resets for cross-browser consistency, you probably expect to use something similar for e-readers. Put that thought out of your head; you're not going to find them. The best you're going to get are a few loose guidelines.
Consistency across different e-readers just isn't attainable in the way that it is across different web browsers. Don't make that a goal, and don't expect it to happen. You're not looking for your eBook to display perfectly in every reader; you're just looking for it to be good enough in a few of the most-used readers.
For example, I found that the margins the Nook reader put around my story were fine on a tablet, but I thought they were too much on a phone. If I'd wanted, I could have futzed around with media queries and seen if that was possible to fix -- but I decided no, it was Good Enough; it wasn't worth the effort of trying to fix it just for that one use-case.
If you already know HTML, here's what I can tell you about the Smashwords Style Guide: read the FAQ at the beginning, then skip to Step 21: Front Matter. Because it turns out that Steps 1-20 are about how to try and make Microsoft Word output clean HTML and CSS. If you already know how to write HTML and CSS yourself, there is of course absolutely no fucking reason why you would ever want to use Word to write your HTML and CSS for you.
It's probably a good idea to read the rest of the guide from Step 21 through the end, but most of it's pretty simple stuff. To tell the truth, there are exactly two modifications I made to the EPUB for the Smashwords edition: I added the phrase "Smashwords edition" to the copyright page, and I put ### at the end of the story (before the back matter). That's it.
For all the time the guide spends telling you how easy it is to fuck up and submit a file that will fail validation, I experienced none of that. My EPUB validated immediately, and it was approved for Smashwords Premium the next day (though Smashwords says it usually takes 1-2 weeks; the quick turnaround may have been a function of how short my short story is).
Most of the forms you fill out on the Smashwords Publish page are well-documented and/or self-explanatory. The Long Description and Short Description fields are exceptions; it's probably not entirely clear, at a glance, where your listing will show the short description and where it will show the long one. So here's how they work:
On Smashwords, your book's listing shows the short description, followed by a link that says "More". When you click "More", the long description appears underneath the short description.
Kobo and iBooks don't appear to use the short description at all. Your book's listing will show the first few lines of your long description, followed by an arrow (on Kobo) or a "More..." link (on iBooks), which you can click to expand to show the entire description.
Inktera shows the long description, followed by an HR, followed by the short description.
Lastly, Blio doesn't show either description of my book. Clearly this is a problem and I should probably talk to tech support about it.
As you might expect, the different ways the sites use the two descriptions create a bit of a conundrum: how do you write a short description that serves as the primary description on one site, and a long description that serves as the primary description on four other sites, without the two looking pointless and redundant when they appear side-by-side?
I haven't come up with a good solution for this in the case of Old Tom yet.
It turns out the Amazon conversion is really easy. I just set up an account at kdp.amazon.com, filled out the forms, uploaded the cover and the EPUB file, and Amazon's automatic conversion software switched it over to Kindle format with no trouble at all. Amazon's even got a really nice online reader that lets you check how your file will look in the Kindle Reader on various devices (Kindle Fire HD, iPhone, iPad, Android phone, Android tablet, etc.).
I only hit one speed bump when I submitted to Amazon: after a few hours, I got an E-Mail back saying that the book was freely available online (because of course it is; I've posted it in multiple places, including this site). Amazon required me to go back through and reaffirm that I am the copyright holder of the book -- which meant just going through the exact same forms I'd already filled out and clicking the Submit button again. It was a little bit annoying, but not time-consuming and mostly painless, and the book appeared for download on Amazon shortly after.
And that's it.
The hardest part of self-publishing an eBook was finding the time, figuring out what resources to use, and learning the EPUB format. And now that I know what resources to use and understand the EPUB format, it doesn't take nearly as much time. For my next book, I'll be able to spend a lot more time writing and a lot less time formatting. Hopefully this blog post has helped you so that you can do the same.
This was an interesting project, and I hope it will be the first of many. In the coming days I expect to share some of the Making-Of information -- what tools I used to make it, the learning curve, etc.
So, per the last couple of posts, I find it entirely possible that, as vendors develop tablets that double as PCs, they may replace traditional desktop and laptop computers. For the common end user who just needs a web browser and (maybe) an office suite, I don't think that's going to be a tough sell.
But there are markets that rely heavily on more powerful computing hardware.
One is PC gamers. Others are the various types of media creators: people who create images, music, movies.
I've already mentioned dumb terminals and software as a service (SaaS) as a major current trend, with programs like Google Docs running in a browser and working as an effective substitute for traditional locally-run programs like Microsoft Word.
Of course, a word processor is one thing; an enterprise-quality photo editor is another, and a game requiring split-second timing is something else again.
But developers are working on it.
Last year Adobe released a limited beta of a streaming version of Photoshop for ChromeOS. Photoshop itself doesn't run in the browser; the app is a Remote Desktop shell that interacts with an instance of the Windows version of Photoshop running on a remote server.
So, by definition, this is no replacement for the Windows version of Photoshop -- because it is the Windows version of Photoshop. But it demonstrates a potentially compelling alternative to buying expensive, high-end hardware just to run Photoshop: what if you could buy cheap hardware, and pay a subscription fee to run Photoshop on someone else's expensive hardware?
Reactions to the ChromeOS version of Photoshop seemed generally positive; I would expect it to have some latency issues, but I also bet it runs faster on a remote server than it did on the Core 2 I had to use at GoDaddy. (Hey, when I said the Core 2 Duo was the last chip most users ever needed, I said I wasn't including Photoshop.)
Adobe has already moved Photoshop's licensing to a subscription model instead of a purchase model. (A lot of people are very angry about this, but I haven't heard anything to suggest it's led to a drop in "sales"; that's the thing about monopolies.) It's not hard to envision a transition to a subscription model where you run the program remotely instead of locally. Hell, they could even charge more money to give you access to faster servers.
Other media development suites could, potentially, move to streaming services, but there are caveats. Uploading raw, uncompressed digital audio and video takes a lot more time than uploading images does. And what about storing your source files? My grandmother puts together home movies on her iMac, and she's got terabytes of data going back some 15 years. That's the kind of storage requirement an amateur filmmaker can rack up; now think of how much somebody who does it for a living might wind up with. If you're renting storage space on an external server, on a month-to-month basis, that could get pretty costly.
But it's technically feasible, at least, that audio and video editing could be performed on a remote server.
Recording audio is another story. Anything more complex than a simple, single-track voice recording is still going to require specialized mixing hardware. And transferring your recording to a remote server in real-time, without lossy compression? You'd better be sitting on fiber.
So I think we can put "recording studios" -- even the home-office variety, like mine -- into the category of Stuff That's Not Going Anywhere for Awhile.
Moving games to a streaming system is a challenge -- but I'm not sure it's as big a challenge as recording studios. It's more or less the same requirement as Photoshop: take simple inputs from a human interface device, send them to a server, have the server run them and respond accordingly, stream the video output back to the client. The trick is managing to do that in real-time with minimal loss of audio and video quality. That's the challenge -- but engineers are working on it.
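That loop -- read an input, ship it to the server, get a frame back -- is trivial to sketch; the engineering challenge is doing it in a handful of milliseconds. A toy Python sketch of the client side, with a stand-in function where the network round-trip to the real server would be:

```python
import time

def remote_game_server(input_event):
    """Stand-in for the remote server: in a real service this runs
    the game simulation and returns the next encoded video frame."""
    return f"frame after {input_event}"

def client_loop(events):
    """The thin client does nothing but forward inputs and display
    the frames that come back; it never holds the game itself."""
    frames = []
    for event in events:
        start = time.monotonic()
        frame = remote_game_server(event)   # network round-trip in real life
        latency = time.monotonic() - start  # the number that makes or breaks it
        frames.append((frame, latency))
    return frames
```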
The OnLive streaming service was a failure, but Sony bought it out; it sees value there. nVidia's got its own streaming solution too, in GRID. One of these things is not like the other -- Sony sells consoles at a loss and would stand to benefit from selling cheaper hardware, while nVidia makes a ton of money selling expensive graphics cards to enthusiasts and surely doesn't want to cannibalize its own market -- but obviously there's more than one type of gamer, and the people who shell out over $300 for a graphics card are in the minority.
Now, as minorities go, high-end PC gamers are still a pretty sizable minority; it's still a multibillion-dollar industry. But it's a fraction of the console gaming business, and it's expected to be surpassed by mobile gaming by the end of this year. Like the PC industry as a whole, it's still big and it's still growing, but it's growing a lot slower than other sectors and could be facing a long-term threat from new platforms.
Switching to a streaming platform could have a lot of appeal to game publishers; it combines the simplicity of developing for consoles with the superior hardware capabilities of the PC. Think about the possibility of developing for the latest and greatest hardware, but only for a single specific hardware build.
It would also, at long last, produce a form of DRM that could actually work.
While the industry has tried many, many copy protection schemes over the years, all of them are, sooner or later (and usually sooner), crackable. And there's a simple, logical reason for this: no matter what you do to encrypt the data of your program, you have to give the computer the means to decrypt it, or it won't work. No matter where or how you hide the key, if you give it to your users sooner or later they're going to find it.
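The argument is easy to demonstrate: however a local program obfuscates its key, the key has to be in there somewhere. A toy Python sketch (XOR stands in for real encryption; the point generalizes to any scheme where decryption happens client-side):

```python
HIDDEN_KEY = 0x5A  # "hidden" inside the shipped binary -- but it ships

def decrypt(blob, key=HIDDEN_KEY):
    """The client must be able to do this, or the game won't run --
    which means the user's machine necessarily holds the key."""
    return bytes(b ^ key for b in blob)

# Anyone who can run the program can also extract the key and do this:
encrypted = bytes(b ^ HIDDEN_KEY for b in b"game code")
assert decrypt(encrypted) == b"game code"
```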
But that's only true if the software is running on their computer. If the binary data is never copied to their hard drive, never stored in their memory, if the program is actually stored and run on a remote server somewhere and all the client has access to is a program that takes inputs and streams audio and video? Well, then there's no way they can copy the game, unless they actually break into your servers.
(Which, given Sony's history with Internet security, might not actually be so hard.)
I am not saying this is a good thing; in fact, I consider it something of a nightmare scenario.
Consider every problem you've ever had with an online or digitally-distributed game. Now think of what it would look like if every game had those issues.
Not just latency, lag, server outages, and losing your progress every time your Internet connection goes out. Consider that if a game is no longer profitable, they'll pull the plug. If a developer loses a license, the game(s) associated with it will go away. (Was GoldenEye ever released on Virtual Console? I don't think it was.) If a game gets updated and you liked the old version better, too bad. And remember when Nintendo ended its partnership with GameSpy and killed all the online multiplayer features of every Wii and DS game ever made? Imagine an entire generation's worth of games not working at all anymore, online or otherwise. Even though you paid for them.
Now, there's recent evidence that a strategy like this would fail. The Xbox One is still reeling from customer backlash against early plans to restrict used-game sales and require an always-on Internet connection even for single-player games -- plans that were never actually implemented.
On the other hand, there's evidence that even a wildly unpopular strategy could still succeed. Have you ever heard anyone who doesn't work for EA praise the Origin distribution service (or whatever the fuck they're calling it now)? I know I haven't, but people still use it. Because if you want to play Mass Effect 3 or Dragon Age: Inquisition, your only choices are consoles, Origin, and piracy.
And then there are examples that could go either way: Ubisoft used DRM that required an always-on Internet connection for about two years, from 2010 to 2012, before finally giving in to market backlash.
It's hard to say how existing high-end PC gamers would react if the major publishers tried to force a transition toward streaming games -- or whether high-end PC gamers will continue to be a big enough market for the major publishers to care what they think. But for the foreseeable future, I think PC gaming will continue on much the same as it has for the past 15 years. There could be major changes on the horizon, but I sure don't see them happening in the next 10 years.
Then again, five years ago I was saying there was no way that streaming video would outpace Blu-Ray because there was just no way to stream 1080p video over a home Internet connection. So keep that in mind before trusting any predictions I make.
Over the past couple of posts, I've given some of the reasons I think tablet PC's could replace traditional desktops and laptops. Today I'm going to talk about why I don't think that's going to happen anytime soon: the business market.
In enterprise, Microsoft still rules the roost, with Windows and Office. The lock-in is strong.
And while the BYOD trend isn't likely to go away, and in fact there are some major features of Android Marshmallow that are designed to make it easier to use a phone with dual profiles for home and work, that's still a far cry from replacing work-provided computers with devices that workers bring from home.
And there's a simple reason why: whatever costs a company incurs by buying a computer for every one of its employees are offset by standardizing on hardware and software to make IT's job easier. When everybody's running the same few programs on the same few models of computer, it limits the number of potential compatibility issues. When every computer is running the same stock image, it's easier to control devices' security, and when IT pushes every software update, it limits the possibility that the latest patch will break anything. And when a computer does break down, it makes it easy to replace it with a machine of the same model with all the same software and settings.
And when data is stored on an internal company server, it's less vulnerable than if it's in somebody's Google Docs account, or Dropbox, or whatever the hell MS and Apple call that thing where your home directory automatically gets uploaded to their servers now.
And that's just talking general best-practices for every company. You start getting into companies (and government agencies) where security is tightly restricted, whether that be military, intelligence, healthcare, or just a lot of sensitive proprietary information, and there's no fucking way you're going to allow people to use their personal devices for work.
(Unless you're the Director of the CIA and send confidential information to your personal fucking AOL account. But I digress.)
All that said, business has already started transitioning away from desktops to laptops, and I can foresee the possibility of Windows-based convertible tablets like the Lenovo Yoga and the MS Surface picking up some traction. I don't think it would be a BYOD scenario; I don't think businesses are apt to move their operations over to workers' own personal tablets -- but they could eventually start equipping every worker with a company-supplied tablet instead of a company-supplied laptop.
But first, prices are going to have to drop. The reason laptops passed desktops is that their prices approached parity; that hasn't happened with tablets yet. You can get a respectable mid-range Lenovo laptop for under $400; you can get a Lenovo tablet with a keyboard in that price range, but it's going to come with pretty anemic specs. 2GB RAM and 32GB internal storage is okay for a tablet, and might work for a device you only use when you're traveling, but I don't think a machine like that is good enough to use as a daily driver, even for end-users who only need Windows and Office. If you want a convertible tablet with comparable specs to a mid-range laptop, you can expect to pay 3 times as much -- at least, for now. Moore's Law is still in effect, and that gap's going to close, just like the gap between desktops and laptops did.
There's one more factor that can make the puny specs of a 32GB tablet moot: apps that run in a browser instead of locally. Office 365 could potentially replace the traditional client-side version of MS Office for business users.
But most business users don't just use Microsoft Office. I've worked at companies both big and small, and nearly all of them rely on some sort of ancient proprietary program for day-to-day work -- often several. Transitioning from an already-long-in-the-tooth program to a new one that performs the same functions but runs on a server is not a quick, easy, or cheap task.
I'll talk more about SaaS in the next post -- in particular, the challenges it faces in displacing high-performance applications like multimedia editors and games. I think it's making major inroads, but the business sector depends so heavily on legacy software that I just don't see it transitioning entirely to the cloud within the next decade. We'll have cost-competitive convertible tablets before we have every app in the cloud.
In my previous post, I established that, despite strides made in screen keyboards and speech-to-text programs, a hardware keyboard is still the best way to write text documents.
In this one, I'll look at how phones and tablets work as replacements for PC's.
Problem 3: Phones Are Still Phones
Of course, you can connect a phone to a computer monitor, and to a keyboard. Or to a game controller.
A while back, I hooked my phone up to my TV, paired it with my DualShock 4, and fired up Sonic 4.
The game ran fine -- I didn't like it very much but it ran fine.
And then my mom called me.
The game stopped, and my TV screen filled up with a message that I was getting a phone call. So I walked across the room, picked up my phone, disconnected it from my TV, and answered it.
This is not optimal behavior for a computer.
Now, there are possible ways to fix this.
Headsets and speakerphone are two ways to answer the phone without having it in your hand, but neither one is optimal. Speakerphone is often hard to hear and can have that awful echo. And as for headsets, well, do I carry one in my pocket? Do I keep one in every room where I might dock my phone and use it as a computer?
A better solution would be to "connect" your phone to a monitor and speakers wirelessly, maybe using a device like a Chromecast. That way you could keep it next to you, or in your pocket, while still editing documents, or playing Sonic 4, or whatever. And if the phone rang, you could answer it without losing whatever was on your screen -- handy when, say, I get a call and want to take notes with my keyboard, as frequently happens.
But the easier solution is probably to have the device that's connected to your keyboard and monitor(s) not be your phone. Especially if people continue to buy other devices, such as laptops or tablets.
Problem 4: Phone Interfaces Don't Make Good Desktop Interfaces
Windows 8. Do I even need to elaborate?
Microsoft tried to design an interface that would work on phones and on desktops. It was a huge failure.
This was entirely foreseeable. A 4" touchscreen is completely different from a pair of 1080p monitors with a keyboard and mouse attached to them. An interface designed for the former is a lousy fit for the latter, and vice-versa.
So, with Windows 10, Microsoft tried something else, and something altogether more sensible: the OS was designed with a phone/tablet interface and a desktop computer interface, with the ability to switch between the two. If you connect your phone to a dock that's hooked up to a monitor, a keyboard, and a mouse, then the interface changes to desktop mode.
Which is a good idea (and one that Canonical has been moving toward for years), but Windows Phone hasn't exactly set the world on fire (and Ubuntu Phone isn't a thing that anybody seems to want). Windows tablets, on the other hand, including Lenovo's Yoga series and MS's own Surface line, have fared much better.
Google's moving toward this sort of convergence too; it hasn't gotten as far as MS or Canonical yet, but there have been hints of future compatibility between Android and ChromeOS.
Ah yes, ChromeOS -- and the return to dumb terminals running server-side programs.
I think that's going to be key to bringing a few of the major special-case users on board with the transition to lower-powered systems: gamers and media designers.
We'll get to them soon. But in the next post, I'll be looking at the market that's really going to continue driving PC sales: business.
…said every horse-and-buggy salesman in 1900 ever.
Which, okay, doesn't actually make a whole lot of sense. (In fact I am fairly confident that very few horse-and-buggy salesmen in 1900 ever said "If anyone believes PC is dead they need to get their head checked" and then linked to Wikipedia.) But, like many shitty analogies do, it got me thinking about why it was a shitty analogy.
Mainly, I don't think the PC will go away to the extent that horse-drawn carriages have. I think it's possible that tablets could completely replace desktop and laptop computers, but I don't think that can happen until they effectively duplicate the functionality of PC's -- in effect not actually replacing PC's but becoming them.
General Case: Typical End Users
While it's easy to point to the rise of the smartphone as the reason for declining PC sales, it's only one of the reasons. There's another one: the last processor most end users will ever need was released in 2006.
A typical end user only needs a few things in a PC: a web browser, an office suite, music, and videos. (And those last three are, increasingly, integrated into the first one; I'll circle back to that in a later post.)
In 2006, Intel released the Core 2 Duo, which, paired with even a low-end onboard graphics chip, could handle HD video and drive two 1920x1080 monitors. And it's 64-bit, so it can address more than the roughly 3GB of usable RAM that 32-bit systems top out at.
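For the curious, the arithmetic behind that cap is simple: 32 address bits give 2^32 distinct byte addresses, or exactly 4 GiB of address space, and memory-mapped hardware eats into it. A quick sketch -- the "reserved" figure here is a rough assumption for illustration, not a measurement; the real number varies by machine:

```python
# 32 address bits -> 2**32 distinct byte addresses = 4 GiB of address space.
address_space = 2**32
assert address_space == 4 * 2**30  # exactly 4 GiB

# Memory-mapped hardware (video RAM, PCI devices, firmware) occupies part
# of that space, so a 32-bit OS typically ends up with ~3GB for actual RAM.
reserved = 1 * 2**30  # assumed ~1 GiB reserved; varies by machine
usable = address_space - reserved
print(usable / 2**30)  # 3.0
```

A 64-bit processor sidesteps the whole problem: its address space is 2^64 bytes, vastly more than any RAM you could install.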
There have been plenty more, and plenty better, processors in the 9 years since. But they're not better for people who only use their computer for browsing, Office, listening to music, and watching videos. The Core 2 Duo was good enough for them.
There are people who greatly benefit from newer and better processors -- gamers and people who produce media rather than just consuming it. But they're special cases; I'll get to them later. For the average user, the difference between a Core 2 Duo and a Core i7 isn't even noticeable.
The computer industry grew up in the 1990's around the expectation that people would upgrade their computer every few years to handle new software. And people just don't do that anymore. They buy a new PC when the old one quits working; not before.
But, at least at this point, they still need a PC. People may be buying more phones than PC's, but, at least in America, a phone is not a replacement for a PC.
Problem 1: Screen Keyboards
Screen keyboards are a pain in the ass.
They're fine for short communication -- text messages and tweets -- but they're just too slow and imprecise for long-form writing. (I thought of writing this post entirely on a screen keyboard -- like last week's handwritten post -- but I think that would make me want to gouge my eyes out.)
There are still plenty of requirements for longform writing in day-to-day life -- reports for school and reports for work, for starters. And that's even in professions where you don't have to write for a living, never mind ones where you do. People who write articles, and especially people who write books, are best served with a keyboard to type on.
And maybe that won't always be the case. Maybe kids growing up with screen keyboards aren't learning to type on traditional keyboards; maybe they're faster with screen keyboards than they are with hardware ones. Maybe, within a generation, we will see essays, reports, articles, even books, all written with screen keyboards. I suspect that if we do, they'll look a whole lot different than they do today.
Or maybe screen keyboards will get better. Maybe predictions and autocorrect will improve. Maybe a faster paradigm than qwerty + swipe will catch on. There's a lot that can happen in this space.
Or maybe we won't be using keyboards at all.
Problem 2: Speech-to-Text
Speech recognition software has grown by leaps and bounds. Terry Pratchett used Dragon Dictate and TalkingPoint to write his last few novels.
But being good enough for a first draft, for a user who is no longer physically capable of using a keyboard, isn't the same thing as being able to recognize a full range of standard and nonstandard grammars and sentence structures, pick correct homonyms, and understand slang and regional dialects. (Pratchett liked to tell the story of how he had to train his dictation software to recognize the word "arsehole".)
Speech-to-text software might be good enough for simple, clear documents, such as manuals, lists, daily work logs, AP-style newsbriefs, and technical writing (provided you're writing on a subject that doesn't rely on a lot of jargon missing from a standard dictionary). But for writing that's meant to convey personality -- editorials, reviews, fiction, even this blog post -- speech-to-text algorithms have a long way to go.
So, for now at least, a good old hardware keyboard remains the best way to input large blocks of text into a computer. In my next post, I'll examine why a dedicated PC is still the best thing to connect to that keyboard, and how phone and tablet OS's are (or aren't) working to bridge that gap.
Yesterday I ran across two 2013 articles about books, literacy, and libraries in the Guardian, one by Neil Gaiman and the other by Susan Cooper. The Gaiman one is excellent, but I was disappointed by Cooper's, partly because it digresses substantially from its point, but mostly because of a couple of paragraphs I can't stop thinking about. She starts off quoting a talk she gave in 1990:
"We – teachers, librarians, parents, authors – have a responsibility for the imagination of the child. I don't mean we have to educate it – you can't do that, any more than you can teach a butterfly how to fly. But you can help the imagination to develop properly, and to survive things that may threaten it: like the over-use of computers and everything I classify as SOS, Stuff on Screens. I do realize that the Age of the Screen has now replaced the Age of the Page. But on all those screens there are words, and in order to linger in the mind, words still require pages. We are in grave danger of forgetting the importance of the book."
She continues:

"All that was 23 years ago and it's all still true. The screens have just grown smaller, and multiplied. In America, there are already a few digital schools, which have no books, not even in the library. And in schools across America, so many children now work on laptops or tablet computers that cursive handwriting is no longer being taught. Maybe that's also happening here. I suppose that's not the end of the world; lots of authors write their first drafts on a computer, though I'm certainly not one of them. But there's something emblematic about handwriting, with its direct organic link between the imagining brain and the writing fingers. Words aren't damaged by technology. But what about the imagination?

"I am not a luddite. I've written screenplays for small and large screens. I love my computer. But as you can tell, this last author of the weekend is offering an unashamed plea for words on pages, for the small private world of a child curled up with a book, his or her imagination in direct communication with the imagination of the person who wrote the words on the page."
I have a great deal of respect for Ms. Cooper. The Dark is Rising Sequence meant a lot to me when I was a kid. And I absolutely agree with her premise that books and libraries are vital and that we must continue to treasure, support, and protect them, even in an increasingly digital world.
But her handwringing about Kids Today and their Screens just strikes me as a bunch of Old Person Nonsense.
At least she acknowledges that the decline of cursive is no big deal.
I heard my aunt bemoan the lack of cursive education in schools recently. My response was, "What the hell do kids need to know cursive for?" It's harder to read than print, it's (at least for me) harder and slower to write than print, and in the twenty-first century it's about as essential a communication skill as Latin. It may be an interesting subject to study, but it's hardly a necessary one.
In sixth grade, I had two teachers who wouldn't let us submit typed papers. Everything had to be written in ink, in cursive. One of them even had the gall to justify this restriction by saying "This is how adults communicate."
Well, it's twenty-one years later, and you know what? I can't think of a single time in my adult life that I've ever written anything in cursive. I don't even sign my name in cursive.
You know what I, as an adult, do use to communicate, each and every single day of my life? A goddamn computer.
I'm a Millennial. At least, I think I am; nobody seems to agree on just what the fuck a Millennial is, exactly. But consensus seems to be that I'm on the older end of the Millennial Generation, and I certainly seem to fit a lot of the generalizations people make about Millennials.
I've been online since I was six years old (though I didn't have a smartphone until I was almost 30); I grew up with Stuff on Screens.
And that means I read a lot.
As far as Stuff on Screens and literacy, I'm inclined to agree with Randall Munroe:
I'd like to find a corpus of writing from children in a non-self-selected sample (e.g. handwritten letters to the president from everyone in the same teacher's 7th grade class every year)--and score the kids today versus the kids 20 years ago on various objective measures of writing quality. I've heard the idea that exposure to all this amateur peer practice is hurting us, but I'd bet on the generation that conducts the bulk of their social lives via the written word over the generation that occasionally wrote book reports and letters to grandma once a year, any day.
Millennials read all the time, and we write all the time. And that promotes the hell out of literacy, no matter how goddamn annoying it is to see somebody spell the word "you" with only one letter.
As for Cooper's contention that people experience a closer bond with words on paper than with words on screens, research suggests that distinction is fading as more and more people become accustomed to screens. Via Scientific American:
Since at least the 1980s researchers in many different fields—including psychology, computer engineering, and library and information science—have investigated such questions in more than one hundred published studies. The matter is by no means settled. Before 1992 most studies concluded that people read slower, less accurately and less comprehensively on screens than on paper. Studies published since the early 1990s, however, have produced more inconsistent results: a slight majority has confirmed earlier conclusions, but almost as many have found few significant differences in reading speed or comprehension between paper and screens. And recent surveys suggest that although most people still prefer paper—especially when reading intensively—attitudes are changing as tablets and e-reading technology improve and reading digital books for facts and fun becomes more common. In the U.S., e-books currently make up between 15 and 20 percent of all trade book sales.
Now, there are ways in which physical books are superior to digital ones. One is DRM. DRM is a blight; it is a threat to libraries, to academia, to preservation, and to the very concept of ownership.
But it's also optional. It's not an inherent part of ebooks; it's bullshit added to them by assholes. And I suspect that, within a generation, it will be gone, just as music DRM has been gone for about a decade now.
There's one more case where paper books are superior to digital ones: pictures. I've already spoken at length about comic books shrunk to fit a 10" screen, as well as the color problems that can arise when they're not printed on the same paper stock they were designed for. The same goes for picture books, art books, photo books; for magazines whose layouts are designed for the printed page. When you put these things on a small screen, you do lose something tangible (and if you put them on a large screen, you lose portability).
On the other hand, I've currently got some 173 books and 362 comics on a 10" rectangle that fits in my backpack, and that is amazing.
People carry libraries in their pockets now. That's not a threat to literacy, it's a boon -- so long as voters and politicians understand that these portable libraries are not meant to replace the traditional kind, but to supplement them.
But for people who love books -- at least, people of my generation who love books -- it's not an either-or question. It's not "Should I read paper books, or digital ones?" It's "Holy shit, look at all the books I have access to, and all the different ways I can read them!"
The first iPhone was released in 2007. It's too early to gauge what long-term effects, nationally or internationally, the smartphone revolution will have on literacy and reading habits.
But I'm more inclined to agree with Munroe than Cooper: a generation that's reading and writing all the time is going to be better at reading and writing than one that isn't. Even if you think they're doing it wrong.
An angry hat tip to Scott Sharkey, who used to handwrite blog posts, which gave me the utterly terrible idea for this time-consuming pain-in-the-ass of a post. (Granted, I'm pretty sure he had the good sense never to do it with a six-page essay with working links.)
Also, the part where I printed an image and then re-scanned it is kind of like something this one angry lady on a My Little Pony fan site did once.
(And yes, I'm aware that I forgot to use blue pencil for the Scientific American link. I am not going back and redoing it. That's the thing about writing stuff out on paper: it's kinda tough to add formatting to something after you've already written it.)
This morning I went into my bathroom and there were ants wandering around. They hadn't formed a line yet, but there were maybe a dozen, moving around and exploring. I didn't see them in any other rooms; I couldn't find where they were coming from but I think it was probably under the floor.
I squished as many as I could see (and took the bathroom rug out and threw it in the wash), but when I came back, more had come; there were about the same number as the first time.
I squished them again; more came again.
Then I had a bright idea: I turned my Roomba loose in the bathroom and closed the door.
The next time I went in? No ants, living or dead.
I was pretty pleased with myself, until I saw the black widow spider in my shower, at which point I decided yeah maybe it is time to call an exterminator.