Month: October 2015

Where Will the PC Go? -- Part 4: SaaS

So, per the last couple of posts, I find it entirely possible that, as vendors develop tablets that double as PC's, they may replace traditional desktop and laptop computers. For the common end user who just needs a web browser and (maybe) an office suite, I don't think that's going to be a tough sell.

But there are markets that rely heavily on more powerful computing hardware.

One is PC gamers. Others are the various types of media creators: people who make images, music, and movies.

I've already mentioned dumb terminals and software as a service (SaaS) as a major current trend, with programs like Google Docs running in a browser and working as an effective substitute for traditional locally-run programs like Microsoft Word.

Of course, a word processor is one thing; an enterprise-quality photo editor is another, and a game requiring split-second timing is something else again.

But developers are working on it.

Photoshop

Last year Adobe released a limited beta of a streaming version of Photoshop for ChromeOS. Photoshop itself doesn't run in the browser; the app is a Remote Desktop shell that interacts with an instance of the Windows version of Photoshop running on a remote server.

So, by definition, this is no replacement for the Windows version of Photoshop -- because it is the Windows version of Photoshop. But it demonstrates a potentially compelling alternative to buying expensive, high-end hardware just to run Photoshop: what if you could buy cheap hardware, and pay a subscription fee to run Photoshop on someone else's expensive hardware?

Reactions to the ChromeOS version of Photoshop seemed generally positive; I would expect it to have some latency issues, but I also bet it runs faster on a remote server than it did on the Core 2 I had to use at GoDaddy. (Hey, when I said the Core 2 Duo was the last chip most users ever needed, I said I wasn't including Photoshop.)

Adobe has already moved Photoshop's licensing to a subscription model instead of a purchase model. (A lot of people are very angry about this, but I haven't heard anything to suggest it's led to a drop in "sales"; that's the thing about monopolies.) It's not hard to envision a transition to a subscription model where you run the program remotely instead of locally. Hell, they could even charge more money to give you access to faster servers.

A/V Club

Other media development suites could, potentially, move to streaming services, but there are caveats. Uploading raw, uncompressed digital audio and video takes a lot more time than uploading images does. And what about storing your source files? My grandmother puts together home movies on her iMac, and she's got terabytes of data going back some 15 years. That's the kind of storage requirement an amateur filmmaker can rack up; now think of how much somebody who does it for a living might wind up with. If you're renting storage space on an external server, on a month-to-month basis, that could get pretty costly.

But it's technically feasible, at least, that audio and video editing could be performed on a remote server.

Recording audio is another story. Anything more complex than a simple, single-track voice recording is still going to require specialized mixing hardware. And transferring your recording to a remote server in real-time, without lossy compression? You'd better be sitting on fiber.
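
To put rough numbers on that (a back-of-the-envelope sketch; the 16-track, 24-bit/96kHz session is an assumption I picked for illustration):

    # Back-of-the-envelope: upstream bandwidth to stream a multitrack
    # session live, uncompressed. The session size is an assumption.
    tracks = 16
    sample_rate = 96000      # samples per second, per track
    bits_per_sample = 24

    bits_per_second = tracks * sample_rate * bits_per_sample
    print(bits_per_second / 1e6, "Mbps, sustained")  # 36.864 Mbps

Call it 37 Mbps of sustained upstream throughput, before any video monitoring or retransmission overhead -- several times the upload speed of a typical home connection.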

So I think we can put "recording studios" -- even the home-office variety, like mine -- into the category of Stuff That's Not Going Anywhere for a While.

Games

Moving games to a streaming system is a challenge -- but I'm not sure it's as big a challenge as the one facing recording studios. The requirement is more or less the same as Photoshop's: take simple inputs from a human interface device, send them to a server, have the server process them and respond accordingly, and stream the video output back to the client. The trick is managing to do all that in real time with minimal loss of audio and video quality -- and engineers are working on it.
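
As a minimal sketch of that loop -- a toy model with stand-in logic, not any real service's protocol -- the division of labor looks something like this:

    import time

    def server_step(game_state, input_state):
        """Server side: advance the game and 'encode' a frame of video."""
        game_state["x"] += input_state                    # apply the input
        return "frame: player at x=%d" % game_state["x"]  # stand-in for video

    def client_loop(inputs, fps=60):
        """Client side: never holds game code or assets -- it only sends
        inputs up and displays whatever frames come back."""
        game_state = {"x": 0}  # in reality this lives on the server
        for input_state in inputs:
            frame = server_step(game_state, input_state)  # the round trip
            print(frame)                                  # "display" it
            time.sleep(1 / fps)                           # pace to 60 Hz

    client_loop([1, 1, 0, -1])  # a few simulated button presses

Every displayed frame costs a full round trip to the server and back, which is why latency, more than bandwidth, is the number that makes or breaks the experience.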

The OnLive streaming service was a failure, but Sony bought it out; it sees value there. nVidia's got its own streaming solution too, in GRID. One of these things is not like the other -- Sony sells consoles at a loss and would stand to benefit from selling cheaper hardware, while nVidia makes a ton of money selling expensive graphics cards to enthusiasts and surely doesn't want to cannibalize its own market -- but obviously there's more than one type of gamer, and the people who shell out over $300 for a graphics card are in the minority.

Now, as minorities go, high-end PC gamers are still a pretty sizable minority; it's still a multibillion-dollar industry. But it's a fraction of the console gaming business, and it's expected to be surpassed by mobile gaming by the end of this year. Like the PC industry as a whole, it's still big and it's still growing, but it's growing a lot slower than other sectors and could be facing a long-term threat from new platforms.

Switching to a streaming platform could have a lot of appeal to game publishers; it combines the simplicity of developing for consoles with the superior hardware capabilities of the PC. Think about the possibility of developing for the latest and greatest hardware, but only for a single specific hardware build.

It would also, at long last, produce a form of DRM that could actually work.

While the industry has tried many, many copy protection schemes over the years, all of them are, sooner or later (and usually sooner), crackable. And there's a simple, logical reason for this: no matter what you do to encrypt the data of your program, you have to give the computer the means to decrypt it, or it won't work. No matter where or how you hide the key, if you give it to your users sooner or later they're going to find it.
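
You can see the shape of that logic in a toy example (deliberately silly XOR "encryption", nothing like a real scheme, but the principle generalizes):

    # However the program's data is encrypted, the shipped product has to
    # include the means to decrypt it -- so the user has those means too.
    KEY = 0x5A  # ships inside the binary, because the game needs it to run

    def decrypt(blob):
        return bytes(b ^ KEY for b in blob)

    shipped = bytes(b ^ KEY for b in b"TOP SECRET GAME DATA")

    # Anything the game can do at runtime, a user with a debugger can do:
    print(decrypt(shipped))  # b'TOP SECRET GAME DATA'

A real scheme hides its key far more cleverly than this, but hiding only raises the cost of finding it.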

But that's only true if the software is running on their computer. If the binary data is never copied to their hard drive, never stored in their memory, if the program is actually stored and run on a remote server somewhere and all the client has access to is a program that takes inputs and streams audio and video? Well, then there's no way they can copy the game, unless they actually break into your servers.

(Which, given Sony's history with Internet security, might not actually be so hard.)

I am not saying this is a good thing; in fact, I consider it something of a nightmare scenario.

Consider every problem you've ever had with an online or digitally-distributed game. Now think of what it would look like if every game had those issues.

Not just latency, lag, server outages, and losing your progress every time your Internet connection goes out. Consider that if a game is no longer profitable, they'll pull the plug. If a developer loses a license, the game(s) associated with it will go away. (Was GoldenEye ever released on Virtual Console? I don't think it was.) If a game gets updated and you liked the old version better, too bad. And remember when Nintendo ended its partnership with GameSpy and killed all the online multiplayer features of every Wii and DS game ever made? Imagine an entire generation's worth of games not working at all anymore, online or otherwise. Even though you paid for them.

Now, there's recent evidence that a strategy like this would fail. The Xbox One is still reeling from customer backlash against early plans to restrict used-game sales and require an always-on Internet connection even for single-player games -- plans that were never actually implemented.

On the other hand, there's evidence that even a wildly unpopular strategy could still succeed. Have you ever heard anyone who doesn't work for EA praise the Origin distribution service (or whatever the fuck they're calling it now)? I know I haven't, but people still use it. Because if you want to play Mass Effect 3 or Dragon Age: Inquisition, your only choices are consoles, Origin, and piracy.

And then there are examples that could go either way: Ubisoft continued to use DRM that required an always-on Internet connection for about two years, from 2010 to 2012, before finally giving in to market backlash.

It's hard to say how existing high-end PC gamers would react if the major publishers tried to force a transition toward streaming games -- or whether high-end PC gamers will continue to be a big enough market for the major publishers to care what they think. But for the foreseeable future, I think PC gaming will continue on much the same as it has for the past 15 years. There could be major changes on the horizon, but I sure don't see them happening in the next 10 years.

Then again, five years ago I was saying there was no way that streaming video would outpace Blu-Ray because there was just no way to stream 1080p video over a home Internet connection. So keep that in mind before trusting any predictions I make.

Where Will the PC Go? -- Part 3: Business

Over the past couple of posts, I've given some of the reasons I think tablet PC's could replace traditional desktops and laptops. Today I'm going to talk about why I don't think that's going to happen anytime soon: the business market.

In enterprise, Microsoft still rules the roost, with Windows and Office. The lock-in is strong.

And while the BYOD trend isn't likely to go away -- Android Marshmallow even includes major features designed to make it easier to use one phone with separate profiles for home and work -- that's still a far cry from replacing work-provided computers with devices that workers bring from home.

And there's a simple reason why: whatever costs a company incurs by buying a computer for every one of its employees are offset by standardizing on hardware and software to make IT's job easier. When everybody's running the same few programs on the same few models of computer, it limits the number of potential compatibility issues. When every computer is running the same stock image, it's easier to control devices' security, and when IT pushes every software update, it limits the possibility that the latest patch will break anything. And when a computer does break down, it makes it easy to replace it with a machine of the same model with all the same software and settings.

And when data is stored on an internal company server, it's less vulnerable than if it's in somebody's Google Docs account, or Dropbox, or whatever the hell MS and Apple call that thing where your home directory automatically gets uploaded to their servers now.

And that's just talking general best practices for every company. You start getting into companies (and government agencies) where security is tightly controlled -- military, intelligence, healthcare, or just a lot of sensitive proprietary information -- and there's no fucking way you're going to allow people to use their personal devices for work.

(Unless you're the Director of the CIA and send confidential information to your personal fucking AOL account. But I digress.)

Convertibles

All that said, business has already started transitioning away from desktops to laptops, and I can foresee the possibility of Windows-based convertible tablets like the Lenovo Yoga and the MS Surface picking up some traction. I don't think it would be a BYOD scenario; I don't think businesses are apt to move their operations over to workers' own personal tablets -- but they could eventually start equipping every worker with a company-supplied tablet instead of a company-supplied laptop.

But first, prices are going to have to drop. The reason laptops passed desktops is that their prices approached parity; that hasn't happened with tablets yet. You can get a respectable mid-range Lenovo laptop for under $400; you can get a Lenovo tablet with a keyboard in that price range, but it's going to come with pretty anemic specs. 2GB RAM and 32GB internal storage is okay for a tablet, and might work for a device you only use when you're traveling, but I don't think a machine like that is good enough to use as a daily driver, even for end-users who only need Windows and Office. If you want a convertible tablet with comparable specs to a mid-range laptop, you can expect to pay 3 times as much -- at least, for now. Moore's Law is still in effect, and that gap's going to close, just like the gap between desktops and laptops did.

SaaS

There's one more factor that can make the puny specs of a 32GB tablet moot: apps that run in a browser instead of locally. Office 365 could potentially replace the traditional client-side version of MS Office for business users.

But most business users don't just use Microsoft Office. I've worked at companies both big and small, and nearly all of them have some sort of ancient proprietary program that they rely on for day-to-day use -- often several. Transitioning from an already-long-in-the-tooth program to a new one that provides the same features but runs on a server is not a quick, easy, or cheap task.

I'll talk more about SaaS in the next post -- in particular, the challenges it faces in displacing high-performance applications like multimedia editors and games. I think it's making major inroads, but the business sector depends so heavily on legacy software that I just don't see it transitioning entirely to the cloud within the next decade. We'll have cost-competitive convertible tablets before we have every app in the cloud.

Where Will the PC Go? -- Part 2: Possible Solutions

In my previous post, I established that, despite strides made in screen keyboards and speech-to-text programs, a hardware keyboard is still the best way to write text documents.

In this one, I'll look at how phones and tablets work as replacements for PC's.

Problem 3: Phones Are Still Phones

Of course, you can connect a phone to a computer monitor, and to a keyboard. Or to a game controller.

A while back I hooked my phone up to my TV, and paired it to my DualShock 4, and fired up Sonic 4.

The game ran fine -- I didn't like it very much but it ran fine.

And then my mom called me.

The game stopped, and my TV screen filled up with a message that I was getting a phone call. So I walked across the room, picked up my phone, disconnected it from my TV, and answered it.

This is not optimal behavior for a computer.

Now, there are possible ways to fix this.

Headsets and speakerphone are two ways to answer the phone without having it in your hand, but neither one is optimal. Speakerphone is often hard to hear and can have that awful echo. And as for headsets, well, do I carry one in my pocket? Do I keep one in every room where I might dock my phone and use it as a computer?

A better solution would be to "connect" your phone to a monitor and speakers wirelessly, maybe using a device like a Chromecast. That way you could keep it next to you, or in your pocket, while still editing documents, or playing Sonic 4, or whatever. And if it rang, you could answer it, and not lose whatever was on your screen -- say I get a call where I want to take notes with my keyboard (as frequently happens); there could be a way to do that.

But the easier solution is probably to have the device that's connected to your keyboard and monitor(s) not be your phone. Especially if people continue to buy other devices, such as laptops or tablets.

Problem 4: Phone Interfaces Don't Make Good Desktop Interfaces

Windows 8. Do I even need to elaborate?

Microsoft tried to design an interface that would work on phones and on desktops. It was a huge failure.

This was entirely foreseeable. A 4" touchscreen is completely different from a pair of 1080p monitors with a keyboard and mouse attached to them. An interface designed for the former is a lousy fit for the latter, and vice-versa.

So, with Windows 10, Microsoft tried something else, and something altogether more sensible: the OS was designed with a phone/tablet interface and a desktop computer interface, with the ability to switch between the two. If you connect your phone to a dock that's hooked up to a monitor, a keyboard, and a mouse, then the interface changes to desktop mode.

Which is a good idea (and one that Canonical has been moving toward for years), but Windows Phone hasn't exactly set the world on fire (and Ubuntu Phone isn't a thing that anybody seems to want). Windows tablets, on the other hand, including Lenovo's Yoga series and MS's own Surface line, have fared much better.

Google's moving toward this sort of convergence too; it hasn't gotten as far as MS or Canonical yet, but there have been hints of future compatibility between Android and ChromeOS.

Ah yes, ChromeOS -- and the return to dumb terminals running server-side programs.

I think that's going to be key to bringing a few of the major special-case users on board with the transition to lower-powered systems: gamers and media designers.

We'll get to them soon. But in the next post, I'll be looking at the market that's really going to continue driving PC sales: business.

Where Will the PC Go? -- Part 1: Identifying the Problem

The other day, Ars Technica posted an article called Cringe-worthy “PC Does What?” campaign wants you to upgrade, about a new ad campaign the PC industry is pushing to try and convince users to buy new computers.

The PC industry is in trouble. It's built around a pattern of regular upgrades that customers just aren't buying anymore. And it's trying whatever it can to stop the bleeding.

On the other hand, rumors of its demise have been greatly exaggerated. In the comments thread on the Ars article, someone named erikbc said:

Well, if anyone believes PC is dead they need to get their head checked.
And understand some numbers:

https://en.wikipedia.org/wiki/Usage_share_of_operating_systems#Desktop_and_laptop_computers

To which another user responded:

…said every horse-and-buggy salesman in 1900 ever.

Which, okay, doesn't actually make a whole lot of sense. (In fact I am fairly confident that very few horse-and-buggy salesmen in 1900 ever said "If anyone believes PC is dead they need to get their head checked" and then linked to Wikipedia.) But, like many shitty analogies do, it got me thinking about why it was a shitty analogy.

Mainly, I don't think the PC will go away to the extent that horse-drawn carriages have. I think it's possible that tablets could completely replace desktop and laptop computers, but I don't think that can happen until they effectively duplicate the functionality of PC's -- in effect not actually replacing PC's but becoming them.

General Case: Typical End Users

While it's easy to point to the rise of the smartphone as the reason for declining PC sales, it's only one of the reasons. There's another one: the last processor most end users will ever need was released in 2006.

A typical end user only needs a few things in a PC: a web browser, an office suite, music, and videos. (And those last three are, increasingly, integrated into the first one; I'll circle back to that in a later post.)

In 2006, Intel released the Core 2 Duo, which, paired with even a low-end onboard graphics chip, could handle HD video and drive two 1920x1080 monitors. And it's 64-bit, so it isn't bound by the 4GB address-space ceiling of 32-bit processors (in practice, about 3GB of usable RAM under a 32-bit OS).

There have been plenty more, and plenty better, processors in the 9 years since. But they're not better for people who only use their computer for browsing, Office, listening to music, and watching videos. The Core 2 Duo was good enough for them.

There are people who greatly benefit from newer and better processors -- gamers and people who produce media rather than just consuming it. But they're special cases; I'll get to them later. For the average user, the difference between a Core 2 Duo and a Core i7 isn't even noticeable.

The computer industry grew up in the 1990's around the expectation that people would upgrade their computer every few years to handle new software. And people just don't do that anymore. They buy a new PC when the old one quits working; not before.

But, at least at this point, they still need a PC. People may be buying more phones than PC's, but, at least in America, a phone is not a replacement for a PC.

Problem 1: Screen Keyboards

Screen keyboards are a pain in the ass.

They're fine for short communication -- text messages and tweets -- but they're just too slow and imprecise for long-form writing. (I thought of writing this post entirely on a screen keyboard -- like last week's handwritten post -- but I think that would make me want to gouge my eyes out.)

There are still plenty of requirements for longform writing in day-to-day life -- reports for school and reports for work, for starters. And that's even in professions where you don't have to write for a living, never mind ones where you do. People who write articles, and especially people who write books, are best served with a keyboard to type on.

And maybe that won't always be the case. Maybe kids growing up with screen keyboards aren't learning to type on traditional keyboards; maybe they're faster with screen keyboards than they are with hardware ones. Maybe, within a generation, we will see essays, reports, articles, even books, all written with screen keyboards. I suspect that if we do, they'll look a whole lot different than they do today.

Or maybe screen keyboards will get better. Maybe predictions and autocorrect will improve. Maybe a faster paradigm than qwerty + swipe will catch on. There's a lot that can happen in this space.

Or maybe we won't be using keyboards at all.

Problem 2: Speech-to-Text

Speech recognition software has grown by leaps and bounds. Terry Pratchett used Dragon Dictate and TalkingPoint to write his last few novels.

But being good enough for a first draft, for a user who is no longer physically capable of using a keyboard, isn't the same thing as being able to recognize a full range of standard and nonstandard grammars and sentence structures, pick correct homonyms, and understand slang and regional dialects. (Pratchett liked to tell the story of how he had to train his speech-recognition software to recognize the word "arsehole".)

Speech-to-text software might be good enough for simple, clear documents, such as manuals, lists, daily work logs, AP-style newsbriefs, and technical writing (provided you're writing on a subject that doesn't have a lot of jargon words that don't appear in a simple dictionary). But for writing that's meant to convey personality -- editorials, reviews, fiction, even this blog post -- speech-to-text algorithms have a long way to go.

So, for now at least, a good old hardware keyboard remains the best way to input large blocks of text into a computer. In my next post, I'll examine why a dedicated PC is still the best thing to connect to that keyboard, and how phone and tablet OS's are (or aren't) working to bridge that gap.

Updated

The migration should be complete and everything should be here. If you trip over any missing images, wrong formatting, etc., contact me.

Magical Disappearing Posts

My hosting company is in the process of moving servers and I didn't get the memo; all my stuff was copied over a week ago but I haven't changed DNS settings yet. So if some posts disappear and reappear, that's why; normal service should resume within the next couple of days.

Stuff On Screens


This blog post is handwritten. Instead of alt text, the complete text is transcribed below.


Yesterday I ran across two 2013 articles about books, literacy, and libraries in the Guardian, one by Neil Gaiman and the other by Susan Cooper. The Gaiman one is excellent, but I was disappointed by Cooper's, partly because it digresses substantially from its point, but mostly because of a couple of paragraphs I can't stop thinking about. She starts off quoting a talk she gave in 1990:

"We – teachers, librarians, parents, authors – have a responsibility for the imagination of the child. I don't mean we have to educate it – you can't do that, any more than you can teach a butterfly how to fly. But you can help the imagination to develop properly, and to survive things that may threaten it: like the over-use of computers and everything I classify as SOS, Stuff on Screens. I do realize that the Age of the Screen has now replaced the Age of the Page. But on all those screens there are words, and in order to linger in the mind, words still require pages. We are in grave danger of forgetting the importance of the book."

All that was 23 years ago and it's all still true. The screens have just grown smaller, and multiplied. In America, there are already a few digital schools, which have no books, not even in the library. And in schools across America, so many children now work on laptops or tablet computers that cursive handwriting is no longer being taught. Maybe that's also happening here. I suppose that's not the end of the world; lots of authors write their first drafts on a computer, though I'm certainly not one of them. But there's something emblematic about handwriting, with its direct organic link between the imagining brain and the writing fingers. Words aren't damaged by technology. But what about the imagination?

I am not a luddite. I've written screenplays for small and large screens. I love my computer. But as you can tell, this last author of the weekend is offering an unashamed plea for words on pages, for the small private world of a child curled up with a book, his or her imagination in direct communication with the imagination of the person who wrote the words on the page.

I have a great deal of respect for Ms. Cooper. The Dark is Rising Sequence meant a lot to me when I was a kid. And I absolutely agree with her premise that books and libraries are vital and that we must continue to treasure, support, and protect them, even in an increasingly digital world.

But her handwringing about Kids Today and their Screens just strikes me as a bunch of Old Person Nonsense.

At least she acknowledges that the decline of cursive is no big deal.

I heard my aunt bemoan the lack of cursive education in schools recently. My response was, "What the hell do kids need to know cursive for?" It's harder to read than print, it's (at least for me) harder and slower to write than print, and in the twenty-first century it's about as essential a communication skill as Latin. It may be an interesting subject to study, but it's hardly a necessary one.

In sixth grade, I had two teachers who wouldn't let us submit typed papers. Everything had to be written in ink, in cursive. One of them even had the gall to justify this restriction by saying "This is how adults communicate."

Well, it's twenty-one years later, and you know what? I can't think of a single time in my adult life that I've ever written anything in cursive. I don't even sign my name in cursive.

You know what I, as an adult, do use to communicate, each and every single day of my life? A goddamn computer.

I'm a Millennial. At least, I think I am; nobody seems to agree on just what the fuck a Millennial is, exactly. But consensus seems to be that I'm on the older end of the Millennial Generation, and I certainly seem to fit a lot of the generalizations people make about Millennials.

I've been online since I was six years old (though I didn't have a smartphone until I was almost 30); I grew up with Stuff on Screens.

And that means I read a lot.

As for Stuff on Screens and literacy, I'm inclined to agree with Randall Munroe:

XKCD Writing Skills strip

I'd like to find a corpus of writing from children in a non-self-selected sample (e.g. handwritten letters to the president from everyone in the same teacher's 7th grade class every year)--and score the kids today versus the kids 20 years ago on various objective measures of writing quality. I've heard the idea that exposure to all this amateur peer practice is hurting us, but I'd bet on the generation that conducts the bulk of their social lives via the written word over the generation that occasionally wrote book reports and letters to grandma once a year, any day.

Millennials read all the time, and we write all the time. And that promotes the hell out of literacy, no matter how goddamn annoying it is to see somebody spell the word "you" with only one letter.

As for Cooper's contention that people experience a closer kind of bond with words on paper than with words on screens, research indicates that the distinction is shrinking as more and more people become accustomed to screens. Via Scientific American:

Since at least the 1980s researchers in many different fields—including psychology, computer engineering, and library and information science—have investigated such questions in more than one hundred published studies. The matter is by no means settled. Before 1992 most studies concluded that people read slower, less accurately and less comprehensively on screens than on paper. Studies published since the early 1990s, however, have produced more inconsistent results: a slight majority has confirmed earlier conclusions, but almost as many have found few significant differences in reading speed or comprehension between paper and screens. And recent surveys suggest that although most people still prefer paper—especially when reading intensively—attitudes are changing as tablets and e-reading technology improve and reading digital books for facts and fun becomes more common. In the U.S., e-books currently make up between 15 and 20 percent of all trade book sales.

Now, there are ways in which physical books are superior to digital ones. The first is the absence of DRM. DRM is a blight; it is a threat to libraries, to academia, to preservation, and to the very concept of ownership.

But it's also optional. It's not an inherent part of ebooks; it's bullshit added to them by assholes. And I suspect that, within a generation, it will be gone, just as music DRM has been gone for the better part of a decade now.

There's one more case where paper books are superior to digital ones: pictures. I've already spoken at length about comic books shrunk to fit a 10" screen, as well as the color problems that can arise when they're not printed on the same paper stock they were designed for. The same goes for picture books, art books, photo books; for magazines whose layouts are designed for the printed page. When you put these things on a small screen, you do lose something tangible (and if you put them on a large screen, you lose portability).

On the other hand, I've currently got some 173 books and 362 comics on a 10" rectangle that fits in my backpack, and that is amazing.

People carry libraries in their pockets now. That's not a threat to literacy, it's a boon -- so long as voters and politicians understand that these portable libraries are not meant to replace the traditional kind, but to supplement them.

But for people who love books -- at least, people of my generation who love books -- it's not an either-or question. It's not "Should I read paper books, or digital ones?" It's "Holy shit, look at all the books I have access to, and all the different ways I can read them!"

The first iPhone was released in 2007. It's too early to gauge what long-term effects, nationally or internationally, the smartphone revolution will have on literacy and reading habits.

But I'm more inclined to agree with Munroe than Cooper: a generation that's reading and writing all the time is going to be better at reading and writing than one that isn't. Even if you think they're doing it wrong.


An angry hat tip to Scott Sharkey, who used to handwrite blog posts, which gave me the utterly terrible idea for this time-consuming pain-in-the-ass of a post. (Granted, I'm pretty sure he had the good sense never to do it with a six-page essay with working links.)

Also, the part where I printed an image and then re-scanned it is kind of like something this one angry lady on a My Little Pony fan site did once.

(And yes, I'm aware that I forgot to use blue pencil for the Scientific American link. I am not going back and redoing it. That's the thing about writing stuff out on paper: it's kinda tough to add formatting to something after you've already written it.)

Pests

This morning I went into my bathroom and there were ants wandering around. They hadn't formed a line yet, but there were maybe a dozen, moving around and exploring. I didn't see them in any other rooms; I couldn't find where they were coming from but I think it was probably under the floor.

I squished as many as I could see (and took the bathroom rug out and threw it in the wash), but when I came back, more had come; there were about the same number as the first time.

I squished them again; more came again.

Then I had a bright idea: I turned my Roomba loose in the bathroom and closed the door.

The next time I went in? No ants, living or dead.

I was pretty pleased with myself, until I saw the black widow spider in my shower, at which point I decided yeah maybe it is time to call an exterminator.

Essex County is Really Good

As I mentioned a couple weeks ago in my post about the Humble Forbidden Comics Bundle, I bought the bundle partially because I'd been meaning to read Essex County. And now that I've read it, I can say with confidence that it was worth the $15 all by itself.

Essex County was the breakout hit for cartoonist Jeff Lemire; he went on to do Sweet Tooth (which is where I first discovered his work and became a fan), and then to become a pretty big name at DC and Valiant. Last I heard, he was acting more as a story architect across multiple titles and doing less of his own, smaller work as an artist; it's wonderful to see his success, but I have to admit I miss his art and his originality.

So I gave Essex County a read. And I haven't read a comic like it in years. I think comparisons to Love and Rockets are inevitable -- it's a character-based work of magical realism focusing on families over generations, with a vibe of loneliness and melancholy, and its setting is an essential component in establishing its tone -- but it's not Love and Rockets. The most obvious difference is in the art: Jeff Lemire doesn't do the smooth, clean lines of Los Bros Hernandez; his work is rough, angular, and jagged. The people in Love and Rockets are beautiful; the people in Essex County are not.

Essex County page

But it's not just Lemire's art that strikes a different tone than the Hernandezes'; it's his setting. Gilbert Hernandez's Palomar may be a small town, but the streets always seem busy, and his later stories (as well as Jaime's) mostly take place in and around LA. Love and Rockets has a huge cast of characters, and it did even in the early days before 30 years of continuity piled up.

Essex County takes place, mostly, in rural Canada, on small family farms. There is one section in the second book, Ghost Stories, which takes place in Toronto; the cast is briefly packed with enough supporting characters to form a hockey team. But, before long, those characters drift away, and while Lou Lebeuf stays in the big city, he finds himself lonely despite the throngs of people around him.

And, to a large extent, Essex County is about loneliness. Lester is lonely because his mother died, he never knew his father, he's moved to a farm to live with an uncle he barely knows, and the other kids make fun of him. Anne is lonely because she works long hours, her husband is dead, and her son barely speaks to her. Lou is lonely first because of his self-imposed exile from his family, then because he goes deaf, then because he outlives everyone he knows, and finally because he gets Alzheimer's. The wide, open, snow-filled spaces of Essex County externalize their loneliness and isolation, but they're not the cause -- at least, not the only one.

These three stories aren't happy, I don't suppose, though they've got moments of happiness. And I think, really, that's what they're about: find those moments of happiness. Find a connection with someone when you can.

Or maybe I'm off-base. Maybe that's not what the book is about at all. For all that it shows that those connections are precious, it shows how fraught they can be. Lou's problems start when he connects with somebody who he shouldn't. Lester doesn't know his father because two people made a connection that they couldn't sustain. Life is like that; it tends to defy simplistic morals.

And that's what Essex County is about, really: slices of life; moments in time. And families, and history.

And hockey. There's a whole lot of hockey. This comic is Canadian as fuck, eh?

Actually, It's About How Games Journalism is a Pain in the Ass

Or, Why I Won't Be Doing That Again Any Time Soon: A Postmortem

So the last three posts comparing and contrasting five different Mega Man games required rather a lot of screenshots. Capturing them all took a long time, for a number of reasons I'll get into in a moment; the posts went up a lot later than planned, and the process really wasn't a whole lot of fun.

The other day on Brontoforumus, I described it as taking two things I enjoy doing -- playing video games and talking about video games -- and turning them into work. More specifically, work I don't get paid for.

I like how the whole thing turned out, but it took hours and hours to put together, and playing a game to farm for screenshots is a pretty different and altogether less fun experience than playing it just to play it.

Some of it may be down to the tools I'm using, or just my lack of proficiency with them.

I opted to grab all the screenshots myself, rather than try and find a resource that already had them (or close enough). I think this was probably the right call; VG Museum has a perfectly good shot of the floating platforms in Ice Man's stage that I could have used, but it doesn't really have any other grabs of the Mega Man screens I needed, and it's got next to nothing from Mega Man X and nothing at all from the other three games I was capping.

So I could have poked around the Internet trying to find the screens I was looking for, either as static images on websites or as caps from Let's Play videos on YouTube. But I think that would have taken just as long as getting the damn things myself.

The next decision I made that made my life more difficult was to try and grab all the images at each device's native resolution, with graphical filters turned off.

Here are some of the screenshots I used in the last three posts:

  • Cut Man Stage -- Mega Man
  • Chill Penguin Stage -- Mega Man X
  • Sigma's Fortress -- Mega Man Xtreme
  • Launch Octopus Stage -- Mega Man: Maverick Hunter X

And here's what those games look like when I play them scaled up for a 1080p screen with a graphics filter turned on:

  • Cut Man Stage -- Mega Man
  • Highway Stage -- Mega Man X
  • Highway Stage -- Mega Man Xtreme
  • Highway Stage -- Mega Man: Maverick Hunter X

Now, first of all, those images are pretty big. In fact, unless you've zoomed this page in, you're not even looking at them at full size right now, because they've been scaled to fit the content area of this post. That's 892px wide (unless you're viewing it on a mobile device, in which case it's less), whereas the images are between 1157 and 1920px wide.

And they're PNG's, which means they're also pretty big in terms of filesize (except the Mega Man Xtreme one). Unnecessarily big: you loaded a 1920px-wide image just to display a scaled-down 892px version. Or less. If you're reading this on a 3G connection, then I probably owe you an apology.

Now, there are things I could do differently. I could set my emulators to output as JPEG instead of PNG, but that would result in a visible decrease in quality. I could resize the images manually, but that would be more work for me. I could set up a script to scale them automatically, but we'd still end up with a bunch of images all scaled to the same width. Which isn't really ideal; it doesn't make a lot of sense for the Game Boy screenshots to be the same size as the PSP ones, and 892px is just too damn big to get multiple images onscreen and get a good comparison anyway.
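
For the record, the auto-scaling option really would only take a few lines -- here's a sketch using the Pillow imaging library, with the directory name as a placeholder:

    # Sketch of the automatic-scaling option, using the Pillow library.
    # Shrinks any PNG wider than the blog's 892px content area, keeping
    # the aspect ratio; writes a "_web" copy alongside the original.
    from pathlib import Path
    from PIL import Image

    MAX_WIDTH = 892  # the content-area width quoted above

    for path in Path("screenshots").glob("*.png"):  # placeholder directory
        img = Image.open(path)
        if img.width > MAX_WIDTH:
            height = round(img.height * MAX_WIDTH / img.width)
            resized = img.resize((MAX_WIDTH, height), Image.LANCZOS)
            resized.save(path.with_name(path.stem + "_web.png"))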

So, instead of that, what I did was turn off the filters and, when I was ready to take a screenshot, toggle fullscreen off to take it.

This is a pain in the ass, not just because it interrupts the flow of the game but because it's fucking difficult to set up a good screenshot in a tiny 160x144 window on a 1080p TV when you're sitting on the couch across the room.

And that's before you get into weird shit like this:

  • Cut Man Stage -- Mega Man
  • Cut Man Stage -- Mega Man

I don't know why the fuck RetroArch did this. I told it to size the window to native NES resolution, and it gave me these monstrosities instead. That is not native NES resolution. And it's not a problem with the core I was using, because I tried it with two different cores. (I thought it might be some weird leftover setting from when I'd done the Game Boy screen grabs, but that doesn't make sense; the Game Boy screen grabs were 160x144, while these are 205x191.)

And I took a bunch of screenshots before realizing what it had done. I had to go back and replay fucking Ice Man's stage and do it all over again.

So I think the best solution would be to use emulators that output screenshots at native resolution and without filters, regardless of what scaling and filters are applied as I'm playing them. I know I've used emulators like that before, but I can't remember which ones they were offhand.

And there's another requirement: I want to be able to take a screenshot without having to use the damn keyboard. I want to be able to use one of the buttons on my controller to take the screenshot. Because having to stage a shot and then quickly take my hands off the controller to hit F12 on a keyboard doesn't just interrupt the flow of the game, it's a good way to get yourself killed if you're trying to grab a screenshot of a particularly difficult section of game.

Snes9x let me map the screenshot button to my controller, and I think FCEUX did too, but I couldn't find any feature like that in PPSSPP or RetroArch.

So I guess what I'm looking for is an emulator that lets you output screenshots with no scaling or filters applied, and lets you map that function to a button on your controller.

That would make the whole exercise a lot quicker and easier, but it wouldn't fix a number of other problems -- I'd still have to wade through a bunch of files with names like ULUS10068_00017.png and RetroArch-1011-165734.png and find the ones I wanted, and then realize "Fuck, I forgot to take a screenshot of Spark Mandrill's stage" and have to go back and replay that section, and seriously, you have no idea how many times I did that.
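
If I ever do this again, a tiny bookkeeping script might at least take care of the filename problem -- a sketch, with the directory and naming scheme made up rather than taken from any emulator:

    import sys
    from pathlib import Path

    SCREENSHOT_DIR = Path.home() / "screenshots"  # made-up location

    def label_latest(description):
        """Rename the newest screenshot to something findable -- e.g.
        'spark-mandrill-stage.png' instead of 'RetroArch-1011-165734.png'."""
        shots = sorted(SCREENSHOT_DIR.glob("*.png"),
                       key=lambda p: p.stat().st_mtime)
        latest = shots[-1]  # the most recently written file
        latest.rename(SCREENSHOT_DIR / (description + ".png"))
        print(latest.name, "->", description + ".png")

    if __name__ == "__main__":
        label_latest(sys.argv[1])

It wouldn't stop me from forgetting Spark Mandrill's stage, but at least the gaps would be obvious from the file listing.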

And that's without even getting into the editing portion.

Remember this guy from the first post, with the measurements?

Mega Man is 33x54px

I added those rulers and numbers myself, manually, in Gimp (and it probably shows). And it wound up being way more fiddly and time-consuming than it should have been. I guess I probably should have gone looking for plugins to see if somebody had already coded up a tool to draw a shape like that automatically so I wouldn't have to do it myself; that is what I ended up doing for this graphic, with the arrow in it:

Flame Mammoth Stage -- Mega Man: Maverick Hunter X, with Giant Red Arrow

So, I dunno. Like I said, I'm pretty pleased with how the feature turned out (and it's gotten a positive response from the Brontos, which is nice), but it just took so long to put together, and it was not very fun. I might try it again sometime -- especially if somebody can steer me in the right direction and help make it easier next time -- but for now I'll probably go on back to my usual Wall of Text posts.

Course, in the old days I used to enjoy doing shit like this:

City of Heroes time-lapse

But there's a pretty important difference: we were already just fuckin' around and essentially posing for photos anyway; it's not like I was taking screenshots in the middle of a difficult mission. (And even if I were, it was pretty easy just to reach over and hit PrtScn without breaking stride in the game.) I wasn't trying to get a grab of any specific gameplay element -- let alone compare and contrast across five different games.

Maybe if I do this again I'll just pick an easier topic.

In the meantime, I think I'll go back to just playing games. Maybe I'll replay some more Mega Man X games. I never did get around to finishing X8. Fucking vehicle levels.


Mega Man ® 1989 and © 1987 Capcom Co, Ltd
Mega Man X ™ and © 1993 Capcom Co, Ltd
Mega Man Xtreme © 2001 Capcom Co, Ltd
Mega Man Powered Up and Mega Man: Maverick Hunter X © 2006 Capcom Co, Ltd
City of Heroes © 2004 NCsoft

I took all the screenshots myself.
I used the following emulators:
NES: FCEUX and Libretro with the FCEUmm and Nestopia cores
SNES: Snes9x and Libretro with the Snes9x Next core
Game Boy Color: Libretro with the Gambatte core
PSP: PPSSPP