Tag: Microsoft

The Linux Foundation Hates Copyleft

It's been kinda weird, seeing the Linux Foundation slowly transform into an organization that is fundamentally opposed to the license Linux is published under.

But the Linux Foundation is in the business of turning a profit, and that's meant embracing corporate America -- even Microsoft is now a member. In fact, the board is overwhelmingly made up of corporate representatives now: Facebook, AT&T, Qualcomm, Cisco, VMware (we'll come back to them tomorrow), Intel, HP, Bitnami, Panasonic, Hitachi, Samsung, IBM, Microsoft (Microsoft!), Comcast, Huawei, NEC, Oracle, Fujitsu. There used to be two community representatives on the board, but they eliminated that position (we'll come back to that on Thursday).

Linux is published under the GNU General Public License. The GPL is what Richard Stallman, founder of the GNU Project and the Free Software Foundation, calls "copyleft": if a piece of software is licensed under the GPL, then anyone else is free to access, modify, and redistribute the source code, provided that if they release a modified version, they publish it under the same license.

Corporations don't much like copyleft or the GPL. They like more permissive licenses, like the MIT License and the BSD Licenses, which allow them to take someone else's code, modify it, and not give their modifications back to the community.

Linus Torvalds, the man the Linux Foundation is named after, gets this. FOSS Force's Christine Hall recounts his remarks at LinuxCon last year:

“I think that if you actually want to create something bigger, and if you want to create a community around it, the BSD license is not necessarily a great license,” he said.

“I mean, it’s worked fairly well, but you are going to have trouble finding outside developers who feel protected by a big company that says, ‘Hey, here’s this BSD license thing and we’re not making any promises because the copyright allows us to do anything, and allows you to do anything too.’ But as an outside developer, I would not get the warm and fuzzies by that, because I’m like, ‘Oh, this big company is going to take advantage of me,’ while the GPL says, ‘Yes, the company may be big, but nobody’s ever going to take advantage of your code. It will remain free and nobody can take that away from you.’ I think that’s a big deal for community management.

“It wasn’t something I was planning personally when I started, but over the years I’ve become convinced that the BSD license is great for code you don’t care about. I’ll use it myself. If there’s a library routine that I just want to say ‘hey, this is useful to anybody and I’m not going to maintain this,’ I’ll put it under the BSD license.

“Whenever licenses come up, I want to say that this is a personal issue,” he continued, adding a disclaimer most likely meant mainly for the benefit of the BSD folks, some of whom resent Linux’s success, but also to appease big enterprise, which is where the Linux Foundation gets virtually all of its funding.

“Some people love the BSD license,” he said. “Some people love proprietary licenses, and do you know what? I understand that. If you want to make a program and you want to feed your kids, it used to make a lot of sense to say that you want to have a proprietary license and sell binaries. I think it makes less sense today, but I really understand the argument. I don’t want to judge, I’m just kind of giving my view on licensing.”

Jim Zemlin, Executive Director of the Linux Foundation, seems to feel a little bit differently. Hall quotes him, in an article titled The Linux Foundation: Not a Friend of Desktop Linux, the GPL, or Openness:

“The most permissive licenses present little risk and few compliance requirements. These licenses include BSD and MIT, and others, that have minimal requirements, all the way to Apache and the Eclipse Public License, which are more elaborate in addressing contributions, patents, and indemnification.

“In the middle of the spectrum are the so-called ‘weak viral licenses’ which require sharing source code to any changes made to the originally licensed code, but not sharing of other source code linked or otherwise bound to the original open source code in question. The most popular and frequently encountered licenses in this category are the Mozilla Public License and the Common Public Attribution License.

“Restrictive Licenses present the most legal risk and complexity for companies that re-distribute or distribute software. These licenses are often termed ‘viral’ because software combined and distributed with this licensed software must be provided in source code format under the terms of those licenses. These requirements present serious risks to the preservation of proprietary software rights. The GNU General Public License is the archetype of this category, and is, in fact, the most widely used open source license in the world.”

Hall adds, "While his points are accurate enough, and reflect what I’ve already written in this article, the terms he uses suggest that the foundation holds the GPL and other copyleft licenses in contempt."

So what's all that got to do with the Software Freedom Law Center filing to have the Software Freedom Conservancy's trademark terminated? Nothing, insist the Linux Foundation and the SFLC. But Bruce Perens -- who founded the Linux Standard Base, one of the organizations that became the Linux Foundation -- thinks it's retaliation for a GPL enforcement lawsuit against VMware.

But that's a story for another post. Or two...

Where Will the PC Go? -- Part 4: SaaS

So, per the last couple of posts, I find it entirely possible that, as vendors develop tablets that double as PC's, they may replace traditional desktop and laptop computers. For the common end user who just needs a web browser and (maybe) an office suite, I don't think that's going to be a tough sell.

But there are markets that rely heavily on more powerful computing hardware.

One is PC gaming. Another is media creation: people who produce images, music, and movies.

I've already mentioned dumb terminals and software as a service (SaaS) as a major current trend, with programs like Google Docs running in a browser and working as an effective substitute for traditional locally-run programs like Microsoft Word.

Of course, a word processor is one thing; an enterprise-quality photo editor is another, and a game requiring split-second timing is something else again.

But developers are working on it.

Photoshop

Last year Adobe released a limited beta of a streaming version of Photoshop for ChromeOS. Photoshop itself doesn't run in the browser; the app is a Remote Desktop shell that interacts with an instance of the Windows version of Photoshop running on a remote server.

So, by definition, this is no replacement for the Windows version of Photoshop -- because it is the Windows version of Photoshop. But it demonstrates a potentially compelling alternative to buying expensive, high-end hardware just to run Photoshop: what if you could buy cheap hardware, and pay a subscription fee to run Photoshop on someone else's expensive hardware?

Reactions to the ChromeOS version of Photoshop seemed generally positive; I would expect it to have some latency issues, but I also bet it runs faster on a remote server than it did on the Core 2 I had to use at GoDaddy. (Hey, when I said the Core 2 Duo was the last chip most users ever needed, I said I wasn't including Photoshop.)

Adobe has already moved Photoshop's licensing to a subscription model instead of a purchase model. (A lot of people are very angry about this, but I haven't heard anything to suggest it's led to a drop in "sales"; that's the thing about monopolies.) It's not hard to envision a transition to a subscription model where you run the program remotely instead of locally. Hell, they could even charge more money to give you access to faster servers.

A/V Club

Other media development suites could, potentially, move to streaming services, but there are caveats. Uploading raw, uncompressed digital audio and video takes far longer than uploading uncompressed images. And what about storing your source files? My grandmother puts together home movies on her iMac, and she's got terabytes of data going back some 15 years. That's the kind of storage requirement an amateur filmmaker can rack up; now think of how much somebody who does it for a living might wind up with. If you're renting storage space on an external server, on a month-to-month basis, that could get pretty costly.

But it's technically feasible, at least, that audio and video editing could be performed on a remote server.

Recording audio is another story. Anything more complex than a simple, single-track voice recording is still going to require specialized mixing hardware. And transferring your recording to a remote server in real-time, without lossy compression? You'd better be sitting on fiber.
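To put rough numbers on that (these are my own back-of-the-envelope assumptions, not any real studio's spec), here's what uncompressed 24-bit/96kHz audio demands of an upstream connection:

```python
# Back-of-the-envelope upstream bandwidth for real-time uncompressed audio.
# Assumptions (mine): 24-bit samples, 96 kHz sample rate, mono tracks.
sample_rate_hz = 96_000   # samples per second, per track
bytes_per_sample = 3      # 24-bit audio

bits_per_sec_per_track = sample_rate_hz * bytes_per_sample * 8
mbps_per_track = bits_per_sec_per_track / 1_000_000  # ~2.3 Mbps per track

for tracks in (1, 8, 24):
    print(f"{tracks:2d} track(s): {tracks * mbps_per_track:5.1f} Mbps upstream")
#  1 track(s):   2.3 Mbps upstream
#  8 track(s):  18.4 Mbps upstream
# 24 track(s):  55.3 Mbps upstream
```

A modest 24-track session already wants more upstream bandwidth than most residential connections can sustain, and that's before anybody adds video.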

So I think we can put "recording studios" -- even the home-office variety, like mine -- into the category of Stuff That's Not Going Anywhere for a While.

Games

Moving games to a streaming system is a challenge -- but I'm not sure it's as big a challenge as recording studios. It's more or less the same requirement as Photoshop: take simple inputs from a human interface device, send them to a server, have the server process them and respond accordingly, and stream the video output back to the client. The trick is doing all of that in real-time with minimal loss of audio and video quality -- but engineers are working on it.
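To make the shape of that loop concrete, here's a minimal sketch of the client side. Everything in it is hypothetical -- the server address, the wire format, and the gamepad/display helpers are stand-ins, not any real service's protocol:

```python
# Minimal sketch of a game-streaming thin client (hypothetical protocol).
# Real services add video codecs, congestion control, input prediction, etc.
import socket
import struct

SERVER = ("gamestream.example.com", 7777)  # hypothetical server

def read_gamepad():
    # Stub: a real client would poll the HID device here.
    return 0, 0, 0  # buttons bitmask, x axis, y axis

def display_frame(frame: bytes) -> None:
    # Stub: a real client would decode and render the frame here.
    pass

def recv_exact(sock: socket.socket, n: int) -> bytes:
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("server closed the stream")
        buf += chunk
    return buf

def run_client() -> None:
    with socket.create_connection(SERVER) as sock:
        while True:
            # Inputs are tiny: a few bytes per tick, trivially cheap to send.
            buttons, x_axis, y_axis = read_gamepad()
            sock.sendall(struct.pack("!Hbb", buttons, x_axis, y_axis))

            # The video coming back is the expensive part: one length-prefixed
            # encoded frame per tick. At 60 fps, the whole round trip has to
            # fit inside roughly 16 milliseconds to feel responsive.
            (frame_len,) = struct.unpack("!I", recv_exact(sock, 4))
            display_frame(recv_exact(sock, frame_len))
```

The loop itself is almost nothing; the engineering is all in making that round trip fast enough, over real networks, without the picture turning to mush.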

The OnLive streaming service was a failure, but Sony bought it out; it sees value there. Nvidia's got its own streaming solution too, in GRID. One of these things is not like the other -- Sony sells consoles at a loss and would stand to benefit from selling cheaper hardware, while Nvidia makes a ton of money selling expensive graphics cards to enthusiasts and surely doesn't want to cannibalize its own market -- but obviously there's more than one type of gamer, and the people who shell out over $300 for a graphics card are in the minority.

Now, as minorities go, high-end PC gamers are still a pretty sizable minority; it's still a multibillion-dollar industry. But it's a fraction of the console gaming business, and it's expected to be surpassed by mobile gaming by the end of this year. Like the PC industry as a whole, it's still big and it's still growing, but it's growing a lot slower than other sectors and could be facing a long-term threat from new platforms.

Switching to a streaming platform could have a lot of appeal to game publishers; it combines the simplicity of developing for consoles with the superior hardware capabilities of the PC. Think about the possibility of developing for the latest and greatest hardware, but only for a single specific hardware build.

It would also, at long last, produce a form of DRM that could actually work.

While the industry has tried many, many copy protection schemes over the years, all of them are, sooner or later (and usually sooner), crackable. And there's a simple, logical reason for this: no matter what you do to encrypt the data of your program, you have to give the computer the means to decrypt it, or it won't work. No matter where or how you hide the key, if you give it to your users sooner or later they're going to find it.
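Here's a toy illustration of why. The "encryption" below is a made-up XOR scheme, not anything a real product uses, but the structural problem is the same for all of them: the key ships with the program.

```python
# Toy copy protection. The game data ships encrypted... but the key has to
# ship right alongside it, or the program can't decrypt itself to run.
SECRET_KEY = b"totally-hidden"  # baked into the shipped binary

def xor(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

ENCRYPTED_GAME = xor(b"actual game code", SECRET_KEY)  # what's on the disc

def launch() -> bytes:
    # To run at all, the program must decrypt itself on the user's machine...
    return xor(ENCRYPTED_GAME, SECRET_KEY)

# ...which means anyone with a debugger or a hex editor can pull SECRET_KEY
# out of the very same binary and do exactly what launch() does.
print(launch())  # b'actual game code'
```

Obfuscation can make the key harder to find, but it can't remove it from the user's machine; that's the logical dead end every client-side scheme runs into.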

But that's only true if the software is running on their computer. If the binary data is never copied to their hard drive, never stored in their memory, if the program is actually stored and run on a remote server somewhere and all the client has access to is a program that takes inputs and streams audio and video? Well, then there's no way they can copy the game, unless they actually break into your servers.

(Which, given Sony's history with Internet security, might not actually be so hard.)

I am not saying this is a good thing; in fact, I consider it something of a nightmare scenario.

Consider every problem you've ever had with an online or digitally-distributed game. Now think of what it would look like if every game had those issues.

Not just latency, lag, server outages, and losing your progress every time your Internet connection goes out. Consider that if a game is no longer profitable, they'll pull the plug. If a developer loses a license, the game(s) associated with it will go away. (Was GoldenEye ever released on Virtual Console? I don't think it was.) If a game gets updated and you liked the old version better, too bad. And remember when Nintendo ended its partnership with GameSpy and killed all the online multiplayer features of every Wii and DS game ever made? Imagine an entire generation's worth of games not working at all anymore, online or otherwise. Even though you paid for them.

Now, there's recent evidence that a strategy like this would fail. The Xbox One is still reeling from customer backlash against early plans to restrict used-game sales and require an always-on Internet connection even for single-player games, even though those plans were never even implemented.

On the other hand, there's evidence that even a wildly unpopular strategy could still succeed. Have you ever heard anyone who doesn't work for EA praise the Origin distribution service (or whatever the fuck they're calling it now)? I know I haven't, but people still use it. Because if you want to play Mass Effect 3 or Dragon Age: Inquisition, your only choices are consoles, Origin, and piracy.

And then there are examples that could go either way: Ubisoft continued to use DRM that required an always-on Internet connection for about two years, from 2010 to 2012, before finally giving in to market backlash.

It's hard to say how existing high-end PC gamers would react if the major publishers tried to force a transition toward streaming games -- or whether high-end PC gamers will continue to be a big enough market for the major publishers to care what they think. But for the foreseeable future, I think PC gaming will continue on much the same as it has for the past 15 years. There could be major changes on the horizon, but I sure don't see them happening in the next 10 years.

Then again, five years ago I was saying there was no way that streaming video would outpace Blu-ray because there was just no way to stream 1080p video over a home Internet connection. So keep that in mind before trusting any predictions I make.

Where Will the PC Go? -- Part 3: Business

Over the past couple of posts, I've given some of the reasons I think tablet PC's could replace traditional desktops and laptops. Today I'm going to talk about why I don't think that's going to happen anytime soon: the business market.

In enterprise, Microsoft still rules the roost, with Windows and Office. The lock-in is strong.

And while the BYOD trend isn't likely to go away, and in fact there are some major features of Android Marshmallow that are designed to make it easier to use a phone with dual profiles for home and work, that's still a far cry from replacing work-provided computers with devices that workers bring from home.

And there's a simple reason why: whatever costs a company incurs by buying a computer for every one of its employees are offset by the benefits of standardizing on hardware and software to make IT's job easier. When everybody's running the same few programs on the same few models of computer, it limits the number of potential compatibility issues. When every computer is running the same stock image, it's easier to control devices' security, and when IT pushes every software update, it limits the possibility that the latest patch will break anything. And when a computer does break down, it's easy to replace it with a machine of the same model with all the same software and settings.

And when data is stored on an internal company server, it's less vulnerable than if it's in somebody's Google Docs account, or Dropbox, or whatever the hell MS and Apple call that thing where your home directory automatically gets uploaded to their servers now.

And that's just talking general best-practices for every company. You start getting into companies (and government agencies) where security is tightly restricted, whether that be military, intelligence, healthcare, or just a lot of sensitive proprietary information, and there's no fucking way you're going to allow people to use their personal devices for work.

(Unless you're the Director of the CIA and send confidential information to your personal fucking AOL account. But I digress.)

Convertibles

All that said, business has already started transitioning away from desktops to laptops, and I can foresee the possibility of Windows-based convertible tablets like the Lenovo Yoga and the MS Surface picking up some traction. I don't think it would be a BYOD scenario; I don't think businesses are apt to move their operations over to workers' own personal tablets -- but they could eventually start equipping every worker with a company-supplied tablet instead of a company-supplied laptop.

But first, prices are going to have to drop. The reason laptops passed desktops is that their prices approached parity; that hasn't happened with tablets yet. You can get a respectable mid-range Lenovo laptop for under $400; you can get a Lenovo tablet with a keyboard in that price range, but it's going to come with pretty anemic specs. 2GB of RAM and 32GB of internal storage are okay for a tablet, and might work for a device you only use when you're traveling, but I don't think a machine like that is good enough to use as a daily driver, even for end users who only need Windows and Office. If you want a convertible tablet with specs comparable to a mid-range laptop, you can expect to pay three times as much -- at least, for now. Moore's Law is still in effect, and that gap's going to close, just like the gap between desktops and laptops did.

SaaS

There's one more factor that can make the puny specs of a 32GB tablet moot: apps that run in a browser instead of locally. Office 365 could potentially replace the traditional client-side version of MS Office for business users.

But most business users don't just use Microsoft Office. I've worked at companies both big and small, and nearly all of them rely on some sort of ancient proprietary program for day-to-day use, and often several. Transitioning from an already-long-in-the-tooth program to a new one that performs the same functions but runs on a server is not a quick, easy, or cheap task.

I'll talk more about SaaS in the next post, and in particular about the challenges it faces in displacing high-performance applications like multimedia editors and games. I think it's making major inroads, but the business sector depends so heavily on legacy software that I just don't see it transitioning entirely to the cloud within the next decade. We'll have cost-competitive convertible tablets before we have every app in the cloud.

Where Will the PC Go? -- Part 2: Possible Solutions

In my previous post, I established that, despite strides made in screen keyboards and speech-to-text programs, a hardware keyboard is still the best way to write text documents.

In this one, I'll look at how phones and tablets work as replacements for PC's.

Problem 3: Phones Are Still Phones

Of course, you can connect a phone to a computer monitor, and to a keyboard. Or to a game controller.

A while back I hooked my phone up to my TV, paired it with my DualShock 4, and fired up Sonic 4.

The game ran fine -- I didn't like it very much but it ran fine.

And then my mom called me.

The game stopped, and my TV screen filled up with a message that I was getting a phone call. So I walked across the room, picked up my phone, disconnected it from my TV, and answered it.

This is not optimal behavior for a computer.

Now, there are possible ways to fix this.

Headsets and speakerphone are two ways to answer the phone without having it in your hand, but neither one is optimal. Speakerphone is often hard to hear and can have that awful echo. And as for headsets, well, do I carry one in my pocket? Do I keep one in every room where I might dock my phone and use it as a computer?

A better solution would be to "connect" your phone to a monitor and speakers wirelessly, maybe using a device like a Chromecast. That way you could keep it next to you, or in your pocket, while still editing documents, or playing Sonic 4, or whatever. And if it rang, you could answer it, and not lose whatever was on your screen -- say I get a call where I want to take notes with my keyboard (as frequently happens); there could be a way to do that.

But the easier solution is probably to have the device that's connected to your keyboard and monitor(s) not be your phone. Especially if people continue to buy other devices, such as laptops or tablets.

Problem 4: Phone Interfaces Don't Make Good Desktop Interfaces

Windows 8. Do I even need to elaborate?

Microsoft tried to design an interface that would work on phones and on desktops. It was a huge failure.

This was entirely foreseeable. A 4" touchscreen is completely different from a pair of 1080p monitors with a keyboard and mouse attached to them. An interface designed for the former is a lousy fit for the latter, and vice-versa.

So, with Windows 10, Microsoft tried something else, and something altogether more sensible: the OS was designed with a phone/tablet interface and a desktop computer interface, with the ability to switch between the two. If you connect your phone to a dock that's hooked up to a monitor, a keyboard, and a mouse, then the interface changes to desktop mode.

Which is a good idea (and one that Canonical has been moving toward for years), but Windows Phone hasn't exactly set the world on fire (and Ubuntu Phone isn't a thing that anybody seems to want). Windows tablets, on the other hand, including Lenovo's Yoga series and MS's own Surface line, have fared much better.

Google's moving toward this sort of convergence too; it hasn't gotten as far as MS or Canonical yet, but there have been hints of future compatibility between Android and ChromeOS.

Ah yes, ChromeOS -- and the return to dumb terminals running server-side programs.

I think that's going to be key to bringing a few of the major special-case users on board with the transition to lower-powered systems: gamers and media designers.

We'll get to them soon. But in the next post, I'll be looking at the market that's really going to continue driving PC sales: business.

Where Will the PC Go? -- Part 1: Identifying the Problem

The other day, Ars Technica posted an article called Cringe-worthy “PC Does What?” campaign wants you to upgrade, about a new ad campaign the PC industry is pushing to try and convince users to buy new computers.

The PC industry is in trouble. It's built around a pattern of regular upgrades that customers just aren't buying anymore. And it's trying whatever it can to stop the bleeding.

On the other hand, rumors of its demise have been greatly exaggerated. In the comments thread on the Ars article, someone named erikbc said:

Well, if anyone believes PC is dead they need to get their head checked.
And understand some numbers:

https://en.wikipedia.org/wiki/Usage_share_of_operating_systems#Desktop_and_laptop_computers

Another user responded:

…said every horse-and-buggy salesman in 1900 ever.

Which, okay, doesn't actually make a whole lot of sense. (In fact I am fairly confident that very few horse-and-buggy salesmen in 1900 ever said "If anyone believes PC is dead they need to get their head checked" and then linked to Wikipedia.) But, like many shitty analogies do, it got me thinking about why it was a shitty analogy.

Mainly, I don't think the PC will go away to the extent that horse-drawn carriages have. I think it's possible that tablets could completely replace desktop and laptop computers, but I don't think that can happen until they effectively duplicate the functionality of PC's -- in effect not actually replacing PC's but becoming them.

General Case: Typical End Users

While it's easy to point to the rise of the smartphone as the reason for declining PC sales, it's only one of the reasons. There's another one: the last processor most end users will ever need was released in 2006.

A typical end user only needs a few things in a PC: a web browser, an office suite, music, and videos. (And those last three are, increasingly, integrated into the first one; I'll circle back to that in a later post.)

In 2006, Intel released the Core 2 Duo, which, paired with even a low-end onboard graphics chip, could handle HD video and drive two 1920x1080 monitors. And it's 64-bit, so it can address more than the roughly 3GB of usable RAM that 32-bit systems max out at (a 32-bit processor can only address 4GB, and part of that range is reserved for hardware).

There have been plenty more, and plenty better, processors in the 9 years since. But they're not better for people who only use their computer for browsing, Office, listening to music, and watching videos. The Core 2 Duo was good enough for them.

There are people who greatly benefit from newer and better processors -- gamers and people who produce media rather than just consuming it. But they're special cases; I'll get to them later. For the average user, the difference between a Core 2 Duo and a Core i7 isn't even noticeable.

The computer industry grew up in the 1990's around the expectation that people would upgrade their computer every few years to handle new software. And people just don't do that anymore. They buy a new PC when the old one quits working; not before.

But, at least at this point, they still need a PC. People may be buying more phones than PC's, but, at least in America, a phone is not a replacement for a PC.

Problem 1: Screen Keyboards

Screen keyboards are a pain in the ass.

They're fine for short communication -- text messages and tweets -- but they're just too slow and imprecise for long-form writing. (I thought of writing this post entirely on a screen keyboard -- like last week's handwritten post -- but I think that would make me want to gouge my eyes out.)

There are still plenty of requirements for long-form writing in day-to-day life -- reports for school and reports for work, for starters. And that's even in professions where you don't have to write for a living, never mind ones where you do. People who write articles, and especially people who write books, are best served with a keyboard to type on.

And maybe that won't always be the case. Maybe kids growing up with screen keyboards aren't learning to type on traditional keyboards; maybe they're faster with screen keyboards than they are with hardware ones. Maybe, within a generation, we will see essays, reports, articles, even books, all written with screen keyboards. I suspect that if we do, they'll look a whole lot different than they do today.

Or maybe screen keyboards will get better. Maybe predictions and autocorrect will improve. Maybe a faster paradigm than QWERTY + swipe will catch on. There's a lot that can happen in this space.

Or maybe we won't be using keyboards at all.

Problem 2: Speech-to-Text

Speech recognition software has grown by leaps and bounds. Terry Pratchett used Dragon Dictate and TalkingPoint to write his last few novels.

But being good enough for a first draft, for a user who is no longer physically capable of using a keyboard, isn't the same thing as being able to recognize a full range of standard and nonstandard grammars and sentence structures, pick correct homonyms, and understand slang and regional dialects. (Pratchett liked to tell the story of how he had to train his speech-to-text software to recognize the word "arsehole".)

Speech-to-text software might be good enough for simple, clear documents, such as manuals, lists, daily work logs, AP-style newsbriefs, and technical writing (provided you're writing on a subject that doesn't have a lot of jargon words that don't appear in a simple dictionary). But for writing that's meant to convey personality -- editorials, reviews, fiction, even this blog post -- speech-to-text algorithms have a long way to go.

So, for now at least, a good old hardware keyboard remains the best way to input large blocks of text into a computer. In my next post, I'll examine why a dedicated PC is still the best thing to connect to that keyboard, and how phone and tablet OS's are (or aren't) working to bridge that gap.

Customer Service Survey

I have no complaints about the representative I spoke with; he was great. He was knowledgeable, professional, and responsive, and told me that they were aware of the outage and working on it.

HOWEVER, I have some pretty serious complaints about Cox's level of service.

First of all, my Internet outage lasted for over 12 hours.

Second, when I called, there was no recorded message informing me that there was a known outage in my area; I had to wait on hold for an extended period of time just to be told something that could have been handled by a recording as soon as I called in.

And speaking of recordings: you're seriously going to make me listen to the same four commercials, over and over again, on a continuous loop? Hey, kudos for finding a way to make being on hold an even MORE unpleasant experience; I didn't think that was actually possible. But I have to wonder, does Cox hate its employees AND its customers? Because this is just about the best way I've ever seen to ensure that a customer is as angry and frustrated as humanly possible before getting to speak to a support tech.

Put bluntly: Cox's Internet service is poor, rates keep increasing even as services are dropped (thanks so much for discontinuing Usenet support and then jacking up my rates five bucks), and saying that calling technical support is like pulling teeth is an insult to dentists everywhere.

Continuing bluntly: the only reason Cox has managed to keep my business is by virtue of being a local monopoly. The only other option for broadband Internet at my address is CenturyLink at 3.0Mbps, which is even more unacceptable than Cox's poor service, frequent outages, high prices, and legitimately terrible hold experience.

What's more, I strongly believe that Cox knows this: the company is well aware that it has a captive audience, that it can charge high rates for poor service, and that its customers have no choice but to sit here and take it, because the broadband market has no competition to speak of.

In the short term, I begrudgingly admit that Cox has my business simply by default, because I have nowhere else to go.

In the long term, the market is going to change, competition is going to increase, and all the customers like myself who have spent the past decade being grossly dissatisfied with Cox's service are going to jump ship at the very first opportunity. A hard rain is going to fall.

I strongly suggest that Cox study the lessons of companies like Microsoft -- or, more dramatically, Blockbuster Video. Both of these are examples of companies that had a virtual monopoly in their respective industries. This monoculture allowed them to become bloated and unresponsive, and keep collecting money from their captive customers -- because where else were they going to go?

It didn't last. Technology changed. The markets changed. Blockbuster went bankrupt and, while Microsoft has held on to its majority share in the desktop/laptop OS and office suite markets, it has utterly failed to gain a foothold in emerging markets like phones and tablets, its browser market share has plummeted, and even companies that are using the latest version of Microsoft Office are likelier to use Google Docs for online collaboration.

Did this happen because Blockbuster didn't offer comparable, competitive services to Netflix and Redbox? Did it happen because Windows Phone is a poor operating system, or because Internet Explorer is an inferior browser?

No. Blockbuster offered very competitive prices compared to Netflix (no, it didn't offer streaming, but Blockbuster went bankrupt before streaming became Netflix's dominant distribution model). Windows Phone has received positive reviews, and Internet Explorer now performs comparably to other standards-compliant browsers.

So why did customers eagerly drop Blockbuster and Microsoft the first chance a viable alternative appeared?

Because that's what happens when you spend a decade taking your customers for granted, charging them a ridiculous rate for a barely-functional product or service, and generally treating them like livestock.

Yes, Blockbuster and Microsoft improved the quality of their products and services once competition started to pressure them into doing it. By then it was too late.

I know Cox is a monopoly in my area. I know there's no short-term incentive for it to improve its service or decrease its cost, because it doesn't have to in order to keep my business.

But if I were running Cox, I would think long and hard about the future. Someday, you ARE going to have a viable competitor. If you want to keep your existing customers' business when that day comes, you should probably start treating them better, right now.

The first thing you should do is stop making your customers listen to commercials when they're on hold.

Obfuscation

Continuing from Friday's post about a Microsoft employee's total disdain for Microsoft customers' concern about the next Xbox's rumored always-on requirement:

I want my game console to only be playable online, said no one ever.
Image via Quickmeme.
My Internet connection went down while I was trying to find it. I'm not kidding.

That's the crux of it, isn't it?

From a consumer standpoint, there is no benefit to an always-on requirement.

Now, people may try to obfuscate this point. They may list off all the benefits of an always-on option. And there are some! Cloud saves are pretty cool! So's online multiplayer! Having those things as options is great!

Making them mandatory, for all games, is not. And therein lies the disingenuousness of the argument.

EA COO Peter Moore recently shared this gem:

Many continue to claim the Always-On function in SimCity is a DRM scheme. It's not. People still want to argue about it. We can't be any clearer -- it's not. Period.

As difficult as it is to argue with the unassailable logic that is "It's not. Period.", there are two problems here:

  1. It's clearly DRM.
  2. Even if it weren't DRM, it would still be legitimately terrible game design.

This is one more case where a company representative is deliberately obfuscating the difference between a feature that's nice as an option and one that's good as a requirement.

The idea of an entire world of SimCities interacting with one another? That does sound pretty great! It's really a neat idea!

Is it integral to the gameplay?

Well, Peter Moore will tell you it is. Because Peter Moore is paid to tell you it is.

But it's turned out to be trivial to modify the game for offline play, and quite a lot of people have noted that the game plays just fine that way. The interaction with other players and cities is a nice option -- but it's not required to enjoy the game.

Indeed, it proved a pretty fucking considerable detriment to customers enjoying the game.

So beware this argument tactic -- "[X] is a good requirement to have, because of [features that could be implemented without making it a requirement]."

And its close cousin, "DRM is a benefit to the end user, because of [features that could be implemented without using DRM]."

DRM is never a benefit to the end user. No end user has ever said, "You know, this game is great, but it would be better if it had DRM."

Similarly, as the image above so succinctly notes, nobody has ever said "You know, offline games are great, but I sure wish they were as unreliable as online games."

Microsoft Doesn't Want My Business. That Can Be Arranged.

So in case you haven't been keeping score, apparently the next version of the Xbox will require an always-on Internet connection, even for single-player games.

As you might expect, some people are unhappy about this.

Microsoft's Adam Orth knows just how to treat concerned customers: by insulting and mocking them with disingenuous analogies.

Image: Adam Orth's Twitter feed, insulting his customers' intelligence and his own

Now, one of three things is true:

  1. Adam Orth is stupid.
  2. Adam Orth thinks you're stupid.
  3. Both.

I shouldn't even have to fucking explain this, but here goes anyway:

A video game console that doesn't work without an Internet connection is not analogous to a vacuum cleaner that doesn't work without electricity or a cellular telephone that doesn't work without cellular service.

Because, you see, a vacuum cleaner, by its nature, requires electricity to function. (Technically some vacuum cleaners get that electricity from batteries, but keep in mind, Orth's analogy is very very stupid.)

A cellular telephone requires cellular service to function.

You see where I'm going with this?

A video game console does not require an Internet connection to function.

Now, some games might. Complaining that, say, World of Warcraft requires an Internet connection would indeed be comparable to complaining that a vacuum requires a current and a cellular telephone requires cellular telephone service.

But -- fun fact! -- many video games are single-player.

Refusing to buy a video game console that requires an always-on Internet connection is not analogous to refusing to buy a vacuum cleaner that requires an electrical current.

Refusing to buy a video game console that requires an always-on Internet connection is analogous to refusing to buy a vacuum cleaner that requires an always-on Internet connection.

PC Gamer's Dilemma

Well, I finally got me an Xbox 360.

It was free. My fiancée got a new computer with one of those student "comes with a free Xbox" deals.

Here's the thing: I've got a pretty solid gaming rig. And another pretty solid media rig. So I haven't felt much need for Xboxin' up to this point.

The advantages and drawbacks of PC gaming are pretty well-documented. A PC can support crazy high-end hardware, and the games are cheaper, but the gear is more expensive and fiddly, and there's a whole lot that can go wrong.

Me, I'm something like a niche of a niche of a niche of a niche -- I run Linux on a Mac Pro as my primary OS and keep Windows around for gaming.

This is pretty cool when it works. But here's the thing: even a good Apple makes for a pretty crummy gaming system.

Last year I bought a pretty high-end Nvidia card. ATI has better Mac support, but I've had nothing but headaches trying to get ATI cards working with Linux. Nvidia's always run smoother for me -- galling considering their total lack of cooperation with Linux and the open-source community, but true.

But it's not an officially-supported card. It works under OSX (as of 10.7.3) but it's not entirely reliable under Windows -- when it gets taxed too heavily, I get a bluescreen.

It happened a few times when I played through Witcher 2, but, perversely, it's given me more trouble on Mass Effect 2 -- a game I had no trouble playing through with all the settings maxed out on a lower-end (but officially-Apple-supported) ATI card.

I thought it might be an overheating problem, but it occurs consistently even when I crank up all my system fans with third-party software.

The game worked fine up until Omega, and then started BSoDing randomly. I managed to recruit Garrus in-between crashes, but by the time it came around to Mordin's quest I couldn't get past loading the corridor.

I could just try some other missions, but seriously, you want me to put off getting Mordin? Hell no.

I've found, from searching, that this appears to be a fairly common problem with ME2, even among people not running eccentric hardware configurations such as mine. And I've found a few suggested fixes, but none have worked for me.

I've tried running the game under WINE on both OSX and Ubuntu. Under OSX it plods (I suspect my helper card may be to blame; maybe I'll try disabling it to make sure my higher-end card is the only one the system's putting a load on); under Ubuntu it runs fine up until the menu screen but then doesn't respond to mouse clicks or keystrokes (other than system stuff like Alt-Tab or Alt-F4). I haven't turned up any other reports of this same problem, so I can't find a fix -- maybe one of these days I'll try a full clean install and see if it still does it. Nuke my WINE settings too if I have to. (Or maybe I could set it up on my fiancée's new computer...)

Needless to say, I haven't tried Mass Effect 3 yet.

And that's before we get into all the DRM bullshit plaguing the PC platform.

Never played Batman: Arkham Asylum, largely because of the SecuROM/GFWL/Steamworks Katamari of Sucktitude. Similarly, I gave Dragon Age 2 a miss once I heard reports of people unable to authenticate their legally-purchased games because they'd been banned from BioWare's forums for saying mean things about EA. (Which obviously totally disproves that EA deserves to be called names.)

It's a great damn time to be a PC gamer for a lot of reasons -- a huge indie scene supported by the likes of Steam and the Humble Indie Bundle, with both pushing more gaming on OSX and even Linux -- but it's a lousy time for other reasons.

Anyway. Now I've got an Xbox. All else being equal, I still prefer to play games on the PC, but for cases where the Xbox has less restrictive DRM (like Arkham Asylum) or titles that aren't available on PC (like Red Dead Redemption) or just shit I can get for under five bucks (like a used copy of Gears of War I just picked up), well, it's kinda cool to have one.


Playing: Batman: Arkham Asylum.