You will recall that we were just discussing a cool little story about Bethesda embracing the modding communities around its games so fully that it ended up hiring one of the writers of an impressive Fallout 4 mod onto its team. What made that story interesting wasn’t its novelty; modders have found their way into developer roles before. Rather, it was that Bethesda, a AAA developer, was the one doing the hiring, in an industry that certainly doesn’t approach modding communities with unanimity.
But it would be great for this to become a trend, as a demonstration of the boon that modding communities can be for developers rather than some kind of threat to their control. It’s probably too early to call this a full-on trend, but it’s worth highlighting that we have yet another story of a AAA developer hiring on members of a modding community, this time at CD Projekt Red.
Since it was released last December, Cyberpunk 2077 has received intermittent mod support from the developer in the form of resources like metadata and TweakDB file dumps. But the most helpful tools have come from the community.
The modders who will officially join CDPR are Traderain, Nightmarea, Blumster, and rfuzzo. They are best known as the folks behind WolvenKit, an open-source tool that allows modders to modify CD Projekt Red’s greatest hits, Cyberpunk 2077 and The Witcher 3: Wild Hunt. In terms of Cyberpunk modding resources, WolvenKit is the gold standard, allowing you to edit any file in the game and, crucially, browse those files “without unpacking the archives.”
“We will be working on various projects related to the Cyberpunk 2077 backend and the game’s modding support,” Traderain wrote in an announcement on Cyberpunk 2077’s modding community Discord (via Reddit). “We are really excited for this and we really hope we can help to bring Cyberpunk 2077 to the next level!”
This specific move by CDPR is somewhat meta, with the modders hired not only to help with Cyberpunk’s seemingly ongoing development via patches and mods, but also specifically to bridge the gap to other modding communities to make better use of their work.
Now, we’re not a gaming site, so we didn’t cover most aspects of CDPR’s long-awaited release of Cyberpunk 2077, but the CliffsNotes version is simply: it was an absolute shit show. The game, as released, was buggy as all hell, had console versions that didn’t work or display as advertised, and there were even lawsuits from investors as a result of the crazy number of refund requests granted by CDPR. It… wasn’t great. And, frankly, there is no real excuse for releasing what is essentially an unfinished game.
But between active patching of the game and, more important for this post, an active modding ecosystem, the game has come a long way. Were CDPR to embrace a fight or flight mode of thinking, it would have been really easy for it to see these modding communities as either a threat to control over a broken product, or a point of embarrassment as the public was now fixing its game.
Instead, the developer did the smarter thing and embraced and eventually hired some of these modders. And now it’s going to take this embrace of modders even further.
“We are working with Yigsoft on the development of Cyberpunk 2077 modding tools. The modding community has always been very important to us and we are happy to be working with them side by side on further expanding the tools which are available to modders,” a representative for CD Projekt Red told Kotaku in a statement.
All of which will allow CDPR to continue to reap the benefits of its biggest fans, those so passionate about the game that they want to not just play it, but play within it. As a bargain in which the developer only had to give up a bit of control over its property, that’s a damned good deal.
When there’s a major OS upgrade, like Apple’s recent Big Sur MacOS release, you would hope that an effort was made to ensure backwards compatibility with key apps and services. However, it’s now clear that Apple fell short, and developers across a wide range of applications have had to scramble over the last few weeks to update their software just to keep it working on the latest version of MacOS. It’s understandable that a few apps may fall through the cracks, but with Big Sur it’s notable just how widespread the compatibility problems are, and how much scrambling developers had to do to keep their apps running. Here are just a few reports of such problems from across the internet.
Blizzard’s StarCraft 2
The issues range from crashes on startup to display problems to being “unable to quit the game” (a feature?). One user comments, “In general macOS is a train wreck when it comes to gaming,” but the blame is quickly assigned to the Big Sur update.
“Actually before the update, SC2 worked perfectly on my macbook. All the problems only started with Big Sur. I’m still having the issues unfortunately, but I have a workaround that works (with my eGPU)”.
Apache’s Netbeans IDE
This is one case where developers moved quickly. By November 21, the second voting candidate for Netbeans 12.2 was announced, with Java programmer Glenn Holmer praising the inclusion of the “Big Sur fix.” The previous release had startup issues.
Other Java IDEs
Many other Java IDEs had issues: “Well, yesterday I impatiently upgraded to MacOs Big Sur and since then I can’t use any of my IDEs (Eclipse, Netbeans, IntelliJ). They don’t recognize the JDK” wrote Pablo Herreo.
It turns out Apple broke the “JAVA_HOME” environment variable in Big Sur.
Other programmers suggest using the SDKMAN package manager: “I use sdkman to manage my JDKs and other Java-related stuff. It works pretty good even on MacOS Big Sur,” wrote Gleb Remniov.
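If you’re hitting the same wall, a quick sanity check of what your machine actually exposes can save some head-scratching before reinstalling IDEs. Here’s a minimal diagnostic sketch (Python, just because it’s handy), assuming a typical macOS setup where the JAVA_HOME variable and the /usr/libexec/java_home locator are the usual ways tools find a JDK; adjust for your own machine:

```python
# Minimal sketch: check the usual JDK discovery paths on macOS.
# Assumes a standard setup; paths and fallbacks may differ on your machine.
import os
import shutil
import subprocess

# 1. The environment variable most build tools and IDEs look at first.
java_home = os.environ.get("JAVA_HOME")
print(f"JAVA_HOME: {java_home or '(not set)'}")

# 2. Apple's own locator tool, which many tools shell out to.
try:
    located = subprocess.run(
        ["/usr/libexec/java_home"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    print(f"/usr/libexec/java_home reports: {located}")
except (FileNotFoundError, subprocess.CalledProcessError) as err:
    print(f"/usr/libexec/java_home failed: {err}")

# 3. Whatever 'java' happens to be on the PATH, if anything.
print(f"java on PATH: {shutil.which('java') or '(none found)'}")
```

If the first two come back empty or error out while a JDK is clearly installed, that lines up with the breakage described above, and pointing your IDE (or sdkman) at an explicit JDK path is the usual workaround.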
In the middle of this brouhaha, Azul Systems gets a nod for giving Big Sur users some hope: two weeks ago, the firm released builds of its Zulu OpenJDK port for both x64 and Apple Silicon, which at least made the Netbeans team happy.
Apache’s OpenOffice
“After upgrading from Catalina to Big Sur, users on our French forum report that docx and xlsx cannot be opened. Whatever the opening process, OpenOffice crashes” reads the bug report on the project’s public bugtracker.
This was confirmed by other users on the web forum: “have the same problem. Safe Mode doesn’t fix it.”
Other user comments show that the OpenOffice fork LibreOffice also had issues: “the big problem with LibreOffice at the moment is that, with Big Sur on a Retina-screened Mac, the text is blurry. The devs at LibreOffice are looking to fix this.”
Virtualbox
MacOS Catalina and beyond changed the way the OS handles kernel extensions, with some of them requiring a system reboot. This raised developer worries last year, and during the Big Sur beta cycle those worries turned into actual problems for some developers. One example is Virtualbox, Oracle’s open source virtualization solution.
A long bug report (ticket #19795) on the project’s public bugtracker documents the hurdles faced by users who had VBox working fine on earlier releases but failing on Big Sur, complete with “security pop-ups” that developers expected to appear but users never saw.
In the end, some manual command line magic and rebooting often led to a working configuration. The good news is that there is now a Virtualbox release available that manages to work fine for most if not all Big Sur users (version 6.1.17 (r141370)).
But the bug squashing didn’t end without sweat and tears. In the heated bug report, a project contributor facing end-user criticism chastised Apple for changing a command line tool during the beta cycle: “Apple completely re-did their KEXT handling and there were issues throughout the different Betas with it blocking us from getting everything tested extensively, they even changed the kmutil command line tool completey[sic] in the last Beta.” That same contributor notes that people saying “ample time” was given to developers to adjust “is a joke.”
ZFS also affected
In the same Virtualbox bug report, a user reports compatibility problems with the ZFS file system port to the fruity OS: “Nevermind! Further research indicated that the problem was my ZFS installation which hasn’t been made compatible with Big Sur yet.”
ZFS developer Jörgen Lundman is still battling the compatibility bugs and API changes, with a test release for x64 available. But things aren’t so easy on ARM64: “So many kernel functions that are missing – so it is hard to say. Still working on it though” he said two weeks ago.
One of the ZFS users has cleverly nicknamed the OS “Bug Sur.” Some sour Apples, indeed.
The more you look, the more problems you find. Native Instruments is noting that a bunch of its software is somehow causing CPU spikes on Big Sur, and it’s working with Apple to find a solution:
Using a MASCHINE MK2 or MIKRO MK2 on macOS 11 (Big Sur) can produce high CPU spikes on your computer, which could cause it to freeze. We are working together with Apple to find a solution to this problem.
Using a KOMPLETE AUDIO 1, KOMPLETE AUDIO 2 or KOMPLETE AUDIO 6 MK2 on macOS 11 (Big Sur) can cause CPU spikes and distortion with sample rates above 172kHz. This can be avoided by selecting large buffer sizes (2000ms). We are working together with Apple to find a solution to this problem.
It’s not surprising that there might be some compatibility problems and updates necessary to deal with a new OS, but it’s striking to see just how many apps seem to have been caught totally off-guard by these changes.
For some time now, we’ve been discussing gaming company Epic’s entry into the gaming platform wars. Epic made waves shortly after the launch of the Epic Store when it began gobbling up exclusivity deals for games, in a PC gaming industry that has mostly been free of the kind of exclusivity wars that have plagued console gaming. Steam, the dominant player in the market, responded to Epic locking up some AAA exclusives in the first six months after launch by complaining that its new rival’s strategy was hurting gamers more than anything else. In response, Epic’s Tim Sweeney jumped on Twitter and promised to end the exclusives strategy if Valve’s Steam platform would offer gamemakers the same, more generous revenue split that Epic is offering. See, Steam returns roughly 70% of game revenue to the publisher, whereas Epic offers a flat 88%.
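To put rough numbers on what that difference means per sale (purely illustrative figures here, not anything from either company’s books), the arithmetic looks like this:

```python
# Back-of-the-envelope comparison of the two revenue splits mentioned above.
# The game price is a made-up example; the splits are 70/30 (Steam) and 88/12 (Epic).

def publisher_cut(price: float, publisher_share: float) -> float:
    """What the publisher keeps on a single sale."""
    return price * publisher_share

game_price = 59.99  # hypothetical sticker price

steam_cut = publisher_cut(game_price, 0.70)
epic_cut = publisher_cut(game_price, 0.88)

print(f"Steam (70/30): publisher keeps ${steam_cut:.2f} per copy")
print(f"Epic  (88/12): publisher keeps ${epic_cut:.2f} per copy")
print(f"Difference: ${epic_cut - steam_cut:.2f} per copy, "
      f"about {(epic_cut / steam_cut - 1) * 100:.0f}% more revenue")
```

On a $59.99 game that works out to roughly $42 versus $53 per copy, or about 26% more money staying with the publisher; that gap is the foundation of Sweeney’s argument below.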
This initial stance from Sweeney was laid out as altruism, with claims that what Epic was really after was a better gaming marketplace that would allow more reinvestment in games, more games for the public, and thereby a happier gaming public. Much of the gaming community met this argument with narrowed eyes. Epic, after all, is a business, and businesses are designed to make money. Sweeney has since followed up on Epic’s stance in a recent tweetstorm responding to public complaints about exclusive games. There’s a lot in the 9 tweets from Sweeney, but let’s start with the rationale for exclusive games on the Epic Store.
This question gets to the core of Epic’s strategy for competing with dominant storefronts. We believe exclusives are the only strategy that will change the 70/30 status quo at a large enough scale to permanently affect the whole game industry. In judging whether a disruptive move like this is reasonable in gaming, I suggest considering two questions: Is the solution proportionate to the problem it addresses, and are gamers likely to benefit from the end goal if it’s ultimately achieved?
So what’s the problem Sweeney is trying to solve? It’s the Steam 70/30 split, yes, but ultimately he claims that such a split prevents more games from being produced due to the financial strain that split puts on game developers and publishers. He claims that a more generous storefront split will allow game publishers and developers to use that money to bank profit, reinvest in making games, or lower the prices of their games. Assuming a healthy competitive marketplace with more games being produced, the money is most likely to go to reinvestment and lower prices. Both are good for gamers. His argument is that, yes, exclusives are annoying to gamers, but if exclusives ultimately produce a better gaming marketplace, that outweighs the annoyance.
In a subsequent tweet, Sweeney claims this is win/win for Epic and gamers alike.
If the Epic strategy either succeeds in building a second major storefront for PC games with an 88/12 revenue split, or even just leads other stores to significantly improve their terms, the result will be a major wave of reinvestment in game development and a lowering of costs. So I believe this approach passes the test of ultimately benefitting gamers after game storefronts have rebalanced and developers have reinvested more of the fruits of their labor into creation rather than taxation.
For the math to work on this, Epic will have to both succeed in getting gamers to adopt its platform and get Valve to budge on Steam’s current revenue splits. Neither is a sure thing. Still, the biggest barrier to people accepting this argument is that it’s all being framed as an altruistic attempt to do good for the gaming public, and that same gaming public is far too cynical to believe altruism is the only reason Epic is taking these actions.
But, as Kotaku points out, perhaps this isn’t so much win/win for Epic as win/win/win.
In short, he’s basically saying yeah, this is causing problems for some gamers, but the issue Epic is trying to solve is worth the hardship. Most interesting is what he says that issue is: it’s not necessarily for their own store to make money and become more powerful, but for Epic’s pricing model—which gives far more money to developers and publishers than Valve’s current split—to be implemented across the market, whether it’s driven by their own success or by rivals adopting a similar model.
That might seem potentially counter-productive; why would it not really matter if your own store survived or not? Then you remember that Epic sells engines as well, and that if Sweeney’s stated goal of seeing a rise in games development investment is achieved, then there’s going to be an increase in the licensing of the Unreal Engine along with it.
I’m irritated with myself for not thinking of this on my own. Epic’s store can make the company money in two ways. First, its exclusive deals and revenue splits can propel it into a major gaming platform successful in its own right. Second, its strategy could force other platforms, especially Steam, to take actions that Epic believes will result in tons more games being made, many of which will license Epic’s Unreal Engine.
Either way, Epic could win out here. And that’s pretty brilliant, whatever you think of PC game exclusives or how believable you think Sweeney’s claims of altruism are.
As you almost certainly know by now, earlier this week Microsoft announced that it was acquiring Github. There’s been plenty of hand-wringing about this in some corners. Microsoft has a pretty long history of bad behavior, so many of the developers who use Github don’t have much love for or trust in Microsoft, and are perhaps reasonably concerned about what will happen. While I’m disappointed that another interesting independent company is being snapped up by a giant, I’m not completely convinced this will be a bad thing in the long run. Microsoft is a fairly different company than it was in the past, and there are reasons to believe it should know enough not to fuck things up. Alternatively, if it does fuck it up, it’s really not that hard for a new and innovative company to step into the void (and certainly, others are already jockeying for position to attract disgruntled Github users).
For this post, however, I wanted to point to three different reports in reaction to the news — because I was fascinated by all three of these takes. More specifically, I found two of them thought-provoking, and one laugh-inducing. And it made me realize just how poorly many non-specialized reporters understand the stuff they’re reporting on, and how much greater insight comes from those who have a really deep and intuitive understanding of things. Let’s start with the laugh-inducing one, before moving on to the thought-provoking pair. The hilariously bad take is an editorial in the Guardian, which has already been corrected once for falsely claiming that Github was open source software, rather than that it hosted open source software (among other things). But the really insane paragraph is this one:
GitHub, by contrast, grew out of the free software movement, which had similar global ambitions to Microsoft. The confused ideology behind it, a mixture of Rousseau with Ayn Rand, held both that humans are naturally good and that selfishness works out for the best. Thus, if only coders would write and give away the code they were interested in, the results would solve everyone else’s problems. This was also astonishingly successful. The internet now depends on free software.
Confused ideology? Mixture of Rousseau with Ayn Rand? What the fuck are they talking about? And then after noting how free software has been phenomenally successful, it then says this:
But the belief that everyone coding would solve anyone’s problems has been shown up as completely ludicrous. If anything, computer literacy has declined over the generations as computers have got easier to use. In the heyday of Microsoft, almost everyone knew some tricks to make a computer do what it should, because almost everyone had to if they wanted to get anything done. But hardly anyone today has the first idea of programming a mobile phone. They just work. That’s progress, but not in the direction some idealists expected. Significant open source software is now produced almost entirely by giant commercial companies. It solves their problems but could be said to multiply ours. Huge cultural and political changes are presented as technological inevitabilities. They are not. The value of GitHub lies not in the open-source software it hosts, which anyone could copy, but in the trust reposed in it by users. It is culture, not code, that’s worth those billions of dollars.
The whole piece seems premised on a near total misunderstanding of the reasons why people use Github, the ethos of free software, and, well… just about everything. Of course it’s culture that’s important… but it’s so odd that this editorial goes out of its way to insult a strawman culture it believes permeates Github, while then claiming that culture is what’s valuable.
So let’s move on to the better takes. I’ll start with Paul Ford, who is, hands down, the absolute best, most thoughtful, insightful and thought-provoking writer on technology issues around. His piece for Bloomberg Businessweek, entitled “GitHub Is Microsoft’s $7.5 Billion Undo Button,” is truly excellent. It not only does one of the best jobs I’ve seen of explaining Github for the layman, but does so in the context of explaining why this deal makes sense for Microsoft. Amusingly, I think that Ford is making the same point the Guardian’s editorial was trying to make, but the difference is that Ford actually understands the details, whereas whoever wrote the byline-less Guardian editorial clearly does not.
GitHub represents a big Undo button for Microsoft, too. For many years, Microsoft officially hated open source software. The company was Steve Ballmer turning bright colors, sweating through his shirt, and screaming like a Visigoth. But after many years of ritual humiliation in the realms of search, mapping, and especially mobile, Microsoft apparently accepted that the 1990s were over. In came Chief Executive Officer Satya Nadella, who not only likes poetry and has a kind of Obama-esque air of imperturbable capability, but who also has the luxury of reclining Smaug-like atop the MSFT cash hoard and buying such things as LinkedIn Corp. Microsoft knows it’s burned a lot of villages with its hot, hot breath, which leads to veiled apologies in press releases. “I’m not asking for your trust,” wrote Nat Friedman, the new CEO of GitHub who’s an open source leader and Microsoft developer, on a GitHub-hosted web page when the deal was announced, “but I’m committed to earning it.”
But perhaps most interesting in Ford’s piece is that, while he understands why Microsoft is doing what it’s doing, he’s also a bit wistful about how he’d always kind of hoped that Github would become something more — something more normal, something that applied to much more of what everyone did. While he doesn’t say it directly, he implies that that dream probably won’t happen with Microsoft in control.
I had idle fantasies about what the world of technology would look like if, instead of files, we were all sharing repositories and managing our lives in git: book projects, code projects, side projects, article drafts, everything. It’s just so damned… safe. I come home, work on something, push the changes back to the master repository, and download it when I get to work. If I needed to collaborate with other people, nothing would need to change. I’d just give them access to my repositories (repos, for short). I imagined myself handing git repos to my kids. “These are yours now. Iteratively add features to them, as I taught you.”
For years, I wondered if GitHub would be able to pull that off: take the weirdness of git and normalize it for the masses, help make a post-file world. Ultimately, though, it was a service made by developers to meet the needs of other developers. Can’t fault them for that. They took something very weird and made it more usable.
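The routine Ford describes maps onto a handful of git operations repeated forever. For anyone who hasn’t lived in git, here’s a rough sketch of that sync loop (the repository path, remote, and branch names are placeholders, not anything from Ford’s piece):

```python
# Rough sketch of the pull / work / commit / push loop described above.
# Repo path, remote, and branch are placeholders; this assumes git is installed
# and the repository already exists.
import subprocess

REPO = "/path/to/everything-repo"  # hypothetical repo holding drafts, projects, etc.

def git(*args: str) -> None:
    """Run a git command inside the repo, raising if it fails."""
    subprocess.run(["git", "-C", REPO, *args], check=True)

def start_of_day() -> None:
    # Download the latest state of the shared repository.
    git("pull", "origin", "main")

def end_of_day(message: str = "end of day checkpoint") -> None:
    # Record today's changes and push them back to the shared repository.
    git("add", "--all")
    git("commit", "-m", message)
    git("push", "origin", "main")
```

Collaboration, as Ford notes, wouldn’t change the loop at all; you’d just give other people access to the same repository.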
The final thought-provoking piece comes from Ben Thompson at Stratechery, who sees the clear business rationale behind Microsoft’s decision. Microsoft built its entire business as a platform for developers (whom it sometimes treated terribly…). But as we’ve moved past a desktop world and into a cloud world, Microsoft has much less pull on developers. Github brings it tons and tons of them.
Go back to Windows: Microsoft had to do very little to convince developers to build on the platform. Indeed, even at the height of Microsoft’s antitrust troubles, developers continued to favor the platform by an overwhelming margin, for an obvious reason: that was where all the users were. In other words, for Windows, developers were cheap.
That is no longer the case today: Windows remains an important platform in the enterprise and for gaming (although Steam, much to Microsoft’s chagrin, takes a good amount of the platform profit there), but the company has no platform presence in mobile, and is in second place in the cloud. Moreover, that second place is largely predicated on shepherding existing corporate customers to cloud computing; it is not clear why any new company, or developer, would choose Microsoft.
This is the context for thinking about the acquisition of GitHub: lacking a platform with sufficient users to attract developers, Microsoft has to “acquire” developers directly through superior tooling and now, with GitHub, a superior cloud offering with a meaningful amount of network effects. The problem is that acquiring developers in this way, without the leverage of users, is extraordinarily expensive; it is very hard to imagine GitHub ever generating the sort of revenue that justifies this purchase price.
Thompson’s piece (among many other good insights) suggests why developers might not need to fear Microsoft’s ownership, because, of all the potential acquirers, Microsoft probably has the least incentive to ruin Github:
This, by the way, is precisely why Microsoft is the best possible acquirer for GitHub, a company that, having raised $350 million in venture capital, was possibly not going to make it as an independent entity. Any company with a platform with a meaningful amount of users would find it very hard to resist the temptation to use GitHub as leverage; on the other side of the spectrum, purely enterprise-focused companies like IBM or Oracle would be tempted to wring every possible bit of profit out of the company.
What Microsoft wants is much fuzzier: it wants to be developers’ friend, in large part because it has no other option. In the long run, particularly as Windows continues to fade, the company will be ever more invested in a world with no gatekeepers, where developer tools and clouds win by being better on the merits, not by being able to leverage users.
My own take is somewhere between all of these. As soon as I heard the rumor, I started thinking back to the famed Steve Ballmer chant of “Developers, Developers, Developers!”
Microsoft has always needed developers, but in the past it got them by being the center of gravity of the tech universe. A huge percentage of developers were drawn to Microsoft because they had to develop for Microsoft’s platform. That allowed Microsoft to get away with a bunch of shady practices that certainly created a bunch of trust issues (Facebook might want to take note of this, by the way). Nowadays, in the cloud world, Microsoft doesn’t have that kind of leverage. It’s still a massive player, but not one that sucks in everything around it. And, it does have new leadership that seems to understand the different world in which Microsoft operates. So it will be interesting to see where it goes.
But, as someone who believes in the value of reinvention and innovation in the tech industry, it’s not necessarily great to see successful mid-tier companies just gobbled up by giants. It happens — and perhaps it clears the field for something fresh and new. Perhaps it even clears the field for that utopian git-driven world that Ford envisions. But, in the present tense, it’s at least a bit deflating to think that a very different, and very powerful, approach to the way people collaborate and code… ends up in Microsoft’s universe.
And, as a final note on these three pieces: this is why we should seek out and promote people who actually understand technology and business when trying to make sense of what is happening in the technology world. The Guardian piece is laughable because it appears to be written by someone with such a surface-level understanding of open source and free software that it comes off as utter nonsense. But the pieces by Ford and Thompson actually add to our understanding of the news, while providing insightful takes on it. The Guardian (and others) should learn from that.
When it comes to content producers reacting to the pirating of their works, we’ve seen just about every reaction possible: from costly lawsuits and copyright trolling, to attempts to engage with this untapped market, up to and including creatively messing with those who would commit copyright infringement. That last option doesn’t do a great deal to generate sales revenue, but it can often be seen by the public as both a funny way to jerk around pirates and a method for educating them on the needs of creators.
But Fstoppers, a site that produces high-end tutorials for photographers and sells them for hundreds of dollars each, may have taken the creativity to the next level in messing with those downloading illegitimate copies of its latest work. The site released a version of Photographing the World 3 on several torrent sites a few days before it went to retail, but the version it released was very different from the actual product. It was close enough to the real thing that many people were left wondering just what the hell was going on, but ridiculous enough that it’s downright funny.
Where Fstoppers normally go to beautiful and exotic international locations, for their fake they decided to go to an Olive Garden in Charleston, South Carolina. Yet despite the clear change of location, they wanted people to believe the tutorial was legitimate.
“We wanted to ride this constant line of ‘Is this for real? Could this possibly be real? Is Elia [Locardi] joking right now? I don’t think he’s joking, he’s being totally serious’,” says Lee Morris, one of the co-owners of Fstoppers.
People really have to watch the tutorial to see what a fantastic job Fstoppers did in achieving that goal. For anyone unfamiliar with their work, the tutorial is initially hard to spot as a fake and even for veterans the level of ambiguity is really impressive.
Beyond the location choices, there are some dead giveaways hidden in subtle ways within the “tutorial.” As an example, here is a scene from the tutorial in which Locardi is demonstrating how to form a ‘mask’ over one of the photos from Olive Garden.
If that looks like he’s drawn a dick and balls over the photo on his computer screen, that’s because that is exactly what he’s done. The whole thing is an Onion-esque love letter to pirates, screwing with them for downloading the tutorial before the retail version was even available. By uploading this 25GB file to torrent sites, and going so far as to generate positive but fake reviews of the torrent, Fstoppers managed not only to generate hundreds of downloads of the fake tutorial; its fake actually outpaced torrents of the real product. The whole thing was like a strange, funny honeypot. The fake apparently even resulted in complaints from pirates to Fstoppers about the quality of the product.
Also of interest is the feedback Fstoppers got following their special release. Emails flooded in from pirates, some of whom were confused while others were upset at the ‘quality’ of the tutorial.
“The whole time we were thinking: ‘This isn’t even on the market yet! You guys are totally stealing this and emailing us and complaining about it,’” says Fstoppers co-owner Patrick Hall.
You have to admit, the whole thing is both creative and funny. Still, the obvious question that arises is whether all the time and effort that went into putting this together couldn’t have been better spent figuring out a business model and method in which more of these pirates were flipped into paying customers rather than simply screwing with them.
For game developers and publishers, there are lots of ways to react to the modding community that so often creates new and interesting aspects to their games. Some companies look to shut these modding communities down completely, some threaten them over supposed copyright violations, and some developers choose to embrace the modding community and let mods extend the life of their games to ridiculous lengths.
But few studios have gone as far in embracing modders as developer 1C, makers of IL-2 Sturmovik: Cliffs of Dover. The flight-sim game, released way back in 2011, arrived on the gaming market to decidedly lukewarm reviews. Most of the critiques and public commentary surrounding the game could be best summarized as: “meh.” But a modding community sprang up around the game, calling itself Team Fusion, and developed a litany of mods for IL-2. Rather than looking at these mods as some sort of threat, 1C instead worked with Team Fusion and developed an official re-release of the game incorporating their work.
IL-2 Sturmovik: Cliffs of Dover BLITZ Edition is the result. Officially sanctioned and released under the banner of original developers 1C, it combines the original game with all the work that the fans at Team Fusion Simulations—now given access to the game’s source code—were able to cook up.
This work includes new planes, new graphics options, new damage and weapon modelling, and updated visual effects.
You can buy BLITZ if you’re coming into the game fresh, but if you already owned Cliffs of Dover, BLITZ was added to your Steam library for free late last year.
1C has also gone out of its way to highlight that BLITZ is in part the work of the Team Fusion modders and even announced the new release with comments on how much work the mods do to clean up the serious flaws in the original game. Other studios ought to be paying attention, because this is how it’s done. The modding community, far from being a threat to the game developers, both made the title more attractive for purchase by making it better, and extended the life of this title to the point that it is being re-released for sale again. That kind of free labor of love is something you can only get by embracing the modding community.
It also serves as a reminder, again, that the biggest fans of any given content can do much to promote it, if content makers bother to connect with them and treat them well. How anyone could argue that hardline stances against this kind of tinkering are a superior option is a question I cannot answer.
Mathew Ingram recently wrote a fantastic post about Twitter’s big mistake a few years back, basically killing off its openness for developers. He builds his argument off of an interesting post from Ben Thompson, arguing that Twitter has lost its strategic focus. Both articles are great, and I recommend them both. In the early days, Twitter was almost completely open. Many of its most useful features and services came from others building on top of it. The very idea of the “@” symbol was the invention of a user. Same with the retweet. Now both are core to Twitter’s identity. And, of course, third-party services were what made Twitter usable in the first place. The service didn’t really ever take off for me until I used Tweetdeck — which was a third party service until Twitter bought it. Thankfully, I can still use Tweetdeck (though not on mobile) because Twitter’s actions killed off most competitors (and, because of this, Tweetdeck still lags in fixing some basic things — like an autoscroll problem I’ve complained about for years). As Ingram notes, Twitter made a big strategic shift, as it started to fear its own openness and worry that it may have resulted in the dreaded “someone else profiting” off of Twitter’s foundation:
Namely, a crucial turning point in Twitter’s evolution that arguably helped put it where it is today, both in a positive sense (it is a publicly-traded $25-billion company) and a negative one (its growth potential is in question and its strategy doesn’t seem to be working). And that turning point happened about five years ago, when Twitter decided to turn its back on the third-party ecosystem that helped make it successful in the first place.
This process began gradually, with the acquisition of Tweetie, which became Twitter’s official iOS client, and restrictions on what third parties could do with tweets, including selling advertising related to them. But it escalated quickly, and arguably became an all-out war with Twitter’s moves against Bill Gross, the Idealab founder and inventor of search-related advertising, who was busy acquiring Twitter clients and trying to build an ad model around the public Twitter stream. The idea that someone could monetize Twitter before Twitter itself got around to doing so was what one investor called a “holy shit moment” for the company.
We see this sort of thing in all sorts of areas — especially around “intellectual property.” People have a very emotional “holy shit moment” pretty frequently when they see “someone else” making money by leveraging something that they feel some sort of ownership attachment to, whether or not there’s any legitimate basis for that attachment. So many of the intellectual property fights we see stem from that general feeling of “Hey, that’s ripping me off!” even if the actions of those third parties may not have any real impact on the originating content, service or idea.
In the internet era, however, this is almost always the wrong decision. The internet thrives based on the flow of information. You want information to flow more broadly, rather than to hoard it. Historical economics is based on worlds of scarcity, and in worlds of scarcity it makes sense to hoard resources, as they are valuable by themselves. Yet, in worlds of abundance you want the opposite. You want abundant or infinite resources to flow freely because they do something special: they increase the value of everything else around them. You want openness, not closed systems. You want sharing, not hoarding. You want copying, not restrictions. Because all of those things increase the overall pie massively, even if some of that pie (or even large portions) are captured by others.
As Ingram notes, at least some at Twitter recognized this at the time. An early influential employee at Twitter, its chief engineer Alex Payne, wrote about how he tried to persuade the company to go in that direction:
Some time ago, I circulated a document internally with a straightforward thesis: Twitter needs to decentralize or it will die. Maybe not tomorrow, maybe not even in a decade, but it was (and, I think, remains) my belief that all communications media will inevitably be decentralized, and that all businesses who build walled gardens will eventually see them torn down. Predating Twitter, there were the wars against the centralized IM providers that ultimately yielded Jabber, the breakup of Ma Bell, etc. etc. This isn’t to say that one can’t make quite a staggeringly lot of money with a walled garden or centralized communications utility, and the investment community’s salivation over the prospect of IPOs from LinkedIn, Facebook, and Twitter itself suggests that those companies will probably do quite well with a closed-but-for-our-API approach.
The call for a decentralized Twitter speaks to deeper motives than profit: good engineering and social justice. Done right, a decentralized one-to-many communications mechanism could boast a resilience and efficiency that the current centralized Twitter does not. Decentralization isn’t just a better architecture, it’s an architecture that resists censorship and the corrupting influences of capital and marketing. At the very least, decentralization would make tweeting as fundamental and irrevocable a part of the Internet as email. Now that would be a triumph of humanity.
But he lost that argument to those who wanted to keep the pie smaller, but to capture more of it for themselves. That may have helped the company go public, but it has put the company in a serious bind today. One in which Wall Street is profoundly disappointed that what Twitter is capturing for itself “isn’t enough” and the innovations that the company needs to keep growing and innovating are much harder to come by. Sure, it does things like Vine and Periscope — both of which it bought out in infancy — but to do so it’s had to hamstring other third-party developers like Meerkat.
Ingram also highlights another Ben Thompson post on what Twitter might have been had it gone down this more open path (he wrote this after the whole Meerkat thing):
I would argue that what makes Twitter the company valuable is not Twitter the app or 140 characters or @names or anything else having to do with the product: rather, it’s the interest graph that is nearly priceless. More specifically, it is Twitter identities and the understanding that can be gleaned from how those identities are used and how they interact that matters.
If one starts with that sort of understanding, that Twitter the company is about the graph, not the app, one would make very different decisions. For one, the clear priority would not be increasing ad inventory on the Twitter timeline (which in this understanding is but one manifestation of an interest graph) but rather ensuring as many people as possible have and use a Twitter identity. And what would be the best way to do that? Through 3rd-parties, of course! And by no means should those 3rd-parties be limited to recreating the Twitter timeline: they should build all kinds of apps that have a need to connect people with common interests: publishers would be an obvious candidate, and maybe even an app that streams live video. Heck, why not a social network that requires a minimum of 140 characters, or a killer messaging app? Try it all, anything to get more people using the Twitter identity and the interest graph.
There’s a more fundamental premise at work here. In the information era, spreading more information increases the pie massively and opens up many more opportunities. The challenge is that many others can also take advantage of many of those opportunities, but as the core player in the space, a company like Twitter has a clear and natural advantage, even if it did what Payne had wanted to do many years ago and give up the underlying control altogether.
This is, unfortunately, a profoundly difficult concept for many to grasp — especially when they’re in the midst of it. Hell, even as someone who regularly talks about this very idea, I still get the initial emotional pang of being upset when I see someone else get success with an idea that I had first (whether or not they got it from me). It’s only natural to have that visceral reaction. The real question is what do you do about it. Do you fret? Do you try to control? Or do you realize that in broadening these ideas and sharing them more widely, it creates greater opportunities across the board?
It’s impossible to know what would have happened had Twitter taken a different path. But it seems clear that remaining a more open platform (or even moving to a fully distributed one), would have resulted in a tool that was much more useful today, with a much larger audience and much greater innovation. It’s too bad we didn’t get to live in that world.
In the wake of the recent flop that was Valve’s attempt to create a platform for paid game mods, you’d have thought that the company would be on its digital toes when it comes to being gamer-friendly. I have no interest in piling on Valve or the Steam platform, given what a great example it is of how game developers can make money in the digital age, but I was a bit surprised to learn that the company just announced it won’t be in charge of banning gamers from games any longer. Instead, it’s turning the keys to banning gamers over to the game developers themselves.
Because nobody likes playing with cheaters. Playing games should be fun. In order to ensure the best possible online multiplayer experience, Valve allows developers to implement their own systems that detect and permanently ban any disruptive players, such as those using cheats. Game developers inform Valve when a disruptive player has been detected in their game, and Valve applies the game ban to the account. The game developer is solely responsible for the decision to apply a game ban. Valve only enforces the game ban as instructed by the game developer. For more information about a game ban in a specific game, please contact the developer of that game.
Now, when anyone, including the Steam announcement above, talks about reasons to ban gamers from games, cheating is always brought up. And, indeed, nobody would be wrong to suggest that gamers cheating in online games reduce the fun factor for the rest of the gaming community. Would it be better to exclude cheaters from games? Yes, no doubt. Does Valve’s announced plan above, handing off the responsibility for bans from its platform to developers, make for a good way to go about this? Hell no.
Why? Well, because giving that kind of control over to the game developers shifts the balance of power when it comes to being banned from games and the reasons why a player might be banned. The nice thing about Steam is that it has two sets of customers: both the gamers themselves and the game developers on its service. Therefore, when Steam is the one administering the ban-button, it essentially serves as an arbiter. It might be an imperfect arbiter, sure, but having all the power to ban customers from games residing in the hands of developers takes us from imperfect to completely broken. Whatever the developers say goes.
And developers haven’t always proven that they can be trusted with lesser forms of this power. Imagine Derek Smart in this scenario, no longer limited to simply blanket-banning gamers from the Steam forums over negative reviews and comments, but now also able to ban them from his games. Other developers have already attempted to ban players from their own single-player games over forum issues, so imagine what’s going to happen now that there is no “trying”, only “doing” when it comes to bans.
Steam made its mark by being fairly friendly to gamers in a myriad of ways. Giving this much power over bans to game developers is a step in the opposite direction. It would be a strange decision at any time, but now it seems particularly odd.
Over the last few years it’s been great to see a number of useful law school clinics pop up, focused on taking on key issues of our modern era. While law clinics have been around for a while, those focused on issues raised by technological change and innovation are quite useful. We’ve seen things like the Samuelson Law, Technology & Public Policy Clinic at Berkeley, the Online Media Legal Network at Harvard, the Brooklyn Law Incubator and Policy Clinic, and a number of others popping up. However, most of these have been focused on copyright and free speech issues. There really hasn’t been much done on the patent front that we can recall.
So it’s exciting to see a bunch of law schools teaming up with the App Developers Alliance (full disclosure: they have been, and continue to be, a sponsor for Techdirt) to launch the Law School Patent Troll Defense Network. It’s bringing together a variety of existing law school clinics, including those at Brooklyn Law School, American University, NYU Law, USC and more, to help app developers and small businesses fight off patent trolls. This is a big deal, since trolls have increasingly been targeting these guys. We’ve covered, for example, the lawsuits by Lodsys, which seems to take great joy in shaking down app developers with highly questionable patent threats.
The biggest issue with these shakedown attempts, which pretty much everyone recognizes, is that the cost of “settling” is much, much lower than the cost of fighting, even if the patent claim is ridiculous. And, much of that “cost” is in paying for lawyers. There are some lawyers willing to take cases on a pro bono setup, but that’s much rarer when it comes to patent disputes, which are generally seen as “corporate” disputes, rather than directly impacting individuals (even though many small businesses and app developers really are just a single person). So, the setup here will allow app developers who have been threatened to work with the network to try to find law clinics willing to help out and defend against bogus threats from patent trolls.
While there are huge structural problems with the patent system that need to be fixed, in the short term, lowering the potential liability and burden for a company hit with a threatening shakedown letter from a patent troll can go a long way toward keeping companies from just rolling over and paying up.
There are tons of app developers out there who are quickly discovering that there’s a major risk they face today: if your app gets even remotely popular, you’re a likely target for a bunch of patent trolls who are feeding off of the greater app developer ecosystem with incredibly broad patents for obvious concepts (even things like charging for your app). There’s a relatively new group called the App Developers Alliance that is putting on a series of patent summits across the US to discuss issues related to patents and app developers. I’ve had a few conversations with the folks putting these events together, and they look like they should be fantastic resources for those who can attend.
Software patents present significant challenges to app developers. Vague claims, product life cycles shorter than the PTO review process, trolls and general uncertainty threaten to stifle app industry innovation and growth.
Beginning in April, the Application Developers Alliance will host events nationwide for developers to learn about patents and share stories of Lodsys letters, legal strategies and litigation costs, and their ideas about software patent reform.
Each event will feature an expert presentation/overview, followed by a panel discussion between policymakers, app developers, attorneys, and other stakeholders. Events will include an open Q&A and a networking reception.
You can check out the site to see when and where the various summits will be held.
Disclosure: Techdirt and the App Developers Alliance are discussing a sponsorship/advertising deal. That promotion is separate from these events and, as always, this post is editorially independent.