We Gotta Have a Presence: Failing at Marketing in Second Life

Is it possible to create engaging branded experiences in Second Life that actually help your company sell product, or at least reinforce your customers' perception of your brand?

In the August issue of WIRED, Frank Rose is pretty down on the opportunities Second Life offers US consumer brands trying to generate interest. But the fact that some (even many) have failed to create interesting experiences doesn't prove that no one can.
Continue reading →

Wikipedia, Ogyu Sorai, and Academia

I’ve heard a number of different folks – both in personal conversations and at conferences – talk about the problems with citing Wikipedia in an academic context.

Generally this begins with a reference to some school or other (it usually seems to be a history department, though I’ve heard multiple schools mentioned) that has forbidden the citation of (or maybe even the consultation of) Wikipedia entries in student essays. The argument this bit of data is used to support is one of two:

  1. You can’t cite Wikipedia in an academic paper, and that is evidence that Wikipedia isn’t as good as real encyclopedias with editors and print publishing houses behind them.
  2. You can’t cite Wikipedia in an academic paper, and that is evidence of just how behind the times the ivory tower academics are.

This month’s Communications of the ACM refreshingly adds richer context to what I was beginning to suspect was some kind of urban legend. Neil Waters, of Middlebury College, wrote this month’s viewpoint column, titled “Why You Can’t Cite Wikipedia in My Class.” (For now, at least, it appears to be free full text in HTML or PDF – not sure whether that will always be true.)

In it, he describes how the Middlebury College History Department came to forbid Wikipedia citations in student essays:

I made that effort [to perceive the positive side of Wikipedia] after an innocuous series of events briefly and improbably propelled me and the history department at Middlebury College into the national, even international, spotlight. While grading a set of final examinations from my “History of Early Japan” class, I noticed that a half-dozen students had provided incorrect information about two topics—the Shimabara Rebellion of 1637–1638 and the Confucian thinker Ogyu Sorai—on which they were to write brief essays. Moreover, they used virtually identical language in doing so. A quick check on Google propelled me via popularity-driven algorithms to the Wikipedia entries on them, and there, quite plainly, was the erroneous information. To head off similar events in the future, I proposed a policy to the history department it promptly adopted: “(1) Students are responsible for the accuracy of information they provide, and they cannot point to Wikipedia or any similar source that may appear in the future to escape the consequences of errors. (2) Wikipedia is not an acceptable citation, even though it may lead one to a citable source.”

The rest, as they say, is history. The Middlebury student newspaper ran a story on the new policy. That story was picked up online by The Burlington Free Press, a Vermont newspaper, which ran its own story. I was interviewed, first by Vermont radio and TV stations and newspapers, then by The New York Times, the Asahi Shimbun in Tokyo, and by radio and TV stations in Australia and throughout the U.S., culminating in a story on NBC Nightly News. Hundreds of other newspapers ran stories without interviews, based primarily on the Times article. I received dozens of phone calls, ranging from laudatory to actionably defamatory. A representative of the Wikimedia Foundation (www.wikipedia.org), the board that controls Wikipedia, stated that he agreed with the position taken by the Middlebury history department, noting that Wikipedia states in its guidelines that its contents are not suitable for academic citation, because Wikipedia is, like a print encyclopedia, a tertiary source. I repeated this information in all my subsequent interviews, but clearly the publication of the department’s policy had hit a nerve, and many news outlets implied, erroneously, that the department was at war with Wikipedia itself, rather than with the uses to which students were putting it.

The key point here is that Wikipedia was (and still is, I believe) disallowed in a specific context, not that Middlebury was trying to prevent its students from seeing that historical interpretations are debated and argued about.

As Waters notes:

If [the goal] is to make Wikipedia a truly authoritative source, suitable for citation, it cannot be done for any general tertiary source, including the Encyclopaedia Britannica. . . . If the goal is more modest—to make Wikipedia more reliable than it is—then it seems to me that any changes must come at the expense of its open-source nature. Some sort of accountability for editors, as well as for the originators of entries, would be a first step, and that, I think, means that editors must leave a record of their real names. A more rigorous fact-checking system might help, but are there enough volunteers to cover 1.6 million entries, or would checking be in effect reserved for popular entries?

In other words, Waters isn’t an ivory tower academic refusing to cede authority over knowledge to the great unwashed, but a practical educator trying to help his students develop critical thinking skills. (Though I think he has missed the notion that Wikipedia’s governance is also evolving – it isn’t stuck in one model but is constantly looking for the right balance of controls versus openness, and at how changes to those levers affect the quality and quantity of entries on the site.)

There’s a place for detailed primary and secondary research, and a place for general tertiary sources – and learning that difference seems like a good exercise for students and conference presenters alike.

This relationship is off to a bad start

Coming across Roger Dooley’s post about Sears and its privacy policy (Sears: Marketers vs Lawyers, with a tip of the hat to Make the Logo Bigger), I decided to go check out the site he references, My SHC Community.

Unfortunately, no such luck (cue the “No soup for you!” clip from Seinfeld):

[Screenshot: My SHC Community]

Was the problem that I was running Firefox rather than Netscape (Netscape? Really?), or that I was running Linux?
Continue reading →

The Slippery Slope of Intent: Microsoft and the OSI

It’s been a very active couple of weeks on the license-discuss mailing list. Now the debate is spilling over into the blogosphere.

The immediate question is: should the OSI approve the licenses Microsoft recently submitted as being compliant with the open source definition?

The underlying questions, though, are about the role of the OSI, and whether anything other than adherence to the open source definition can be used as a criterion for certifying, or refusing to certify, a license.

First, is Microsoft’s past (and, dare I say, current) behavior as an anti-open-source force in the market relevant?

Second, what about the impact of license proliferation, and the creation of a “separate commons” of software which cannot be combined with projects under the GPL or other popular licenses? Is the impact of approving a set of licenses on “the open source community” something the OSI can, should, or even must consider?

The initial salvo in this discussion belongs to Chris DiBona, manager of open source programs for Google (though I assume he speaks, on the license-discuss mailing list, for himself, not as an official Google spokesperson), who wrote in an email to the list earlier this month:

I would like to ask what might be perceived as a diversion and maybe even a mean spirited one. Does this submission to the OSI mean that Microsoft will:

a) Stop using the market confusing term Shared Source
b) Not place these licenses and the other, clearly non-free , non-osd licenses in the same place thus muddying the market further.
c) Continue its path of spreading misinformation about the nature of open source software, especially that licensed under the GPL?
d) Stop threatening with patents and oem pricing manipulation schemes to deter the use of open source software?

If not, why should the OSI approve of your efforts? That of a company who has called those who use the licenses that OSI purports to defend a communist or a cancer? Why should we see this seeking of approval as anything but yet another attack in the guise of friendliness?

Finally, why should yet another set of minority, vanity licenses be approved by an OSI that has been attempting to deter copycat licenses and reduce license proliferation? I’m asked this for all recent license-submitters and you are no different :-)

I was surprised at the time that this didn’t receive more attention, but I guess most reporters don’t read mailing lists. ;)

Then Matt Asay blogged about it on CNET, under the title Microsoft capitulates to the OSI, gets horse-whipped for its troubles.

Asay’s argument, which has also been voiced clearly by a number of folks on the mailing list directly, is that the OSI ought not to consider the conduct of the submitting organization, but rather the content and character of the licenses themselves:

I understand that Microsoft may be using the OSI’s license approval process to its own ends, and potentially ends that may be anti-open source. I’m still not sure, however, that it’s appropriate to treat an incoming license from Microsoft any differently than one that comes from Linus Torvalds.

Groklaw joined the conversation in “Why, Why, Why OSI?,” suggesting not only that the OSI may be within its charter to consider the conduct of the organization proposing the license (what the OSD prohibits is discrimination against fields of endeavor by licenses, not consideration of submitting entities by the OSI) but that the OSI may be required by its bylaws to do so:

If in fact OSI suspects that Microsoft’s real purpose is to cause harm, since OSI is a California 501(c)(3), can it approve a license it believes will harm the community’s interests? The By-laws seem to me to be incompatible with any action OSI knows will cause harm, or is foreseeably likely to cause harm, to the community’s interests. I’m just a paralegal, so OSI probably needs to ask its lawyers about that, but as I read it, I can’t see how it can ignore a threat that an OSI license might be used against the Open Source community. A number of community members have told OSI they think accepting this license would be harmful to the community’s interests, after all. Can OSI ignore that? I honestly don’t see how, unless they rewrite the by-laws.

The vision of the OSI that Groklaw lays out here tasks it with approving licenses based on their potential impact on the open source community, rather than on their compliance with the open source definition. (The OSI’s about page claims only that “The OSI are the stewards of the Open Source Definition (OSD) and the community-recognized body for reviewing and approving licenses as OSD-conformant.”)

I think that’s too much for the OSI, or any other organization, to own. While the community discussion of the impact of (and Microsoft’s intent behind) these licenses should and must continue, as should the vigorous arguments within projects about whether or not to adopt them, I don’t believe the OSI should mix “impact on the community” together with “OSD compliant.”

However, even if the licenses were not submitted by Microsoft, or on its behalf, there are real potential issues around license proliferation, and specifically around the ability to combine projects released under different licenses and license versions into derivative works. And it is part of the overall OSI charter to focus on license proliferation issues and work to resolve them.

Zac Bowling, a developer on the Mono project, sums up the issue on his blog:

While I’m so happy Microsoft is [submitting the licenses for OSI certification], I do have a few small reservations. In general, I don’t think having more open source licenses are a “good thing™” for the community. I especially don’t care for any new licenses that overly govern or even disallow mixing code under one license with code under it’s license. We have problems with that type of thing right now where FSF’s GPLv2, Sun’s CDDL, and all the Microsoft shared source licenses prevent mixing code with anything under a different license. GPLv3 did take care of part of this issue to be compatible with the Apache license and a few others.

These issues suggest to me that a legitimate approach is for the OSI to work directly with Microsoft (as the license drafters), as it has done with Sun, SocialText, and other license authors, to avoid unnecessary license proliferation and conflict.

Michael Tiemann suggested as much in his email to the license-discuss list this morning:

In many past discussions the arguments of license-discuss and the personal appeals of OSI board members have prevailed upon many license submitters to change their licenses so as to minimize the harm those licenses do to the overall open source ecosystem (as best we understand it). I think that this is a case where the proper next step is to take Microsoft up on their offer to discuss these points and see if we cannot address this particular case of license incompatibility. Again, I agree that license incompatibility per se is not evil, but total incompatibility with any other possible OSI-approved license is not a feature that fosters the benefits of open source as I understand them.

But what happens if those discussions have no effect? What if the “total incompatibility with any other possible OSI-approved license” aspect of the Microsoft licenses is a feature, not a bug (that is, deliberate and by design, not an accident of syntax)?

The only real solution I can see is that the licenses are approved as compliant with the open source definition (as it currently stands) but also marked as “not recommended for use” based on the incompatibilities they create. Sort of “approved as OSD compliant,” but with a major asterisk. (Maybe a new category on this page of “licenses reluctantly approved, to be used with extreme caution as they have significant incompatibility.”)

Does that risk the charge of discrimination against Microsoft? Perhaps, but it would be clearly grounded in the licenses themselves and their effect, not in suspicions about corporate intent.

Moving Windows from Dual Boot to Virtualization (Help!)

When I initially set up my new laptop, I opted for dual boot, assuming that from time to time in client work I’d need to be able to get to Windows applications. Now that I’m moving to virtualization, I’ve run into an issue with my shared partition.

Hoping to avoid significant “I can’t get to that file now” problems, and not wanting to try out read/write mounting of NTFS in Linux, I took a multi-partition approach, breaking up the hard drive thusly:

  1. ext3 format, onto which Ubuntu is installed
  2. NTFS format, onto which Windows XP is installed
  3. vfat (aka FAT32) format, as a shared partition accessible from Windows or Linux (mounted roughly as in the fstab sketch below)
  4. a small Linux swap partition, ignored by Windows
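
On the Linux side, that layout boils down to a few /etc/fstab entries. A minimal sketch, where the device names, mount point, and uid/gid values are illustrative placeholders rather than my actual configuration:

    # root filesystem (Ubuntu) on the ext3 partition
    /dev/sda1  /              ext3  defaults,errors=remount-ro        0  1
    # shared FAT32 partition, visible as E: under Windows
    /dev/sda3  /media/shared  vfat  uid=1000,gid=1000,umask=022,utf8  0  0
    # swap; /dev/sda2 (the Windows NTFS partition) is deliberately left unmounted
    /dev/sda4  none           swap  sw                                0  0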

This was great, as it enabled me to put things like Firefox profiles on the shared drive, so that whether I booted Windows or Kubuntu I ended up with the same set of bookmarks, cookies, and the like.
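
For anyone wanting to replicate that trick, the key is pointing each operating system’s profiles.ini at the same directory on the shared partition. A rough sketch of the Linux side (~/.mozilla/firefox/profiles.ini), where the profile directory name is just an example:

    [General]
    StartWithLastProfile=1

    [Profile0]
    Name=shared
    IsRelative=0
    Path=/media/shared/firefox-profile
    Default=1

The Windows copy of profiles.ini carries the same section with Path=E:\firefox-profile; IsRelative=0 tells Firefox to treat the path as absolute rather than relative to its own profiles directory.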

It also meant all my “documents” (client folders, project folders, and so on) went to the shared partition. (In Windows I mapped “My Documents” to point to what it sees as the E: drive, and in Linux I worked with the same partition mounted at /media/shared/.)

Since then, however, I’ve decided that rather than dual booting I should move Windows into a virtualization container and run Windows XP inside VMware Player without having to reboot.

(Experienced virtualization users at this point have likely already anticipated the problem).
Continue reading →