30 November 2007

The Cape Town Open Education Declaration

Just as the Budapest Open Access Initiative was a defining moment for open access, so the Cape Town Open Education Declaration promises to be the same for open education:

We are on the cusp of a global revolution in teaching and learning. Educators worldwide are developing a vast pool of educational resources on the Internet, open and free for all to use. These educators are creating a world where each and every person on earth can access and contribute to the sum of all human knowledge. They are also planting the seeds of a new pedagogy where educators and learners create, shape and evolve knowledge together, deepening their skills and understanding as they go.

This emerging open education movement combines the established tradition of sharing good ideas with fellow educators and the collaborative, interactive culture of the Internet. It is built on the belief that everyone should have the freedom to use, customize, improve and redistribute educational resources without constraint. Educators, learners and others who share this belief are gathering together as part of a worldwide effort to make education both more accessible and more effective.

The expanding global collection of open educational resources has created fertile ground for this effort. These resources include openly licensed course materials, lesson plans, textbooks, games, software and other materials that support teaching and learning. They contribute to making education more accessible, especially where money for learning materials is scarce. They also nourish the kind of participatory culture of learning, creating, sharing and cooperation that rapidly changing knowledge societies need.

"The freedom to use, customize, improve and redistribute educational resources without constraint": does that sound familiar, Richard? Now all we need are some good open education licences.... (Via Open Access News.)

The Virtues of PatientOS

It used to be a truism that the open source development methodology would only work for mainstream projects. Only areas of interest to large user and hacker populations, so the logic went, could support the free software ecology. So a striking proof of the growing maturity of open source is its increasing appearance in vertical markets, hitherto regarded as unviable.

For example, here's the GPL'd PatientOS:

PatientOS is a free clinical information management system for hospitals and healthcare practitioners. Pharmacy, the laboratory, registration and other departments will be able to automate many processes when version 1.0 is released October 31st, 2008. A physician practice version will be released March 31st, 2008.

Pretty specialist - and pretty important, too. (Via FSDaily.)

Trumping Intellectual Monopolies

Some misguided people seem to think that intellectual monopolies are "sacred" - probably because they insist on calling them "intellectual property", and property, as we all know, is totally sacrosanct. But it seems that some are realising there may be higher imperatives - like saving the planet:


Intellectual property rules should be reshaped to ensure that they do not hinder developing countries from gaining access to technology considered vital for addressing climate change, the European Parliament has declared.

Members of the Parliament (MEPs) on 29 November approved a report that urges examination of the possibility of revising the World Trade Organization Agreement on Trade-Related Aspects of Intellectual Property Rights (TRIPS). TRIPS may need to be amended, the report suggests, in order to allow for the compulsory licensing of environmentally-friendly technology that is patented.

Amending TRIPS? Now there's an interesting idea.

29 November 2007

Grizzly Bears with Chainsaws for Hands

That's the creators of the Internet, in case you were wondering. (Via Boing Boing.)

Giving "Eye-Pea" the Heave-Ho

Regular readers of this blog will know that I am not over-enamoured of the term "intellectual property", and feel the need to protect myself from its malign influence by placing prophylactic inverted commas around it. Those readers will also know what I think we should use instead, but here's the full half-hour argument why, with the added bonus that you get to see me insulted by an "eye-pea" enthusiast.

Where Are We With Open Source in Europe?

Given its non-standard nature, free software is particularly hard to pin down in terms of the bare facts about its development and deployment. But this post from Roberto Galoppini does a useful job of pulling together links to various stories and studies about open source in Europe that provide a few pointers.

28 November 2007

Textbook Enterprise Open Source

There's no more powerful argument in favour of using GNU/Linux in an enterprise context than big names that are already doing so. Google and Amazon are the obvious ones, but we can now add PayPal to the list:

PayPal is currently processing $1,571 worth of transactions per second in 17 different currencies on about 4,000 servers running Red Hat Linux.

The article also gives some very concrete advantages of running a GNU/Linux-based grid in this way:

As PayPal grows it's much easier to grow the grid with Intel (NSDQ: INTC)-based servers than it would be to upgrade a mainframe, he said. In a mainframe environment, the cost to increase capacity a planned 15% or 20% "is enormous. It could be in the tens of millions to do a step increase. In [PayPal's] world, we add hundreds of servers in the course of a couple of nights and the cost is in the thousands, not millions," he said.

PayPal takes Red Hat Enterprise Linux and strips out all features unnecessary to its business, then adds proprietary extensions around security. Another virtue of the grid is that PayPal's 800 engineers can all get a copy of that customized system on their development desktops, run tests on their raw software as they work, and develop to PayPal's needs faster because they're working in the target environment. That's harder to do when the core of the data center consists of large Unix symmetrical multiprocessing boxes or mainframes. In neither case is it cheap to install duplicates for developers, he said.

European Digital Library - An Update

I've written about this project several times; here's the latest info:

Europe's cultural institutions plan to launch a prototype of the European digital library in November 2008. It will give direct access to at least 2 million digital books, photographs, maps, archival records, and film material from Europe's libraries, archives and museums. By 2010 this will already have rapidly grown to include far more digital objects than the 6 million originally envisaged as more institutions make their digitised assets searchable through the European digital library.

For a steady growth of the European digital library, two key issues need to be tackled: the financing of digitisation and solutions for making copyrighted works searchable through the European digital library. In its yesterday's meeting the high level group discussed:

* new ways for funding digitisation through public private partnerships;
* solutions for mass-digitisation of out of print works and orphan works (for which it is very difficult to locate the rightholders). By June next year the group should find an agreement on dealing with orphan works (including criteria for searching for rightholders);
* the issue of access to and preservation of scientific information (see IP/07/190). Scientific publishers, libraries and scientists confirmed their intention to work together in an experiment with open access to scientific publications after an embargo period.

It's particularly pleasing to see orphan works mentioned, since bringing them online would make a huge difference. It's also good to see scientific publishers making positive noises - though we'll need to see the details. (Via paidContent:UK.)

Firefox By Numbers

* We think there are at least 125,000,000 Firefox users in the world right now, give or take. That represents a doubling since Firefox 2 was released a little over a year ago, and significant growth in every country.
* At Mozilla we view market share as an important quantitative metric that can help us ask smarter questions and build better products, but it’s only one of many
* We have systems here that tell us approximate number of daily users, and use that information to inform much of what we do.

The rest of this interesting post from John Lilly, COO of Mozilla, explains the reasoning behind that number, and also offers some insight into what the Mozilla team are thinking these days. (Via Asa Dotzler.)

Mashup 2.0: Inheriting the Mesh

I've written before about how mashups need meshes. Typically that mesh will be geographical, but another obvious one is time. Time is interesting because it's often linked to people's lives - or rather several interlinked lives. That's the insight behind this new startup, AllofMe:

Founder Addy Feuerstein has described AllofMe in the following way:

“The idea is that if I or someone else has a picture that includes my son, alone or with friends, I or anyone else will be able to tag the people in the picture and transform these digital assets into part of my sons. When he grows up and takes control over his own timeline, he will have a timeline of tagged material from his childhood…We will also transform the timeline created by each person into a video movie, through a widget on an internet site [and] enable comparison of your timeline with that of your acquaintances, or chronological data files. For instance, you will be able to compare your own timeline with historical events of Time, and see where you were when some important world event occurred.”

I'm not sure about the company, but I think the idea is important, because it hints at a further key property of Mashup 2.0, where it becomes possible to use pre-existing meshes in richer ways.

Asus Eee Goes Weeeeeee!

Good news for Asus, but also for GNU/Linux:

Unprecedented demand for the low-cost baby laptop from both consumers and the education sector sees PC builder increase sales target to five million units for 2008 as UK retailers struggle to keep stock on shelves.

Asustek (also known as Asus) has revised its expectations for sales of its recently launched Eee PC low cost laptop, increasing sales forecast from three million units to five million.

For more analysis see my comments quoted here.

Vodafone's Open Source Page

It's amazing where open source may lurk. Who would have thought that the mobile giant Vodafone had a page devoted to the subject? And yet it does. (Via Dana Blankenhorn.)

Millions of Book Projects

There are so many book-scanning projects underway at the moment that it's hard to keep up. Google's may have the highest profile, but it suffers from the big problem that it won't make full texts routinely available. That's not the case for the Universal Digital Library, aka the Million Book Project - a name that's no longer appropriate:

The Million Book Project, an international venture led by Carnegie Mellon University in the United States, Zhejiang University in China, the Indian Institute of Science in India and the Library at Alexandria in Egypt, has completed the digitization of more than 1.5 million books, which are now available online.

For the first time since the project was initiated in 2002, all of the books, which range from Mark Twain’s “A Connecticut Yankee in King Arthur’s Court” to “The Analects of Confucius,” are available through a single Web portal of the Universal Library (www.ulib.org), said Gloriana St. Clair, Carnegie Mellon’s dean of libraries.

“Anyone who can get on the Internet now has access to a collection of books the size of a large university library,” said Raj Reddy, professor of computer science and robotics at Carnegie Mellon. “This project brings us closer to the ideal of the Universal Library: making all published works available to anyone, anytime, in any language. The economic barriers to the distribution of knowledge are falling,” said Reddy, who has spearheaded the Million Book Project.

Though Google, Microsoft and the Internet Archive all have launched major book digitization projects, the Million Book Project represents the world’s largest, university-based digital library of freely accessible books. At least half of its books are out of copyright, or were digitized with the permission of the copyright holders, so the complete texts are or eventually will be available free.

The main problem with the site seems to be insufficient computing wellie: I keep on getting "connection timed out" when I try to use it. Promising, nonetheless. (Via Open Access News.)

Update: Here's a good post on some of the issues surrounding book projects.

The Google Highly Open Participation Contest?!?

Despite having the world's worst name, the Google Highly Open Participation Contest sounds a fine initiative:


Following on from the success of the Google Summer of Code program, Google is pleased to announce this new effort to get young people involved in open source development. We've teamed up with the open source projects listed here to give student contestants the opportunity to learn more about and contribute to all aspects of open source software development, from writing code and documentation to preparing training materials and conducting user experience research.

The Google Summer of Code programme seems to be flourishing, so extending it to younger hackers is a natural step. Moreover, the earlier people are exposed to the joys of free software, the more likely they are to be converted.
(Via Dries Buytaert.)

27 November 2007

All the World's a Stage...For Windows Users

You can now enter Arden, Ed Castronova's virtual Shakespearian world:

Arden has yet to become the world that we have envisioned. What is available here in the current version is only a small prototype of what we believe Arden could eventually become. Richard III and The War of the Roses have provided a small starting template for what we hope you will help us improve. The works of Shakespeare are vast and provide us as developers with rich settings into which we can continue to expand the content of Arden.

Well, you can if you run Windows....

Is Debatepedia a Good Idea? Well, Yes and No....

It's always interesting to see how the basic wiki/Wikipedia idea can be extended. Here's one I'd not come across before, the more or less self-explanatory Debatepedia:


Debatepedia is a wiki project of the International Debate Education Association (IDEA) with the mission to act as the "Wikipedia of debate and reasoning". On it, debaters, students, experts, and citizens can all openly edit and co-create an encyclopedia of debates, pro and con arguments, bodies of supporting evidence (quotes, articles, studies...), and the stances of relevant actors. We encourage you to create an account (above), become an editor, and join this important social movement and community. Imagine the difference that this resource could make in improving reasoning globally.

Well, I think that's debatable.... (Via Joho the Blog.)

On-Demand OK, But How Demanding?

The BBC, ITV and Channel 4 are to launch a joint on-demand service, which will bring together hundreds of hours of television programmes in one place.

The service is set to go live in 2008 and will offer viewers access to current shows and archive material.

Plans will have to be approved by the BBC Trust and the other broadcasters' boards, and a name for the service will be unveiled ahead of its launch.

The three broadcasters currently offer their own separate on-demand services.

The BBC's iPlayer, ITV's catch-up service and Channel 4oD will continue to exist alongside the new online "aggregator", which will provide a complement to the established providers.

But that still doesn't make clear what the platform requirements will be for viewing this new super-duper on-demand service: who thinks it might be Windows only....?

Update: Ashley has more details, but doesn't answer my question, although to my eyes there's an implication that the new aggregator will be even less platform agnostic than iPlayer - which Ashley says *will* support GNU/Linux....

Of Lost IDs, ID Cards and Biometric IDiocy

One of the many outrageous aspects of the recent loss by HMRC of crucial data about half the UK population is how the UK government immediately tried to spin this as a reason why we needed ID cards. This follows in a long and dishonourable tradition in this country whereby every failure by the police to catch terrorists/criminals using their extensive powers of surveillance is turned into a justification for giving them even more such powers, when it ought to mean the opposite.

Fortunately a crushing refutation of the faulty logic behind the ID card argument has now been provided by some top academic security experts, who write:

biometric checks at the time of usage do not of themselves make any difference whatsoever to the possibility of the type of disaster that has just occurred at HMRC. This type of data leakage, which occurs regularly across Government, will continue to occur until there is a radical change in the culture both of system designer and system users. The safety, security and privacy of personal data has to become the primary requirement in the design, implementation, operation and auditing of systems of this kind.

The inclusion of biometric data in one's NIR record would make such a record even more valuable to fraudsters and thieves as it would - if leaked or stolen - provide the 'key' to all uses of that individual's biometrics (e.g. accessing personal or business information on a laptop, biometric access to bank accounts, etc.) for the rest of his or her life. Once lost, it would be impossible to issue a person with new fingerprints. One cannot change one's fingers as one can a bank account.

(Via The Reg.)

Getting Organised for Grids

I've always liked the idea of grids - creating virtual supercomputers by hooking up often geographically distant systems. It goes almost without saying that free software rules the grid roost, notably in the form of the Globus Toolkit. Now there's a new meeting place for open source gridders, with the easy-to-remember URL grid.org:


Grid.org is intended to provide a single location where open-source cluster and grid information can be aggregated and where community members can exchange information, experiences, and ideas related to the complete open source cluster software stack. In particular, but not exclusively, Grid.org provides a community where users of Cluster Express, and the various open source components it comprises, can interact with each other and with the source code.

This has only just been (re-)launched, so content and community are still slightly thin on the ground, although it is already good on Cluster Express:

Cluster Express 3.0 is open source cluster management software that integrates best-of-breed open source technologies to provide everything one needs to run technical and advanced computing applications on a cluster.

Just What We Wanted to Hear...

...that releasing a film under a Creative Commons licence does not harm its prospects; on the contrary:


Lo que tú Quieras Oír, the phenomenal Spanish short film we talked about earlier here, has recently broken into the “All Time Most Viewed” list on YouTube with upwards of 38,000,000 views! Lo que tú Quieras Oír is licensed under a CC BY-NC-SA license.

Some major kudos are in order for everyone involved in creating Lo que tú Quieras Oír and making it the success it is today. We can only hope that part of the short’s online success has been enhanced by this decision to utilize CC licensing, which allows its viewers to not only freely distribute the film, but also remix it as long as they give credit, do so with non-commercial intent, and share their new works under the same license.

Firefox: The Toronto Connection

Interesting factoid:

Those familiar with Mozilla and Firefox project will probably recognize names such as lead developer Mike Connor, user experience leader Mike Beltzner, and chief technology evangelist and Mozilla founding member Mike Shaver. All three Mikes work out of Mozilla's Toronto office.

"Mozilla's an interesting community because it's pretty global with employees all over the place, but the Toronto office is actually pretty sizable," Johnathan Nightingale, Firefox's security design lead of Toronto's Mozilla office, said. "It's not unusual to have eight to 10 people in the office, and for a company of about 100 employees, that's pretty significant."

(Via FSDaily.)

Anti-Social Networks

Although I've joined a couple of social networks, it's purely for the sake of some digital anthropology: I've never actually *used* them. In part, this is because I've always found their dynamics slightly unhealthy - this binary business (yes/no) of accepting someone as a "friend" seemed pretty adolescent, frankly.

Now Cory Doctorow has skewered and dissected the key problems in one of his well-written analyses:

For every long-lost chum who reaches out to me on Facebook, there's a guy who beat me up on a weekly basis through the whole seventh grade but now wants to be my buddy; or the crazy person who was fun in college but is now kind of sad; or the creepy ex-co-worker who I'd cross the street to avoid but who now wants to know, "Am I your friend?" yes or no, this instant, please.

It's not just Facebook and it's not just me. Every "social networking service" has had this problem and every user I've spoken to has been frustrated by it. I think that's why these services are so volatile: why we're so willing to flee from Friendster and into MySpace's loving arms; from MySpace to Facebook. It's socially awkward to refuse to add someone to your friends list -- but removing someone from your friend-list is practically a declaration of war. The least-awkward way to get back to a friends list with nothing but friends on it is to reboot: create a new identity on a new system and send out some invites (of course, chances are at least one of those invites will go to someone who'll groan and wonder why we're dumb enough to think that we're pals).

That's why I don't worry about Facebook taking over the net. As more users flock to it, the chances that the person who precipitates your exodus will find you increases. Once that happens, poof, away you go -- and Facebook joins SixDegrees, Friendster and their pals on the scrapheap of net.history.

26 November 2007

Analogue Hackers

Hacking lies at the heart of free software, since hackers are people who love fiddling and fixing (not to be confused with crackers, who like to break and steal). How about this for a group of analogue hackers? The Parisian "Untergunther":


whose members include architects and historians, rebuilt an abandoned 100-year-old French government bunker and renovated a 12th-century crypt, he said. They claim to be motivated by a desire to preserve Paris’s heritage.

Last year the Untergunther spent months hidden in the Panthéon, the Parisian mausoleum that holds France’s greatest citizens, where they repaired a clock that had been left to rust. Slipping in at closing time every evening – French television said that they had their own set of keys – they set up a workshop hidden behind mock wooden crates at the top of the monument. The security guards never found it. The Untergunther used a professional clockmaker, Jean-Baptiste Viot, to mend the 150-year-old mechanism.

(Via Boing Boing.)

Zoho Gears Up for Offline Web Apps

Talking of Zoho, it seems it has beaten Google to the punch when it comes to using Google Gears for offline Web app word processing:

In August, we made Zoho Writer available offline. When we released it back then, the documents were available in read-only mode when offline. With today’s update, you’ll now be able to view and edit your documents offline. This functionality is based on Google’s open source project Google Gears. We thank them for the project and their support.

For Zoho Writer to work offline, you’ll need to have Google Gears plug-in installed on your browser (works on Firefox 1.5+, IE 6+). Click on ‘Go Offline’ to access your documents offline. By default, we download 15 documents each from ‘My Docs’ and ‘Shared Docs’ section. You can change the default setting by clicking on the down arrow beside ‘Go Offline’ link. The documents are downloaded based on the ‘Sort Order’ in your ‘My Docs’ section.

When you are not connected to the internet, you can visit http://writer.zoho.com/offline to access and edit your documents offline. When you are back online, clicking on ‘Go Online’ will let you synchronize the modified documents with the online versions.
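
To make the detection step concrete, here is a minimal sketch (my illustration, not Zoho's actual code) of how a browser app of the period would typically check for Gears before offering a 'Go Offline' option. The module name and store calls follow the Gears API as documented at the time, and the document URLs are placeholders:

```typescript
// Minimal sketch only; assumes the standard gears_init.js bootstrap has been
// loaded, which defines the global google.gears.factory object.
declare const google: any;

function gearsAvailable(): boolean {
  // The conventional feature test used by Gears-enabled web apps.
  return typeof google !== "undefined" && !!google.gears;
}

function goOffline(urls: string[]): void {
  if (!gearsAvailable()) {
    alert("Please install the Google Gears plug-in to work offline.");
    return;
  }
  // "beta.localserver" was the Gears module for caching resources offline;
  // createStore/capture follow the ResourceStore API of the time.
  const localServer = google.gears.factory.create("beta.localserver", "1.0");
  const store = localServer.createStore("offline-docs");
  store.capture(urls, (url: string, success: boolean) => {
    console.log(url + (success ? " cached for offline use" : " failed to cache"));
  });
}

// Hypothetical usage: the document URLs below are placeholders.
goOffline(["/docs/report.html", "/docs/notes.html"]);
```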

A taste of things to come, when this will be standard for all Web apps. (Via Blognation.)

Here We Go Round the (Open) Mulberry Bush

Mulberry started off life as a software project that was really meant to help the author learn more about the internet and internet protocols used for email. However, it became much more than that and garnered support from a small (in internet terms) group of users and institutions many of whom relied on the product as their primary email tool.

Whilst it started as only an IMAP client and only on Mac OS, it has grown to cover not only other email protocols, but also calendaring and scheduling and is available on Mac OS X, Windows and Linux systems.

Not something I use myself, but good to see it going open source, not least because calendaring and scheduling is an area where free software offerings are still rather thin on the ground:

The full code for Mulberry (Mac OS X, Windows and Linux) is now available as open source under an Apache 2 License. Full details available on the wiki.

(Via heise online.)

Why Javascript, not Flash? - Ask Zoho

I've only just come across this, perhaps the best summary of why using Flash is the wrong way to create Web apps:


1. Native to the Web

A real web application should natively support web standards - HTML & CSS are pretty much synonymous with “web standards”. The biggest reason we started out with Javascript is that it is native to the web - in the sense its core object model for Javascript is the HTML/CSS Document Object Model. The DOM is a gift to web applications. Even with the annoying browser differences in DOM (which sophisticated libraries increasingly hide), it is still far better to have the DOM than not have it. Flash, for all its advantages, sits in a separate space from the browser. In that sense, Flash is not that different from Java-on-the-client. In fact, Flash is Java-on-the-client-done-right.

I am sure Flash will eventually find a way to natively integrate with the browser but it is not there yet.

2. Open Source Library Support

This is a big one. The depth and variety of libraries available in Javascript just keep getting better. It is mind boggling just how much open source development is going on in Javascript. Developers keep pushing the envelope. For one example, look at the jQuery solar system demo. It shocked me the first time I saw it. Pretty impressive that Javascript could do that, right? The capabilities of Javascript exceed the client requirements of office productivity applications today, and there are tons more innovations coming.

3. Vector Graphics in Browsers

This is another big one. Vector graphics formats like SVG (Firefox, Opera), VML (IE), and HTML Canvas (Firefox, Safari, Opera), are becoming ubiquitous in browsers. Yeah, it sucks that IE doesn’t support SVG, but that can be worked around. Even cooler is the fact that SVG & VML are XML and very Javascript friendly. You can do real magic.

Obviously, number 2 is the heart of the matter: Javascript is just going to keep getting better, faster, thanks to the open development process. With Flash, you're dependent on the skills of one company (now, where have I heard that before?)
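
To make point 3 a little more concrete, here is a small, generic sketch (not Zoho code) of driving vector graphics from script via the SVG DOM that Firefox, Opera and Safari expose; the element id in the usage lines is a placeholder:

```typescript
// Minimal sketch: build a chart bar entirely from script, using only the
// standard createElementNS/setAttribute DOM calls of SVG-capable browsers.
const SVG_NS = "http://www.w3.org/2000/svg";

function drawBar(container: HTMLElement, value: number): void {
  const svg = document.createElementNS(SVG_NS, "svg");
  svg.setAttribute("width", "120");
  svg.setAttribute("height", "220");

  // One bar of a hypothetical chart; the geometry is arbitrary.
  const bar = document.createElementNS(SVG_NS, "rect");
  bar.setAttribute("x", "10");
  bar.setAttribute("y", String(200 - value));
  bar.setAttribute("width", "40");
  bar.setAttribute("height", String(value));
  bar.setAttribute("fill", "steelblue");

  svg.appendChild(bar);
  container.appendChild(svg);
}

// Hypothetical usage: "chart" is a placeholder element id.
const chart = document.getElementById("chart");
if (chart) {
  drawBar(chart, 150);
}
```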

Open Bookshelf: Real-Time and Embedded Linux

Real-time and embedded Linux is an iceberg: for all its low visibility, it's pretty big below the surface, and getting bigger. If you want to get to know this world better - and you know you do - here's a bumper crop of light reading for you:

LinuxDevices.com is pleased to publish an overview and papers from the Ninth Real-Time Linux Workshop held in Linz, Austria, Nov. 2-3, 2007. The papers, available for free download without registration, span a broad range of topics, ranging from fundamental real-time technologies to applications, hardware, and tools.

(Via Linux Today.)

Soaraway Open Source

Rupert Murdoch's tabloid Sun newspaper, better known for its fascination with chest-tops rather than laptops, is nonetheless starting to grok the Joy of Linux, thanks to the Asus EEE PC:

The crucial thing about the Eee is rather than running on Windows, it uses a Linux operating system. Now I'm a Microsoft man through and through, I've never been able to face switching from XP or Vista to the likes of OS X on an Apple. There's safety in what you know.

I'd certainly never consider running Linux on my home PC but by slimming down the software on this gadget, it allows it to have a much longer battery life - crucial for a product designed to be used on the move. It will also run faster and has instant on and off.

As I've said elsewhere, the Asus could really prove to be a breakthrough machine for GNU/Linux among general users. (Via FSDaily.)

Andy Updegrove on the War of the Words

The ODF/OOXML struggle has been one of the pivotal stories for the world of open source, open data and open standards. I've written about it here and elsewhere many times. But the person best placed to analyse it fully from a standards viewpoint - which is what it is all about, at heart - is undoubtedly Andy Updegrove, who is one of those fine individuals obsessed with an area most people find slightly, er, soporific, and capable of making it thrilling stuff.

News that he's embarked on an e-book about this continuing saga is therefore extremely welcome: I can't imagine anyone doing a finer job. You can read the first instalment now, with the rest to follow in tantalising dribs and drabs, after the highly successful precedent set by Dickens and others. With the difference, of course, that this book - entitled ODF vs. OOXML: War of the Words - is about fact, not fiction, and that the events it describes have not even finished yet.

25 November 2007

Feel Free to Squeak

I don't know much about the open source programming language Squeak, but it does sound rather cool:

Squeak is a highly portable, open-source Smalltalk with powerful multimedia facilities. Squeak is the vehicle for a wide range of projects from educational platforms to commercial web application development.

...

Squeak stands alone as a practical environment in which a developer, researcher, professor, or motivated student can examine source code for every part of the system, including graphics primitives and the virtual machine itself. One can make changes immediately and without needing to see or deal with any language other than Smalltalk.

Our diverse and very active community includes teachers, students, business application developers, researchers, music performers, interactive media artists, web developers and many others. Those individuals use Squeak for a wide variety of computing tasks, ranging from child education to innovative research in computer science, or the creation of advanced dynamic web sites using the highly acclaimed continuation based Seaside framework.

Squeak runs bit-identical images across its entire portability base, greatly facilitating collaboration in diverse environments. Any image file will run on any interpreter even if it was saved on completely different hardware, with a completely different OS (or no OS at all!).

Now, though, it seems there is no excuse not to find out more:

To help more people get familiar with Squeak's very powerful programming environment, the new book Squeak by Example is now being made available under the Creative Commons Attribution-ShareAlike 3.0 license. It's intended for both students and developers and guides readers through the Squeak language and development environment by means of a series of examples and exercises. This is very useful to those who wish to become more familiar with the Croquet programming environment. You can either download the PDF for free, or you can buy a softcover copy from lulu.com.

What a classic combination: CC digital download, or an analogue version from Lulu.com.

Update 1: Alas, it seems you can't squeak freely - see comment below.

Update 2: Or maybe you can - see other comments below.

24 November 2007

(Copyright) Darkness Visible

The benighted policy of extending copyright terms again and again is made visible in a nice graphic accompanying this post:

The term of copyright has steadily expanded under U.S. law. The first federal copyright legislation, the 1790 Copyright Act, set the maximum term at fourteen years plus a renewal term (subject to certain conditions) of fourteen years. The 1831 Copyright Act doubled the initial term and retained the conditional renewal term, allowing a total of up to forty-two years of protection. Lawmakers doubled the renewal term in 1909, letting copyrights run for up to fifty-six years. The 1976 Copyright Act changed the measure of the default copyright term to life of the author plus fifty years. Recent amendments to the Copyright Act expanded the term yet again, letting it run for the life of the author plus seventy years.

What's wrong with this picture?

The Supreme Court has held that legislative trick constitutional, notwithstanding copyright policy’s implied aim of stimulating new authorship—not simply rewarding extant authors.

Open EMR

He beat me to it.

23 November 2007

Another Reason Why China is the Future...

...or rather *a* future:

Chinese trust the Internet over mainstream media and all sources of information according to a study done by Harris Interactive for Edelman.

(Via RConversation.)

Live Documents and Let Live Documents

It's not really clear whether we need yet another online office suite, but at least Live Documents seems to have understood the importance of freeing users from dependence on a certain offline one:


"From a technology and utility perspective, Live Documents offers two valuable improvements - firstly, it break's Microsoft's proprietary format lock-in and builds a bridge with other document standards such as Open Office and secondly, our solution matches features found only in the latest version of Office (Office 2007) such as macros, table styles and databar conditional formatting in Excel 2007 and live preview of changes in PowerPoint 2007. Thus, Live Documents lets consumers and businesses to derive the benefits of Office 2007 without having to upgrade," said Adarsh Kini, Chief Technology Officer, InstaColl.

KOffice Made Simpler

The high-profile nature of OpenOffice.org means that KOffice tends not to get the respect it deserves. Maybe the latest iteration will change that, because it offers an interesting addition:

Over two years ago, Inge Wallin proposed a simplified word processor to be used in school for kids. Thomas Zander, the KWord lead developer, made a proof of concept of this using the infrastructure of KOffice 2. This proved simpler than even Thomas would have believed, and KOffice 2.0 Alpha 5 now contains a first version of KOffice for kids. Note that only the GUI is simplified, and that it still contains the full power of KOffice. This means that it can save and load the OpenDocument Format, which will make it easy to interact with other users of OpenOffice.org or the full KOffice suite.

This is precisely the kind of innovation that free software makes so easy: hacking together a quick prototype and then polishing it. Let's hope that other simplified versions follow, since an "Easy" Office would be useful far beyond its original target market, education.

It would also be a nice riposte to the never-ending complexification of Microsoft's own products, which are forced to add more and more obscure features - whether or not users want them - in a desperate attempt to justify yet another paid-for upgrade. Free software is under no such pressure, and can therefore downgrade applications when that might be appropriate, as here. Microsoft, by contrast, is trapped by its ratchet-based business model.

MS Explorer Is Sinking...

...no, really. Talk about symbolism.

We Demand Books on Demand

One of the interesting results of the move to digital texts is a growing realisation that analogue books still have a role to play. Similarly, it's clear that analogue books serve different functions, and that feeds into their particular physical form. So some books may be created as works of art, produced to the very highest physical standards, while others may simply be convenient analogue instantiations of digital text.

Public domain books are likely to fall into the latter class, which means that ideally there should be an easy way to turn such e-texts into physical copies. Here's one:

This is an experiment to see what the demand for reprints of public domain books would be. This free service can take any book from the Internet Archive (that is in public domain) and reprint it using Lulu.com. Prices of the books are rounded up from Lulu.com cost prices to the nearest $0.99 to cover the bandwidth and processing power that we rent from Amazon using their EC2 service. There is also a short post on my blog about it.

How Does It Work

Anyone with an email address can place a request on this page using an Internet Archive link or ID. Your request will be forwarded to our conversion server, which will convert the appropriate book to printable form and send it off to Lulu.com. When the book has been uploaded, it will be made available for immediate ordering and shipping, and you will receive a link to it via email. Currently, only soft cover books are supported in 6"x9", 6.625"x10.25" or 8"x11" trim sizes.

Interesting to see Lulu.com here, confirming its important place as a mediator between the digital and analogue worlds. (Via Open Access News.)

Openness: Purity of Essence

I wrote a piece for Linux Journal recently warning that Microsoft was beginning to hijack the meaning of the phrase "open source". But the problem is much bigger than this: the other opens face similar pressures, as Peter Murray-Rust notes.

In some ways it's even more serious for fledgling movements like open access and open data: there, the real meaning has barely been established, and so defending it is harder than for open source, which has had a settled definition for some time. Given the importance of labels, this is a matter that needs to be addressed with some urgency before "open access" and "open data" become little more than bland marketing terms.

Thank You, FOSS

Via GigaOM, I came across a link to this love-letter to Facebook:

Thinking about it, I've rarely used a service that has brought me so much emotional satisfaction...connecting with good friends is a feel-good thing and it is this emotional value that makes Facebook hard to beat in terms of the gratification other services can provide. So much so, here I am even writing a thank you note to the service (I can't remember doing that for any service...I've written about how "cool" stuff is, or how useful some service might be...but "thank you"? Never).

Although I think that Facebook is interesting - but not unproblematic, especially its recent moves - I'd never see it in this light. But it set me wondering whether there was anything comparable for me - a place of digital belonging of the kind offered by Facebook. And I realised there was, but not one that was crystallised in a single service. Rather, I feel this same sense of "connecting with good friends" with respect to the much larger, and more diffuse free software community.

This isn't a new thing. Back in the early years of this century, when I was writing Rebel Code, I was astonished at how helpful everyone was that I spoke to in that world. That stood in stark contrast to the traditional computing milieu, where many were full of their own (false) self-importance, and rather too fixated on making lots of money.

It seems I'm not alone in this sense of hacker camaraderie:

The key thing here is that in all the details, spats, debates, differences in direction and nitty-gritty, it is easy to forget that the core ingredients in this community are enthusiastic, smart, decent people who volunteer their time and energy to make Open Source happen. As Open Source continues to explode, and as we continue to see such huge growth and success as it spreads across the world and into different industries, we all need to remember that the raw ingredients that make this happen are enthusiastic, smart, decent people, and I for one feel privileged to spend every day with these people.

To paraphrase W. H. Auden:

Thank You, Thank You, Thank You, FOSS.

Public Domain Search

One of the big advantages of open content is that there are no problems with indexing it - unlike proprietary stuff, where owners can get unreasonably jumpy at the idea. Public domain materials are the ultimate in openness, and here's a basic search engine for some of them:

Major public domain sites were chosen, the most important being the US federal government sites:

* .gutenberg.org
* .fed.us
* .gov
* .mil

But there are plenty of exclusions. Also, it's a pity this is only for the US: the public domain is somewhat bigger. (Via Open Access News.)

22 November 2007

Happy Birthday Internet

Watch out, there's a meme about:


The Internet is 30 today. Exactly 30 years ago today on November 22, 1977 the first three networks were connected to become the Internet.

(Via Simon Willison's Weblog.)

Realising Virtual Worlds Through Openness

I mentioned Tim Berners-Lee below as an iconic figure. Philip Rosedale may not quite be there yet, but he stands a good chance of attaining that status if his vision works out. He's put together a useful summary of how that vision grew, and, more importantly, what Linden Lab is going to do to realise it more fully. Nice to see that at the heart of the strategy lies openness:

we need to keep opening SL up, as we’ve started talking about lately. This means formats, source code, partners, and more. We are working on turning our clear vision on this into more detailed plans. Virtual worlds, in their broadest form, will be more pervasive than the web, and that means that their systems will need to be open: extended and operated by many people and companies, not just us.

That Umair Bloke on Blogonomics 2007

Glad it's not just me that feels this way.

Tim B-L: On Moving from the WWW to the GGG

Tim Berners-Lee is an iconic figure for a reason: he's actually rather sharp. This makes his rare blog posts important and interesting - none more so than his most recent one about the Giant Global Graph (GGG):

In the long term vision, thinking in terms of the graph rather than the web is critical to us making best use of the mobile web, the zoo of wildly differing devices which will give us access to the system. Then, when I book a flight it is the flight that interests me. Not the flight page on the travel site, or the flight page on the airline site, but the URI (issued by the airlines) of the flight itself. That's what I will bookmark. And whichever device I use to look up the bookmark, phone or office wall, it will access a situation-appropriate view of an integration of everything I know about that flight from different sources. The task of booking and taking the flight will involve many interactions. And all throughout them, that task and the flight will be primary things in my awareness, the websites involved will be secondary things, and the network and the devices tertiary.

This is probably the best thing I've read about social graphs, not least because it anchors a trendy idea in several pre-existing areas of serious Webby development. (Via Simon Willison's Weblog.)
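
As a rough way of picturing the shift from pages to a graph (my own toy illustration, not Berners-Lee's), think of each thing - a flight, a person, a document - as a node identified by its URI, with statements about it merged from whichever sites happen to describe it; the flight URI and sites below are hypothetical:

```typescript
// Toy model of the "graph, not pages" idea: things are identified by URIs,
// and statements about a thing can come from any number of sources.
type URI = string;

interface Statement {
  property: string;
  value: string;
  source: URI; // the site that supplied the statement (secondary, per the post)
}

class Graph {
  private nodes = new Map<URI, Statement[]>();

  // Merge a statement about a thing, whichever site supplied it.
  add(subject: URI, stmt: Statement): void {
    const stmts = this.nodes.get(subject) || [];
    stmts.push(stmt);
    this.nodes.set(subject, stmts);
  }

  // Everything known about one thing, aggregated across sources.
  about(subject: URI): Statement[] {
    return this.nodes.get(subject) || [];
  }
}

// Hypothetical example: one flight URI, described by two different sites.
const g = new Graph();
const flight: URI = "http://airline.example/flights/BA123";
g.add(flight, { property: "departs", value: "2007-12-01T09:30", source: "http://airline.example" });
g.add(flight, { property: "seat", value: "14C", source: "http://travel.example" });
console.log(g.about(flight));
```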

21 November 2007

Interoperability: The New Battlefield

One word is starting to crop up again and again when it comes to Microsoft: interoperability - or rather the lack of it. It was all over the recent agreement with the EU, and it also lies at the heart of the OpenDocument Foundation's moves discussed below.

And now here we have some details of the next interoperability battles:

the EU Competition Commissioner’s office, with the first case decided by the EU Court of First Instance, now has started working intensively on the second case.

The new case involves three main aspects. First, Microsoft allegedly barred providers of other text document formats access to information that would allow them to make their products fully compatible with computers running on Microsoft’s operating systems. “You may have experienced that sometimes open office documents can be received by Microsoft users, sometimes not.”

Second, for email and collaboration software Microsoft also may have privileged their own products like Outlook with regard to interfacing with Microsoft’s Exchange servers. The third, and according to Vinje, most relevant to the Internet and work done at the IGF, was the problem of growing .NET-dependency for web applications. .NET is Microsoft’s platform for web applications software development. “It is a sort of an effort to ‘proprietise’ the Internet,” said Vinje.

That's a good summary of the problems, and suggests that the Commission is learning fast; let's hope that it doesn't get duped when it comes to remedies as it did the last time, apparently fooled by Microsoft's sleights of hand over patents and licences.

Decentralise Your Data - Or Lose It

Aside from the obvious one of not trusting the UK government with personal data, the other lesson to be learned from HMG's catastrophic failure of "security" is the obverse of one of free software's key strengths: decentralisation. When you do centralise, you make it easy for some twerp - or criminal - to download all your information onto a couple of discs and then lose them. A decentralised approach is not without its problems, but at least it puts a few barriers in the way of fools and knaves.

Hardware is Like Software? - Ban Hardware Patents

I won't bother demolishing this sad little piece on why software patents are so delicious and yummy, because Mike Masnick has already done that with his customary flair.

But I would like to pick on something that purports to be an argument in it:


One needs to understand that there is fundamentally no difference between software and hardware; each is frequently expressed in terms of the other, interchangeably describing the same thing. For example, many microprocessors are conceptualized as software through the use of hardware description languages (HDL) such as Bluespec System Verilog and VHDL. The resulting HDL software code is downloaded to special microprocessors known as FPGAs (field programmable gate arrays), which can mimic a prospective chip's design and functions for testing. Eventually, the HDL code may be physically etched into silicon. Voilà! The software becomes hardware.

Well, that's jolly interesting, isn't it? Because it means that such hardware is in fact simply an instantiation of algorithms - hard-wired, to be sure, but no different from chiselling those algorithms in granite, say. And as even the most hardened patent fan concedes, pure knowledge such as mathematics is not patentable.

So the logical conclusion of this is not that software is patentable, but that such hardware *shouldn't* be. I'd go further: I suspect that anything formed by instantiating digital information in an analogue form - but which is not essentially analogue - should not be patentable. The only things that might be patentable are purely analogue objects - what most people would recognise as patentable things.

There is an added benefit to taking this approach, since it also solves all those conundrums about whether virtual objects - in Second Life, for example - should be patentable. Clearly, they should not, because they are simply representations of digital entities. But if you wanted to make an analogue version - and not just a hard-wiring - you could reasonably seek a patent if it fulfilled the usual conditions.

Oh, Tell Me the Truth About...the ODF Bust-Up

The recent decision by the OpenDocument Foundation to shift its energies away from ODF to CDF has naturally provoked a lot of rather exaggerated comment. I wrote a piece for LWN.net (now out from behind the paywall) exploring what exactly was going on, and found out that there are bigger issues than simply document interoperability at play.

It turns out to be all about Microsoft's Sharepoint - software that I am beginning to see as one of the most serious threats to open source today. Read it and be very afraid.

GNU PDF Project

Around ten years ago I fought a fierce battle to get people to use HTML instead of PDF files, which I saw as part of a move to close the Web by making it less transparent.

You may have noticed that I lost.

Now, even the GNU project is joining in:

The goal of the GNU PDF project is to develop and provide a free, high-quality and fully functional set of libraries and programs that implement the PDF file format, and associated technologies.

...

PDF has become the de-facto standard for documentation sharing in the industry.

Almost all enterprises use PDF documents to communicate all kinds of information: manuals, design documents, presentations, etc, even if it is originally composed with OpenOffice, LaTeX or some other word processor.

Almost all enterprises use proprietary tools to compose, read and manipulate PDF files. Thus, the workers of these enterprises are forced to use proprietary programs.


I still think HTML, suitably developed, would be a better solution. (Via LXer.)

20 November 2007

Actuate's Actual Open Source Snapshot

One of the sure signs that open source is moving into the mainstream is the number of surveys about it that are being conducted. The great thing about these is that while individually they bolster the case for open source in different areas, collectively they are almost overwhelmingly compelling.

The latest such survey comes from Actuate. It's actually an update of an earlier, more circumscribed one, and it ranges far more widely:


Following research first conducted in November 2005, exclusively targeted at financial services companies in the UK and Europe, the 2007 Actuate Open Source Software Survey broadened its scope to include research attitudes to open source systems in both North America and Germany. The 2007 survey also extended beyond financial services to include public services, manufacturing and telecommunications (telco) in the new regions and now uniquely provides a detailed local insight as well as interesting regional comparisons across the geographies and the vertical sectors within them.

The top-line result?
Half the organizations surveyed stated that open source is either the preferred option or is explicitly considered in the software procurement process. One surprising note is that one-third of the organizations surveyed are now likely to consider open source business intelligence in their evaluations. This is a huge shift from just a few years ago.

The survey is available free of charge, but registration is required.

UK Government Loses 15 Million Bank Details

This has to be about the most stupid security lapse in the history of computing:

Confidential details of 15 million child benefit recipients are on a computer disc lost by HM Revenue and Customs, the BBC understands.

Insult is added to injury:

Revenue and Customs says it does not believe the records - names, addresses and bank accounts - have fallen into the wrong hands.

Yeah? And they know that precisely how - because they're psychic, perhaps?

And then the UK government wants us to trust them with our IDs, too? If we did, how long before the odd 60 million IDs get "lost"? At least you can change your bank details - you don't have that option with your identity.

Update 1: What's really heartening is that a surprisingly large proportion of those commenting here on the BBC story spot the ID card connection....

Update 2: Better make that 25 million bank details, plus key data on all children in the UK.

Free Software and the Categorical Imperative

The Web could have been invented for butterfly minds like mine. For example, in one of Stephen O'Grady's hallmark Q&As (this one on Red Hat's cloud computing announcement) I came across a link that took me to the Wikipedia page about Immanuel Kant's categorical imperative.

I first encountered Kant when I was in my late teens - the perfect age for grappling with those big questions that look too big and daunting when we are older and more sensible. I thought then, and still think now, that his Critique of Pure Reason represents the acme of modern philosophical thought - the Choral Symphony of metaphysics.

I was therefore already familiar with the categorical imperative, not least in Auden's rather fine clerihew:


When the young Kant
Was told to kiss his aunt,
He obeyed the Categorical Must,
But only just.

But reading the excellent Wikipedia entry, with its formulation:

"Act only according to that maxim whereby you can at the same time will that it should become a universal law."

brought home to me something that - stupidly - I hadn't really grasped before about Kant's idea: its essential symmetry. Of course, it's there implicitly in the other version, which I knew:

"Act in such a way that you treat humanity, whether in your own person or in the person of any other, always at the same time as an end and never merely as a means"

but the second form lacks the extra precision of the first.

What struck me is that this is the driving force behind free software - Stallman's belief that we must share software that we find interesting or useful. And more generally, it lies at the heart of all the kinds of openness that are starting to blossom: they are all predicated on this symmetry, on the giving back as well as the taking.

So there we have it: Immanuel Kant - philosopher and proto-hacker.

Larry Sanger's Question

Larry Sanger has a question about Citizendium:

Suppose we grow to Wikipedian size. This is possible, however probable you think it might be.

Suppose, also, that, because we are of that size, we have the participation of a sizable portion of all the leading intellectuals of the world, in every field–and so, there are hundreds of thousands, if not millions, of approved articles. These are all long, complete with many links, bibliography, etc., etc.–all the subpage stuff. It’s reference utopia. Far better than Wikipedia has any hope of becoming.

Here’s the question, then. If we use a license that permits commercial reuse–CC-by-sa or GFDL–then every major media company in the world could, and probably would, use CZ content. Do you favor a license that allows CBS, Fox, the New York Times, English tabloids, Chinese propaganda sheets, Yahoo!, Google, and all sorts of giant new media companies to come, to use our content? Without compensation?

That's the question that Linus faced over a decade ago when he decided to adopt the GNU GPL instead of his earlier licence, which forbade any kind of money changing hands. And as Linus has said many times, choosing the GNU GPL was one of the best decisions he ever made, because it has widened support for Linux enormously, and as a result has driven its development even faster.

There's your answer, Larry....

What Can You Protect in Open Source?

Marc Fleury is a Frenchman who famously made lots of dosh when he sold his open source company JBoss to Red Hat. That puts him in a strong position to pontificate about what does and what doesn't work in the world of businesses based around free software. Try this wit and wisdom, for example:

B.D asks: "marcf, my open source project is starting to enjoy a measure of success, I am thinking of going professional with it, I am thinking about business models. How much thought should I put in protecting my Intellectual Property?"

Answer: B.D. protecting IP in OSS is extremely important. The only "private" property that exists in OSS are 1- brand 2- URL. Both are obviously related but really you need to protect your brand name, in other words REGISTER your trademarks, use them, declare they are yours and enforce the trademark, meaning protect against infringement. Other products, specifically based on your product should not include your name. Consultancies will be able to say they know and work with your "product name" but they cannot ship products using your trademark. Educate yourselves on brand IP, that is a big asset in OSS.

The URL deserves the same treatment. A successful website with traffic is a source of revenue in this day and age, either directly through ad placement or indirectly by lead generation.

It's interesting that Fleury concentrates on trademarks, rather than copyright or patents (of the latter he says: "you will have little protection against thieves that want to copy what you have done without letting you know and put it under different licenses, I have seen it done, such is the nature of the beast.") I think this indicates that trademarks can be useful, even with open source, just as copyright is necessary for licences to work. It's patents that remain the problem.

Of "IP", "Piracy" and China

As readers of this blog will know, I don't use the terms "intellectual property" or "piracy", since both are profoundly misleading and hopelessly skew the discussion. Nonetheless I can recommend a paper entitled "Intellectual Property Piracy: Perception and Reality in China, the United States, and Elsewhere", since it presents a cool analysis of the reality behind the terms, as well as some surprises.

Here's a sample of the former:

Free-rider downloading also serves an advertising function that may actually benefit music-copyright owners: Some free-rider downloaders may like “Sci-Fi Wasabi” enough to go out and spend 99¢ per song for other Cibo Matto tunes from iTunes, or even $11 for the album Stereo Type A or $19 for Pom Pom: The Essential Cibo Matto. If the downloader (or another who hears the downloaded copy) becomes a fan, hundreds of dollars in sales may result; if no download takes place, all of these potential future sales would be lost. Even if the total number of such sales represents only a tiny portion of downloads, it still exceeds the number of sales in the absence of downloading, which would be zero.


And one of the surprises is as follows:

Of the supposed $6.1 billion in losses to U.S. studios, 2.3 billion, or 38%, were lost to Internet piracy, while 3.8 billion, or 62%, were lost to hard-goods piracy. The three countries in which the losses to U.S. studios were highest were not East Asian countries, and two of them were not developing countries: Mexico, the United Kingdom, and France accounted for over $1.2 billion in lost revenues, or 25% of the non-U.S. total – and slightly less than the U.S. total of $1.3 billion. The three countries have a combined population of about 225 million, somewhat less than the United States’ 293 million, giving them a slightly higher per capita piracy rate.
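The per-capita comparison at the end does check out, by the way; here's a quick back-of-the-envelope calculation using only the figures quoted above (Python, just to show the working):

    # Back-of-the-envelope check of the per-capita comparison quoted above.
    non_us_losses = 1.2e9       # Mexico + UK + France losses, USD (quoted figure)
    non_us_population = 225e6   # combined population (quoted figure)
    us_losses = 1.3e9           # US losses, USD (quoted figure)
    us_population = 293e6       # US population (quoted figure)

    print(round(non_us_losses / non_us_population, 2))  # ~5.33 USD per head
    print(round(us_losses / us_population, 2))          # ~4.44 USD per head

So the three countries do indeed come out slightly ahead of the United States per head.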

(Via Salon.)

Will WIPO Wipe the Slate Clean?

So the sorry saga at WIPO is coming to an end, with the controversial Director leaving early (although I was disappointed that this was not "to spend more time with his family"). The question now is: who will take over, and what new direction will WIPO take?

This handover comes at a time when many (including me) are questioning what role an organisation nominally devoted to so-called "intellectual property" should play in a world increasingly looking to move to a less proprietorial approach to knowledge. The appointment of a new head would be a good time to re-evaluate WIPO's role - and maybe even to change its name.

Dealing with Disabilities

One of the problems raised with the use of ODF in Massachusetts was its lack of support for people with disabilities. That has now been sorted out, but it's probably true that open source in general has not addressed this issue as well as it could, not least because hackers tend to be young and hale, and are therefore less aware of the problems faced by those who are not.

So it's good to hear that some work is being done on precisely this area:

IBM and the researchers at the University of Dundee School of Computing (UK) and the University of Miami's Miller School of Medicine are collaborating to develop open source software technology tools to accommodate the needs of older workers to help them adapt to and remain productive in the changing workplace of the 21st century.

...

One way to support maturing workers who have age-related disabilities is to find new ways to increase their comfort level and ability to use technology.

(Via Daniweb.)

I've Got a Little List

On the basis that you just can't have enough lists of open source software, here's another one.

19 November 2007

OpenSolaris CIFS Server: Colour Me Confused

The goal of this project is to provide a native, integrated CIFS implementation to support OpenSolaris as a storage operating system. The OpenSolaris CIFS Server provides support for the CIFS/SMB LM 0.12 protocol and MSRPC services in workgroup and domain mode. Substantial work has already gone into modifying and adapting the existing OpenSolaris file system interfaces, services and commands to accommodate Windows attributes and file sharing semantics. The intent is to provide ubiquitous, cross-protocol file sharing in Windows and/or Solaris environments.

Now, I may be wrong, but this all sounds very similar to Samba. So the question is, how did Sun manage to emulate the protocols? And does the agreement between Microsoft and the EU over interoperability have any bearing on this? Yours, confused of London.

Google Desperately Seeking Picasa

What on earth took them so long?

Finally, Google has integrated Picasa Web Albums into Google Image Search. Public albums can be enabled for a public search option, meaning your images will be more likely to come up in Google image results. And that’s a huge improvement, because previously images on Picasa (and Blogger, and Google Docs) were not searchable at all. The other Google applications are still missing out on all the fun, but Picasa images are now searchable. This is limited, however, to a Google image search.

What's the point of having masses of open content if you can't find it? (Via Searchblog.)

Die, TinyURL, Die!

A couple of years ago, I wrote about TinyURLs, noting:

they are a great idea: too many Internet addresses have become long snaking strings of apparently random text. But the solution - to replace this with a unique but shorter URL beginning http://tinyurl.com - commits the sin of obscuring the address, an essential component of the open Web.

Well, I don't want to say "I told you so", but "I told you so":

The link shortening and redirection service TinyURL went down apparently for hours last night, rendering countless links broken across the web. Complaints have been particularly loud on Twitter, where long links are automatically turned to TinyURLs and complaining is easy to do, but the service is widely used in emails and web pages as well. The site claims to service 1.6 billion hits each month.

That post worries about having a single point of failure for the Web; that's certainly valid, but for me the malaise is deeper. Even if there were hundreds of TinyURL-like services, it wouldn't solve the problem that they subvert the open nature of the Web.

Far better for the Web to wean itself off TinyURL now and get back to proper addressing. Interestingly, blogs often manage this well, with nicely descriptive URLs that let you form a rough idea of what you're going to view before you get there.
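To make the obscuring point concrete, here is a minimal sketch (standard-library Python, with a hypothetical example link) of what you have to do just to discover where a shortened link actually points - the address itself tells you nothing:

    # Minimal sketch: recover the destination that a shortened URL obscures.
    # The example address below is hypothetical; any shortened link would do.
    import urllib.request

    def expand(short_url: str) -> str:
        """Follow the service's redirect and return the final destination URL."""
        # urlopen follows HTTP redirects automatically; geturl() reports
        # the URL that was ultimately retrieved.
        with urllib.request.urlopen(short_url) as response:
            return response.geturl()

    if __name__ == "__main__":
        print(expand("http://tinyurl.com/example"))

And note that this only works while the shortening service is up - which is precisely the single-point-of-failure problem described above.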

When the Microsoft Train hits the Brooksian Wall

For a long time, enthusiasts of the open source development methodology have predicted that the traditional approach will run into the sand sooner or later. And since, as far as we can tell, Microsoft still employs such methods, the expectation is that one day one of its operating system upgrades will in effect be a downgrade.

It's hard to tell from all the noise in the comments, but preliminary results seem to suggest Vista is that downgrade:

Extensive testing by the exo.performance.network (www.xpnet.com) research staff shows that SP1 provides no measurable relief to users saddled with sub-par performance under Vista.

And here's some corroboration that people are beginning to realise that the Microsoft train has hit the Brooksian wall:

Ninety percent of 961 IT professionals surveyed said they have concerns about migrating to Vista and more than half said they have no plans to deploy Vista.

What's a Paglo?

That was my first question to Brian de Haaff, CEO of the eponymous company. This is what he said (more or less):

Francisco Paglo was a virtually unknown Italian explorer who first set sail as a lookout on Cadamosto's expedition to the Gambia River in 1455. Upon completion of a distance learning course in creative writing, he published a stirring account of the exploration from his viewpoint in the crow's nest, which was widely published throughout Europe. It ultimately caught the eye of Prince Henry the Navigator who was a Portuguese royal prince, soldier, and patron of explorers. Prince Henry summoned Paglo, and thanks to his generous funding, sent him on an expedition around Africa's Cape of Good Hope in 1460 to trade for spices in India. A storm pushed him off his target, and he finally dropped anchor in what is now known as New Zealand.

He never did set foot in India, but in New Zealand he remains a hero for bringing the country its first sheep, and his birthday (April 1) is celebrated every year with giant mutton pies. A growing movement has petitioned the government to officially establish the day as a national holiday — Dandy Mutton Day, in reverent appreciation for Paglo. On the eve of March 31 each year, children leave tiny bales of hay in their family rooms, hoping for the safe return of his ghost to their home and a flock of sheep for their family. Those who have been good the preceding year and have prepared fresh bales receive a bowl of lamb stew and freshly-knit wool socks and sweaters from their parents. But poor behavior and unkempt bales is frowned upon as a sign of disrespect, and these unfortunate kids receive a clump of manure.

And this is what the company does:

Paglo is a search engine for IT that specializes in searching the complex and varied data of IT networks, and in returning rich data reports in table and chart formats, as well as simple text hit lists.

As someone who has been smitten with search engines ever since the early days of Lycos, WWWW and Inktomi, I was naturally highly receptive to this approach. Search has become the optic through which we see the digital world; applying it not just to traditional information but also to corporate IT data is eminently sensible.

Things only got better when I found out that the search engine crawler was open source (GNU GPL to be precise). This makes a lot of sense. It means that people can add extra features to it to allow discovery of all kinds of new and whacky hardware and software through the use of plugins; it also means that people are more likely to trust it to wander around their intranets, gathering a lot of extremely sensitive information.

That information is sent back to Paglo, encrypted, and stored on the company's servers as a searchable index of your IT assets that can then be interrogated. Obviously, security is paramount here. I also worry about people turning up with a subpoena: after all, those search indexes will provide extremely useful information about unlicensed copies of software and the like; Paglo, not surprisingly, doesn't think this will be a problem.

There are other interesting aspects of Paglo, including its use of what it calls "social solving":

We do this by allowing all users to save their search queries and publish them for anyone’s use. The elegance here is that you can immediately access any query that’s been saved and made public, and run it against your own data. (Only the query syntax is published. The data itself, of course, is private to each user.) This is especially helpful when you need a query that searches out a complex relationship – such as between users and the applications they have installed on their desktops – and you do not know where to start. The permutations are endless, but since the core concept is the same, any saved query can be used against any set of network data.
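To illustrate the idea - and this is just a sketch with made-up names and a made-up query format, not Paglo's actual query language or API - the crucial design point is that only the query syntax is published, while each user's inventory data stays private and the query runs against it separately:

    # Hypothetical sketch of "social solving": shared query syntax, private data.
    # The query format and all names here are illustrative, not Paglo's own.

    # A public library of saved queries that anyone can reuse.
    PUBLIC_QUERIES = {
        "desktops-with-firefox": "type:desktop application:firefox",
    }

    def run(query_syntax: str, private_index: list) -> list:
        """Run a shared query against your own (private) inventory records."""
        criteria = dict(term.split(":", 1) for term in query_syntax.split())
        return [
            record for record in private_index
            if all(wanted in record.get(field, "") for field, wanted in criteria.items())
        ]

    # Each user's data never leaves their account; only the query is shared.
    my_index = [
        {"type": "desktop", "host": "ws-01", "application": "firefox gimp"},
        {"type": "server",  "host": "db-01", "application": "postgresql"},
    ]
    print(run(PUBLIC_QUERIES["desktops-with-firefox"], my_index))  # -> the ws-01 record

The same shared query works for anyone, because the shape of the question is common even though everyone's data is different - which is exactly the point being made above.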

But in many ways, the most interesting aspect of Paglo is its business model:

We are maniacally focused on delivering the most value, for the most users, as quickly as possible. To achieve this, we are removing barriers to getting started (like complex installation and cost) and making the service convenient to use. Our experience and the history of the Internet tells us that lots and lots of thrilled users of a free service are much more valuable than a handful of paying customers. If we are successful, you will love Paglo, use it daily, and tell your colleagues and friends.

Yup, that means that they don't have one, but they're really, really sure that if everyone uses them, they can find one. Of course, that's precisely what Google did, so there are precedents - but no guarantees. Let's hope the final business plan proves more credible than the explanation of the company name.

When Oink Went to Piggy Heaven

Here's a wise post on why it is utterly pointless pursuing P2P services and their associated tracker aggregation sites:


What effect has this attack on tracker sites had? Well, to use the example of Oink, it has been entirely negative for the mafiaa. I didn't know what Oink was, as I had never heard of it, until it was busted. I now do know the names of the two successor sites now based on news reports of what happened after Oink went to piggy heaven. Should I ever care, I now know where to go for illegal torrents. I suspect there are several million more like me who were handed a roadmap by just about every IT news site out there, along with the news that absolutely zero people using the site were busted along with the ops. Can you say own goal?

Interestingly, what this comes down to is access to information: thanks to the Internet, you and I have as much clue - often more - about what's going on everywhere as the traditional news gatekeepers do.

GNU Affero GPL: Second Draft

One of the vexed questions in the free software world is what should be done about software as a service, when the service is based on free software:

All versions of the GPL allow people to use modified version of the software privately without being obliged to make their modified source code available to anyone. When people put software on a public server, the question is less clear: is that private use or public use? This was called the "software as a service" issue, or "SaaS".

The FSF's answer is a special licence, known as the GNU Affero GPL, which is now in its second draft.

Modular Magazines

After modular books, now this:

Google may soon begin to offer users the ability to create customized, printed magazines from Internet content. And print ads included in the magazine would be customized, too.

The future is modular.

From Remix to Re-enactment

I wrote recently about the remix and its relevance to an open content world. Here's an interesting exploration of remix's sibling, re-enactment:

Once you start thinking about the idea of re-enactment, you start seeing it everywhere. Maybe the argument could be made that we're in a cultural moment devoted to re-enactment. Much of what we write off as novelty can be put into this category. The Internet recently was excited about old people re-enacting iconic photos of the twentieth century; see also choirs of old people performing Sonic Youth's "Schizophrenia". Or choirs of small children doing much the same. But less ironic presentations abound: off the top of my head, Japancakes just released a note-for-note country-inflected cover of Loveless, My Bloody Valentine's seminal drone-rock record. Going further, German new music ensemble Zeitkratzer has played and recorded Lou Reed's Metal Machine Music. Tom McCarthy's excellent recent novel Remainder concerns a wealthy man who maniacally reenacts scenes; McCarthy springs from the art world, which has been interested in re-enactment for a while. Examples spiral on ad infinitum. But there seems to be something in us that wants to see or hear what we've seen or heard before again.

These are quickly composed thoughts, and I'm ignoring a great deal; parsing the difference between re-enactment and adaptation could be fiendishly complicated, as might be the role of copyright in all of this, etc. I'll simply tie this back to the Communist Manifesto problem. I think it's become apparent that we're no longer reading texts in isolation: now when we read Hamlet, digital media has made it possible to read any number of possible versions at the same time. The archive presents us with an embarrassment of riches, though I suspect that we still lack the tools to let us make sense of the pile: both to make sense of the growing number of versions of texts and to usefully compare versions. The Wooster Group's Hamlet can be seen as a close reading of the 1964 Hamlet. But such a one-to-one reading might just be the tip of the iceberg.

What made this particularly apposite for me is that I've been watching Kenneth Branagh's film version of Hamlet, and the sense of hearing a hundred other uses of Shakespeare's famous lines is very strong, and makes the film feel, indeed, like a re-enactment rather than a performance, brilliant as it is.

Asking Ashley

For those following the iPlayer debate, Groklaw has put up perhaps the best interview with Ashley Highfield so far:

the long-term alternative solution is a world beyond DRM and how we can work together, particularly with our rights holders, to get to a world beyond DRM.

Das ist Ja Doof!

Many years ago, ICL - the last major British computer manufacturer - launched One Per Desk, one of the craziest early computers ever. It was based on the famous Sinclair QL - as used by one Linus Torvalds - and had small tapes instead of disc drives (no, they never worked). But what was most striking about this misbegotten device was the name of one of the rebadged versions, which came from BT: it was called Tonto - Italian and Spanish for "stupid."

Well, the meme lives on:

Doof, a new London-based startup went into public beta at the beginning of October offering casual gaming wrapped-up with social networking in a good-looking package.

"Doof" is German for "stupid"....

Kindling a Conflagration

Steven Levy has one of his finer big pieces in Newsweek, about Amazon's new Kindle e-book device. It all sounds pretty cool, but for me the real showstopper is the following:


Publishers are resisting the idea of charging less for e-books. "I'm not going along with it," says Penguin's Peter Shanks of Amazon's low price for best sellers. (He seemed startled when I told him that the Alan Greenspan book he publishes is for sale at that price, since he offered no special discount.) Amazon is clearly taking a loss on such books. But Bezos says that he can sustain this scheme indefinitely. "We have a lot of experience in low-margin and high-volume sale—you just have to make sure the mix [between discounted and higher-priced items] works." Nonetheless the major publishers (all of whom are on the Kindle bandwagon) should loosen up. If you're about to get on a plane, you may buy the new Eric Clapton biography on a whim for $10—certainly for $5!—but if it costs more than $20, you may wind up scanning the magazine racks.

What planet are these people on? Amazon is shipping electrons - well known for being rather cheap (here, take a few trillion for free). When you buy a book, you're buying mashed-up trees that cost something (which in fact cost rather more than you pay). E-books will never take off until publishers are prepared to throw their analogue business models on the fire.

Update: Almost needless to say, Kindle is powered by GNU/Linux.

Poland: Not Just Plumbers

In the UK, the Polish plumber has become a staple figure of merriment, if not fun (after all, nobody wants to make fun of someone as important as a plumber). More generally, there are supposed to be around 600,000 recent Polish immigrants, more or less keeping the UK economy going. (As a corollary, the number of signs and job vacancies in Polish is also shooting up.)

Now it seems that Polish programmers are just as important globally:

Recently, I moderated an interesting panel held at Stanford university at the Hoover Insititution, on the subject of Poland's growing role in the global tech community. Over the past few years Dell, Google, Hewlett-Packard, Intel, IBM, Motorola, Siemens, and others have opened engineering offices in Poland.

18 November 2007

Internalising Externalities

One of the problems with most everyday economics is that pollution tends to be regarded as an externality:


An externality occurs when a decision causes costs or benefits to third party stakeholders, often, although not necessarily, from the use of a public good. In other words, the participants in an economic transaction do not necessarily bear all of the costs or reap all of the benefits of the transaction. For example, manufacturing that causes air pollution imposes costs on others when making use of public air.

But externalities have a habit of coming home to roost:

China's rising energy demand isn't just leaving its mark on the country's heritage. Every 30 seconds, an infant with birth defects is born in China, according to Jiang Fan, deputy head of the country's National Population and Family Planning Commission. The rate of birth defects nationwide has soared 40 percent in the past five years, from 105 defects per 10,000 births in 2001 to nearly 146 in 2006. The problem now affects nearly 1 in 10 Chinese families, the Commission stated in a recent report.

Birth defect rates are highest in the northern province of Shanxi, an area that is also home to some of China's richest coal resources. "The incidence of birth defects is related to environmental pollution," An Huanxiao, director of Shanxi's provincial family planning agency, told Xinhua News. "The survey's statistics show that birth defects in Shanxi's eight large coal-mining regions are far above the national average."

Tragedy and Travesty of the Commons

One of the key features of digital commons - like free software or science - is that there is no tragedy in the classical sense: it is impossible for users to "overgraze" a digital commons in the way they can a physical one.

That analogue tragedy can even be caused by the selfish actions of just one player. A case in point is the cetacean commons, which a few decades ago came perilously close to the ultimate tragedy: total destruction. That, happily, was avoided, but there are still a few benighted groups who insist on taking for themselves what belongs to all.

Worse, that selfishness is escalating:

A Japanese whaling fleet has set sail aiming to harpoon humpback whales for the first time in decades.

The fleet is conducting its largest hunt in the South Pacific - it has instructions to kill up to 1,000 whales, including 50 humpbacks.

This extraordinary display of contempt for the global community is compounded by a further insult. The "justification" for this pointless slaughter is given as:

killing whales allowed marine biologists to study their internal organs

What, you mean to find out if they have a brain, unlike the whalers who insist on hunting endangered species back to the brink of extinction?

Not so much a tragedy of the science commons as a travesty.

17 November 2007

Something is Rotten in the State of Copyright

Nicely put:

By the end of the day, John has infringed the copyrights of twenty emails, three legal articles, an architectural rendering, a poem, five photographs, an animated character, a musical composition, a painting, and fifty notes and drawings. All told, he has committed at least eighty-three acts of infringement and faces liability in the amount of $12.45 million (to say nothing of potential criminal charges). There is nothing particularly extraordinary about John’s activities. Yet if copyright holders were inclined to enforce their rights to the maximum extent allowed by law, he would be indisputably liable for a mind-boggling $4.544 billion in potential damages each year. And, surprisingly, he has not even committed a single act of infringement through P2P file sharing. Such an outcome flies in the face of our basic sense of justice. Indeed, one must either irrationally conclude that John is a criminal infringer—a veritable grand larcenist—or blithely surmise that copyright law must not mean what it appears to say. Something is clearly amiss. Moreover, the troublesome gap between copyright law and norms has grown only wider in recent years.
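For the record, the arithmetic in that passage is easy to reproduce - it appears to assume the US statutory maximum of $150,000 per wilfully infringed work (my assumption, but it fits the quoted figures exactly):

    # Reproducing the figures quoted above.
    acts_per_day = 83
    statutory_max_per_work = 150_000          # USD; assumed statutory maximum per work

    daily_liability = acts_per_day * statutory_max_per_work
    annual_liability = daily_liability * 365

    print(daily_liability)   # 12_450_000    -> the "$12.45 million"
    print(annual_liability)  # 4_544_250_000 -> the "$4.544 billion ... each year"

Which is how perfectly ordinary daily life adds up, on paper, to grand larceny.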

(Via Boing Boing.)

Creative Commons Discovers Dual Licensing

I missed this before:

This is the CC+ project. An artist, for example, can release her work under a CC Attribution-Noncommercial license, but then, using the CC+ infrastructure, enable those who want commercial rights (or anything else beyond the freedoms granted in the license) to link to a site that can provide those other rights. In this way, CC now helps support a hybrid economy of creativity. We provide a simple platform to protect and enable those who want to share; and we’ve built a simple way to cross over from that sharing economy for those who want to profit from their creativity.

Er, yes, this is called dual licensing in the open source world....

Modular Books

Modularisation is one of the key elements of open processes: so why can't we have modular books? Well, we can, up to a point:


On Wednesday, the Arizona community college announced a partnership with Pearson Custom Publishing to allow Rio Salado professors to piece together single individualized textbooks from multiple sources. The result, in what could be the first institution-wide initiative of its kind, will be a savings to students of up to 50 percent, the college estimates, as well as a savings of time to faculty, who often find themselves revising course materials to keep pace with continuously updated editions.

However, this is only with texts from one source: imagine if you could do this with *any* text. (Via if:book.)