24 July 2014

Resisting Surveillance on an Unprecedented Scale II

(The first part of this three-part essay appeared yesterday.)

The gradual but relentless shift from piecemeal, small-scale analogue eavesdropping to constant and total surveillance may also help to explain the public's relative equanimity in the face of these revelations. Once we get beyond the facile idea that if you have nothing to hide, you have nothing to fear - everybody has something to hide, even if it is only the private moments in their lives - there is another common explanation that people offer as to why they are not particularly worried about the activities of the NSA and GCHQ. This is that "nobody would be interested" in what they are up to, and so they are confident that they have not been harmed by the storage and analysis of their Internet data.

This is based on a fundamentally analogue view of what is going on. These people are surely right that no spy is sitting at a keyboard reading their emails or Facebook posts. That's clearly not possible, even if the will were there. But it's not necessary, since the data can be "read" by tireless programs that extract key information at an accelerating pace and diminishing cost thanks to Moore's Law.
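To get a feel for how trivially such "reading" can be automated, here is an illustrative sketch - my own invention, not anything the agencies are known to run - of the kind of selector extraction a program can apply tirelessly to millions of messages (in Python; the patterns and the message are made up):

    import re

    # Pull out "selectors" - email addresses and phone numbers - from a
    # message: the raw material for the automated analysis described above.
    EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
    PHONE = re.compile(r"\+?\d[\d ()-]{7,}\d")

    def extract_selectors(message):
        return {"emails": EMAIL.findall(message),
                "phones": PHONE.findall(message)}

    print(extract_selectors("Call me on +44 20 7946 0000 or mail alice@example.org"))
    # {'emails': ['alice@example.org'], 'phones': ['+44 20 7946 0000']}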

People are untroubled by this because most of them can't imagine what today's top computers can do with their data, and think again in analogue terms - the spy sifting slowly through so much information as to be swamped. And that's quite understandable, since even computer experts struggle to keep up with the pace of development, and to appreciate the ramifications.

A post on the Google Search blog from last year may help to provide some sense of just how powerful today's systems are:

When you enter a single query in the Google search box, or just speak it to your phone, you set in motion as much computing as it took to send Neil Armstrong and eleven other astronauts to the moon. Not just the actual flights, but all the computing done throughout the planning and execution of the 11-year, 17 mission Apollo program. That’s how much computing has advanced.

Now add in the fact that three billion Google queries are entered each day, and that the NSA's computing capability is probably vastly greater than Google's, and you have some idea of the raw power available for the analysis of the "trivial" data gathered about all of us, and how that might lead to very non-trivial knowledge about our most intimate lives.
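Just to spell out the arithmetic, using only the figures quoted above (a sketch in Python):

    # Three billion queries a day, each - per the Google post - equivalent
    # to the computing of the entire Apollo programme.
    queries_per_day = 3_000_000_000
    per_second = queries_per_day / (24 * 60 * 60)
    print(f"~{per_second:,.0f} Apollo programmes' worth of computing per second")
    # ~34,722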

In terms of how much information can be held, a former NSA technical director, William Binney, estimates that one NSA data centre currently being built in Utah will be able to handle and process five zettabytes of data - that's five million million gigabytes. If you were to print out that information as paper documents, and store them in traditional filing cabinets, it would require around 42 million million cabinets occupying 17 million square kilometres of floor space.
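Those figures are easy enough to sanity-check; the only assumption beyond Binney's own numbers is the standard definition of a zettabyte, 10^21 bytes:

    zettabytes = 5
    gigabytes = zettabytes * 10**21 / 10**9
    print(f"{gigabytes:.0e} GB")                 # 5e+12: five million million gigabytes

    cabinets = 42 * 10**12
    floor_m2 = 17 * 10**6 * 10**6                # 17 million km2, in square metres
    print(f"{floor_m2 / cabinets:.2f} m2 each")  # ~0.40 m2: a plausible cabinet footprint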

Neither computing power nor the vast holdings of personal data on their own are a direct threat to our privacy and freedom. But putting them together means that the NSA can not only find anything in those 42 million million virtual cabinets more or less instantly, but can also cross-reference any word on any piece of paper in any cabinet - something that can't even be contemplated as an option for human operators, let alone attempted.

It is this unprecedented ability to consolidate all the data about us, along with the data of our family, friends and acquaintances, and their family, friends and acquaintances (and sometimes even the acquaintances of our acquaintances' acquaintances), that creates the depth of knowledge the NSA has at its disposal whenever it wants it. And while it is unlikely to call up that knowledge for most of us, it only takes a tiny anomalous event somewhere deep in the chain of acquaintance for a suspicion to propagate back through the links, tainting all our innocent records and causing them to be added to the huge pile of data that will be cross-referenced and sifted and analysed in the search for significant patterns so deep that we are unlikely to be aware of them.
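For the technically minded, that chain-of-acquaintance analysis - often called contact chaining - is nothing more exotic than a breadth-first search over a social graph. A toy sketch in Python (the graph and names are invented; real systems run this over billions of edges):

    from collections import deque

    def contacts_within(graph, target, hops=3):
        # Breadth-first search: everyone reachable from target in <= hops steps.
        seen = {target}
        frontier = deque([(target, 0)])
        while frontier:
            person, depth = frontier.popleft()
            if depth == hops:
                continue
            for friend in graph.get(person, ()):
                if friend not in seen:
                    seen.add(friend)
                    frontier.append((friend, depth + 1))
        return seen - {target}

    graph = {
        "alice": ["bob", "carol"],
        "bob": ["alice", "dave"],
        "dave": ["erin"],
    }
    print(contacts_within(graph, "alice"))  # {'bob', 'carol', 'dave', 'erin'} (order may vary)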

Given this understandable, if regrettable, incomprehension on the part of the public about the extraordinary power at the disposal of the NSA, and what it might be able to extract as a result, the key question then becomes: what can we do to bolster our privacy? Until a few weeks ago, most people working in this field would have said "encrypt everything". But the recent revelations that the NSA and GCHQ have succeeded in subverting just about every encryption system that is widely used online seem to destroy even that last hope.

(In tomorrow's instalment: the way forward.)

Resisting Surveillance on an Unprecedented Scale I

Netzpolitik.org is the leading site covering digital rights in Germany. It played a key role in helping to stop ACTA last year, and recently has been much occupied with the revelations about NSA spying, and their implications. As part of that, it has put together a book/ebook (in German) as a first attempt to explore the post-Snowden world we now inhabit. I've contributed a new essay, entitled "Resisting Surveillance on an Unprecedented Scale", which is my own attempt to sum up what happened, and to look forward to what our response should be. I'll be publishing it here, split up into three parts, over the next few days.


Despite being a journalist who has been writing about the Internet for 20 years, and a Briton who has lived under the unblinking eye of millions of CCTV cameras for nearly as long, I am nonetheless surprised by the revelations of Edward Snowden. I have always had a pretty cynical view of governments and their instruments of power such as the police and secret services; I have always tried to assume the worst when it comes to surveillance and the assaults on my privacy. But I never guessed that the US and UK governments, aided and abetted to varying degrees by other countries, could be conducting what amounts to total, global surveillance of the kind revealed by Snowden's leaked documents.

I don't think I'm alone in this. Even though some people are now claiming this level of surveillance was "obvious", and "well-known" within the industry, that's not my impression. Judging by the similarly shocked and outraged comments from many defenders of civil liberties and computer experts, particularly in the field of security, they, like me, never imagined that things were quite this bad. That raises an obvious question: how did it happen?

Related to that outrage in circles that concern themselves with these issues is something else that needs explaining: the widespread lack of outrage among ordinary citizens. To be sure, some countries are better than others at understanding the implications of what has been revealed to us by Snowden (and some are worse - the UK in particular). But given the magnitude and thoroughgoing nature of the spying that is being conducted on our online activities, the response around the world has been curiously muted. We need to understand why, otherwise the task of rolling back at least some of the excesses will be rendered even more difficult.

The final question that urgently requires thought is: what can, in fact, be done? Since the level of public concern is relatively low, even in those countries that are traditionally sensitive about privacy issues - Germany, for example - what are the alternatives to stricter government controls, which seem unlikely to be forthcoming?

Although there was a Utopian naivety in the mid-1990s about what the Internet might bring about, it has been clear for a while that the Internet has its dark side, and could be used to make people less, not more, free. This has prompted work to move from a completely open network, with information sent unencrypted, to one where Web connections using HTTPS technology shield private information from prying eyes. It's remarkable that only in recent years has the pressure to move to HTTPS by default grown strong.

That's perhaps a hint of how the current situation of total surveillance has arisen. Although many people knew that unencrypted data could be intercepted, there was a general feeling that it wouldn't be possible to find the interesting streams amongst the huge and growing volume flooding every second of the day through the series of digital tubes that make up the Internet.

But that overlooked one crucial factor: Moore's Law, and its equivalents for storage and connectivity. Crudely stated, this asserts that the cost of a given computational capability will halve every 18 months or so. Put another way, for a given expenditure, the available computing power doubles every year and a half. And it's important to remember that this is geometric growth: after ten years, Moore's Law predicts that computing power increases by a factor of around 100 for a given cost.
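A two-line calculation makes the point, assuming the 18-month doubling period stated above:

    doubling_period_years = 1.5
    factor = 2 ** (10 / doubling_period_years)
    print(f"after ten years: ~{factor:.0f}x the computing power per unit cost")
    # after ten years: ~102x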

Now add in the fact that the secret services are among the least constrained when it comes to spending money on the latest and fastest equipment, since the argument can always be made that the extra power will be vitally important in getting information that could save lives, and so on. One of the first and most extraordinary revelations conveyed from Snowden by the Guardian gave an insight into how that extra and constantly increasing computing power is being applied, in what was called the Tempora programme:

By the summer of 2011, GCHQ had probes attached to more than 200 internet links, each carrying data at 10 gigabits a second. "This is a massive amount of data!" as one internal slideshow put it. That summer, it brought NSA analysts into the Bude trials. In the autumn of 2011, it launched Tempora as a mainstream programme, shared with the Americans.

The intercept probes on the transatlantic cables gave GCHQ access to its special source exploitation. Tempora allowed the agency to set up internet buffers so it could not simply watch the data live but also store it - for three days in the case of content and 30 days for metadata.

As that indicates, two years ago the UK's GCHQ was pulling in data at the rate of 2 terabits a second (200 links, each carrying 10 gigabits a second): by now the figure is certain to be far higher. Thanks to massive storage capabilities, GCHQ could hold the complete Internet flow for three days, and its metadata for 30 days.
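Again, the arithmetic is simple enough to check, using only the figures from the Guardian quote (and ignoring real-world overheads such as redundancy and indexing):

    links = 200
    gbits_per_link = 10
    tbits_per_second = links * gbits_per_link / 1000   # 2 Tbit/s
    tbytes_per_second = tbits_per_second / 8           # 0.25 TB/s
    three_days = 3 * 24 * 3600
    petabytes = tbytes_per_second * three_days / 1000
    print(f"~{petabytes:.0f} PB needed for a three-day content buffer")
    # ~65 PB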

There is one very simple reason why GCHQ is doing this: because at some point it realised it could - not just practically, thanks to Moore's Law, but also legally. The UK legislation that oversees this activity - the Regulation of Investigatory Powers Act (RIPA) - was passed in 2000, and drawn up on the basis of the experience of the late 1990s. It was meant to regulate one-off interception of individuals, and most of it is about carrying out surveillance of telephones and the postal system. In other words, it was designed for an analogue world. The scale of the digital surveillance now taking place is so far beyond what was possible ten years ago that RIPA's framing of the law - never mind its powers - is obsolete, and GCHQ is essentially able to operate without either legal or technical constraints.

(In tomorrow's instalment: why isn't the public up in arms over this?)

Brendan Eich, Mozilla's CTO, on EME and DRM

A few weeks back, I wrote about the troubling prospect of DRM being baked into HTML5. At the centre of a related piece was a post by Brendan Eich, CTO and SVP of Engineering for Mozilla. As I noted then, it was somewhat opaque, in that I found it hard to understand how exactly Mozilla intended to react to the W3C's pernicious proposal to discuss DRM - specifically, the idea of adding Encrypted Media Extensions (EME) to HTML5. By a happy chance, Eich was passing through London recently, and so I was able to find out more about Mozilla's attitude and plans in this area.

On Open Enterprise blog.

TTIP Update V

Today's update is a little odd, since it's not actually about TAFTA/TTIP, at least not directly. Although the second round is taking place this week, it's almost certain we'll be told nothing about the real substance of the discussions. That's because even though these massive trade agreements affect hundreds of millions of people, those people are given no opportunity to see the draft texts as they are discussed, or to have any meaningful dialogue with the negotiators. That may have been acceptable 30 years ago, but in the age of the Internet, when it is trivial to make documents available, and easy to enter into online discussions, it's outrageous.

On Open Enterprise blog.

TTIP Update IV

One of the key issues during the ACTA negotiations was transparency - or rather the lack of it. Despite a few token gestures from the European Commission initially, TAFTA/TTIP looks like it will be just as bad. Here's a rather cheap trick the negotiators have just played:

On Open Enterprise blog.

Behold the Bankruptcy of Software Patents

You may recall back in 2011, there was an extraordinary bidding war for the patents of Nortel Networks:

On Open Enterprise blog.

Help: EU Net Neutrality Consultation Closes Today

As you may recall, back in September the European Commission finally came out with its proposals for net neutrality, part of its larger "Connected Continent" package designed to complete the telecoms single market. I learned yesterday that the European Parliament committee responsible for this area, ITRE (Industry, Research and Energy), has launched something of a stealth consultation on these proposals. Stealth, because neither I nor anyone else I know covering this area was aware of it - which is pretty bizarre.

On Open Enterprise blog.

The Coming Chinese Android Invasion

Remember all those years ago, when people laughed at the first Android phones (which were, to tell the truth, pretty clunky, but still...)? Remember how Apple fans have always insisted that however well Android did in the smartphone market, it would always be second best, and never seriously threaten Apple's dominance? Well, here's what actually happened:



On Open Enterprise blog.

2009: Man Buys 5000 Bitcoins For $27, Forgets About Them. 2013: Man Rediscovers His Bitcoins, Now Worth $886,000

Bitcoin shares with drones the unhappy distinction of being the subject of almost exclusively negative reports. Just as drones are usually doing bad things to people, so Bitcoins are usually helping people do bad things because of their supposed untraceability. So it makes a pleasant change to come across an upbeat Bitcoin story like this, as told by the Guardian: 

On Techdirt.

European Court Of Justice Hands Down Big Win For Transparency in Europe

Russia's Leading Social Network VKontakte Cleared Of Copyright Infringement

VKontakte is not only the largest social networking site in Russia, but also one of the biggest unauthorized repositories of copyrighted music, thanks to its file-hosting service. Given the moves to clamp down on copyright infringement in Russia, it seemed only a matter of time before VKontakte found itself in hot water because of this. And yet, as TorrentFreak reports, something unexpected has happened:

On Techdirt.

Bruce Schneier On The Feudal Internet And How To Fight It

There aren't many upsides to Snowden's revelations that the NSA is essentially spying on the entire Internet, all the time, but if one good thing has already come out of that sorry state of affairs it's the emergence of security expert Bruce Schneier as a mainstream commentator on the digital world. That's largely because his core expertise has been shoved into the very center of our concerns, making his thoughts on what's going on particularly valuable.

On Techdirt.

Trade Agreements Are Designed To Give Companies Corporate Sovereignty

One of the difficulties of making people aware of the huge impact that investor-state dispute settlement (ISDS) clauses in TPP and TAFTA/TTIP are likely to have on their lives is that the name is so boring, and so they tend to assume that what it describes is also boring and not worth worrying about. And yet what began as an entirely reasonable system for protecting investments in emerging economies with weak judiciaries, through the use of independent tribunals, has turned into a monster that now allows companies to place themselves above national laws, as Techdirt has reported before.

On Techdirt.

Wikipedia Fights Back Against Socking

The idea that Wikipedia is dying has become one of the Internet's recurrent stories. Because something used by so many people every day is completely free and dependent on the selfless dedication of relatively few individuals, there is perhaps an underlying fear that it will disappear, and that it will be our fault for not supporting it better. However, alongside major issues like the need for an influx of new contributors from more diverse backgrounds, one of the lesser-known challenges Wikipedia faces is the rise of "socking", or sock puppetry. Here's how Wikipedia defines the term:

On Techdirt.

EU Data Protection Proposal Gets Stronger, But With Big Loopholes

One of the most important pieces of legislation wending its way through the European Parliament concerns data protection. Because of its potential impact on major US companies like Google and Facebook, this has become one of the most fought-over proposals in the history of the EU, with lobbyists apparently writing large chunks of suggested amendments more favorable to online services. And all of that was before Snowden's revelations about NSA spying in the EU made data protection an even more politically-sensitive area. 

On Techdirt.

India Wants Students And Researchers To Have The Right To Photocopy Books

Techdirt has run several stories about the difficulties students in emerging economies have when it comes to buying expensive study materials. Back in 2012, Costa Rican students took to the streets to defend their right to photocopy otherwise unaffordable university textbooks. Earlier this year, Indian textbook authors asked for a lawsuit brought by Western publishers against Delhi University and a nearby photocopying shop over alleged infringements to be dropped. A common element of those two stories is that students often resort to making photocopies of books, since they can't afford the originals. According to this story from Calcutta's The Telegraph, it seems that the Indian government wants to turn the practice into a recognized right:

On Techdirt.

European Commission: ACTA Is Dead, Long Live ACTA?

The first six months of 2012 saw Europeans taking to the streets in order to kill off ACTA in the European Union. Against all the odds, they succeeded in that aim, as the European Parliament voted to reject ACTA on 4 July last year. That defeat has certainly been burned into the memory of Karel De Gucht, the EU Commissioner responsible for negotiating first ACTA and now TAFTA/TTIP. When he was asked whether the latter might see ACTA sneak in by the back door, here's what he replied:

On Techdirt.

Stand Back, I'm About to Do Some Posting....

Apologies for the silence, I've been a bit busy, what with TTIP, ISDS, UK open standards, data retention, EU copyright review and much else.  But at least we had some great results, as the backlog of links to my posts elsewhere will show....

Follow me @glynmoody on Twitter or identi.ca, and +glynmoody on Google+

02 February 2014

Interview: Eben Moglen - "surveillance becomes the hidden service wrapped inside everything"

(This was originally published in The H Open in March 2010.)

Free software has won: practically all of the biggest and most exciting Web companies like Google, Facebook and Twitter run on it.  But it is also in danger of losing, because those same services now represent a huge threat to our freedom as a result of the vast stores of information they hold about us, and the in-depth surveillance that implies.

Better than almost anyone, Eben Moglen knows what's at stake.  He was General Counsel of the Free Software Foundation for 13 years, and helped draft several versions of the GNU GPL.  As well as being Professor of Law at Columbia Law School, he is the Founding Director of the Software Freedom Law Center.  And he has an ambitious plan to save us from those seductive but freedom-threatening Web service companies.  He explained what the problem is, and how we can fix it.

GM: So what's the threat you are trying to deal with?

EM:  We have a kind of social dilemma which comes from architectural creep.  We had an Internet that was designed around the notion of peerage -  machines with no hierarchical relationship to one another, and no guarantee about their internal architectures or behaviours, communicating through a series of rules which allowed disparate, heterogeneous networks to be networked together around the assumption that everybody's equal. 

In the Web the social harm done by the client-server model arises from the fact that logs of Web servers become the trails left by all of the activities of human beings, and the logs can be centralised in servers under hierarchical control.  Web logs become power.  With the exception of search, which is a service that nobody knows how to decentralise efficiently, most of these services do not actually rely upon a hierarchical model.  They really rely upon the Web - that is, the non-hierarchical peerage model created by Tim Berners-Lee, and which is now the dominant data structure in our world.

The services are centralised for commercial purposes.  The power that the Web log holds is monetisable, because it provides a form of surveillance which is attractive to both commercial and governmental social control.  So the Web with services equipped in a basically client-server architecture becomes a device for surveilling as well as providing additional services.  And surveillance becomes the hidden service wrapped inside everything we get for free.

The cloud is a vernacular name which we give to a significant improvement in the server side of the Web - the server, decentralised.  Instead of a lump of iron, it becomes a digital appliance which can be running anywhere.  This means that for all practical purposes servers cease to be subject to significant legal control.  They no longer operate in a policy-directed manner, because they are no longer iron subject to the territorial orientation of law.  In a world of virtualised service provision, the server which provides the service, and therefore the log which is the result of the hidden service of surveillance, can be projected into any domain at any moment and can be stripped of any legal obligation pretty much equally freely.

This is a pessimal result.

GM:  Was perhaps another major factor in this the commercialisation of the Internet, which saw power being vested in a company that provided services to the consumer?

EM:  That's exactly right.  Capitalism also has its architectural Bauplan, which it is reluctant to abandon.  In fact, much of what the network is doing to capitalism is forcing it to reconsider its Bauplan via a social process which we call by the crappy name of disintermediation.  Which is really a description of the Net forcing capitalism to change the way it takes.  But there's lots of resistance to that, and what's interesting to all of us I suspect, as we watch the rise of Google to pre-eminence, is the ways in which Google does and does not - and it both does and does not - wind up behaving rather like Microsoft in the course of growing up.  There are sort of gravitational propositions that arise when you're the largest organism in an ecosystem. 

GM:  Do you think free software has been a little slow to address the problems you describe?

EM:  Yes, I think that's correct.  I think it is conceptually difficult, and it is to a large degree difficult because we are having generational change.  After a talk [I gave recently], a young woman came up to me and she said: I'm 23 years old, and none of my friends care about privacy.  And that's another important thing, right?, because we make software now using the brains and hands and energies of people who are growing up in a world which has been already affected by all of this.  Richard or I can sound rather old-fashioned.

GM:  So what's the solution you are proposing?

EM:  If we had a real intellectually-defensible taxonomy of services, we would recognise that a number of the services which are currently highly centralised, and which count for a lot of the surveillance built in to the society that we are moving towards, are services which do not require centralisation in order to be technologically deliverable.  They are really the Web repackaged. 

Social networking applications are the most crucial.  They rely in their basic metaphors of operation on a bilateral relationship called friendship, and its multilateral consequences.  And they are eminently modelled by the existing structures of the Web itself. Facebook is free Web hosting with some PHP doodads and APIs, and spying free inside all the time - not actually a deal we can't do better than. 

My proposal is this: if we could disaggregate the logs, while providing the people all of the same features, we would have a Pareto-superior outcome.  Everybody - well, except Mr Zuckerberg - would be better off, and nobody would be worse off.  And we can do that using existing stuff.

The most attractive hardware is the ultra-small, ARM-based, plug it into the wall, wall-wart server, the SheevaPlug.  An object can be sold to people at a very low one-time price, and brought home and plugged into an electrical outlet and plugged into a wall jack for the Ethernet, or whatever is there, and you're done.  It comes up, it gets configured through your Web browser on whatever machine you want to have in the apartment with it, and it goes and fetches all your social networking data from all the social networking applications, closing all your accounts.  It backs itself up in an encrypted way to your friends' plugs, so that everybody is secure in the way that would be best for them, by having their friends holding the secure version of their data.
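A minimal sketch of the friend-to-friend encrypted backup Moglen describes might look like this, in Python with the cryptography package; the filenames and "plug" directories are purely hypothetical stand-ins for a real network transport:

    from pathlib import Path
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()          # the key never leaves your own plug
    cipher = Fernet(key)

    # Stand-in for the social networking data the plug has fetched.
    backup = Path("social_data.tar")
    backup.write_bytes(b"stand-in for your exported social networking data")
    blob = cipher.encrypt(backup.read_bytes())

    # Hand opaque copies to your friends' plugs.
    for friend_store in [Path("friend_a_plug"), Path("friend_b_plug")]:
        friend_store.mkdir(exist_ok=True)
        (friend_store / "alice.backup").write_bytes(blob)
    # Friends hold ciphertext they cannot read; only you keep the key.

The point of the design is in the last comment: everybody is secure "in the way that would be best for them" precisely because the friends hold only ciphertext.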

And it begins to do all the things that we assume we need in a social networking appliance.  It's the feed, it maintains the wall your friends write on - it does everything that provides feature compatibility with what you're used to. 

But the log is in your apartment, and in my society at least, we still have some vestigial rules about getting into your house: if people want to check the logs they have to get a search warrant. In fact, in every society, a person's home is about as sacred as it gets.

And so, basically, what I am proposing is that we build a social networking stack based around the existing free software we have, which is pretty much the same existing free software the server-side social networking stacks are built on; and we provide ourselves with an appliance which contains a free distribution everybody can make as much of as they want, and cheap hardware of a type which is going to take over the world whether we do it or we don't, because it's so attractive a form factor and function, at the price. 

We take those two elements, we put them together, and we also provide some other things which are very good for the world.  Like automatically VPNing everybody's little home network place with my laptop wherever I am, which provides me with encrypted proxies so my web searching, wherever I am, is not going to be spied on.  It means that we have a zillion computers available to the people who live in China and other places where there's bad behaviour.  So we can massively increase the availability of free browsing to other people in the world.  If we want to offer people the option to run onion routeing, that's where we'll put it, so that there will be a credible possibility that people will actually be able to get decent performance on onion routeing networks.

And we will of course provide convenient encrypted email for people - including putting their email not in a Google box, but in their house, where it is encrypted, backed up to all their friends and other stuff.  Where in the long purpose of time we can begin to return email to a condition - if not being a private mode of communication - at least not being postcards to the secret police every day.

So we would also be striking a blow for electronic civil liberties in a way that is important, which is very difficult to conceive of doing in a non-technical way.

GM:  How will you organise and finance such a project, and who will undertake it?

EM:  Do we need money? Yeah, but tiny amounts.  Do we need organisation? Yes, but it could be self-organisation.  Am I going to talk about this at DEF CON this summer, at Columbia University? Yes.  Could Mr Shuttleworth do it if he wanted to? Yes.  It's not going to be done with clicking heels together, it's going to be done the way we do stuff: somebody's going to begin by reeling off a Debian stack or Ubuntu stack or, for all I know, some other stack, and beginning to write some configuration code and some glue and a bunch of Python to hold it all together. From a quasi-capitalist point of view I don't think this is an unmarketable product.  In fact, this is the flagship product, and we ought to all put just a little pro bono time into it until it's done.

GM:  How are you going to overcome the massive network effects that make it hard to persuade people to swap to a new service?

EM:  This is why the continual determination to provide social networking interoperability is so important. 

For the moment, my guess is that while we go about this job, it's going to remain quite obscure for quite a while.  People will discover that they are being given social network portability.  [The social network companies] undermine their own network effect because everybody wants to get ahead of Mr Zuckerberg before his IPO.  And as they do that they will be helping us, because they will be making it easier and easier to do what our box has to do, which is to come online for you, and go and collect all your data and keep all your friends, and do everything that they should have done.

So part of how we're going to get people to use it and undermine the network effect, is that way.  Part of it is, it's cool; part of it is, there are people who want no spying inside; part of it is, there are people who want to do something about the Great Firewall of China but don't know how.  In other words, my guess is that it's going to move in niches just as some other things do.

GM:  With mobile taking off in developing countries, might it not be better to look at handsets to provide these services?

EM:  In the long run there are two places where we can conceivably put your identity: one is where you live, and the other is in your pocket.  And a stack that doesn't deal with both of those is probably not a fully adequate stack.

The thing I want to say directed to your point “why don't we put our identity server in our cellphone?”, is that our cellphones are very vulnerable.  In most parts of the world, you stop a guy on the street, you arrest him on a trumped-up charge of any kind, you get him back to the station house, you clone his phone, you hand it back to him, you've owned him.

When we fully commoditise that [mobile] technology, then we can begin to do the reverse of what the network operators are doing.  The network operators around the world are basically trying to eat the Internet, and excrete proprietary networking.  The network operators have to play the reverse if telephony technology becomes free.  We can eat proprietary networks and excrete the public Internet.  And if we do that then the power game begins to be more interesting.

26 January 2014

Interview: Linus Torvalds - "I don't read code any more"


(This was originally published in The H Open in November 2012.)

I was lucky enough to interview Linus quite early in the history of Linux – back in 1996, when he was still living in Helsinki (you can read the fruits of that meeting in this old Wired feature.) It was at an important moment for him, both personally – his first child was born at this time – and in terms of his career. He was about to join the chip design company Transmeta, a move that didn't really work out, but led to him relocating to America, where he remains today.

That makes his trips to Europe somewhat rare, and I took advantage of the fact that he was speaking at the recent LinuxCon Europe 2012 in Barcelona to interview him again, reviewing the key moments for the Linux kernel and its community since we last spoke.

Glyn Moody: Looking back over the last decade and a half, what do you see as the key events in the development of the kernel?

Linus Torvalds: One big thing for me is all the scalability work that we did. We've gone from being OK on 2 or 4 CPUs to the point where basically you can throw 4000 [at it] – you won't scale perfectly, but most of the time it's not the kernel that's the bottleneck. If your workload is somewhat sane we actually scale really well. And that took a lot of effort.

SGI in particular worked a lot on scaling past a few hundred CPUs. Their initial patches could just not be merged. There was no way we could take the work they did and use it on a regular PC because they added all this infrastructure to work on thousands of CPUs. That was way too expensive to do when you had only a couple.

I was afraid for the longest time that we would have the high-performance kernel for the big machines, and the source code would be separate from the normal kernel. People worked a lot on just making sure that we had a clean code base where you can say at compile time that, hey, I want the kernel that works for 4000 CPUs, and it generates the code for that, and at the same time, if you say no, I want the kernel that works on 2 CPUs, the same source code compiles.

It was something that in retrospect is really important because it actually made the source code much better. All the effort that SGI and others spent on unifying the source code, actually a lot of it was clean-up – this doesn't work for a hundred CPUs, so we need to clean it up so that it works. And it actually made the kernel more maintainable. Now on the desktop 8 and 16 CPUs are almost common; it used to be that we had trouble scaling to an 8, now it's like child's play.

But there's been other things too. We spent years again at the other end, where the phone people were so power conscious that they had ugly hacks, especially on the ARM side, to try to save power. We spent years doing power management in general, doing the same kind of thing - instead of having these specialised power management hacks for ARM, and the few devices that cellphone people cared about, we tried to make it work across the kernel. And that took like five years to get our power management working, because it's across the whole spectrum.

Quite often when you add one device, that doesn't impact any of the rest of the kernel, but power management was one of those things that impacts all the thousands of device drivers that we have. It impacts core functionality, like shutting down CPUs, it impacts schedulers, it impacts the VM, it impacts everything.

It not only affects everything, it has the potential to break everything, which makes it very painful. We spent so much time just taking two steps forward, one step back, because we made an improvement that was a clear improvement, but it broke machines. And so we had to take the one step back just to fix the machines that we broke.

Realistically, every single release, most of it is just driver work. Which is kind of boring in the sense there is nothing fundamentally interesting in a driver, it's just support for yet another chipset or something, and at the same time that's kind of the bread and butter of the kernel. More than half of the kernel is just drivers, and so all the big exciting smart things we do, in the end it pales when compared to all the work we just do to support new hardware.

Glyn Moody: What major architecture changes have there been to support new hardware?

Linus Torvalds: The USB stack has basically been re-written a couple of times just because some new use-case comes up and you realise that hey, the original USB stack just never took that into account, and it just doesn't work. So USB 3 needs new host controller support and it turns out it's different enough that you want to change the core stack so that it can work across different versions. And it's not just USB, it's PCI, and PCI becomes PCIe, and hotplug comes in.

That's another thing that's a huge difference between traditional Linux and traditional Unix. You have a [Unix] workstation and you boot it up, and it doesn't change afterwards - you don't add devices. Now people are taking adding a USB device for granted, but realistically that did not use to be the case. That whole being able to hotplug devices, we've had all these fundamental infrastructure changes that we've had to keep up with.

Glyn Moody: What about kernel community – how has that evolved?

Linus Torvalds: It used to be way flatter. I don't know when the change happened, but it used to be me and maybe 50 developers - it was not a deep hierarchy of people. These days, patches that reach me sometimes go through four levels of people. We do releases every three months; in every release we have like 1000 people involved. And 500 of the 1000 people basically send in a single line change for something really trivial – that's how some people work, and some of them never do anything else, and that's fine. But when you have a thousand people involved, especially when some of them are just these drive-by shooting people, you can't have me just taking patches from everybody individually. I wouldn't have time to interact with people.

Some people just specialise in drivers, they have other people who they know who specialise in that particular driver area, and they interact with the people who actually write the individual drivers or send patches. By the time I see the patch, it's gone through these layers, it's seldom four, but it's quite often two people in between.

Glyn Moody: So what impact does that have on your role?

Linus Torvalds: Well, the big thing is I don't read code any more. When a patch has already gone through two people, at that point, I can either look at the patch and say: no, all your work was wasted, and micromanage at that level – and quite frankly I don't want to do that, and I don't have the capacity to do that.

So most of the time, when it comes to the major subsystem maintainers, I trust them because I've been working with them for 5, 10, 15 years, so I don't even look at the code. They tell me these are the changes and they give me a very high-level overview. Depending on the person, it might be five lines of text saying this is roughly what has changed, and then they give me a diffstat, which just says 15 lines have changed in that file, and 25 lines have changed in that file and diffstat might be a few hundred lines because there's a few hundred files that have changed. But I don't even see the code itself, I just say: OK, the changes happen in these files, and by the way, I trust you to change those files, so that's fine. And then I just say: I'll take it.

Glyn Moody: So what's your role now?

Linus Torvalds: Largely I'm managing people. Not in the logistical sense – I obviously don't pay anybody, but I also don't have to worry about them having access to hardware and stuff like that. Largely what happens is I get involved when people start arguing and there's friction between people, or when bugs happen.

Bugs happen all the time, but quite often people don't know who to send the bug report to. So they will send the bug report to the Linux Kernel mailing list – nobody really is able to read it much. After people don't figure it out on the kernel mailing list, they often start bombarding me, saying: hey, this machine doesn't work for me any more. And since I didn't even read the code in the first place, but I know who is in charge, I end up being a connection point for bug reports and for the actual change requests. That's all I do, day in and day out, is I read email. And that's fine, I enjoy doing it, but it's very different from what I did.

Glyn Moody: So does that mean there might be scope for you to write another tool like Git, but for managing people, not code?

Linus Torvalds: I don't think we will. There might be some tooling, but realistically most of the things I do tend to be about human interaction. So we do have tools to figure out who's in charge. We do have tools to say: hey, we know the problem happens in this area of the code, so who touched that code last, and who's the maintainer of that subsystem, just because there are so many people involved that trying to keep track of them any other way than having some automation just doesn't work. But at the same time most of the work is interaction, and different people work in different ways, so having too much automation is actually painful for people.
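That kind of "who's in charge" automation is easy to sketch in a few lines of Python: git log is standard, and scripts/get_maintainer.pl really does ship in the kernel source tree, though the file queried here is just an example and the script must be run from inside a kernel tree:

    import subprocess

    path = "drivers/usb/core/hub.c"

    # Who touched this file last?
    last = subprocess.run(
        ["git", "log", "-1", "--format=%an <%ae>", "--", path],
        capture_output=True, text=True).stdout.strip()
    print("last touched by:", last)

    # Who maintains this subsystem?
    maintainers = subprocess.run(
        ["perl", "scripts/get_maintainer.pl", "-f", path],
        capture_output=True, text=True).stdout
    print(maintainers)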

We're doing really well. The kind of pain points we had ten years ago just don't exist any more. And that's largely because we used to be this flat hierarchy, and we just fixed our tools, we fixed our work flows. And it's not just me, it's across the whole kernel there's no single person who's in the way of any particular workflow.

I get a fair amount of email, but I don't even get overwhelmed by email. I love reading email on my cellphone when I travel, for example. Even during breaks, I'll read email on my cellphone because 90% of them I can just read for my information that I can archive. I don't need to do anything, I was cc'd because there was some issue going on, I need to be aware of it, but I don't need to do anything about that. So I can do 90% of my work while travelling, even without having a computer. In the evening, when I go back to the hotel room, I'll go through [the other 10%].

Glyn Moody: 16 years ago, you said you were mostly driven by what the outside world was asking for; given the huge interest in mobiles and tablets, what has been their impact on kernel development?

Linus Torvalds: In the tablet space, the biggest issue tends to be power management, largely because they're bigger than phones. They have bigger batteries, but on the other hand people expect them to have longer battery life and they also have bigger displays, which use more battery. So on the kernel side, a tablet from the hardware perspective and a usage perspective is largely the same thing as a phone, and that's something we know how to do, largely because of Android.

The user interface side of a tablet ends up being where the pain points have been - but that's far enough removed from the kernel. On a phone, the browser is not a full browser - they used to have the mobile browsers; on the tablets, people really expect to have a full browser - you have to be able to click that small link thing. So most of the tablet issues have been in the user space. We did have a lot of issues in the kernel over the phones, but tablets we kind of got for free.

Glyn Moody: What about cloud computing: what impact has that had on the kernel?

Linus Torvalds: The biggest impact has been that even on the server side, but especially when it comes to cloud computing, people have become much more aware [of power consumption]. It used to be that all the power work originally happened for embedded people and cellphones; just in the last three or four years the server people have become very power aware. Because they have lots of machines together, quite often they have high peak usage. If you look at someone like Amazon, their peak usage is orders of magnitude higher than their regular idle usage. For example, just the selling side of Amazon: in late November and December, the one month before Christmas, they do as much business as they do the rest of the year. The point is they have to scale all their hardware infrastructure for the peak usage, while most of the rest of the year they only use a tenth of that capacity. So being able to not use power all the time [is important], because it turns out electricity is a big cost for these big server providers.

Glyn Moody: Do Amazon people get involved directly with kernel work?

Linus Torvalds: Amazon is not the greatest example, Google is probably better because they actually have a lot of kernel engineers working for them. Most of the time the work gets done by Google themselves. I think Amazon has had a more standard components thing. Actually, they've changed the way they've built hardware - they now have their own hardware reference design. They used to buy hardware from HP and Dell, but it turns out that when you buy 10,000 machines at some point it's just easier to design the machines yourself, and to go directly to the original equipment manufacturers and say: I want this machine, like this. But they only started doing that fairly recently.

I don't know whether [Amazon] is behind the curve, or whether Google is just more technology oriented. Amazon has worked more on the user space, and they've used a fairly standard kernel. Google has worked more on the kernel side, they've done their own file systems. They used to do their own drivers for their hard discs because they had some special requirements.

Glyn Moody: How useful has Google's work on the kernel been for you?

Linus Torvalds: For a few years - this is five or ten years ago - Google used to be this black hole. They would hire kernel engineers and they would completely disappear from the face of the earth. They would work inside Google, and nobody would ever hear from them again, because they'd do this Google-specific stuff, and Google didn't really feed back much.

That has improved enormously, probably because Google stayed a long time on our previous 2.4 releases. They stayed on that for years, because they had done so many internal modifications for their specialised hardware for everything, that just upgrading their kernel was a big issue for them. And partly because of the whole Android project they actually wanted to be much more active upstream.

Now they're way more active, people don't disappear there any more. It turns out the kernel got better, to the point where a lot of their issues just became details instead of being huge gaping holes. They were like, OK, we can actually use the standard kernel and then we do these small tweaks on top instead of doing these big surgeries to just make it work on their infrastructure.

Glyn Moody: Finally, you say that you spend most of your time answering email: as someone who has always seemed a quintessential hacker, does that worry you?

Linus Torvalds: I wouldn't say that worries me. I end up not doing as much programming as sometimes I'd like. On the other hand, it's like some kinds of programming I don't want to do any more. When I was twenty I liked doing device drivers. If I never have to do a single device driver in my life again, I will be happy. Some kind of headaches I can do without.

I really enjoyed doing Git, it was so much fun. When I started the whole design, started doing programming in user space, which I had not done for 15 years, it was like, wow, this is so easy. I don't need to worry about all these things, I have infinite stack, malloc just works. But in the kernel space, you have to worry about locking, you have to worry about security, you have to worry about the hardware. Doing Git, that was such a relief. But it got boring.

The other project I still am involved in is the dive computer thing. We had a break-in on the kernel.org site. It was really painful for the maintainers, and the FBI got involved just figuring out what the hell happened. For two months we had almost no kernel development - well, people were still doing kernel development, but the main site where everybody got together was down, and a lot of the core kernel developers spent a lot of time checking that nobody had actually broken into their machines. People got a bit paranoid.

So for a couple of months my main job, which was to integrate work from other people, basically went away, because our main integration site went away. And I did my divelog software, because I got bored, and that was fun. So I still do end up doing programming, but I always come back to the kernel in the end.

"The H Open" is Closed and Offline; Here's What I Aim to Do...

Long-time readers of this blog may recall that for some years I wrote for the UK Heise title "The H Open".  Sadly, that closed last year; even more sadly, Heise seems to have taken its archive offline.  That raises all sorts of interesting questions about the preservation of digital knowledge, and the responsibility of publishers to keep titles that they have closed publicly accessible - not least to minimise link-rot.

However, here I want to concentrate on the question of what I, personally, can do about this.  After all, my columns for The H Open, however footling, none the less form a part of the free software world's history.  Of course, I have back-up copies of all of my work, so the obvious thing to do is to post them here.  I can do that because I never surrendered the copyright, and they therefore remain mine to do with as I please.

There are quite a few of them - nearly one hundred - so I have decided to begin with two of the most popular pieces that I published in The H Open: an interview with Linus from the end of my output, and an interview with Eben Moglen from the beginning.  I will then try to work my way through the other columns as and when I have time.  Don't hold your breath....


27 December 2013

TAFTA/TTIP: European Commission Tells Us to "Get the Facts"; Here They Are

Readers with long memories may recall that in the dim and distant past "Get the Facts" was a favourite war-cry of Microsoft when attacking GNU/Linux and free software.  Of course the "facts" were anything but, and I spent quite some time debunking them.  Significantly, once the claims had been debunked often enough, and by enough people, the campaign went away, and was never heard of again.

Rather interestingly, the European Commission now seems intent on recapitulating that saga and its fate.  I've noticed several times recently that it has invoked the "facts", and I've tried to show why its idea of facts leaves much to be desired.  So far, most of my columns about TAFTA/TTIP have been over on Computerworld UK, under the rubric "TTIP Update".  There are also a fair few on Techdirt.  Here I'd like to address a rather interesting addition to the "Get the Facts" collection that doesn't really sit well in either publication, since it's in German.

It comes in response to an e-petition from campact.de that is currently storming away (at the time of writing it has nearly 300,000 signatures).  Evidently worried by that momentum, the European Commission has issued another of its point-by-point commentaries.  I will repay the compliment by rebutting its rebuttals; the Commission's claims, originally in German, are given below in English translation.

Campact claims that TTIP would in future enable foreign companies to hollow out laws in Europe. Wrong.

An existing law cannot be "hollowed out" by a trade agreement. For example, an existing ban on fracking or on chlorinated chicken cannot be called into question. The only thing the agreement underlines - and this is also in the EU's interest - is a ban on discrimination. That is: whatever applies to nationals must also apply to foreigners. This is particularly important for investment, which is crucial for economic development and the creation of jobs. Here we need stability and security, including for European investments abroad. However, investment protection does not mean conceding unlimited rights to companies, or giving them the ability to call any national legislation into question. Investment protection clauses may be invoked only in very limited areas, e.g. where a company is discriminated against relative to domestic firms, or where a company abroad is expropriated without compensation.

Well, it's true that a trade agreement can't change laws directly.  But it can have a chilling effect, as occurred in Canada.  When NAFTA was brought in, practically every proposed law to protect the environment was dropped when threats were received from US companies that they would use investor-state dispute settlement (ISDS), available under NAFTA, to sue the Canadian government.  That's a real hollowing out of laws not just in the future, but also in the present, since governments will be unwilling to run the risk of getting sued if they apply them rigorously.

The Commission also claims that ISDS is particularly important for investment; but here's what its own site says on the subject:

Total US investment in the EU is three times higher than in all of Asia.

EU investment in the US is around eight times the amount of EU investment in India and China together.

EU and US investments are the real driver of the transatlantic relationship, contributing to growth and jobs on both sides of the Atlantic. It is estimated that a third of the trade across the Atlantic actually consists of intra-company transfers.

That's all without ISDS: so why bring it in?


Campact claims that TTIP would lead to privatisations in the areas of water supply, health and education. Wrong.

The TTIP agreement has nothing to do with mandated privatisations - those are decided by governments alone. No free trade agreement obliges member states to liberalise or privatise the water supply or other public services, e.g. the public health system, public transport or education.

Again, that misses the point, probably wilfully.  This is not about formally forcing privatisations: rather, that will be the effect of ISDS, since governments risk being sued for billions of euros if they refuse to let the commons be privatised - any refusal reduces investors' expectations of future profits, which is a big no-no under ISDS.

Campact claims that TTIP will open the gates to fracking, chlorinated chicken and GM food. False.

Fracking, chlorinated chicken and GM food are banned or strictly regulated in the EU. Nor will a free trade agreement change that. Only governments or parliaments can decide to change legislation. The European Union will not put our high EU standards up for negotiation.

Even if that's true - and since the negotiations are completely secret, we have no way of telling until it's too late - it's already become clear how chlorinated chickens and GMOs will be brought to Europe: through the institution of a transatlantic Regulatory Council.  As I've already discussed at length elsewhere, this body will not only be able to veto new regulations that do not favour transatlantic trade, but will also be able to suggest *directly* to both EU and US lawmakers what new laws should be brought in - for example, laws mandating that EU supermarkets must accept chickens washed in chlorine, or beef pumped up with growth hormones.

Campact claims that TTIP will restrict the rights of Internet users. False.

Both the EU and the US already have efficient rules for protecting intellectual property rights, even if the route to that goal sometimes differs. TTIP is intended to simplify trade between the EU and the US without watering down those rules. There will be no "ACTA through the back door" with TTIP.

Well, the protection of intellectual monopolies may be efficient, but that didn't stop the US and the EU trying to ram through ACTA, did it? So what's to stop them now?  Claims that TAFTA/TTIP won't be ACTA through the back door ring a little hollow thanks to a recent leak revealing what one of the EU's chief negotiators had to say about the "Christmas list of items" that lobbyists want in this area:

According to the negotiator, the most repeated request on the Christmas list was in "enforcement". Concerning this, companies had made requests to "improve and formalize" as well as for the authorities to "make statements". The Commission negotiator said that although joint 'enforcement statements' do not constitute "classical trade agreement language" -- a euphemism for things that do not belong in trade agreements -- the Commission still looks forward to "working in this area".

Sounds like ACTA through the back door to me...

Campact claims that TTIP is undemocratic and that elected politicians have no way of influencing it. False.

The governments of the Member States are briefed "live" on the state of the negotiations before, during and after the negotiating rounds, and their positions are taken on board. The European Parliament is likewise kept regularly informed about the state of the negotiations, so that the standpoints and interests of the democratically elected European members of parliament can feed into them. In the end, it is the EU Member States and the European Parliament that have the last word on TTIP.

So let's look at those claims.  It may well be that the Member States are kept informed - but since they never pass anything on to their electorates, that hardly helps the public, who remain in the dark.  The European Parliament as a whole certainly isn't kept informed, even if one or two selected individuals are given information under embargo that they, too, cannot pass on.  And that "last word" the European Parliament has over TTIP is all or nothing: as with ACTA, it either accepts the whole package or rejects the whole package.  That means it will be unable to remove the bad bits and keep the good bits.  By deploying emotional blackmail over the good bits, the European Commission will doubtless try to force through things like ISDS, even though the European Parliament is increasingly alarmed about its dangers.

So what, then, is this trade agreement supposed to be about?

Mostly, our authorities on both sides of the Atlantic pursue essentially the same goal when they set standards and approval procedures: they want to protect people from risks to their health, ensure safety in the workplace, protect the environment, or guarantee a company's financial soundness. To achieve this, however, we often have different regulatory structures and traditions on the two sides of the Atlantic. The result - often quite unintentionally - is differing rules that frequently make access to the other market considerably harder. According to estimates, these bureaucratic trade barriers alone are equivalent to a tariff of 10-20 per cent.

Well, the aim may be the same, but the results are very different.  Here in Europe we have the Precautionary Principle: not only is it absent in the US, but US industries have said many times that one of their *demands* for TAFTA/TTIP is that the Precautionary Principle be dismantled.  Similarly, here in Europe we have the very strict REACH framework - Registration, Evaluation, Authorisation and Restriction of Chemicals.  Again, US industries have said they want to get rid of this "barrier" to their profits.

Equally, nobody would suggest that social, employment or environmental standards in the US are anywhere near as stringent as those in the EU: the idea that they are somehow "equivalent" is ridiculous, and shows that the true intent of the European Commission is to water down EU standards to US levels.

Why all this? The Transatlantic Trade and Investment Partnership could act like a stimulus package: the agreement could give the EU a growth boost of 0.5 per cent of gross domestic product - around 120 billion euros, or 500 euros per household - because, ultimately, cost savings for companies also mean cheaper products, more quality and more choice.

What that fails to mention is that the 119 billion euro GDP uplift would arrive only in 2027, and only under the *most optimistic* scenario, which assumes massive deregulation.  So it would not bring more quality, but US-style chlorine-washed chickens, hormone-injected beef and GMOs.

And the idea that every household would somehow magically receive 500 euros, as if from some TAFTA/TTIP Father Christmas, is simply dishonest: even if this impossibly ambitious deregulation were achieved, most of the GDP boost would go to the giant international companies, which would then doubtless offshore their profits - so you can forget about any "trickle-down" effect either.
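In fact, a quick back-of-envelope check - a rough sketch using only the figures quoted above, nothing more - shows how thin the promise is even on its own terms:

    # Sanity check of the Commission's own headline figures.
    # Both inputs come from the quoted claim; no new data is introduced.
    gdp_uplift_eur = 120e9      # claimed annual GDP boost, best-case scenario, by 2027
    per_household_eur = 500     # claimed annual benefit per household

    # Number of households the claim implicitly assumes:
    implied_households = gdp_uplift_eur / per_household_eur
    print(f"Implied EU households: {implied_households:,.0f}")   # 240,000,000

    # Even taken entirely at face value, 500 euros a year is:
    print(f"Per household per day: {per_household_eur / 365:.2f} euros")  # ~1.37

So even the Commission's best case works out at less than a euro and a half per household per day - and not until 2027 at the earliest.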

Meanwhile, to pay for those boosted bottom lines and the billions in bonuses for corporate fat-cats, ordinary people would find their jobs disappearing overseas, their food quality lowered, and their environment degraded by widespread fracking and by extractive industries indifferent to the damage they cause.  If anyone needs to get the facts, it's the European Commission.


24 November 2013

Towards a Post-H.264 World

In my post yesterday about Cisco making the code for its H.264 codec available, I noted that the really important news was that Mozilla was working on Daala, a fully open next-generation codec. One of the key people on the team doing that is Monty Montgomery, and he has written a really interesting blog post about the announcement and its background, which I thoroughly recommend (the discussion in the comments is also very illuminating):

On Open Enterprise blog.

Is Cisco Open-Sourcing its Code - or Openwashing?

You know that open source has won when everybody wants to wrap themselves in a little bit of openness in order to enjoy the glow. That's good news - provided it represents a move to true open source and not fauxpen source. Which brings me to the following news:

On Open Enterprise blog.

Of Surveillance Debates and Open Clinical Data

Revelations about the staggering levels of online surveillance that are now routine in this country have been met with a stunning silence from the UK government. There's an important meeting tomorrow where three MPs from the main parties are trying to get some kind of debate going on this crucial issue. It would be helpful if you could ask your MP to participate. Here's what I've written:

On Open Enterprise blog.