
The Computationalist

A torrid conspiracy of ones and zeros


The Hidden Recipient

on Oct 08, 2015

For the last however many months, a low-priority, intermittent project of mine has been to add GPG's hidden recipient feature to OpenKeychain. OpenKeychain is an app which allows email encryption to be used on Android phones.

Why is hidden recipient relevant? Probably only a small minority of GPG users ever encounter or use that particular feature. The need for it arises from wanting emails, once saved, to exist only in encrypted form and to be decrypted only temporarily in RAM for viewing. If an adversary gains access to the files then they must guess or crack the GPG passphrase in order to read the content, and that buys the user time.

Emails on Freedombone are stored encrypted by default. If arriving mail is not already encrypted then a library called GPGME is used to encrypt it. To read any particular mail you then need to enter your GPG passphrase, and the email client typically remembers that for some limited period (such as 30 minutes, enough for a typical mail reading session). It's the decryption stage which makes use of the hidden recipient feature.
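
As a rough command line illustration (the address and filenames here are made up), encrypting to a hidden recipient and then decrypting looks something like this:

# Encrypt without embedding the recipient's key ID in the output
gpg --encrypt --armor --hidden-recipient bob@example.com -o mail.asc mail.txt

# On decryption gpg can't tell which secret key is needed, so it
# tries each available one and prompts for its passphrase
gpg --decrypt mail.asc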

The upshot is that although Freedombone emails are readable on a laptop, via Thunderbird/Icedove and Enigmail, they're usually not readable on an Android system. Some may regard this as a feature rather than a bug, and I'd be sympathetic to that view. Android is not exactly regarded as a secure system, so depending on your particular threat model you might want to keep email away from it, or endure the more cumbersome interface of logging into your server via an ssh client (it works, but is not exactly a pleasant mobile user experience). However, many people do read their email on phones, so as a harm minimisation strategy, if you're going to do it you might as well do it in the safest possible manner.

Getting GPG working on Android has been a long struggle; I can see complaints about not being able to read PGP/MIME formatted mail in K-9 dating back to 2011. There are closed source Android email clients which do support PGP/MIME, but source code which can't be independently audited cannot be regarded as trustworthy. Recently, however, the situation has definitely improved, and via OpenKeychain encrypted mail is now mostly readable on Android, with only occasional exceptions.

The patch for hidden recipient in OpenKeychain can be found here, and detailed instructions on how to compile both OpenKeychain and K-9 can be found here.

https://github.com/bashrc/open-keychain/commit/4d63f56431069bff4dc3083d031b4e4649172f10

http://freedombone.uk.to/usage.html#orgheadline2

I've tested the patch fairly thoroughly, using two keys with different passphrases, and it seems to work, but at present it seems doubtful that it will be accepted upstream. The feedback on the initial version of the patch was rather negative. Be that as it may, so long as hidden recipient is implemented somehow I don't really mind.


First Class

on Oct 08, 2015 · 2 comments

I occasionally travel on trains. Yesterday, the guy at the ticket office gave me a first class pass, even though I had only paid for a second class ticket. When I queried this, he said he thought the train might be crowded and so was upgrading passengers, but I noticed that he didn't do the same for others who bought tickets for the same train a few minutes later.

My guess is that this was some sort of random promotion. Maybe it was to try to encourage more regular travellers to pay more for a first class ticket.

And so I travelled first class to Manchester. That's the first time I've ever been in the first class section of a train. It wasn't all that different. There were curtains on the windows, an oddly shaped table light, "first class" printed conspicuously on the seats, and the seats themselves were slightly wider and reclined. The density of seats was also slightly lower, and there was air conditioning (which probably made the biggest difference).

My overall impression was: why don't they do this in all the carriages? Why is there even a class system on trains? The additional cost to provide air conditioning probably doesn't justify the large difference in ticket price.

This is really a reflection of the difference between the way I used to think about things when I was younger and the way I do now. Years ago I would have considered the material differences between first and second class, noticed that they were small and then concluded that this was simply irrational and could be fixed by better engineering or technology. Now I think about things in a more sociological way. What people are really paying for in first class is not so much the air conditioning as simply to segregate themselves from the rest of the population. It's hard to maintain delusions of superiority if you're mixing with the average traveller.

One other thing I noticed, which should come as no surprise, is that there are very few first class rail travellers. I counted three others in the same carriage.

To keep in the spirit of social class divisions I read chapter one part three of Capital volume one, by Karl Marx, on my phone. It's about the nature of money.

So, having experienced first class rail travel, I think Marx would approve of my saying that the class system should not exist. It's something which should be transcended. Treat everyone the same way and make all carriages as technically luxurious as economic circumstances allow. When I think about how crowded this route sometimes gets, it's scandalous that people are left standing in other carriages while the first class one is almost empty.


A Journey to Free Software

on Oct 08, 2015

I suppose my journey to free software began in the 1980s. Not long after my parents bought a BBC Micro I was writing games on it or typing in programs from magazines, saving them to cassette tape or later to floppy disk and then passing those around via sneakernet. A few of my contemporaries did the same. Then in the late 1980s I used what was then known as "public domain" software of various sorts, although that was mostly what we would now call "freeware" (binary only gratis distributions). So as a kid in the 1980s I was aware that software could be treated like a commercial product or like a library book which you could study.

It wasn't until about 1999 that I became aware of GNU/Linux. There was a growing amount of hype about it in the technology magazines, and it started showing up at the industrial machinery exhibitions which I was attending. I looked for things to read about this new system and soon came across the online version of Eric Raymond's "The Cathedral and the Bazaar". That book changed my way of thinking about software. For the first time I recognized the difference between use value and sale value and it introduced me to the idea of a gift economy. I bought some CDs of the SuSE distribution and tried running it. It was buggy, but the concept of an operating system made mostly by volunteers outside of the usual method of software production was fascinating. In subsequent years I tried other distros such as RedHat and Mandrake, and read other books related to open source.

It was reading the biography of Richard Stallman which really introduced me to free software and its definition. Before that time I'd been vaguely aware of public domain licenses, and had even added them to my own software on the recommendation of others, but otherwise paid little attention to their content. After that I read "Free Software, Free Society" and the concepts which had begun to fall into place with Eric Raymond's book became much clearer to me. As software became increasingly ubiquitous there was a link between the nature of the software and the kinds of society which are possible with it as a mediator.

I had previously been a Microsoft user, but by the end of 2006 GNU/Linux was good enough on the desktop to be usable for me. That was my "year of the Linux desktop". After that my use of Microsoft products dwindled, until today I use no Microsoft software at all. That wasn't by accident, but instead it was a deliberate choice to try to wean myself off of the proprietary software habit.

By 2013 I was already a long time GNU/Linux user, but after the Snowden story hit I realized that I wanted to "go full Stallman". I wanted to have some degree of confidence that my systems were not jam packed full of spooks and other malware nasties, and so distros containing proprietary blobs (typically things like wifi and graphics drivers) had to be relinquished. That's an ongoing project, but running Trisquel, and Debian without the nonfree repos, I think I've mostly succeeded.

However, running free software on desktops and laptops wasn't enough. I also wanted to have more control over the internet services which I use. I'd been hosting my own web site since 2010, but the Snowden event turbocharged my efforts to have a full-featured internet experience using free software and that was the beginning of the Freedombone project. I took my feeling of annoyance and disappointment at the way things were going and tried to do something positive with it.

Today free software is more important than ever, and I'm looking forward to things like reproducible builds becoming a reality so that we can have even greater confidence in the integrity of the systems we use. Over time the badness of proprietary software business models has only become more apparent, and free software shows that there can be a different and better way of doing things which does not disenfranchise software users or producers.


Book Review: Rebel Footprints by David Rosenberg

on Oct 08, 2015

https://www.youtube.com/watch?v=UClpbN-2mgg

Especially in the run-up to general elections we're often told by commentators on all sides that we must vote because previous generations fought for it. Mostly that's a reference to the Second World War, but actually most of the things which are usually regarded as basic rights were not obtained via that particular war; they were part of a much longer multi-generational struggle spanning centuries.

Who were these often vaguely alluded to people who "fought for democracy" and what did they do? Within the 1830-1940 time frame this book describes some of the personalities and struggles involved, situated within London. At the end of each chapter there's a walk between some of the landmarks, and you can see a subset of this in the above video.

There were a few main things which people were campaigning for in those times, such as:

  • Universal suffrage (all adults being able to vote in elections), particularly for women
  • A sane length to the working day
  • Safer working conditions
  • Support for the unemployed or disabled
  • Tea breaks
  • Ending child labour
  • A living wage
  • Abolition of fines at work (a sort of wage theft for petty violations of rigidly enforced rules)
  • Better housing and fair rents
  • Paid holidays
  • Pensions

I was quite naive about it a few years ago, but one thing I now know is that significant social changes are rarely granted by authority figures who suddenly "see the light". Instead the hand of authority is forced when there is a critical mass of agitation from below, operating in a manner which can't easily be dismissed. This often entails rebellion through various kinds of non-cooperation and strikes, which in turn requires a high degree of discipline and solidarity that is not always easy to establish.

The book begins with the Chartists and ends with the East End resistance to Oswald Mosley's blackshirts. It's quite an easy read and written in an entertaining style. In the 19th century most of the effort seems to have been focused on building unions and regulating the length of the working day. It's only by the end of the 1920s that all adults are able to vote without any special conditionality.

If there's an overall message to this book I think it's that it took significant efforts over multiple decades to get the vote, but that being able to vote in the kind of system which presently exists isn't enough. Many of the above points still remain in contention and there is always a need for new rebels to take risks and push for change.


Is Privacy Dead?

on Oct 08, 2015 · 1 comment

There is a certain viewpoint that privacy is dead, and we should just all get over it and move on. It's the kind of sentiment expressed in this video.

I think there are a few fundamental things to consider. Don't think about the technology. Just think about it from a sociological perspective. Who is being watched and who is doing the watching? What sorts of power might the watchers obtain, and how might they use those powers? Is that more likely or less likely to make the world a better place? Does it make internet users more free, or does it just deepen existing power imbalances and allow for greater social control? What are the costs and benefits of promoting an idea, even in a panopticon, versus not promoting it because it may be a minority view?

If you're a "conscious human", as they sometimes say on the Stone Ape podcast, then you should be considering these things.

Personally I don't think privacy is dead. It's a very basic concept. This is why people typically prefer their own rooms rather than living communally in dormitories. It's why medical records aren't public information and neither are bank accounts (unless you use Bitcoin). It's really about being able to choose how the information which you generate is deployed: a mediation of personal power.

So at the end of the video the choice is either to boycott or to self-censor. I think there's a third option, and that's to be the internet. Yes, admittedly, the third option is something only a small technical elite currently engage in, but I don't think there's any reason why it has to remain that way, especially if you factor in "internet of things" and such.

Recently I've been investigating ZeroNet and IPFS. The latter isn't ready for mainstream usage, but the former certainly is. Both are end-to-end encrypted by default, disintermediating advertising companies and spy agencies from the process. Both could be "web scale", even if individual computers are not very powerful - like laptops or phones. These are early stage developments, but I think in another five or ten years another kind of internet is definitely possible. That future internet has privacy built in as the default. Creating a new blog or web site is a one click operation and doesn't involve any further cost or complications. Initiatives like Let's Encrypt - important though they are to the web as it currently exists - become irrelevant within the new paradigm.

The people who currently argue that privacy is dead are really making a political statement. They're saying that those with power should keep it and those without should learn how to render unto Caesar all of their information so that it can be used to keep them in their place. It's a very anti-democratic kind of message.


Book Review: Beyond Zero and One

on Oct 08, 2015

Robotics and hallucinogens. An unlikely combination, you might think, but much of robotics is really about solving problems of perception and then deciding what to do. So the premise of the book by Andrew Smart, Beyond Zero and One, seemed good enough to be worth reading.

The part which I found interesting was about the Byzantine Generals Problem - the problem of designing fault tolerant systems. Being able to know what a fault is, detect that you've made one and then arrive at a plan to correct it or ask for help is all very relevant to robotics, especially if you want robots working in collaboration with people.
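
As a point of reference, the classic result from Lamport, Shostak and Pease is that, with unsigned messages, the generals can only reach agreement when fewer than a third of them are traitors:

n >= 3f + 1   (n participants, of which at most f are faulty)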

Aside from that though I didn't find the book especially enlightening. This may be in large part because I've already read many similar things, and so the philosophers, neuroscientists and technologists described are all familiar territory.

Early on the author comes out as being a singularitarian, but he's then in the odd position of also being anti-computationalist. This is a potentially interesting dichotomy, so I was anticipating some explanation, but that never really arrives. If you take the anti-computationalist stance then you really need to have a theory for how virtual machines can arise within human culture. Instead the author seems to stick to a sort of naive physicalism, repeatedly claiming that computation and the systems which emerge from that do not really exist. He seems unfamiliar with bootstrapping or multi-layered computational systems.

As a thought experiment, or maybe a practical experiment, try telling someone playing a computer game that the game does not really exist, that their experiences with it are purely illusory. A mere sham. Ditto for any human interactions with large non-trivial computational systems. As a naive physicalist you could say that the statement is correct and that the only "real" thing occurring is atoms in motion, but that's not a very satisfying or comprehensive explanation. It doesn't even have much causal power.

There are also some arguments from science fictional evidence in relation to Mr Data from Star Trek. The author obviously realizes that he's doing this, but continues merrily anyway. Amongst the various notes which I made on this book were comments such as:

Mr Data is just a plot device in a sci-fi. His behaviour in relation to jokes doesn't necessarily say anything about actually existing AI or robotics

On the hallucinatory side of things I had been expecting to get into the physics of early vision. The "form constants" of Cowan et al. Tunnels, funnels and spirals and their relation to transcendent experiences in diverse cultures throughout human history. It could be argued that for robots to really be conscious in the same manner as humans then they also need to support similar features, at the level of "the physics of perception". Unfortunately, the author doesn't go there.

Although the names dropped are mostly the usual suspects, in relation to hallucinatory experience I had expected the author to describe some of Terence McKenna's views. It seems to me that McKenna had quite a lot to say on the topic of machines and transcendent consciousness, although I don't know if he was a fan of Kant.

At the end there is what I'd characterize as the usual sort of singularitarian stuff. The obsession with ultra-intelligent machines, takeoffs and such. To me this kind of thinking ignores everything we know from simulated intelligent system evolution in preference to a simple eschatological narrative.

Although it may seem like a curious combination, the intersection of robotics with hallucinogenic experience is a legitimate topic worthy of study. There's still a lot to be uncovered about how the brain generates and maintains perceptions, how it synchronises those with observations, and how, in combination with language, new kinds of meta-machinery can be built on top of that whole system - which is really what defines the human project from the earliest signs of artistry and collective imagination. In my estimation this book doesn't provide the answers, but perhaps there are others which can shed more light on the issues.


Impersonal Computing

on Oct 08, 2015

Microsoft has now confirmed the spy features of Windows 10. So they're in new territory. Doing this probably seems fun or cool from their perspective, but there's abundant evidence that all kinds of shady enterprises can piggyback on that kind of data extraction. When Canonical tried something vaguely similar with ads in Ubuntu a while back they were rightly pilloried for it.

For Microsoft operating systems it's now debatable whether this still qualifies as "personal computing", because when every keystroke and search is exfiltrated the user is not really in control of their system. They effectively have no privacy. Maybe it should be called "impersonal computing" or just "bugged computing". It's not something I'd want to be involved with.

Microsoft claims that they don't exfiltrate emails (oh, how gracious of them), but a few more "innovations" down the line maybe even that won't be true.

These are interesting times.


Post-Snowden

on Oct 08, 2015

https://www.youtube.com/watch?v=panT9P_VdyE

In the immediate aftermath of the Snowden news story which began in 2013 there was the idea that things should proceed on two fronts. Firstly we should try to fix broken systems, get more sites onto HTTPS and generally make good cryptosystems simpler for the masses of internet users. Secondly, there should be a political effort to roll back the laws which facilitated the things which the Snowden documents revealed.

The second effort is ongoing, and the above is an imaginative way of trying to begin to turn things around. But I think the political effort is failing. If anything, spy agencies and prurient politicians appear to have been further emboldened, with even more calls for expanded surveillance powers and denunciations of end-to-end encryption (for the average user, of course, not for the rich and powerful).

So the way I see it is that this is shaping up to be a purely technical effort. Even the much vaunted Corbynistas so far have remained silent on the question of rolling back the Black Hole of GCHQ or decommissioning Tempora. Ironically, if their leader is as radical as they claim then I expect the full anti-democratic force of the surveillance state to be deployed against him and his entourage in an effort to ensure that he is overthrown by his own party before getting anywhere close to Downing Street and the levers of power.

Fortunately, the prospects for building a better internet seem to be quite good. ZeroNet looks like it could be a good way to go, and this week I'll also be testing IPFS. Both of those systems are inherently encrypted, should scale well and are also quite user friendly. So this could be the beginning of web 3.0.


Freedombone backups with Obnam

on Oct 08, 2015

For the Freedombone system I've now moved the default backup system from rsyncrypto to Obnam. There are a few main advantages.

  • Obnam uses a GPG key for encryption (in this case 4096-bit RSA), as opposed to the OpenSSL-based rsyncrypto. GPG keys are generally more trusted.
  • Filenames are not conspicuously visible for remote backups.
  • Multiple backups of the same files can exist, so in principle you could restore a file from a specific backup.
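
As a rough sketch of the commands involved (the key ID and paths here are made up), a backup and restore with Obnam look something like:

# Back up a directory, encrypting with the backup GPG key
obnam backup --repository /mnt/backup --encrypt-with 0x12345678 /home/username

# List the stored backup generations
obnam generations --repository /mnt/backup --encrypt-with 0x12345678

# Restore a chosen generation to a separate directory
obnam restore --repository /mnt/backup --encrypt-with 0x12345678 --generation 2 --to /home/username/restored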

Also I've improved the backup script so that it's in a more maintainable form. Previously it comprised a growing series of copy-and-pastes as more applications were added, but now the common operations are factored into functions.

The method of doing backups hasn't changed. For a backup to a LUKS-encrypted USB drive:

ssh username@domainname -p 2222
su
backup

Then enter the passphrase for the drive. To subsequently restore:

ssh username@domainname -p 2222
su
restore

On initial installation of Freedombone a copy of the backup GPG key will be placed in your home directory. You should move those files to a safe place so that if you need to completely restore the system at some future time you'll have all the needed keys.

For local backups to USB drives this provides defense in depth with two levels of encryption. First you have the drive encryption (LUKS) and then you have stronger GPG encryption of the backup data. Even if an adversary can guess or crack the drive passphrase they'll be much less likely to get past the GPG encryption if they don't have the key. This means that if you accidentally lose the USB drive somewhere the chances of that resulting in a data compromise are small.
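
For anyone unfamiliar with LUKS, unlocking the first layer amounts to something like the following (the device name is made up), with the GPG-encrypted Obnam repository living on the unlocked drive:

# Unlock the LUKS drive, which prompts for the drive passphrase
cryptsetup luksOpen /dev/sdb1 backupdrive

# Mount the decrypted mapping
mount /dev/mapper/backupdrive /mnt/backup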

Currently the backup key is not distributed via the unforgettable key mechanism, but that will likely be added later. Unforgettable key is an optional extra which ensures that the entire system is recoverable even under the worst conditions (e.g. your house burns down in a bush fire and you lose everything).

Remote backups haven't been well tested yet, but local backups with Obnam certainly appear to be in working condition.

Supplemental: It turned out to be quite simple to add the backup GPG key used by Obnam to social key management (aka "unforgettable key").


Is Software Engineering really Engineering?

on Oct 08, 2015

So the question has been posed:

Is software engineering really engineering at all, or is it just a "cowboy" profession in which nobody has a clue what they're doing?

Software engineering as a profession, or just as a phenomenon, is quite young. It's really only a few decades old. Middle aged by human standards. But when you compare it to other branches of engineering it's really just a baby.

For control systems working in the background with little or no direct human interaction I think it will be possible to formulate sophisticated engineering best practices for their creation and maintenance. For socio-technical systems which include a lot of human interactions I think there will be no universally definable best practices, because culture is always changing and software mediates the culture. Cultural systems built out of language operating under unconstrained conditions are inherently hard to predict (they are often "AI complete"), and so the degree to which planning and foresight are possible (in line with the common notion of a mature engineering profession) is quite limited.

Despite the unpredictability it's possible to distill a few general engineering principles:

  • Ensure the system is auditable. Software which can't be independently audited can't be improved.
  • Test your software. Don't just assume it works in all conditions.
  • Make frequent releases of small changes.
  • Don't assume you can plan far ahead in minute detail (e.g. "We will definitely release feature X on date Y").
  • Make it easy for users to provide feedback. This is where your information really comes from.
  • Avoid grandiose strategic planning. Think tactically most of the time. Assume there will be failures and redesigns.
  • Have a simple upgrade process.

Principles of Open Source

on Sep 23, 2015

Pieter Hintjens recently published some rules for open source success on his blog. Rather like Tom Barbalet, I think I was "open source" before that term even existed, and so I've seen a lot of things tried, seen the rise and fall of projects, and been a maintainer, a contributor, an evangelist and so on. In the light of all that I think Hintjens' rules hold up pretty well. Since information on the interwebs is ephemeral, I'll reproduce them here.

  1. People Before Code

This is the Golden Rule, taught to me by Isabel Drost-Fromm. Build community, not software. Without community your code will solve the wrong problems. It will be abandoned, ignored, and will die. Collect people and give them space to work together. Give them good challenges. Stop writing code yourself.

  2. Use a Share-Alike License

Share-alike is the seat belt of open source. You can boast about how you don't need it, until you have a bad accident. Then you will either find your face smeared on the wall, or have light bruising. Don't become a smear. Use share-alike. If GPL/LGPL is too political for you, use MPLv2.

  3. Use a Zero-Consensus Process

Seeking upfront consensus is like waiting for the ideal life partner. It is kind of crazy. Github killed upfront consensus with their fork/pull-request flow, so you've no excuse in 2015. Accept patches like Wikipedia accepts edits. Merge first, fix later, and discuss out of band. Do all work on master. Don't make people wait. You'll get consensus, after the fact.

  4. Problem, then Solution

Educate yourself and others to focus on problems not features. Every patch must be a minimal solution to a solid problem. Embrace experiments and wild ideas. Help people to not blow up the lab. Collect good solutions and throw away the bad ones. Embrace failure, at all levels. It is a necessary part of the learning process.

  5. Contracts Before Internals

Be aggressive about documenting contracts (APIs and protocols) and testing them. Use CI testing on all public contracts. Code coverage is irrelevant. Code documentation is irrelevant. All that matters is what contracts the code implements, and how well it does that.

  6. Promote From Within

Promote contributors to maintainers, and maintainers to owners. Do this smoothly, easily, and without fear. Keep final authority to ban bad actors. Encourage people to start their own projects, especially to build on, or compete, with existing projects. Remove power from people who are not earning it on a daily basis.

  7. Write Down the Rules

As you develop your rules, write them down so people can learn them. Actually, don't even bother. Just use the C4.1 rules we already designed for ZeroMQ, and simplify them if you want to.

  8. Enforce the Rules Fairly

Use your power to enforce rules, not bully others into your "vision" of the project's direction. Above all, obey the rules yourself. Nothing is worse than a clique of maintainers who act special while blocking patches because "they don't like them." OK, that's exaggeration. Many things are much worse. Still, the clique thing will harm a project.

  9. Aim For the Cloud

Aim for a cloud of small, independent, self-organizing, competing projects. Be intolerant of large projects. By "large" I mean a project that has more than 2-3 core minds working on it. Don't use fancy dependencies like submodules. Let people pick and choose the projects they want to put together. It's economics 101.

  10. Be Happy and Pleasant

Maybe you noticed that "be innovative" isn't anywhere on my list. It's probably point 11 or 12. Anyhow, cultivate a positive and pleasant mood in your community. There are no stupid questions. There are no stupid people. There are a few bad actors, who mostly stay away when the rules are clear. And everyone else is valuable and welcome like a guest who has travelled far to see us.

One point where I'd diverge from this is that instead of calling the process "zero-consensus" I'd say minimal consensus. Obviously you do need some amount of consensus to do anything with more than one person, although it shouldn't require the contributors to hold identical political, religious or other viewpoints beyond a very minimal set.

I know that some maintainers of larger projects do sometimes stop writing code themselves, but I'd advise against that. Coding is a craft, and you don't learn how code works unless you're actually doing it. Since technology is always changing there is never any point at which the learning stops. If you don't know what good or bad code looks like then you're not going to be a good maintainer.

I'd also add high on the list, and maybe as the first item, "always be biased towards action". Avoid spending long amounts of time vacillating, pontificating, procrastinating, preparing, waiting on other people or for someone else's permission. Get going on something quickly, and then subsequently refine your plans in accordance with the outcomes.


Two paths to a new internet

on Sep 15, 2015

It now seems clear to me that two possible paths to the next generation of internet technology are opening up.

The first is the Freedombox-like approach, in which you have some box in your home which stores and serves your content and which federates with other similar boxes via open protocols. You maintain ownership of your data along with the potential for good privacy controls (similar to what happens currently with Zot).

The second is a purely peer to peer system similar to ZeroNet in which there are no servers and data gets replicated across the network, using cryptography to ensure integrity and ownership. Even when you are not connected to the network other peers may still be seeding your public content so that you still can have a 24/7 internet presence.

The Freedombox-like approach is more conventional, with its client/server architecture and use of legacy systems, but I think ultimately the peer to peer approach may be more practical and easier to secure. The downside of the peer to peer approach is that this is largely a new frontier, so there are not many pre-built software parts from which it can be assembled. That may also be a strength though, because security can be built in from the very beginning, whereas with client/server security was an afterthought and still remains poorly implemented even after multiple decades (examples are email and the certificate authority system for web sites).

Especially when combined with systems like Tor, I think the problem of mass surveillance will ultimately be partially mitigated via technology. Perhaps onion routing just becomes another layer of the network stack. Proponents of indiscriminate wiretapping call this "going dark", as they become less able to read and leverage private correspondence for their own commercial or political gain. To some extent I think it will become possible to make metadata analysis no longer useful, and so the future does not look quite as relentlessly depressing as it may have seemed in the immediate aftermath of Snowden. But be in no doubt, your information - however mundane - will remain a site of contestation and desire for a variety of groups within society for the foreseeable future.


Corbyn et al

on Sep 13, 2015

So apparently Jeremy Corbyn won the Labour party leadership. I notice many folks in the Twitterverse and elsewhere hailing this as a great moment in politics. My favourite is:

"YES YES YES YES GO JEREMY STICK IT TO THEM THE BASTARDS AND THOSE THREE CHILDREN YOU RAN AGAINST. BRING THE REVOLUTION."

You'll have to forgive me for being cynical about the machinations of the Labour party. I'm old enough to remember the mania around Tony Blair in the run up to 1997, and embarrassed to admit that I voted for him. That turned out to be a big mistake.

Whether I ever vote Labour again will depend on what sort of policies Corbyn comes out with. If he comes out decisively against Workfare then there's a chance I might. There's also the question of mass surveillance and whether a future Labour administration would try to reinstall the "central [biometric] identity database" and identity card system.

If Corbyn is as radical as some of his proponents believe then he will face considerable opposition from within his own party and his leadership might not last for long. In the bigger picture though, the Labour party does need to move leftward if it's to stand any chance of regaining Scottish seats and winning a big enough majority in a "first past the post" electoral system. The Blairite tendency may not like that, but it's just the reality of the situation. If Corbyn can form an alliance with the SNP and/or the Greens then he may have enough votes for a clear win in 2020.

But five years is a long time, so there's plenty of time for Corbyn to turn into another crazed authoritarian - an Ed Miliband or Blair 2.0.

Personally I'm not convinced that Corbyn is a radical, and I suspect it will take more significant efforts at a grassroots level (aka "on the streets") to move things in a better direction. But, of course, I could always be wrong.


The Freedombone Mesh

on Sep 05, 2015 · 1 comment

For the last week I've been on holiday, or vacation if you prefer. Since I'm not all that interested in visiting exotic places or frequenting bars a holiday for me means some combination of reading and software development.

On the reading front I read Capital in the 21st Century by Thomas Piketty, Getting By by Lisa McKenzie and The Mind's Eye by Oliver Sacks.

In terms of software the main project was to get some communications services running on a mesh network. Mesh networking is really the last piece of the Freedombox feature set which didn't exist within Freedombone. There's a video of James Vasile from 2011 in which he says mesh networking for Freedombox "might be a lie", and there's certainly some disparaging stuff about it written on the Debian Freedombox wiki with regard to the OLPC experience.

My expectation about OLPC is that firstly they were probably using low powered hardware which was many years old, and secondly that they might have been trying to shoehorn systems designed for a client/server architecture into a mesh. Nearly all web services currently in existence assume a client/server architecture, and just moving that onto a mesh could indeed be very inefficient.

For the mesh version of Freedombone I've tried to stick to services which are inherently designed for peer to peer, and which could scale up to some non-trivial number of peers bigger than "a few students sitting around under a tree". For chat and VoIP I've used Tox and for blogging I've used ZeroNet. With a few test peers this system seems to work well and is for all intents and purposes instantaneous, although Avahi seems quite slow to update itself when peers enter or leave the network.
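
Incidentally, the slow updates are easy to observe by asking Avahi what it can currently see:

# List all services currently visible on the mesh, then exit
avahi-browse --all --terminate

# Also resolve hostnames and addresses for each service
avahi-browse --all --resolve --terminate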

So after a week of tinkering I think I have something which could be deployable. The initial web page which shows other peers in the network is about as primitive as it gets and so could certainly be poshed up, but the basic functionality is there.

The Freedombone mesh is currently an offline system in the sense that it's not connected to the wider internet. It's something which you could use for purely local communications if the internet wasn't available or became too untrustworthy. Possibly at a later stage I might add internet tunneling to other mesh networks to form an intermesh.

Strategically, it may be a good idea to keep wireless mesh networks separate from the internet and have users own the infrastructure. That avoids the mesh being seen as just a cheaper way to get internet access (which could mean trouble with ISPs), or as some new and loathsome vehicle for the spurious business models which already bedevil the existing internet.

Instructions on installing a Freedombone mesh can be found here.

http://freedombone.uk.to/mesh.html


More Bugs

on Sep 01, 2015

https://www.youtube.com/watch?v=ZujfUHIaAXM

It's another of those always listening things which looks up stuff on the internet. Part of me really likes this. It's a form of robotic automation which potentially could be quite practical if the speech recognition is good enough.

But then, from the blackhat side I think that if the security on this isn't really, really good then the potential for bad outcomes is fairly significant. Think about it. An always-on microphone next to your bed, and possibly also next to your children's beds (see the video on the kickstarter page), which is connected to the internet. What could possibly go wrong? Ashley Madison, but with audio. Or maybe your kids' worst tantrums broadcast to the world so that their school mates can mercilessly taunt them. Or potentially creepier stuff.

So nice technology, but use with caution.

Also a slight correction. The Raspberry Pi isn't open hardware, even though many people mistakenly believe that it is. It's proprietary hardware which an open source operating system can be installed onto. It requires a closed proprietary blob even to boot. I'm not a Pi-hater, but that's just for clarification.


Of Idiots and Evildoers

on Aug 19, 2015

https://firstlook.org/theintercept/2015/08/19/jeb-bush-comes-encryption

It's easy to be disparaging about the folks who want to ban or backdoor encryption systems. Although it may look very much as if they are, I'm assuming that most of these people are not total idiots but instead are merely naive about what encryption is and how it's used.

In 2015 I think it's fair to say that encryption of a shaky though usually adequate level safeguards economic transactions worldwide. If encryption were banned or backdoored then this would unleash demons so horrible that I expect the law would be repealed faster than any other in history, and whichever politicians were instrumental in its passing would lose any reputation they previously might have enjoyed.

As people keep saying, there is no such thing as encryption which only the "good guys" can use. If there's a backdoor then probably sooner rather than later it will be found by all sorts of researchers. If there's no encryption then you're completely in the jungle, with no ability to have any confidence in the integrity of any important communication.

I agree with Bush that "I think this is a very dangerous kind of situation", but the danger comes not so much from evildoers cracking strong end-to-end encryption as from naive people like himself trying to have encryption banned, which would put us all in danger.


Novena: Like the Freedom but not the Design

on Aug 18, 2015

It has been a busy day. I saw a demo of a Novena laptop. It's fully free (as in beards) and has some fancy features, but its design isn't terribly practical.

Firstly, it's not really a laptop format. The screen faces the opposite way to what you'd expect, and the electronics are completely exposed. That's ok though if you plan on doing hardware hacking with this machine.

The connectors and screen seem very fragile, so this isn't something you'd really want to move around much. Apparently even picking the system up in a fairly conventional way can damage the screen, and you also need to be careful not to warp it.

Personally, if I was designing a laptop from scratch which was intended for hackers to use, I'd make it far more rugged and industrial-grade. In addition to the software freedom aspect I'd want to be able to use it in any environment, and move it around at short notice without fear of breaking any fragile cables.

Another observation is that the Novena is so Heath Robinson in appearance that I wouldn't recommend taking it on a train or into an airport. It could certainly be mistaken for something suspicious.


The Book Dilemma

on Aug 16, 2015

I confess to being a rather bookish person. I liked books from a young age and over the course of decades have amassed quite a quantity of deceased trees. Most of my books are obscure titles on obscure science or technology related topics, rather than best-selling bodice-rippers or pot-boilers.

But times they are a-changing and books in electronic form now seem to be a no-brainer. About five years ago I tried to sell off most of my dead tree material and go purely metaphysical, but alas the obscure books are hard to sell or even give away. Probably the most logical thing to do would be to try to give them to a local library. Also in recent years I've been accumulating new books in epub or pdf formats and using Owncloud as a method by which to read them anywhere on laptops, netbooks or tablets. I also now mostly read books on a smartphone, but the battery life isn't great and the screen isn't necessarily ideal for reading.

So I'm now considering getting an ebook reader. That is, something actually designed for reading and not much else. I've been an ebook reader holdout for the longest time - initially from lack of money, but more recently out of dislike for the seemingly very unfree and locked-in cloud based business model of the Kindle. I recently read an article about just how dreadfully Amazon treats its employees, but there are other factors, such as surveillance of the reader and proprietary formats, to consider. So if I claim to be at all rational then I shouldn't get a Kindle, but at the same time it looks as if in many cases, particularly from smaller publishers, the only alternative to a chunk of dead tree via snail mail is a Kindle file. Or maybe not even a downloadable file, but some link to a horrifically DRM'd thing accessible only on specific reader hardware under highly restrictive terms and conditions.

So if I don't want to accumulate paper going forward it's a dilemma. A first world wet liberal type of dilemma, you could say. It's annoying that the realm of publishing seems to have been so corrupted by a single monopolistic company. So I may remain a holdout, but I'm also reminded that it's often better to think strategically but act tactically in an imperfect world.


The Future Web

on Aug 15, 2015 · 1 comment

https://www.youtube.com/watch?v=ubxWu0kne84

Anyone who is familiar with the technology of the web and who also paid passing attention to the Snowden revelations knows that there are critical aspects of how the web works which are rather broken, and where "bad actors" of multiple types are actively taking advantage of that. So the question arises: can we make a better web?

Recently I've been experimenting with something called ZeroNet. The main idea is that instead of the client/server architecture which the current web depends upon, we may be able to do everything in a peer-to-peer way. Replace the transport of web pages (http/s) with something more like bittorrent, and use elliptic curve cryptography, similar to Bitcoin, to ensure the integrity and ownership of data. In this model web sites wouldn't exist on a single server or cloud, but would be replicated (seeded) throughout the network.
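
As a sketch of the general idea (not ZeroNet's actual code), signing site content with a Bitcoin-style secp256k1 key needs nothing more than openssl:

# Generate a secp256k1 private key, the same curve Bitcoin uses
openssl ecparam -genkey -name secp256k1 -out site.key

# Sign the site content with the private key
openssl dgst -sha256 -sign site.key -out content.sig content.json

# Peers verify the signature against the site's public key
openssl ec -in site.key -pubout -out site.pub
openssl dgst -sha256 -verify site.pub -signature content.sig content.json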

A ZeroNet-style web architecture, similar to what's described in the above talk by Brewster Kahle, would appear to solve a number of problems. There's the history problem of archiving the past; the backup problem of ensuring that you don't lose your data; the censorship problem of ensuring that sites can't easily be blocked, so that social problems have to be confronted honestly rather than swept under the carpet; and the privacy problem of mass surveillance and having "the right to read".

At one point in the talk Jacob Appelbaum asks a question along the lines of "do we need this, since Tor already solves some of these problems?" and I think Brewster's reply to that is quite good. Tor is based around the paradigm of a web site existing on some particular server, but in a peer-to-peer web that data would be distributed, and so knowing which computer seeds which sites is not all that useful and the whole onion routing thing becomes far less relevant (except for things like VoIP). However, a thought occurs that it may also be possible to have onion layered web site seeds, which would make any mass surveillance of the web many orders of magnitude harder than it typically is today. There's also another benefit in that if a site isn't located on only one computer then things like DDoS attacks become pretty irrelevant and the more peers there are the faster things are likely to be (which is opposite to the current way things work).

So I'm quite optimistic about these new ideas of how a next generation web might work. Appelbaum's later political point that if the bad guys can't target information then they'll target people is true, but the counterpoint to that is that the 2am wakeup call is a business model which isn't "web scale" because it requires boots on the ground and there are only so many goons for hire.


The Australia Project

on Aug 14, 2015

https://www.youtube.com/watch?v=1Aup_EaO680

These days I don't pay a lot of attention to Singularitarian stuff, but this interview between Marshall Brain and Nikola Danaylov was quite interesting because it gets into the question of basic income and the difficulty of whether or not that can become a reality.

In the first scenario - what might be called Dormitarianism - most of the human population ends up surplus to requirements and housed in prison-like dormitories. Think of Yarl's Wood, but on a much bigger scale, where the occupants aren't criminals but have nevertheless been corralled by one means or another. Another comparison might be to the impoverished reservations of Native Americans, city ghettos or "sink estates". If you think about it, Dormitarianism is already a phenomenon here now.

Increasing Dormitarianism seems to be very much the direction in which things are headed right now. So we have increasing automation together with workfare, zero hours contracts and militarised police, along with mass surveillance. The policing and surveillance are, I think, mostly there to keep the lid on the situation and to predict and prevent activities which have "social action potential". The mainstream narrative is all about blaming the poor and claiming that welfare is "unsustainable", austerity "unavoidable" and so on.

But as Marshall mentions, I think the narrative which supports that kind of ideology is losing its grip. At some point the Dormitarian population will become sufficiently large that even the most brutal robot and surveillance enabled repression will no longer be able to contain it. Even resorting to pure eliminationism would only tend to expedite the eventual point of revolution.

Between now and the time of The Revolt of the Dormitarians I think we could expect the repeal of universal suffrage within the nominally democratic countries, as a strategy to try to maintain the growing inequality. We might expect a mainstream media narrative consisting of complaints about undesirable people "voting the wrong way" or being "too ignorant" to vote. Another strategy, which we can see in action today, is to manoeuvre the main political parties into a position where it's hard for the populace to vote against things like workfare or mass surveillance, because all the main parties believe in continuing them.

Marshall Brain's second scenario is The Australia Project, in which there is something like a basic income together with automation fulfilling the essential human needs. That's a much more desirable long term destination, although it does require political change. Critically, it requires change in how people conceptualise ownership, and that might not be easy but we can perhaps see the early beginnings of it with things like open source software and other forms of peer production. Nikola talks about the bad PR which anything which might seem like socialism has in the US, but you can change the name of basic income to "citizen's dividend" or "negative income tax" to ease the ideological transition.

It's also worth reiterating that none of these ideas are new, and that futurists have been writing about similar things for decades. It's just that in the past the transition to reduced wage labour and increased leisure time seemed easier, whereas from today's perspective it appears that the transition will be more antagonistic. We now know that instead of gradually gaining more leisure time, most people work similar hours to a few decades ago, while access to employment is much more strictly rationed and there is more unemployment and underemployment.


In the Beginning

on Aug 13, 2015 · 2 comments

This is the first time I've tried ZeroNet, and it looks like it has potential. Perhaps this could really be a genuinely decentralised web for the masses, where creating a new site is as simple as cloning an existing one and then adding your own content.
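
For the curious, the basic workflow seems to be just a few commands (the site address below is made up):

# Create a new site, which prints its address and private key
python zeronet.py siteCreate

# Sign updated content with the site's private key
python zeronet.py siteSign 1ExampleSiteAddressXXXXXXXXXXXXXXX

# Announce the update so that peers start seeding it
python zeronet.py sitePublish 1ExampleSiteAddressXXXXXXXXXXXXXXX

Notably there's no hosting account or certificate authority anywhere in the loop; everything needed for publishing is local.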
