Thursday, 31 October 2013

Booting ARM64 Ubuntu Core Images (on the Foundation Model)

For a while now we have been hearing about the new arm64 machines coming
from ARM licensees.  New machines are always interesting, new architectures
even more so.  For me this one is particularly interesting as it seems to
offer a much higher bang per watt consumed than we are used to, and that
can only bring down hosted server prices.  Cheap is something we can all
relate to.

There has been some awesome work going on in both Debian and Ubuntu to
bring up this new architecture as official ports.  In the Ubuntu Saucy
Salamander development cycle we started to see arm64 builds, and with
the release of Ubuntu 13.10 there was a new Ubuntu Core image for arm64.

This is awesome (to geeks at least), even if right now it is almost
impossible to actually get anything which can boot it.  Luckily for us ARM
offers a "Foundation Model" for this new processor.  This is essentially an
emulator akin to qemu which can run arm64 binary code on your amd64 machine,
albeit rather slowly.

As one of the Ubuntu Kernel Engineers, the release of the Ubuntu Core image
for arm64 signalled time for there to be an official Ubuntu Kernel for
the arm64 architecture.  This would allow us to start booting and testing
these images.  Obviously as there is no actual hardware available for
the general public, it seemed appropriate that the first Ubuntu Kernel
would target the Foundation Model.  These kernels are now prototyped,
and the first image published into the archive.

As part of this work, indeed to allow validation of my new kernel, I was
forced to work out how to plumb these kernels into the Ubuntu Core image
and boot them using the Foundation Model from ARM.  Having done the work
I have documented this in the Ubuntu wiki.
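
To give a flavour of what is involved, the final boot ends up looking
something like the sketch below.  Take the file names and options here as
illustrative only, from my memory of the setup (img-foundation.axf being
the kernel wrapped in the boot-wrapper the model expects, and the .img
file the Ubuntu Core filesystem written out as a raw disk image); the
wiki page below has the authoritative recipe:

# Boot an arm64 Ubuntu Core disk image on the ARM Foundation Model (sketch).
./Foundation_v8 \
    --image img-foundation.axf \
    --block-device ubuntu-core-arm64.img \
    --network=nat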

If such things excite you and you are interested in detailed instructions
check out the Ubuntu wiki:

    http://wiki.ubuntu.com/ARM64/FoundationModel

Have fun!

Wednesday, 17 April 2013

BT Fail (or "I have never been so angry")

For those of you who do not have to hear me whine on a day to day basis about, well frankly, everything, you will not be aware that I have been waiting for broadband to be connected to my new house.  Today was the 5th week of waiting for this simple-seeming task to be completed.  (Please don't make me even more angry by telling me how your US supplier pays you compensation every day it takes longer than ONE, I expect some level of suck from my UK service providers, else I would emigrate.)  Along the line I have had to have a huge hole made in my brand new house, and had to have countless engineers attend to try and supply my service.  Today should have been the end of this debacle, I should now have super fast Internet, I should be happy.

I am angry, so angry that it is unclear I have ever been more angry.  If my house was not so new I suspect that objects might have been thrown, hard.

Today was meant to be the third attempt to hook up my internet.  Today at 2pm I get a call:
"Hello we aren't coming today *beam*.
No sir, we don't know why, the system says 'Technical Problems'.
Someone will call you within 30 hours to tell you why, honest.
Sorry we do understand this isn't what you were hoping for."
Frankly you do not understand, you have no clue how I am feeling, so let me enlighten you.  My blood is boiling, if I had a heart condition you would likely have killed me.  I have had to go out for a walk to avoid breaking things.  I am now writing this in catharsis.

As I tried to explain to the caller, it is not so much that you are cancelling my slot, shit happens, people go sick, etc etc, it is that you have no idea why it went wrong and you won't know for 24 earth hours, that you cannot tell me when you are going to attend to actually complete the work.  This is utterly unacceptable.  Actually when I phoned your own helpdesk they seemed to be able to find out that "Your appointment was cancelled because we [BT] failed to confirm it with the suppliers".  The website says that "Your appointment was no longer needed because the engineer could enable your service from the exchange."  Who knows what is true.  Whatever is true, I do not have the promised service, I did not have an engineer attend despite confirming the appointment was scheduled on four separate occasions over four consecutive days including Monday this week, on some days by more than one person at BT actually calling the engineers to check.

BT you SUCK.  If Virgin (perhaps one day I will be calm enough to tell you how they suck) didn't suck harder you would have lost my business today.

Monday, 18 February 2013

IPv6 exceeds 1% of Google search traffic (continuously)

In the ongoing march towards an IPv6 only Internet, IPv6 is not a speedy traveller but it did reach a mini-milestone this week.  Google reported that IPv6 traffic was greater than 1% of its total traffic all week, on a regular ordinary week (well, the week has the same basic shape as most non-holiday weeks).  Usage continues to edge higher and higher:
http://www.google.com/ipv6/statistics.html
Do I hear 2%?  (Probably not for a little while.)  Yes I know it is sad to be interested in this graph but hey, one has to be into something.

Tuesday, 12 February 2013

GPG key management

As all good boys did, I (relatively) recently generated a nice shiny new GPG key, all 4096 bits of it.  I have switched everything over to this key and have been happy.  Today I was wondering whatever happened to the old key.  After some minutes trying to remember what the passphrase was (oops) I finally managed to find and open the key.

Time, it seems, to revoke it so that I never have to worry about it again (and before I forget the passphrase for good).  Revoking a key essentially puts an end date on the key: it says any use of the key after this date is definitively invalid.  Luckily revoking a key (that you can remember the passphrase for) is relatively simple:
gpg --edit-key KEYID
gpg> revkey
gpg> save
gpg --send-keys KEYID
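Once that revocation has propagated, the old key shows up as revoked
anywhere it is fetched; a quick check (KEYID standing in for the old
key's id):
gpg --recv-keys KEYID
gpg --list-keys KEYID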
While I was at it I started to wonder about losing keys and how one guards against total loss of a key.  The received wisdom is to set an expiration date on your key.  Expiry dates may be extended at any time, even after the key has technically expired, assuming you still have the private key.  If you do not, then at least the key will automatically fall out of use when it expires.  Adding an expiry date to a key is also pretty simple:
gpg --edit-key KEYID
gpg> key 0
gpg> expire
...
Key is valid for? (0) 18m
gpg> key 1
gpg> expire
Changing expiration time for a subkey.
...
Key is valid for? (0) 12m
gpg> save
gpg --send-keys KEYID
Note here I am setting the subkey (or keys, key 1 and higher) to expire in a year, and the main key to expire in 18 months.
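It is worth sanity checking that the new dates stuck before relying on them; they show up in a normal key listing (KEYID as before standing in for your own key's id):
gpg --list-keys KEYID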

At least now the keys I care about are protected and those I do not are put out of use.


Monday, 11 February 2013

HTML should be simple even with a javascript infection

Having been there in the simple days when a web server was a couple of hundred lines of code, and when HTML was a simple markup language pretty much only giving you hyperlinks and a bit of bold, I have always found javascript at best an abomination and certainly to be avoided in any personal project.

My hatred mostly stems from just how unclean the page source became when using lots of fancy javascript and how javascript dependent everything became as a result.  Turning javascript off just broke everything, basically meaning you had to have it enabled or not use the sites.  This is just wrong.

Recently I have been helping a friend to build their own website, a website which for reasons I find hard to understand could not be simple, with just links and bold, but really had to have popups, fading things, slides which move, all those things you really can only do easily and well in javascript.  Fooey.

Reluctantly embracing these goals I spent some time implementing various bits of javascript and ended up, as predicted, in some kind of maze of twisty passages all the same.  I was fulfilling my own nightmare.  Then something happened: I accidentally discovered jquery.  Now jquery is no panacea at all; yes, it does simplify the javascript you need to write so it is clearer and cleaner, which is no bad thing, but the real jolt was the methodology espoused by the community there: to write pages which work reasonably well with just HTML and CSS, and then during page load, if and only if javascript is enabled, rewrite the pages to add the necessary magic runes.  Now you can have nice maintainable HTML source files and still have fancy effects when available.

I have used this to great effect to do "twistable" sections.  When there is no javascript you get a plain document with all of the text displayed at page open.  If it is available then little buttons are injected into the sections to allow them to be opened and closed on demand, and the body is hidden by default.  All without any significant markup in the main HTML source; what little semantic markup there is has no effect:
<h2 class="twist-title">Section Title</h2>
<div class="twist-body">
Section Text
</div>
Now that is source you can be proud of.  Yes, there is some site-wide jquery instantiation required here which I will avoid including in its full glory as it is rather messy.  But this example shows the concept:
$(function() {
        // Inject the open/close buttons and hide the section bodies;
        // with javascript disabled none of this runs and the plain
        // document remains fully visible.
        $(".twist-title").prepend("<span class=\"twist-plus\">+</span> " +
                "<span class=\"twist-minus\">-</span> ");
        $(".twist-minus").hide();
        $(".twist-body").hide();
        // Clicking a title swaps the +/- indicator and toggles the body.
        $(".twist-title").click(function (event) {
                $(this).children('.twist-plus').toggle();
                $(this).children('.twist-minus').toggle();
                $(this).next().toggle();
        });
        $(".twist-title").css("cursor", "pointer");
});
Ok, this is not so easy to understand, but the majority of the code, the HTML pages that the people who write the content have to look at, is easy to understand.  I think you will agree this is a win all round.