Tuesday, 13 June 2017

Gnome Desktop -- Terminal icon wars

I have recently switched my default desktop environment to Ubuntu Gnome as part of our ongoing dogfooding program.  First impressions are good.  As always, different is confusing, and I keep hitting buttons that used to do one thing and now do another; five years with one desktop will do that to a man.

As a consummate command-line junkie I really use my desktop environment to hold my terminal windows, lots of them.  It is really important that my desktop environment collates them sensibly; sensibly to me, that is.  I use a lot of command-line tools for email, IRC and the like.  Those are necessarily hosted in terminal windows but are not semantically terminal windows.  I want them grouped together away from the actual terminals, and preferably under my preferred icons.

I am pleased to say I have been able to persuade Gnome of my predilections.  It has been a long and frustrating journey.  I have to thank Laney for his support in this endeavour, for answering interminable IRC questions and stopping me from sticking a fist through my screen.

Gnome Terminal

The default terminal application in Gnome is Terminal (gnome-terminal to me).  This has lots of nice-sounding options which ought to give me the control I want.  It supports the --class option to set the window manager class.  In the X11 world this is meant to tell the window manager what type of window this is, and it is common to use it to group windows.  Great:
gnome-terminal --class weechat -- weechat
Of course nothing is ever simple.  gnome-terminal is now smart: it starts a server which spawns new windows for you, thus defeating the window manager class option.  After some playing, and a lot of whining at people further down the road, I was pointed to the --app-id option, which allows me to separate instances by use case, with a server for each.  After some work (and getting a bug fixed in the Ubuntu gnome-terminal wrapper) I was able to use the two in combination:
gnome-terminal --app-id com.example.Terminal.weechat \
    --class weechat -- weechat
Now my windows are separated and grouped in the alt-TAB popup.  Sadly they are all called gnome-terminal.
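Incidentally, if you want to check which window manager class a window actually ended up with, xprop (from the x11-utils package on Ubuntu) will report it; run the following and then click on the window in question:

```
xprop WM_CLASS
```

The class you passed with --class should appear in the output; if it does not, the grouping will not work.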

Gnome Dash

In order to distinguish the various otherwise visually identical Terminal icons and their associated terminal windows, I wish to have a specific icon for each group.  Icons are determined by the .desktop file for the application, so first we have to create one for each class:
[Desktop Entry]
Encoding=UTF-8
Name=my-weechat
Comment=Chat with other people using Internet Relay Chat
Exec=gnome-terminal --app-id org.shadowen.Terminal.my-weechat --class my-weechat --hide-menubar --title Weechat
Icon=weechat
Terminal=false
Type=Application
Note that you want the Name= attribute to be unique in space and time, otherwise it will associate your windows with another application (likely with the same icon) but not with your command, and generally make your head hurt.  This had me going for an hour, as one of my groups was fine (its name happened to be unique) and the other was not.

This file tells the launcher which icon to associate with this application.  You need to drop it into your personal applications directory ($HOME/.local/share/applications) for Gnome to know about it.  Now you can start this new application from the overview search box.  You can also drag its icon from the searcher to the Gnome Dash to have it clickable.  Nice.  Now I have my windows grouped on alt-TAB with the specified name underneath and the appropriate icon.  Win!
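For the record, dropping the file in place can be scripted; here is a minimal sketch using the file name and Exec line from my example above (adjust both to taste):

```shell
# Install a per-class launcher into the personal applications directory.
APPDIR="$HOME/.local/share/applications"
mkdir -p "$APPDIR"
cat > "$APPDIR/my-weechat.desktop" <<'EOF'
[Desktop Entry]
Encoding=UTF-8
Name=my-weechat
Comment=Chat with other people using Internet Relay Chat
Exec=gnome-terminal --app-id org.shadowen.Terminal.my-weechat --class my-weechat --hide-menubar --title Weechat
Icon=weechat
Terminal=false
Type=Application
EOF
```

desktop-file-validate (from the desktop-file-utils package) is a handy sanity check on the result.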

This seemed to work for a while, until it stopped working and they all went back to being named gnome-terminal and using the original Terminal icon in alt-TAB.  Arrgggh.

Startup Notification Protocol

After much reading around the subject, it seems I was hitting a race such that the Gnome Launcher was having to guess which window was associated with which of the applications it knew about.  As these are all Terminal windows, Gnome felt at liberty to associate them with that application, even though it did correctly group them by class.  This happens because launching an application is a fire-and-forget process, and finding the windows which were spawned by the started application, rather than by something that happened to start around the same time, is hard.  To sort this out there is a protocol which allows the newly started application to communicate with the window manager to tell it that this window is that application.  In Gnome these are defined using the StartupNotify= and StartupWMClass= attributes:
StartupNotify=true
StartupWMClass=my-weechat
With these set to match the class used by gnome-terminal, the Gnome Launcher was able to reliably associate the new windows with the appropriate icon, both in the Gnome Dash and in the alt-TAB window.

Complete Example

Here is the final complete desktop entry:
[Desktop Entry]
Encoding=UTF-8
Name=my-weechat
Comment=Chat with other people using Internet Relay Chat
Exec=gnome-terminal --app-id org.shadowen.Terminal.my-weechat --class my-weechat --hide-menubar --title Weechat
Icon=weechat
Terminal=false
Type=Application
StartupNotify=true
StartupWMClass=my-weechat

Result

Finally I have windows grouped under the appropriate icons every time.  Nice.


Monday, 8 June 2015

Living with a Ubiquiti EdgeRouter Lite-3

I have been using an old Dell Mini 9 as the firewall, IPv6 tunnel, and file server for my local networks for some years.  Fear of it simply melting into a heap of slag was starting to keep me up at night; time for it to be put out to pasture.  This also seemed like a good time to spend a little money and split the functions out sanely.

After a lot of research I ended up purchasing a Ubiquiti EdgeRouter Lite-3 with a view to using it as my boundary router and IPv6 tunnel end-point.  All the documentation implied that this little device would handle all of the pieces I need: DHCP, Hurricane Electric IPv6 tunnels, VLANs, firewalls, etc.  All that and it was sub 100 GBP delivered to my house.  Well worth a punt.  So I ordered one and waited impatiently for it to arrive.  Once it arrived I put it on the shelf, planning to play with it "this" evening; needless to say the box sat on the shelf for a couple of months, oops.

Finally, this weekend I got round to pulling it out and booting it up.  It is a nice small package, silent of course, and it seems to perform admirably.  Using the web interface I was quickly able to assign the various interfaces to the appropriate networks, add the VLAN interfaces I needed, and put basic addresses on them.  Not bad for an hour of fiddling.

When I went to sort out my fairly complicated firewalling requirements things got a bit trickier.  After some googling I found the simplest approach was to use zone-based firewalling, but this form is not supported by the web interface.  Time to break out a bigger hammer and get to know the configuration CLI.

The configuration CLI turned out to be very simple to use and pretty intuitive.  I am sure it is instantly recognisable to those of you who have to incant at Cisco-style routers.  You update the configuration in "configure" mode, then "commit" to test the changes, and "save" to make them persistent across reboots.  A handy split for when you firewall yourself away from the configuration interfaces!  After another couple of hours of googling and hacking at my rules I had the IPv4 side of things set up as I wanted and working pretty well.
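For flavour, a zone-based setup is built up from "set" commands in configure mode, something like the following.  This is a sketch from memory of the Vyatta-style syntax (the zone and rule-set names here are my own invention); check the EdgeOS documentation before trusting any of it:

```
configure
set firewall name LAN-TO-WAN default-action accept
set zone-policy zone WAN default-action drop
set zone-policy zone WAN interface eth0
set zone-policy zone LAN default-action drop
set zone-policy zone LAN interface eth1
set zone-policy zone WAN from LAN firewall name LAN-TO-WAN
commit
save
exit
```

The nice property is that anything not explicitly allowed between zones is dropped by the zone's default action.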

I still need to set up the DHCP servers and the IPv6 side of my world, but good progress, and so far a pretty nice experience.

Thursday, 31 October 2013

Booting ARM64 Ubuntu Core Images (on the Foundation Model)

For a while now we have been hearing about the new arm64 machines coming
from ARM licensees.  New machines are always interesting, new architectures
even more so.  For me this one is particularly interesting as it seems to
offer a much higher bang-per-watt ratio than we are used to,
and that can only bring down hosted server prices.  Cheap is something
we can all relate to.

There has been some awesome work going on in both Debian and Ubuntu to
bring up this new architecture as official ports.  In the Ubuntu Saucy
Salamander development cycle we started to see arm64 builds, and with
the release of Ubuntu 13.10 there was a new Ubuntu Core image for arm64.

This is awesome (to geeks at least), even if right now it is almost
impossible to actually get anything which can boot it.  Luckily for us ARM
offers a "Foundation Model" for this new processor.  This is essentially an
emulator akin to qemu which can run arm64 binary code on your amd64 machine,
albeit rather slowly.

As one of the Ubuntu Kernel Engineers, I took the release of the Ubuntu Core
image for arm64 as the signal that it was time for an official Ubuntu kernel
for the arm64 architecture.  This would allow us to start booting and testing
these images.  Obviously, as there is no actual hardware available to
the general public, it seemed appropriate that the first Ubuntu kernel
would target the Foundation Model.  These kernels are now prototyped,
and the first image published into the archive.

As part of this work, indeed to allow validation of my new kernel, I was
forced to work out how to plumb these kernels into the Ubuntu Core image
and boot them using the Foundation Model from ARM.  Having done the work
I have documented it in the Ubuntu wiki.

If such things excite you and you are interested in detailed instructions
check out the Ubuntu wiki:

    http://wiki.ubuntu.com/ARM64/FoundationModel

Have fun!

Wednesday, 17 April 2013

BT Fail (or "I have never been so angry")

For those of you who do not have to hear me whine on a day-to-day basis about, well, frankly, everything, you will not be aware that I have been waiting for broadband to be connected to my new house.  Today marked the 5th week of waiting for this simple-seeming task to be completed.  (Please don't make me even more angry by telling me how your US supplier pays you compensation for every day it takes longer than ONE; I expect some level of suck from my UK service providers, else I would emigrate.)  Along the way I have had to have a huge hole made in my brand new house, and countless engineers attend to try and supply my service.  Today should have been the end of this debacle; I should now have super fast Internet; I should be happy.

I am angry, so angry that it is unclear I have ever been more angry.  If my house was not so new I suspect that objects might have been thrown, hard.

Today was meant to be the third attempt to hook up my internet.  Today at 2pm I get a call:
"Hello we aren't coming today *beam*.
No sir, we don't know why, the system says 'Technical Problems'.
Someone will call you within 30 hours to tell you why, honest.
Sorry we do understand this isn't what you were hoping for."
Frankly, you do not understand; you have no clue how I am feeling, so let me enlighten you.  My blood is boiling; if I had a heart condition you would likely have killed me.  I have had to go out for a walk to avoid breaking things.  I am now writing this in catharsis.

As I tried to explain to the caller, it is not so much that you are cancelling my slot (shit happens, people go sick, etc.), it is that you have no idea why it went wrong, that you won't know for 24 earth hours, and that you cannot tell me when you are going to attend to actually complete the work.  This is utterly unacceptable.  Actually, when I phoned your own helpdesk they seemed able to find out that "Your appointment was cancelled because we [BT] failed to confirm it with the suppliers".  The website says that "Your appointment was no longer needed because the engineer could enable your service from the exchange."  Who knows what is true.  Whatever is true, I do not have the promised service, and I did not have an engineer attend, despite confirming the appointment was scheduled on four separate occasions over four consecutive days, including Monday this week, on some days by more than one person at BT actually calling the engineers to check.

BT you SUCK.  If Virgin (perhaps one day I will be calm enough to tell you how they suck) didn't suck harder you would have lost my business today.

Monday, 18 February 2013

IPv6 exceeds 1% of google search traffic (continuously)

In the ongoing march towards an IPv6-only Internet, IPv6 is not a speedy traveller, but it did reach a mini-milestone this week.  Google reported that IPv6 traffic was greater than 1% of its total traffic all week, on a regular, ordinary week (well, the week has the same basic shape as most non-holiday weeks).  Usage continues to edge higher and higher:
http://www.google.com/ipv6/statistics.html
Do I hear 2%?  (Probably not for a little while.)  Yes, I know it is sad to be interested in this graph, but hey, one has to be into something.

Tuesday, 12 February 2013

GPG key management

As all good boys did, I (relatively) recently generated a nice shiny new GPG key, all 4096 bits of it.  I have switched everything over to this key and have been happy.  Today I was wondering whatever happened to the old key.  After some minutes trying to remember the passphrase (oops) I finally managed to find and open the key.

Time, it seems, to revoke it so that I never have to worry about it again (and before I forget the passphrase for good).  Revoking a key essentially puts an end date on the key; it says any use of the key after this date is definitively invalid.  Luckily revoking a key (for which you can still remember the passphrase) is relatively simple: generate a revocation certificate, import it into your keyring, and push the result to the keyservers (<key-id> is a placeholder for your key's ID):
gpg --gen-revoke <key-id> > revoke.asc
gpg --import revoke.asc
gpg --send-key <key-id>
While I was at it I started to wonder about losing keys and how one guards against the total loss of a key.  The received wisdom is to set an expiration date on your key.  This may be extended at any time, even after the key has technically expired, assuming you still have the private key.  If you do not, then at least the key will automatically fall out of use when it expires.  Adding an expiry date to a key is also pretty simple:
gpg --edit-key <key-id>
gpg> key 0
gpg> expire
...
Key is valid for? (0) 18m
gpg> key 1
gpg> expire
Changing expiration time for a subkey.
...
Key is valid for? (0) 12m
gpg> save
gpg --send-key <key-id>
Note here I am setting the subkey (or keys, key 1 and higher) to expire in a year, and the main key to expire in 18 months.
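As an aside, newer GnuPG versions (2.1.22 and later, if memory serves) can do the same non-interactively with --quick-set-expire; <key-fpr> here stands for the full fingerprint of your key, and '*' selects all subkeys:

```
gpg --quick-set-expire <key-fpr> 18m
gpg --quick-set-expire <key-fpr> 12m '*'
gpg --send-key <key-fpr>
```

Handy if, like me, you would rather script this than remember the edit-key incantation.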

At least now the keys I care about are protected and those I do not are put out of use.


Monday, 11 February 2013

HTML should be simple even with a javascript infection

Having been there in the simple days when a web server was a couple of hundred lines of code, and when HTML was a simple markup language giving you pretty much only hyperlinks and a bit of bold, I have always found javascript at best an abomination, and certainly to be avoided in any personal project.

My hatred mostly stems from just how unclean the page source became when using lots of fancy javascript, and how javascript-dependent everything became as a result.  Turning javascript off just broke everything, basically meaning you had to have it enabled or not use the sites.  This is just wrong.

Recently I have been helping a friend to build their own website, a website which, for reasons I find hard to understand, could not be simple, with just links and bold, but really had to have popups, fading things, slides which move: all those things you really can only do easily and well in javascript.  Fooey.

Reluctantly embracing these goals I spent some time implementing various bits of javascript and ended up, as predicted, in some kind of maze of twisty passages, all the same.  I was fulfilling my own nightmare.  Then something happened: I accidentally discovered jquery.  Now jquery is no panacea; yes, it simplifies the javascript you need to write so it is clearer and cleaner, which is no bad thing, but the real jolt was the methodology espoused by the community there: write pages which work reasonably well with just HTML and CSS, and then during page load, if and only if javascript is enabled, rewrite the pages to add the necessary magic runes.  Now you can have nice maintainable HTML source files and still have fancy effects when available.

I have used this to great effect to do "twistable" sections.  When there is no javascript you get a plain document with all of the text displayed at page open.  If it is available then little buttons are injected into the sections to allow them to be opened and closed on demand, and the body is hidden by default.  All without any significant markup in the main HTML source; what little semantic markup there is has no effect:
<h2 class="twist-title">Section Title</h2>
<div class="twist-body">
Section Text
</div>
Now that is source you can be proud of.  Yes, there is some site-wide jquery instantiation required here, which I will avoid including in its full glory as it is rather messy.  But this example shows the concept:
$(function() {
        $(".twist-title").prepend("<span class=\"twist-plus\">+</span> " +
                "<span class=\"twist-minus\">-</span> ");
        $(".twist-minus").hide();
        $(".twist-body").hide();
        $(".twist-title").click(function (event) {
                $(this).children('.twist-plus').toggle();
                $(this).children('.twist-minus').toggle();
                $(this).next().toggle();
        });
        $(".twist-title").css("cursor", "pointer");
});
OK, this is not so easy to understand, but the majority of the code, the HTML pages that the people who write the content have to look at, is easy to understand.  I think you will agree this is a win all round.