Monday, December 30, 2019

SD Cards as Audio Players

I've often found it useful to be able to convince Rhythmbox (or other similar media player software) that an SD Card (or other similar USB Mass Storage) should be treated as an Audio Player. This trickery means that Rhythmbox will show the "device" in its UI and allow synchronising playlists to the "device".

As I'm often forgetting how to achieve this, this blog post contains some notes on the topic. This is drawn from a combination of the Gnome docs "Portable Audio Player Source" and the Almost a Technocrat blog ".is_audio_player". The latter I have partially duplicated below in case it falls off the internet.

In short, you can turn some plain old mass storage into an audio device by adding a file .is_audio_player in the root of the storage. That's all that's needed to make it work.

This neatly sidesteps MTP (the "Media Transfer Protocol"), which many modern devices prefer to use, while offering similar-feeling integration with various media player software.

However, if you want to go further, you can also customise various aspects of the apparent device by adding content to the file in what appears to be an INI-like format.

Quoted from Almost a Technocrat blog .is_audio_player:
  • name
    The device name (use "" to quote a name that has spaces in it) that will appear in music management software.
    • Without this option, the device name that appears in music management software will follow the mount name specified by the system.
  • audio_folders
    The list (comma-separated values) of folders where music files are stored on the device. Music management software will copy tracks to the first folder in that list.
    • Without this option, the music management software will transfer the music files to the root path.
  • folder_depth
    The maximum folder depth supported by the device. Do not set it if the device has no limitation.

    If the player stores all the music with the tree /<audio_folders>/<Artist>/<Album>, use the parameter folder_depth=2.

    For a Beatles compilation containing two discs with the tree
    /<audio_folders>/Beatles/Compilation/Disc1, with folder_depth=2, sound files located in "Disc1" will not be seen.

    This parameter is also used when importing new songs:
    folder_depth=0: puts the files in the root of the first folder listed in audio_folders, as
    <audio_folders>/<filename>;
    folder_depth=1: puts the files in a subfolder of the first folder listed in audio_folders, as
    <audio_folders>/<artist> - <album>/<filename>;
    folder_depth=2 and above: puts the files in two subfolders of the first folder listed in audio_folders, as
    <audio_folders>/<artist>/<album>/<filename>.
    • Without this option, the folder depth is considered to be 0 (folder_depth=0).
  • output_formats
    The list of file types (MIME types, comma-separated values) supported by the device. The first type listed will be used for automatic "on-the-fly" conversion/transcoding when transferring to the "portable audio player".
    • Without this option, the music management software will use the file types set in its preferences.
  • input_formats
    The list of file types (MIME types, comma-separated values) that can be saved onto the "portable audio player" using its microphone or from a radio show.
    • Without this option, it is assumed that the "portable audio player" cannot record audio.
  • playlist_format
    The list of playlist formats (MIME types, comma-separated values) supported by the device.
  • playlist_path
    The folder containing playlist files on the device.
    • Without this option, the music management software will copy the playlists to the same folder where it copies the music files.
  • cover_art_file_type
    The type of image supported by the device, such as jpeg, png, tiff, ico or bmp.
  • cover_art_file_name
    The filename expected by the device for the cover art image.
  • cover_art_size
    The size of the cover art image, in pixels. The image is a square, so this is a single number.

Note:
To find out which MIME types your Ubuntu Linux system knows about, look in /usr/share/mime.

Example:
-------.is_audio_player-------
name="My Portable Audio Player"
audio_folders=Music/, Sounds/
folder_depth=2
output_formats=audio/mpeg,audio/x-ms-wma,application/ogg
-----------------------------------

Thursday, December 28, 2017

Diagnostic Assertions: How to make reading, writing & fixing tests easier

Is testing hard?

All good engineers validate their work to ensure that it behaves as expected. As a software engineer, that means writing automated tests which can easily be run whenever your code changes.

While developers strive to create tests for their code, few enjoy doing so and even fewer enjoy fixing their tests. This therefore creates a problem, as people will tend to avoid doing things they don't enjoy.

If we can make creating and fixing tests easier, then it seems likely that more tests will be created, leading to software whose behaviour is better checked.

Note: this post is mostly a write-up of a lightning talk I gave at PyCon UK 2017 at the end of October, so if you'd rather consume this as four minute talk then head over to the video on YouTube.

Some bad tests

In the following examples, we assume that we already have a make_request function defined which makes a web request against a local client, and that it's the local client we're testing. The tests themselves fail, which is fine: it's the way the tests fail that we're interested in looking at.

These tests don't provide much in the way of useful output, even though we've added messages to our assertions:

FFF
======================================================================
FAIL: test_invalid_post (__main__.RequestTests)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "<ipython-input-2-438368b7f979>", line 14, in test_invalid_post
    self.assertTrue(200 == response, "Bad status code")
AssertionError: Bad status code

======================================================================
FAIL: test_page_loads (__main__.RequestTests)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "<ipython-input-2-438368b7f979>", line 6, in test_page_loads
    self.assertTrue(200 == response, "Bad status code")
AssertionError: Bad status code

======================================================================
FAIL: test_valid_post (__main__.RequestTests)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "<ipython-input-2-438368b7f979>", line 10, in test_valid_post
    self.assertTrue(200 == response, "Bad status code")
AssertionError: Bad status code

----------------------------------------------------------------------
Ran 3 tests in 0.003s

FAILED (failures=3)

Adding longMessage

The standard unittest library provides a mechanism to get a bit more information from assertion messages in the form of the longMessage attribute which you can set on your classes. This improves the results slightly as we can now see what the expected and actual values are:
FFF
======================================================================
FAIL: test_invalid_post (__main__.RequestTests)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "<ipython-input-3-fd71f2c85adb>", line 16, in test_invalid_post
    self.assertTrue(200 == response, "Bad status code")
AssertionError: False is not true : Bad status code

======================================================================
FAIL: test_page_loads (__main__.RequestTests)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "<ipython-input-3-fd71f2c85adb>", line 8, in test_page_loads
    self.assertTrue(200 == response, "Bad status code")
AssertionError: False is not true : Bad status code

======================================================================
FAIL: test_valid_post (__main__.RequestTests)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "<ipython-input-3-fd71f2c85adb>", line 12, in test_valid_post
    self.assertTrue(200 == response, "Bad status code")
AssertionError: False is not true : Bad status code

----------------------------------------------------------------------
Ran 3 tests in 0.002s

FAILED (failures=3)

However, due to the way the assertions are constructed, that isn't actually very helpful just yet.

Using assertEqual

Thankfully unittest provides more useful assertion helpers. These include a number of more advanced comparisons (in particular for collections), though for now we'll just look at assertEqual.

By using assertEqual we let unittest do the comparison, which means that it can also generate a more descriptive error message:

FFF
======================================================================
FAIL: test_invalid_post (__main__.RequestTests)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "<ipython-input-4-1e8aaded9316>", line 16, in test_invalid_post
    self.assertEqual(200, response, "Bad status code")
AssertionError: 200 != (500, "Oops! Here's a stack trace...") : Bad status code

======================================================================
FAIL: test_page_loads (__main__.RequestTests)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "<ipython-input-4-1e8aaded9316>", line 8, in test_page_loads
    self.assertEqual(200, response, "Bad status code")
AssertionError: 200 != (200, '<h1>This is the good page</h1>') : Bad status code

======================================================================
FAIL: test_valid_post (__main__.RequestTests)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "<ipython-input-4-1e8aaded9316>", line 12, in test_valid_post
    self.assertEqual(200, response, "Bad status code")
AssertionError: 200 != (200, '<h2>Your submission was invalid</h2>') : Bad status code

----------------------------------------------------------------------
Ran 3 tests in 0.004s

FAILED (failures=3)

This failure now shows us that the response is not in the format we expected, which explains some of the failures we've got.

Fixing that leads to our first passing tests:

F..
======================================================================
FAIL: test_invalid_post (__main__.RequestTests)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "<ipython-input-5-5f85b03559e5>", line 16, in test_invalid_post
    self.assertEqual(200, status_code, "Bad status code")
AssertionError: 200 != 500 : Bad status code

----------------------------------------------------------------------
Ran 3 tests in 0.004s

FAILED (failures=1)

While that's clearly a step forward, we're still more interested in the failure than in the passing tests. As we can see, we now know how it's failing (the wrong status code), though we don't really understand why it's failing.

Assert more things

On the way to making our tests provide clearer explanations of what's wrong, we should aim to have them check everything that it would be useful to know has broken. This is partly because we want to constrain the behaviour of the system under test, but also because we can usually get a better understanding of the system if we know more about the failure.

Having done this, we can see that one of the tests which was previously passing now catches a bug:

F.F
======================================================================
FAIL: test_invalid_post (__main__.RequestTests)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "<ipython-input-6-5c8355f13e67>", line 18, in test_invalid_post
    self.assertEqual(200, status_code, "Bad status code")
AssertionError: 200 != 500 : Bad status code

======================================================================
FAIL: test_valid_post (__main__.RequestTests)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "<ipython-input-6-5c8355f13e67>", line 14, in test_valid_post
    self.assertIn('submission succeeded', body)
AssertionError: 'submission succeeded' not found in '<h2>Your submission was invalid</h2>'

----------------------------------------------------------------------
Ran 3 tests in 0.003s

FAILED (failures=2)

Extract helper assertions

Having added a large number of assertions to each of your tests, you'll find that there is quite a lot of duplication among them. We can therefore apply the usual practice of extracting the common parts into a helper, gaining both clarity and brevity:

This results in the same assertions being run and the same test failures being found, though now it's much clearer to anyone reading the test code what the intent of those assertions is.

Summary

There are a number of things which you can do to make it easier to find & fix issues in your code, and test code is no different. To make it easier to work with your unittest tests, you should make use of the longMessage attribute and extract custom assertions where they will improve clarity.

Thursday, November 17, 2011

Adventures of a Queen's Scout: The Cenotaph

We join the story shortly before last weekend. Having been awarded my Queen's Scout Award, I was invited to take part in the Queen's Scout Honour Guard at the Cenotaph on Remembrance Sunday.

So, last Thursday, I drove up to London for a rehearsal at Whitehall, which outlined the tasks we would have on the day, and provided an introduction to the other Queen's Scouts taking part. We met up on Whitehall mid-evening to be escorted inside the Foreign & Commonwealth Offices where we wandered around uncertainly before finding the room we'd been allocated to leave our stuff in. The briefing that followed explained the tasks that we'd have three days later, namely handing out programmes to members of the public that were arriving to see the ceremony, and forming up outside the offices, right in front of the Cenotaph.

The latter turned out to be very simple — we had to form up just inside the building, march outside and then stand still as the dignitaries walked past on their way out. We practised this a few times in a corridor before moving out on to the pavement to confuse any late-night tourists, and soon mastered the manoeuvres.

Bright and early on Sunday morning (and chilly at 8:30am!) we again met outside the Foreign and Commonwealth office to make our way inside. After heading outside with large stacks of orders of service, it became clear just how many people would be attending. The number of people already present on Whitehall at 9am was quite remarkable, and by an hour later it was certainly filling up. The trick, it emerged, was to wait by the entrances to hand out the programmes as people arrived, rather than trying to hand them out to the few people in the crowds that might not already have one.

Shortly after 10am we had to be back inside the offices so that the Armed Forces could begin forming up at the Cenotaph, and so that we could be in place for the start of the ceremony. There was also a chance for a quick drink and biscuits before we had to get ready for the more formal part of our duties.

Following some initial confusion about where we should be forming up, we were eventually given the nod that we should begin marching outside. We were the first to emerge from the building into the morning sunlight, which was thankfully beginning to warm things up a bit. Not long after, the choristers and the Lord Bishop of London followed us out, signalling the start of the emergence of the dignitaries. This was pretty much the start of the ceremony, which was captured on TV (of course), so if you've got access to iPlayer then you can watch along.

Despite having known all along that there would be highly important people walking right past me, I don't think that it was until the politicians, led by David Cameron, were walking past that this really hit home. I think my thoughts were approximately:
Cameron... (woah, that's the actual prime minister) Clegg.. (they really do look like their photos) more politicians.. (hrm this is a bit serious) politicians.. politicians.. politicians.. Oh, look, it's Boris!


The politicians were followed by other dignitaries, and eventually the Royal party, led by the Queen. It was quite a remarkable experience to see the Royal family walk literally a couple of feet in front of me. Of course their arrival signalled that it was nearly 11 o'clock, and time for the two minutes' silence. We had been warned that there would be a loud bang when the artillery gun on Horse Guards Parade was fired to coincide with Big Ben's chime of 11, but nothing quite prepares you for just how loud and thunderous it is (it may sound loud on the TV, but trust me, that doesn't get close).

Following the silence, the wreath laying and the short service led by the Lord Bishop, the dignitaries headed back into the Foreign & Commonwealth Offices. The Honour Guard was the last to re-enter the building. In many ways it was good to be "off the hook" as we disappeared from public view, but it was also sad, as it signalled the end of a unique experience.

Tuesday, November 15, 2011

Adventures of a Queen's Scout: Journey to the center of London

Back when I was at school, everyone did the Duke of Edinburgh's Silver award; it was just expected. DOE Gold was offered, and each year some sixth-formers went for it. Having been a cub and then a scout, and having completed my Explorer Belt a few years earlier, it seemed the natural thing to progress into. One of the teachers suggested that we try doing it in kayaks, rather than walking, which turned out to be great fun. We ended up spending a week exploring the river Allier in the South of France. The only downside to this mode of travel was that the nights were a little cold, though that is to be expected, and it balanced out the glorious sunshine we had during the days.

I didn't actually end up completing my DOE Gold until the summer after my A-Levels, when I spent a week on residential with a friend from scouts rebuilding the stone path down to Dancing Ledge in South Dorset. This complicated getting everything signed off, but I eventually got everything sorted and went to St James' Palace for a presentation in the summer of 2009, where we were graced by the Duke of Edinburgh himself. Amusingly, the Duke recognised the Gold brooch my mum, who was among the guests present, was wearing as being her DOE Gold award, and commented on it.

Fast forward to this Easter, when I was invited to attend the Queen's Scout parade at Windsor. This is the formal, pomp-and-circumstance, part of the award being conferred, and collects all the recent Queen's Scouts from around the country to parade with a military band inside Windsor Castle. There, the Chief Scout (currently Bear Grylls) and a representative of the Royal family congratulate each of the scouts and take a marching salute from each of the regional groups. This is, in many ways, a more interesting ceremony than the local part where you are actually given the badges and certificate which confer the award.

I managed to arrange for my award ceremony to take place at the end of an otherwise (mostly) ordinary meeting at the start of the new term of my home scout group, First Ealing North. At the end of the meeting we presented a large number of badges to the scouts who had earned them towards the end of the previous term, or had had them signed off during our Summer Camp. There were a surprisingly large number of badges, and it took quite a while. This culminated with our County Commissioner presenting me with my Queen's Scout certificate, asking some questions about the things I had done, and encouraging all the scouts present to pursue their own awards. Afterwards, one of the things he mentioned was that there was an opportunity for Queen's Scouts from the London counties to be part of the Honour Guard at the Cenotaph on Remembrance Sunday. Since this was a remarkable honour and literally a once-in-a-lifetime opportunity, I asked him to put my name down, and I was lucky enough to be asked to take part by the organisers.

Continued in Part 2: The Cenotaph...

Thursday, September 8, 2011

Using MySQL as a backend for Apache Authentication

You might be wondering why you'd want to use MySQL as a backend for Apache authentication. For my planned use there was one very compelling reason: I already had a list of users with accounts in a MySQL database (from my CMS), and I didn't want to have to replicate that list, or make my users have another login.

Some quick searching on the internet yielded an Apache module called mod_auth_mysql, which sounded perfect. Unfortunately all is not as simple as it sounds. As a result I've compiled a guide that will hopefully be helpful to someone else too.

Install & Enable:

I'm using Ubuntu Natty and Apache2, which means that it's available from the repos, so:

    (sudo) $ apt-get install libapache2-mod-auth-mysql
    (sudo) $ a2enmod auth_mysql
    (sudo) $ apache2ctl restart

Configure:

For reasons unknown, the online documentation on SourceForge is out of date, referring to a much older version than the one in current Ubuntu. Thankfully, as part of the package install, some docs were also installed into /usr/share/doc/libapache2-mod-auth-mysql/. There I found a couple of useful gzipped files: USAGE.gz and DIRECTIVES.gz. With these it's much simpler than trying to piece things together from the old docs on SourceForge, but here are the highlights.

Disable other Auths:

For some reason Apache doesn't like using mod_auth_mysql concurrently with other auth types, so you need to disable any other auths for the location where you're trying to use MySQL auth. It's suggested that you can do this by disabling the other auth modules, but this simply confused things for me, and didn't seem to work. The alternative (which I recommend) is to locally disable them using directives:

    # You might only need one of these.
    AuthBasicAuthoritative Off
    AuthUserFile /dev/null

Create a MySQL account for apache:

You could use an existing account, but I wouldn't recommend this for two reasons:

  1. The password needs to be stored in plaintext as part of the Apache config.
  2. You'll probably want to restrict the access rights of the user down to just SELECT on the table the users are stored in.

I'm not going to detail how to do this here, except to note that I used PHPMyAdmin, which made it really easy.

Setup your MySQL details:

Using the DIRECTIVES file as a reference this was actually pretty simple, this is approximately what the MySQL bits of my config file look like:

    AuthType                     Basic
    Auth_MySQL                   On
    # My CMS uses PHP's md5()
    Auth_MySQL_Encryption_Types  PHP_MD5
    Auth_MySQL_Host              localhost
    Auth_MySQL_DB                my_cms_db
    Auth_MySQL_User              apache
    Auth_MySQL_Password          password
                             # ^Change this^ !
    Auth_MySQL_Password_Table    users
    Auth_MySQL_UserName_Field    username
    Auth_MySQL_Password_Field    password
    Auth_MySQL_Authoritative     On

As you can see, it's pretty easy to figure out which directives do what once you know the names. The slight oddity is the Auth_MySQL_Password_Table directive, which is the table the users and their passwords are looked up in.

You're done

Sit back and enjoy managing your users in MySQL.

Sunday, August 1, 2010

Ubuntu Upgrade - The Aftermath

Well, as I mentioned previously, I forgot to grab a list of custom repos, but that was about the only thing of note in what I can honestly describe as the smoothest upgrade I've ever done. More so than some of the in-place ones I've done too, which I'm pretty impressed with. There were a few things that I had to tweak, but these were down to how I've got my system set up, and I doubt they'll affect the majority of people. (I needed to copy the grub boot sector twice -- I presume an update went through in my first batch of updates -- and the usual tomfoolery to get ATI's graphics manager to work how I want with my two screens, but again this is far improved compared to Karmic.)

Re-installing my packages was smooth too. I ran diff on the two lists of manually installed packages, and then just looked down the list for those I wanted to bring back. Despite this, I'm wishing that apt allowed you to add packages to the list of those to be installed once it was going, or at least during the downloading stage, since I still forgot some. Another nice feature would be to sort the downloads such that it can begin some of the installs while other downloads continue.

It was really pleasing to login to my clean install and immediately have all my old settings present (though since I store some on yet another partition I didn't quite get my look-and-feel back right away). Thankfully a quick dive into /etc/fstab to paste in my old settings quickly resolved this one.

There are some slight issues with Lucid, most noticeably that the desktop background isn't quite right, though that had already been filed as a bug on Launchpad.

Thursday, July 29, 2010

Ubuntu Upgrade - The Plan

Well once again it's time to upgrade my Ubuntu install. OK, so I'm a bit late to be upgrading from Karmic to Lucid, but I've been busy!

Clearly I don't want to lose all that I've built up over the past year or so on this machine, but I have decided that I want to do a clean install. There are a number of reasons for this, the main one being that I haven't done a clean install on this machine yet, and Lucid is an LTS. Not that this means I won't upgrade again in November, but it does provide an extra impetus to do it. The upshot is that I'm going to need to generate a backup of my settings etc. so that I can restore them in Lucid.

User files

Thankfully my /home directory is in fact another partition, which makes it easier to keep, and provides a nearby backup location for all the other files that I've modified that don't live there.

Installed packages:

Thankfully apt remembers which packages the user installed, and which were installed as dependencies. Thus, after much forum searching, I've come up with the following line to grab all the manually installed packages:

aptitude search '~i!~E' | grep -v "i A" | cut -d " " -f 4 | sort | uniq > manual

This searches for all installed (~i) not-Essential (!~E) packages, removing (-v) those that were installed automatically ("i A"). The descriptions are then removed (cut) and the list sorted and made unique. Since this is quite a big list (and I want to keep it so I can install the packages back), I threw it into a file.

I plan to run the same line in the new install, and then just ignore any packages already there. I'll probably take the opportunity to re-evaluate what I have installed too.

Config files

Modified config files are so easy to overlook. I expect that everyone's modified a config file somewhere, probably the Apache config? These are somewhat harder to find unless you know what you've modified. If you've used gedit to modify them, then remembering that it makes backup copies of the files it modifies by appending a ~ to the name can give a clue. Beyond that I've not found a foolproof way to locate them. If anyone finds one, let me know!

Application Data

Also (sort-of) in this category are things such as any LDAP or MySQL databases that you might have lying around and may want to keep.

Extra Repos

I forgot to have a look at what repos I had added to my defaults, damn. Be sure you don't!