Sunday 5 December 2010

Liet International 2010, playing bass with Jousnen Jarved



It's always been one of my big dreams to play bass for world music. I've had that dream since the early 90s, when I was briefly a fill-in disc jockey at an Ann Arbor, Michigan radio station, though I've recorded fairly little in this regard - and of extremely variable quality - relative to where I've actually been over the last ten years or so. Most of my recorded stuff is from Russia, Siberia, China (I've recorded others from Taiwan), the Middle East (stylistically), or Fennoscandia - and I can add both Americas and Greenland to the itinerary. Occasionally, however, I do something which represents a bit of a zenith.

Ahh, but enter Jousnen Jarved, from Russia. I've been travelling to the Russian side of Karelia since the early 2000s and have worked with the project's creator, Peter Coon, for almost as long. Peter's quite the producer, having written, scored, and produced a few hard-rocking singles for me out of his own studio in Petrozavodsk, and being an honest-to-goodness music professional in his own right, something which I am most definitely not. And so, it was like the dream fairy came and dropped this thing in my lap, as I was sitting in Helsinki, bored to death with my musical prospects there, and about ready to give up on the whole damn thing. The Liet International festival in Lorient, France, was just the ticket.



TV Renne in France, Fryslan T.V. in the Netherlands and CNN covered the event for a global audience. We're up for a spot (and hopefully rotation) on Radio St. Petersburg at some point, but I don't know the details yet.

Monday 15 November 2010

Off the beaten path - Test Automation that can vary (and learn)...


The problem with most software test automation is essentially twofold: for the most part, it's only capable of following the same path, over and over again, and it's incapable of ever learning. It's as dumb today as it was yesterday. In short, it's just not as classy as it could be.

What, it can be asked, is such repetitive test automation actually useful for? Well, most importantly, for verifying that nothing gets broken as new features are added to our software; we run our tests to give us a sense of confidence in what's already there.

It's worth noting, though, that we nearly always need to pair our automated tests with hands-on exploratory testing. Except that it never really happens like that. The hands-on exploratory testing gradually gives way to the Manual Monkey Test, as testers and managers lose confidence in the uniquely human skills of perception and hunch, and instead seek something quantifiable, repeatable and reproducible.





So how can we turn this around? Boredom with repetitive work begets automation; automation begets the need for exploratory testing, which, through time pressures, somehow turns into more of the same... and in the meantime, the defects to be found live "somewhere" off the path of this tedious nightmare.

There are no complete solutions presented here, but I hope you might get some idea of the possibilities available.


Well, one thing we can do is make our test automation more intelligent. Exploratory testing essentially involves two skills previously thought to be uniquely the domain of Homo sapiens: 1) the ability to randomize inputs, and 2) the ability to assess and learn whether a given output is appropriate for a non-linear input. Let me elaborate.

One of the great benefits of open-source software for test automation (aside from not being bound to a proprietary language) is the ability to leverage the wealth of extension libraries that come with the open-source languages, with free support from a massive community of volunteers. Tools like Squish and Selenium use Python (among other languages); Nokia's TDriver uses Ruby.

Without going into too much low-level detail ;-), both Python and Ruby have randomizer functions. In Ruby, one can call rand() with an integer argument - for example rand(7) returns an integer between 0 and 6; Python's random module offers the equivalent randrange(7). Here's a recipe in Ruby for a simple sort-and-shuffle of a deck of cards, taking roughly n log n operations:
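Something along these lines should do it (a rough, untested sketch - the deck representation here is just illustrative):

# Build an ordered deck, then shuffle it by sorting on a random key -
# roughly n log n comparisons, as opposed to the n swaps of a Fisher-Yates shuffle
suits = %w[Spades Hearts Diamonds Clubs]
ranks = %w[A 2 3 4 5 6 7 8 9 10 J Q K]
deck  = suits.product(ranks).map { |suit, rank| "#{rank} of #{suit}" }

shuffled = deck.sort_by { rand }

puts shuffled.first(5)   # e.g. deal a random opening hand of five cards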

Now, this won't be the fastest way if you're dealing with a very large list, but for generating a small set to explore alternative paths, it could well suffice.

Now, this randomizing can apply to just about any input that comes as an index - which means just about anything: selected items in a list, buttons in toolbars, parts of a string or a numerical sequence. That covers our inputs. But how do we know when to randomize?

Most users out there are going to use a software application in a given, fairly static way - just enough steps to suit their purpose. Configure the music player, cue it up and play the song. But every once in a while, they might deviate from that sequence. So knowing when to randomize means knowing how often, based on probability, to do something completely different.

By way of a practical example: Cucumber has become a popular way to express automated tests in plain language. A Cucumber test may read something like this:

Given the MusicPlayer application is started
When I press the Options softkey
And I select Songs from the menu
And I select the Song number 1
Then the Now Playing view is opened correctly

Where each line in that block relates to a function implemented in the core language below it, for example:
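Something like this, roughly (untested - select_song_at is a hypothetical helper standing in for whatever TDriver calls actually drive the UI in your project):

# Untested sketch of a step definition; select_song_at is a hypothetical helper
When(/^I select the Song number (\d+)$/) do |number|
  select_song_at(number.to_i)
end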

Once we pass a certain threshold (e.g. the function has been accessed n times), we become, say, two or three times as likely to deviate from our original number and choose a random index. Doing this effectively requires a trick though: the ability to visualize a software system as a state machine in 4D. And if we randomize at the function level, our Cucumber test writer gets the benefit of that randomization without having to do anything differently.
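As a rough illustration (untested - the visit counter, threshold and odds are all placeholder values), the helper behind the step definition could look something like this:

# Untested sketch: once a step has run often enough, sometimes pick a
# random index instead of the scripted one
$visits = Hash.new(0)

def song_index_for(scripted_index, list_size, step_name)
  $visits[step_name] += 1
  past_threshold = $visits[step_name] > 5    # assumed threshold of 5 runs
  deviate = past_threshold && rand(3) > 0    # roughly 2-in-3 odds thereafter
  deviate ? rand(list_size) : scripted_index
end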

Once our scripts start to randomize, however, the fixed answer sets for our test runs will no longer suffice; our tests will need to be trained, to learn, and to guess at a correct answer based on previous history. Fortunately, our open-source languages have the tools to let us do just that.


Bayesian classifiers are already in common use in email spam filtering. Python, for example, offers the PEBL and Classifier libraries; Ruby offers the bishop and classifier gems. Training with the classifier gem works along the following lines (note - NOT TESTED):

require 'rubygems'
require 'classifier'

# Two categories: Song and Not_Song
classifier = Classifier::Bayes.new('Song', 'Not_Song')

# Train each category on examples of what belongs in it
classifier.train_Song('%r{.+\.(?i)mp3}')
classifier.train_Song('%r{.+\.(?i)3gp}')
...
classifier.train_Not_Song('%r{.+\.(?i)jpg}')
...

And the good stuff - where the real demonstration of learning happens - is here:

classifier.classify "bubba_chops.jpg"
#=> "Not_Song"
classifier.classify "song.3gp"
#=> "Song"
classifier.classify "song.m3u"
#=> "Song"

Where the script gets it wrong, you train it again. After more and more iterations, it will start to make the correct decision in more and more cases.
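Roughly like this (untested - train_Song is the same dynamically generated method used above):

# Untested sketch: feed the correction back in when the guess is wrong
guess = classifier.classify "song.m3u"
classifier.train_Song "song.m3u" unless guess == "Song"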

This is where your script can think, and make decisions as you do.

And that's no monkey business.







Wednesday 11 August 2010

The sine wave and quantum beats


As long as I'm mucking about trying to find parallels between quantum mechanics, set theory, and rhythm, I set about a bit of a musical experiment on my trusty Korg Electribe MX, piped through my Soundcraft mixer's Lexicon drum plate. The tubes in the Electribe give the drums a predictable warmth, while the plate adds room overtones and depth.



The problem is no doubt simple from a purely musical or musician's perspective, but interesting from a mathematical one.



Just like an ordinary sine wave, we simply build a rhythm that alternates parts from one measure to the next. For example, we can build a rhythm consisting of bass drum and snare, and decorate it up top with a constant closed hi-hat (with an open one on the last half beat).

Our bass drum hits on [1, 5, 9, 13], whereas our snare is on [3, 6, 9, 12]. What we end up with is a pretty straightforward bossa nova, with the intersection of both parts occurring at the halfway point of the bar [9]. Enter the sine wave: we alternate the bass and snare every other measure! (Incidentally, the effect that this has - if I first switch that opening snare to an opening kick - is a binary alternation between a bossa nova and a rock-steady beat!)
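To make the alternation concrete, here's a rough, untested sketch that prints the two patterns above, swapped every other 16-step measure:

# Untested sketch: the two 16-step patterns above, swapped every other measure
kick_steps  = [1, 5, 9, 13]
snare_steps = [3, 6, 9, 12]

4.times do |measure|
  kick, snare = measure.even? ? [kick_steps, snare_steps] : [snare_steps, kick_steps]
  line = (1..16).map do |step|
    k = kick.include?(step)
    s = snare.include?(step)
    if k && s then "X"        # both parts land here (step 9)
    elsif k   then "K"
    elsif s   then "S"
    else "."
    end
  end
  puts "measure #{measure + 1}: #{line.join}"
end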

Granted, this is the simplest way of looking at our beat composition and arrangement, but it's the best way I can think of to introduce the analogy. But think of the tapestry we could weave by applying more sophistication to this schema: introducing limits, integration and differentiation, or partial derivatives to the same set of numbers. (Note: I haven't figured all this out myself yet - still, some of the results I've observed are really interesting, not to mention they groove, albeit in ways you probably don't expect!) It's probably possible to build this on, e.g., complex functions, while keeping the groove. Then you're in uncharted territory, sweet stuff; you're sailing in blue oceans. Of course, there's more than just these musical variables to tweak.

When combined with acoustic instruments, the intent is to sincerely emulate "the groove" within the prescribed toolset, with this added rhythmic complexity as part of the mix. But like everything I'm into at the moment, I've kind of ripped the machine down to its parts to muse and dwell on the particular small bits of the problem that fascinate me. Hopefully I'll get down to proper business with it soon enough!

Conservation - of energy and matter - would solve humanity's major problem(s)...if we could just get there!


The key to all of this is humans being able to convert matter into energy, transmit it, and reconvert it back into matter, perhaps conveniently reconfigured!

Yes, it's more or less right out of Star Trek (we could beam people around, we could project objects like holograms), and it's only remotely possible with our current understanding of quantum mechanics (though apparently only as far away as our grasp of a grand unification theory), which is being stabbed at constantly by professionals and amateurs everywhere (which introduces the promising possibility that something will emerge). Which reminds me, I must look into whatever exists out there in terms of P2P scientific communities and start farming for ideas...same goes for my fractal art, my music, my programming, my performance style, and just about anything else I've ever smugly attempted to synthesize :P



When I was recently in Greenland, I was thinking about the simplicity of water, versus the apparent impossibility (technically speaking) of transforming it into energy and back again - and how that would fundamentally transform the conditions of humans and planet earth. Imagine...being able to take water gushing and streaming off the Arctic and Antarctic ice and make it bubble out of the ground across all the arid regions on earth - from the sub-Sahara across the 'stans to Mongolia. Certainly, the room full of Somalis, Kenyans and other Africans on Helsinki's north-east side agreed with me hands down that this is the number one obstacle on earth: obviously it would feed and economically enable billions who are currently dormant. And the greening of the deserts would transform the environment back into something far more oxygen-rich and correspondingly less CO2-saturated.

And yet, what's stopping it?


Humans have, in the 20th century, been able to convert very unstable matter into raw energy in the course of wiping out foes, and related pursuits. And we have harnessed the practical energy released when uranium nuclei split. This is used in many countries to supplement what we can't get from wind, solar, hydroelectric and other sources of free-in-nature kinetic energy, and wouldn't necessarily want to get from other carbon-based non-renewable sources. But the pure conversion of matter to energy (and back) still seems a long way off in spite of our best efforts. And I find that somewhat puzzling.

In browsing across the macro-patterns of human activity we observe attempts to get there, but, for whatever reason, no solution completely bridges the divide. Take current recycling...what we gain in reduced quantities of discarded paper (and, to a far lesser degree, plastics), we lose to e-waste. With what I call "next year's junk" syndrome, technophiles and technophytes eagerly snatch up the latest available doodad that has arrived at their point on the bell curve. Off to the "recycling" center - or in some cases, the landfill - with 2001's latest. Stuff resells on the market for anything greater than junk value for only so long, and when last year's doodad goes to Africa, it simply ends up in the ground and water there, because Africans like the latest and greatest just like everyone else, and they raid their shopping malls in Lagos and Johannesburg just like we do ours in Helsinki, London or suburban LA. Anyone seen my track-pad iPod?

I'm not sure what the state of the art in e-recycling is, but I don't believe it goes far enough. What happens to non-renewable plastics, or to parts like transistors that are not easily reducible (except with the same sort of "expensive" equipment employed in their manufacture and assembly)? I have often said that for everything that is manufactured, there should be an equal and opposite demanufacturing process/facility waiting for it at the end of its life. To what extent we fall short of that, I cannot say, but any shortcoming in its achievement casts the immutability of those physical laws in a different light.



"Gross bodies and light are convertible into one another..." Isaac Newton, 1704. But what's the catch? Personally I stick to my e=mc2 as a matter of faith, but understand, that it's opposite, its anti-particle, the yin to its hang, is subtly transformed, like water, to patterns that our symmetrical approach to reason and practical implementation perhaps cannot yet fully grasp.

Tuesday 10 August 2010

In-between moments







...when you care enough to give the very least.