We test our software (an API) on a fairly diverse set of platforms (OS X, multiple Linux versions, and multiple Windows versions) with an automated test suite of 450+ tests. Since we support 4 versions back (trunk, r3.0, r2.2, r2.1, and r1.0), with each branch tested nightly, we end up with a large swath of test results to wade through on a regular basis. I'm looking for a strategy to improve our analysis process.

The first (and simplest) step is to integrate our testing with our Jenkins CI server, so that we only run tests when the code actually changes (we're still stuck on the concept of "nightly builds", unfortunately, which isn't something I can change).
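To make that concrete, here's a minimal sketch of the kind of guard I have in mind for the nightly job: skip the suite if the branch's HEAD hasn't moved since the last run. The paths, file names, and `run_test_suite.py` entry point are placeholders, not anything we have today.

```python
#!/usr/bin/env python3
"""Skip the nightly test run if the branch hasn't changed since the last run.

Sketch only: REPO_DIR, STATE_FILE, and run_test_suite.py are hypothetical
stand-ins for whatever the real nightly job uses.
"""
import pathlib
import subprocess
import sys

REPO_DIR = pathlib.Path("/path/to/checkout")    # hypothetical checkout location
STATE_FILE = REPO_DIR / ".last_tested_commit"   # records the last commit we tested


def current_commit() -> str:
    return subprocess.run(
        ["git", "rev-parse", "HEAD"],
        cwd=REPO_DIR, capture_output=True, text=True, check=True,
    ).stdout.strip()


def main() -> int:
    head = current_commit()
    last = STATE_FILE.read_text().strip() if STATE_FILE.exists() else ""

    if head == last:
        print("No changes since last run; skipping test suite.")
        return 0

    # run_test_suite.py stands in for however the existing suite is launched
    result = subprocess.run([sys.executable, "run_test_suite.py"], cwd=REPO_DIR)
    if result.returncode == 0:
        STATE_FILE.write_text(head + "\n")
    return result.returncode


if __name__ == "__main__":
    sys.exit(main())
```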

The next thing, I think, is to start getting the test results into a database. Right now, each test suite run generates a list of tests with diffs from expected output interspersed in the list, and each nightly run for a branch (consisting of a test suite run for each supported platform) is essentially concatenated into a beastly HTML page.
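For the sake of discussion, this is roughly the shape of schema I'm imagining (a sketch using SQLite; the table and column names are mine, not anything that exists yet): one row per suite run keyed by branch/platform/date, and one row per test result with the diff attached when the test fails.

```python
import sqlite3

SCHEMA = """
CREATE TABLE IF NOT EXISTS suite_run (
    id        INTEGER PRIMARY KEY,
    branch    TEXT NOT NULL,          -- e.g. 'trunk', 'r3.0'
    platform  TEXT NOT NULL,          -- e.g. 'OS X', 'some Linux', 'some Windows'
    run_date  TEXT NOT NULL           -- ISO date of the nightly run
);

CREATE TABLE IF NOT EXISTS test_result (
    id        INTEGER PRIMARY KEY,
    run_id    INTEGER NOT NULL REFERENCES suite_run(id),
    test_name TEXT NOT NULL,
    passed    INTEGER NOT NULL,       -- 1 = pass, 0 = fail
    diff      TEXT                    -- diff from expected output, NULL on pass
);
"""


def record_run(db_path, branch, platform, run_date, results):
    """results: iterable of (test_name, passed, diff_or_None) tuples."""
    con = sqlite3.connect(db_path)
    with con:
        con.executescript(SCHEMA)
        cur = con.execute(
            "INSERT INTO suite_run (branch, platform, run_date) VALUES (?, ?, ?)",
            (branch, platform, run_date),
        )
        run_id = cur.lastrowid
        con.executemany(
            "INSERT INTO test_result (run_id, test_name, passed, diff)"
            " VALUES (?, ?, ?, ?)",
            [(run_id, name, int(ok), diff) for name, ok, diff in results],
        )
    con.close()
```

The point being that "which tests regressed between last night and tonight, on which platforms" becomes a simple query instead of eyeballing two giant HTML pages.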

Before I go off and invent my own thing, I've been trying to see what technology already exists in this sphere. The closest fit for our needs is Cuanto, but it doesn't seem to handle a tree-like testing structure nicely. Nearly everything else (open source and proprietary) seems to be either a) a dead project, or b) geared more toward the QA/test-management side of things, where all your tests are described by, contained in, and driven by that tool -- the sort of thing that will rapidly be ignored by our small team of 5 people as too unwieldy.

Anyone else in the same boat? What do you use?