Genome-sequencing machines now read the molecules that make up DNA at a dizzying rate, shrinking the time required to spell out a bacterial genome from weeks to hours. The cost comes in analyzing the data.
"You have lots of data being produced by sequencing machines, and the only solution for assembly is to maintain a very expensive computing cluster dedicated to this process", stated Mihai Pop, a computer science professor at the University of Maryland's Center for Bioinformatics and Computational Biology.
To quickly read through DNA, sequencers chop a strand into sections just tens or hundreds of base pairs long. "It produces very cheap but very poorly sequenced data", stated David Schwartz, a University of Wisconsin-Madison genetics professor and director of the Laboratory for Molecular and Computational Genomics. "You've got a pile of short sequences, but you don't know the order they should fit back together."
Pop has been trying to fit the pieces back together for a decade. "After sequencing, it's like having this huge jigsaw puzzle, but you don't actually have that picture on the box that shows you what you're trying to reconstruct", he stated. "And there are a lot of sky regions. And a lot of the pieces will fit in more than one place."
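The puzzle Pop describes is, at its core, an overlap problem: short reads are joined where the end of one matches the start of another. A minimal, purely illustrative Python sketch of that idea - greedy merging of toy reads, not Contrail's actual algorithm - looks like this:

```python
# Illustrative sketch (not Contrail itself): greedily merge short reads
# by repeatedly joining the pair with the longest suffix-prefix overlap.

def overlap(a, b, min_len=3):
    """Length of the longest suffix of a matching a prefix of b."""
    for n in range(min(len(a), len(b)), min_len - 1, -1):
        if a.endswith(b[:n]):
            return n
    return 0

def greedy_assemble(reads, min_len=3):
    reads = list(reads)
    while len(reads) > 1:
        best = (0, None, None)
        for i, a in enumerate(reads):
            for j, b in enumerate(reads):
                if i != j:
                    n = overlap(a, b, min_len)
                    if n > best[0]:
                        best = (n, i, j)
        n, i, j = best
        if n == 0:          # no remaining overlaps; stop merging
            break
        merged = reads[i] + reads[j][n:]
        reads = [r for k, r in enumerate(reads) if k not in (i, j)] + [merged]
    return reads

# Overlapping fragments of one toy "genome"
print(greedy_assemble(["ATTAGACCTG", "CCTGCCGGAA", "AGACCTGCCG", "GCCGGAATAC"]))
# → ['ATTAGACCTGCCGGAATAC']
```

Real assemblers face exactly the ambiguity Pop mentions: repeated regions mean a read can overlap several candidates equally well, which is where the brute-force approach above breaks down.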
Until Pop and fellow Maryland professor Michael Schatz brought their "Contrail" assembly software to the University of Wisconsin-Madison's Center for High Throughput Computing, the popular working genome assemblers - such as one called Velvet at the European Bioinformatics Institute - were reconstituting bacterial genomes, which measure in the millions of base pairs. The human genome contains more than 3 billion pairs.
David Schwartz's work in optical mapping of genomes helped identify the puzzle shapes that should fit together. "He can give us a map of where certain landmarks are in the genome, and that can help us with ordering these short sequences", Pop stated. But the computing power - measured in complexity and cost - needed to handle the mass of data far outstrips that of the sequencing machines cranking out base pairs by the billion. "There really is no standard approach to doing this for a human genome", Pop stated. "It's virtually impossible to do that on a single machine. We needed access to a large cluster."
The University of Wisconsin-Madison had that cluster and had its own tool - called Condor, a program run by Professor Miron Livny at the Center for High Throughput Computing - to manage the work of Maryland's software across a network of computers. Condor breaks up long lists of heavy computing tasks and distributes them across networked workstations whose everyday use leaves their processors idle much of the time.
"In the assembly, you have a very complex job workflow. You must take the data and do this analysis and that analysis, and when that analysis is done you take the results from the first two and do a third", stated Todd Tannenbaum, project manager for Condor. "You have this big chain of events that need to happen, and that's what Condor does very well."
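The chained, dependency-ordered execution Tannenbaum describes can be sketched in miniature. The step names, functions, and numbers below are hypothetical stand-ins, not Condor's actual interface:

```python
# Toy sketch of a dependency-ordered workflow: two independent analyses
# run first, then a third step consumes both of their results.

def run_workflow(steps):
    """steps: name -> (function, [dependency names]). Runs each step once
    all of its dependencies have finished, passing their results along."""
    done = {}
    while len(done) < len(steps):
        for name, (func, deps) in steps.items():
            if name not in done and all(d in done for d in deps):
                done[name] = func(*(done[d] for d in deps))
    return done

chunk_sizes = [140, 275, 98]   # stand-in inputs for per-chunk analyses
results = run_workflow({
    "analysisA": (lambda: sum(chunk_sizes), []),
    "analysisB": (lambda: max(chunk_sizes), []),
    "combine":   (lambda a, b: a + b, ["analysisA", "analysisB"]),
})
print(results["combine"])  # third step uses the first two results → 788
```

In practice, Condor expresses such chains declaratively (via its DAGMan component) and schedules each step on whatever networked machine has capacity, rather than running them in a single process as above.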
To manage both the complex workflow chain and the problem of moving so much data, the Condor group added Hadoop - another distributed computing tool, adept at spreading data storage and retrieval across networks - to the mix, helping haul around the billions of letters a sequencing machine gleans from human DNA.
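The map-and-reduce pattern Hadoop brings to this kind of data can be illustrated with a toy example - here, counting short subsequences (k-mers) across reads, a much-simplified stand-in for the data handling an assembler's pipeline performs:

```python
# Hedged sketch of the map/shuffle/reduce pattern Hadoop distributes
# across a cluster; run here in one process for illustration only.
from collections import defaultdict

def map_kmers(read, k=3):
    """Map step: emit (k-mer, 1) for every k-length window of a read."""
    for i in range(len(read) - k + 1):
        yield read[i:i + k], 1

def reduce_counts(pairs):
    """Reduce step: sum the counts emitted for each distinct k-mer."""
    counts = defaultdict(int)
    for kmer, n in pairs:
        counts[kmer] += n
    return dict(counts)

reads = ["ATTAGAC", "TAGACCT"]
pairs = [p for read in reads for p in map_kmers(read)]
print(reduce_counts(pairs)["TAG"])  # "TAG" appears once in each read → 2
```

On a real cluster, the map and reduce steps run on different machines, with Hadoop shuffling the intermediate key-value pairs between them - which is what makes it suited to billions of DNA letters rather than two toy reads.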
"By running them together, we're able to efficiently run this biological application - efficient not just in terms of computer time, but efficient in terms of dollars", stated Greg Thain, a University of Wisconsin-Madison systems programmer who worked closely on the effort with Condor Project graduate student Faisal Khan. "Because Condor could efficiently schedule the work, Maryland didn't have to buy a multimillion-dollar disk cluster."
And on the first successful run, they needed just four days and about 150 computer processors. "It's two plus two equals five, if you will", Tannenbaum stated. "Condor integrated with Hadoop is a software system powerful enough to tackle problems as complex as human genome assembly without the need for expensive supercomputers or dedicated special-purpose hardware, lowering the barrier of entry for labs across the country to make contributions in this important area of research."
While there is more work to do before the process can be made available to the genomic computing public, it should be flexible enough to assemble genomes on all sorts of networks, including rented computer time available from sources such as amazon.com.
"The combination of Mihai Pop and colleagues' algorithms and Miron Livny and colleagues' computing I think really makes this all work", Schwartz stated. "And what I mean by making it work is making effective use out of the data to create the full picture of a genome and allow us to discern genomic differences between individuals."