Although difficult to work with for the experimentalist, membrane proteins have a number of nice features when viewed from a bioinformatics perspective. This has made it possible to develop quite reliable methods for identifying membrane proteins in genome sequences and for predicting their topology. With such tools in hand, it is possible to move efficiently back and forth between theory and experiment when addressing questions about membrane protein assembly and topology. The field is now focussing on 3D structure prediction, where much remains to be done.
von Heijne, G. (1999) A day in the life of Dr K. or How I learned to stop worrying and love lysozyme. A tragedy in six acts. J. Mol. Biol. 293, 367-379.
von Heijne, G. (2000) Recent advances in the understanding of membrane protein assembly and structure. Quart. Rev. Biophys. 32, 285-307.
Two different strategies for determining the human genome are currently being pursued: one is the ``clone-by-clone'' approach, employed by the publicly funded project, and the other is the ``whole genome shotgun assembler'' approach, favored by researchers at Celera Genomics. An interim strategy employed at Celera, called compartmentalized shotgun assembly, makes use of preliminary data produced by both approaches. In this paper we describe the design, implementation and operation of the ``compartmentalized shotgun assembler''.
Physical map assembly is the inference of genome structure from experimental data. Map assembly depends on the integration of diverse data, including sequence tagged site (STS) marker content, clone sizing, and restriction digest fingerprints (RDF). As experimentally measured data, these are uncertain and error prone. Physical map assembly from error-free data is straightforward and can be accomplished in time linear in the number of clones, but the assembly of an optimal map from error-prone data is an NP-hard problem.
We present an alternative approach to physical map assembly that is based on a probabilistic view of the data and seeks to identify those features of the map that can be reliably inferred from the available data. With this approach, we achieve a number of goals. These include the use of multiple data sources, appropriate representation of uncertainties in the underlying data, the use of clone length information in fingerprint map assembly, and the use of higher-order information in map assembly. By higher-order information, we mean relationships that are not expressible in terms of neighbouring clone relationships. These include triplet and multiple clone overlaps, the uniqueness of STS position, and fingerprint marker locations. In a probabilistic view of physical mapping, we assert that all of the many possible map assemblies are equally likely a priori. Given experimental data, we can only state which assemblies are more likely than others given the experimental observations. Parameters of interest are then derived as likelihood-weighted averages over map assemblies. Ideally these averages should be sums or integrals over all possible map assemblies, but computationally this is not feasible for real-world map assembly problems. Instead, sampling is used to asymptotically approach the desired parameters. Software implementing our probabilistic approach to mapping has been written. Assembly of mixed RDF and STS maps containing up to 60 clones can be accomplished on a desktop PC with run times under an hour.
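The sampling idea described above can be illustrated in a few lines of Python. This is a minimal sketch with entirely hypothetical data (five clones, a toy overlap-bonus likelihood, and Metropolis sampling over orderings), not the abstract's actual likelihood model:

```python
import math
import random

# Hypothetical toy data: 5 clones whose "measured" overlaps (stand-ins
# for STS/fingerprint evidence) reward orderings that place
# overlapping clones next to each other.
CLONES = list(range(5))
OVERLAPS = {(0, 1), (1, 2), (2, 3), (3, 4)}

def log_likelihood(order):
    """Log-likelihood of an ordering: each adjacent pair supported by
    a measured overlap earns a bonus; unsupported pairs are penalized."""
    ll = 0.0
    for a, b in zip(order, order[1:]):
        ll += 2.0 if (min(a, b), max(a, b)) in OVERLAPS else -2.0
    return ll

def sample_adjacencies(n_steps=20000, seed=0):
    """Metropolis sampling over orderings; returns the likelihood-
    weighted frequency with which each clone pair appears adjacent."""
    rng = random.Random(seed)
    order = CLONES[:]
    rng.shuffle(order)
    ll = log_likelihood(order)
    counts = {}
    for _ in range(n_steps):
        i, j = rng.sample(range(len(order)), 2)      # propose a swap
        order[i], order[j] = order[j], order[i]
        new_ll = log_likelihood(order)
        if new_ll >= ll or rng.random() < math.exp(new_ll - ll):
            ll = new_ll                              # accept the move
        else:
            order[i], order[j] = order[j], order[i]  # undo the swap
        for a, b in zip(order, order[1:]):
            pair = (min(a, b), max(a, b))
            counts[pair] = counts.get(pair, 0) + 1
    return {p: c / n_steps for p, c in counts.items()}

probs = sample_adjacencies()
# High values mark adjacencies that can be reliably inferred.
```

Each adjacency frequency is exactly the kind of likelihood-weighted average the abstract describes: a map feature is reported as reliable only if it holds across most sampled assemblies.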
For the last twenty years, fragment assembly has been dominated by the ``overlap-layout-consensus'' algorithms that are used in all currently available assembly tools. However, the limits of these algorithms are being tested in the era of genomic sequencing, and it is not clear whether they are the best choice for large-scale assemblies. Although the ``overlap-layout-consensus'' approach proved to be useful in assembling clones, it faces difficulties in genomic assemblies: the existing algorithms make assembly errors even in bacterial genomes. We abandoned the ``overlap-layout-consensus'' approach in favour of a new Eulerian Superpath approach that outperforms the existing algorithms for genomic fragment assembly. In this paper we describe our new EULER-DB algorithm that, similarly to the Celera assembler, takes advantage of clone-end sequencing by using double-barreled data. However, in contrast to the Celera assembler, EULER-DB does not mask repeats but instead uses them as a powerful tool for contig ordering. We also describe a new approach to the Copy Number Problem: ``How many times is a given repeat present in the genome?'' For long, nearly perfect repeats this question is notoriously difficult, and some copies of such repeats may be ``lost'' in genomic assemblies. We describe our EULER-CN algorithm for the Copy Number Problem, which proved to be successful in difficult sequencing projects.
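The core of the Eulerian approach is to recast assembly as an Eulerian path problem on a de Bruijn graph rather than a Hamiltonian path problem on an overlap graph. The sketch below (toy reads, no error correction or superpath transformations, so far simpler than EULER itself) shows the basic construction with Hierholzer's algorithm:

```python
from collections import defaultdict

def de_bruijn(reads, k):
    """Build a de Bruijn graph: nodes are (k-1)-mers, and each
    distinct k-mer contributes one edge (in a genuinely repetitive
    genome, a repeated k-mer would become a multi-edge)."""
    kmers = {read[i:i + k] for read in reads
             for i in range(len(read) - k + 1)}
    graph = defaultdict(list)
    for kmer in sorted(kmers):
        graph[kmer[:-1]].append(kmer[1:])
    return graph

def eulerian_path(graph):
    """Hierholzer's algorithm: traverse every edge exactly once."""
    graph = {u: list(vs) for u, vs in graph.items()}
    out_deg = {u: len(vs) for u, vs in graph.items()}
    in_deg = defaultdict(int)
    for vs in graph.values():
        for v in vs:
            in_deg[v] += 1
    # Start at the node with one more outgoing than incoming edge.
    start = next((u for u in graph if out_deg[u] - in_deg[u] == 1),
                 next(iter(graph)))
    stack, path = [start], []
    while stack:
        u = stack[-1]
        if graph.get(u):
            stack.append(graph[u].pop())
        else:
            path.append(stack.pop())
    return path[::-1]

# Toy example: perfectly covering, error-free reads.
reads = ["ATGGCGT", "GGCGTGC", "GTGCAAT"]
path = eulerian_path(de_bruijn(reads, 4))
sequence = path[0] + "".join(node[-1] for node in path[1:])
print(sequence)  # → ATGGCGTGCAAT
```

Note how a repeat would simply appear as a node traversed more than once by the Eulerian path, which is why this formulation can exploit repeats for ordering instead of masking them.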
Proteomics, or the direct analysis of the expressed protein components of a cell, is critical to our understanding of cellular biological processes in normal and diseased tissue. A key requirement for its success is the ability to identify proteins in complex mixtures. Recent technological advances in tandem mass spectrometry have made it the method of choice for high-throughput identification of proteins. Unfortunately, the software for unambiguously identifying peptide sequences has not kept pace with the recent hardware improvements in mass spectrometry instruments. Scoring functions, which evaluate the quality of a match between an experimental spectrum and a database peptide, are critical for reliable high-throughput protein identification. Current scoring functions rely heavily on ad hoc parameterization and manual curation by experienced mass spectrometrists. In this work, we propose a two-stage stochastic model of the observed MS/MS spectrum, given a peptide. Our model explicitly incorporates fragment ion probabilities, noisy spectra, and instrument measurement error. We describe how to compute this probability-based score efficiently using a dynamic programming technique. A prototype implementation demonstrates the effectiveness of the model.
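To make the scoring idea concrete, here is a deliberately simplified probabilistic score, not the abstract's actual model: b- and y-ion masses are predicted from approximate residue masses (a hypothetical five-letter alphabet), each predicted fragment is assumed detected with probability p_ion, and unexplained peaks are charged a noise probability p_noise:

```python
import math

# Approximate monoisotopic residue masses (Da) for a toy alphabet.
MASS = {"G": 57.02, "A": 71.04, "S": 87.03, "P": 97.05, "V": 99.07}
PROTON, WATER = 1.007, 18.011

def fragment_masses(peptide):
    """Singly charged b- and y-ion masses for a peptide."""
    total = sum(MASS[r] for r in peptide)
    prefix, b, y = 0.0, [], []
    for r in peptide[:-1]:
        prefix += MASS[r]
        b.append(prefix + PROTON)                  # b-ion: prefix + H+
        y.append(total - prefix + WATER + PROTON)  # y-ion: suffix + H2O + H+
    return b + y

def log_score(peptide, peaks, tol=0.05, p_ion=0.7, p_noise=0.05):
    """Log-likelihood of the observed peaks: each predicted fragment
    is detected with probability p_ion (within tolerance tol), misses
    cost (1 - p_ion), and unexplained peaks are penalized as noise."""
    score, matched = 0.0, set()
    for m in fragment_masses(peptide):
        hits = [pk for pk in peaks if abs(m - pk) <= tol]
        score += math.log(p_ion if hits else 1 - p_ion)
        matched.update(hits)
    score += math.log(p_noise) * (len(peaks) - len(matched))
    return score

# On its own (noise-free) spectrum, the true peptide should win.
spectrum = fragment_masses("GASP")
print(log_score("GASP", spectrum) > log_score("GAPS", spectrum))  # → True
```

A real implementation would score candidates from a whole database and, as the abstract notes, use dynamic programming to evaluate the model efficiently rather than this brute-force loop.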
We propose two efficient heuristics for minimizing the number of oligonucleotide probes needed to analyze populations of ribosomal RNA gene (rDNA) clones by hybridization experiments on DNA microarrays. Such analyses have applications in the study of microbial communities. Unlike in the classical SBH (sequencing by hybridization) procedure, where multiple probes are placed on a DNA chip, in our application we perform a series of experiments, each consisting of applying a single probe to a DNA microarray containing a large sample of rDNA sequences from the studied population. The overall cost of the analysis is thus roughly proportional to the number of experiments, underscoring the need to minimize the number of probes. Our algorithms are based on two well-known optimization techniques, simulated annealing and Lagrangian relaxation, and our preliminary tests demonstrate that both algorithms are able to find satisfactory probe sets for real rDNA data.
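The simulated annealing variant can be sketched as follows. Everything here is a toy stand-in (four hypothetical clone sequences, six candidate probes, substring matching as "hybridization"); the goal is only to show the shape of the search: minimize probe count subject to every pair of clones having distinct hybridization fingerprints:

```python
import itertools
import math
import random

# Hypothetical rDNA clone sequences and candidate probes.
SEQS = ["ACGTACGGA", "ACGTTTGGA", "TTGTACGCA", "ACGGACGGA"]
PROBES = ["ACGT", "TTGG", "ACGG", "GTAC", "TTGT", "CGCA"]

def fingerprint(seq, probes):
    """Hybridization pattern: which probes occur in the sequence."""
    return tuple(p in seq for p in probes)

def energy(subset):
    """Cost = probe count + heavy penalty for each pair of clones
    the chosen probes fail to distinguish."""
    chosen = [PROBES[i] for i in sorted(subset)]
    fps = [fingerprint(s, chosen) for s in SEQS]
    clashes = sum(1 for a, b in itertools.combinations(fps, 2) if a == b)
    return len(subset) + 10 * clashes

def anneal(steps=5000, t0=2.0, seed=1):
    rng = random.Random(seed)
    state = set(range(len(PROBES)))          # start with all probes
    best, best_e = set(state), energy(state)
    for step in range(steps):
        t = t0 * (1 - step / steps) + 1e-3   # linear cooling schedule
        new = state ^ {rng.randrange(len(PROBES))}  # flip one probe
        de = energy(new) - energy(state)
        if de < 0 or rng.random() < math.exp(-de / t):
            state = new
            if energy(state) < best_e:
                best, best_e = set(state), energy(state)
    return best

probes = anneal()
# The returned set distinguishes every pair of clones, usually with
# far fewer probes than the full candidate list.
```

Because the clash penalty (10) exceeds any achievable probe count here, the best solution found is always a fully distinguishing set.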
We present the first practical algorithm for the optimal linear leaf ordering of trees that are generated by hierarchical clustering. Hierarchical clustering has been extensively used to analyze gene expression data, and we show how optimal leaf ordering can reveal biological structure that is not observed with an existing heuristic ordering method. For a tree with n leaves, there are 2^(n-1) linear orderings consistent with the structure of the tree. Our optimal leaf ordering algorithm runs in time O(n^4), and we present further improvements that make the running time of our algorithm practical.
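The dynamic program behind optimal leaf ordering can be sketched compactly: for each subtree, record the best achievable score for every (leftmost leaf, rightmost leaf) pair, then combine child tables, allowing either child to be flipped. The tree and similarity values below are hypothetical toy data:

```python
# Toy similarity matrix between 4 leaves (stand-in for expression
# correlations); we maximize summed similarity of adjacent leaves.
SIM = {
    (0, 1): 0.9, (0, 2): 0.1, (0, 3): 0.2,
    (1, 2): 0.3, (1, 3): 0.1, (2, 3): 0.8,
}
def sim(a, b):
    return SIM[(min(a, b), max(a, b))]

# A hierarchical-clustering tree: tuples are internal nodes.
TREE = ((0, 2), (1, 3))

def best(t):
    """Return {(first, last): score} over all leaf orderings
    consistent with the tree (subtrees may be flipped as rigid
    units, but leaves never cross subtree boundaries)."""
    if isinstance(t, int):
        return {(t, t): 0.0}
    L, R = best(t[0]), best(t[1])
    table = {}
    for (lf, ll), ls in L.items():           # left child on the left
        for (rf, rl), rs in R.items():
            s = ls + sim(ll, rf) + rs        # join at the seam
            table[(lf, rl)] = max(table.get((lf, rl), float("-inf")), s)
    for (rf, rl), rs in R.items():           # or children swapped
        for (lf, ll), ls in L.items():
            s = rs + sim(rl, lf) + ls
            table[(rf, ll)] = max(table.get((rf, ll), float("-inf")), s)
    return table

score = max(best(TREE).values())
print(round(score, 2))  # → 1.1  (order 2, 0, 1, 3)
```

Each internal node does O(n^2) work per (first, last) pair, giving the O(n^4) bound mentioned above; the abstract's further improvements reduce this in practice.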
Gene expression measurements are a powerful tool in molecular biology, but when they are applied to heterogeneous samples containing more than one cellular type, the results are difficult to interpret. We present here a new approach to this problem that makes it possible to deduce the gene expression profiles of the various cellular types contained in a set of samples directly from measurements taken on the whole samples.
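The underlying linear-algebra idea can be sketched under strong simplifying assumptions (two cell types, known mixing proportions, noise-free toy numbers; the abstract's actual method may differ): each whole-sample measurement is a proportion-weighted mixture of per-type profiles, so the profiles can be recovered gene by gene with least squares:

```python
# Hypothetical mixing proportions (type A, type B) for 3 samples,
# and the "true" per-gene expression of each type.
PROPS = [(0.9, 0.1), (0.5, 0.5), (0.2, 0.8)]
TRUE_A, TRUE_B = [10.0, 2.0], [1.0, 8.0]

# Simulated whole-sample measurements: X[s][g] = a*A[g] + b*B[g].
X = [[a * TRUE_A[g] + b * TRUE_B[g] for g in range(2)]
     for a, b in PROPS]

def deconvolve(props, x):
    """Per gene, solve min ||C s - y||^2 via the 2x2 normal
    equations C^T C s = C^T y, where C holds mixing proportions."""
    saa = sum(a * a for a, _ in props)
    sab = sum(a * b for a, b in props)
    sbb = sum(b * b for _, b in props)
    det = saa * sbb - sab * sab
    profiles = []
    for g in range(len(x[0])):
        ya = sum(a * x[s][g] for s, (a, _) in enumerate(props))
        yb = sum(b * x[s][g] for s, (_, b) in enumerate(props))
        profiles.append(((sbb * ya - sab * yb) / det,   # type A
                         (saa * yb - sab * ya) / det))  # type B
    return profiles

est = deconvolve(PROPS, X)
# On noise-free data this recovers TRUE_A and TRUE_B exactly.
```

With noisy data and more cell types, the same normal-equations structure applies with a larger matrix, and the proportions themselves may need to be estimated jointly.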