Normalized maximum likelihood models in genomics
published: Aug. 13, 2008, recorded: July 2008, views: 3861
Description
The normalized maximum likelihood (NML) model (Rissanen, 1996; Rissanen, 2001; Shtarkov, 1987) for a class of Markov sources (Tabus and Korodi, 2008) was recently used for the compression of full genomes, achieving the best compression results reported to date for the human genome (Korodi and Tabus, 2007). We show that one of the underlying biological features the compression algorithm implicitly uncovers is the existence of approximate gene duplications. We propose a refined method, based on the same NML models, for segmenting DNA sequences to uncover gene duplications (Tabus, Yang, and Astola, 2008). Several analysis tasks on genomic sequences require a preliminary segmentation or clustering of the data, which can be performed by a number of techniques based on various similarity measures. Here we review and further pursue the application of MDL techniques to genomic sequence analysis. Sequence matching is used to uncover gene duplications with the help of a preliminary segmentation of a complex DNA locus known to have evolved through a series of duplications.
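The NML idea underlying these compressors can be illustrated with a minimal sketch. The lecture uses NML for classes of Markov sources over the DNA alphabet; the toy below instead uses the simplest model class, memoryless binary sources, where everything can be computed exactly. The NML codelength of a sequence x of length n with k ones is -log2 P_ML(x) + log2 C_n, where C_n (the parametric complexity) sums the maximized likelihood over all 2^n sequences. All function names here are illustrative, not from the referenced papers.

```python
import math

def ml_prob(k, n):
    # Maximized likelihood of one particular binary sequence of length n
    # containing k ones: plug in the ML estimate p_hat = k/n.
    if k == 0 or k == n:
        return 1.0
    p = k / n
    return (p ** k) * ((1 - p) ** (n - k))

def parametric_complexity(n):
    # C_n = sum over all sequences of their maximized likelihood.
    # Group sequences by their count of ones: C(n, j) sequences share ml_prob(j, n).
    return sum(math.comb(n, j) * ml_prob(j, n) for j in range(n + 1))

def nml_codelength(k, n):
    # NML codelength in bits: -log2 P_ML(x) + log2 C_n.
    return -math.log2(ml_prob(k, n)) + math.log2(parametric_complexity(n))
```

A highly regular sequence (all zeros, k = 0) gets a short codelength, while a balanced sequence (k = n/2) costs close to n bits plus the same log2 C_n overhead; this compressibility gap is what lets NML-based segmentation flag approximately duplicated regions, which code cheaply relative to a matched reference.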