Keynote talks
Download the abstracts booklet (PDF)
- Andreas Keller, febit GmbH, Heidelberg, Germany: Bioinformatics for challenging high-throughput genomics and transcriptomics: the “Iceman’s” last secrets
- Marie-Dominique Devignes, LORIA Nancy: Research in bioinformatics at the LORIA (Nancy): high-performance algorithms and knowledge-based approaches for structural systems biology
- Mario Albrecht, MPI Saarbrücken: Mining and Prioritizing Molecular Data and Processes for Integrative Systems Biomedicine
- Holger Stenzhorn, Uniklinikum Saarland: Towards personalized medicine: From clinical trial data sharing and integration to targeted treatments
Bioinformatics for challenging high-throughput genomics and transcriptomics: the “Iceman’s” last secrets
By Andreas Keller, febit GmbH, Heidelberg, Germany
Novel high-throughput techniques produce huge amounts of data. Examples include well-established microarray-based gene or microRNA expression profiling and the more recently developed second-generation sequencing approaches. Especially in the light of personalized medicine, these experimental techniques have the potential to detect diseases early, to improve the understanding of the molecular causes of the respective diseases, and to enhance patients’ therapies. To this end, however, sophisticated computer-aided analysis is a necessary prerequisite.
One example of a high-throughput transcriptomics project is the “Human Diseases miRNome” study. Here, blood samples of about 1,000 individuals with a wide variety of diseases, including patients with different cancer types, autoimmune diseases, cardiovascular diseases, and chronic inflammatory diseases, as well as healthy control individuals, were screened for all small non-coding miRNAs, and the resulting peripheral profiles were investigated and compared using biostatistics, systems biology, and machine-learning tools.
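As a rough illustration of the kind of machine-learning comparison mentioned above, the following sketch trains a classifier to separate disease from control samples on a synthetic miRNA expression matrix. The cohort size, miRNA count, and the choice of a random forest are assumptions made purely for illustration, not the actual study pipeline.

```python
# Illustrative sketch only: classify disease vs. control from miRNA expression
# profiles, in the spirit of the machine-learning analysis described above.
# All data are synthetic; feature counts and labels are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

n_samples, n_mirnas = 200, 800                # hypothetical cohort size and miRNA count
X = rng.normal(size=(n_samples, n_mirnas))    # expression matrix (samples x miRNAs)
y = rng.integers(0, 2, size=n_samples)        # 0 = control, 1 = disease (synthetic)

# Shift a handful of "informative" miRNAs in the disease group to mimic a signature
X[y == 1, :20] += 1.0

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
print("cross-validated AUC: %.2f +/- %.2f" % (scores.mean(), scores.std()))
```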
Even more challenging is whole-genome sequencing. Here, gigabases of genomic information can be generated within a few days, detailing the complete genomic code of individuals. As one example, we sequenced Ötzi the Iceman, a mummy that is over 5,000 years old but very well preserved. By analyzing over 3 billion reads generated by SOLiD 4 paired-end sequencing, we found more than 1.7 million single nucleotide polymorphisms that could be matched well to the phenotype and helped to uncover the last secrets of the Iceman.
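To make the variant-detection step concrete, here is a toy sketch of the idea behind calling SNPs from aligned reads: pile up base counts at every reference position and report positions where a non-reference allele dominates. The sequences, read placements, and thresholds below are invented; real pipelines for billions of SOLiD reads are far more involved.

```python
# Toy illustration of the principle behind SNP calling from aligned reads.
# All sequences, read placements, and thresholds are made up for this example.
from collections import Counter

reference = "ACGTACGTACGTACGT"
# aligned reads as (start position on reference, read sequence)
reads = [
    (0, "ACGTACGT"),
    (2, "GTACTTAC"),   # carries a T at reference position 6 (ref: G)
    (4, "ACTTACGT"),   # also carries the alternative T at position 6
    (6, "TTACGTAC"),
    (8, "ACGTACGT"),
]

# Pile up base counts per reference position
pileup = [Counter() for _ in reference]
for start, seq in reads:
    for offset, base in enumerate(seq):
        pileup[start + offset][base] += 1

# Call a SNP where a non-reference allele dominates with sufficient depth
MIN_DEPTH, MIN_FRACTION = 3, 0.5
for pos, counts in enumerate(pileup):
    depth = sum(counts.values())
    if depth < MIN_DEPTH:
        continue
    alt, alt_count = max(((b, c) for b, c in counts.items() if b != reference[pos]),
                         key=lambda x: x[1], default=(None, 0))
    if alt and alt_count / depth >= MIN_FRACTION:
        print(f"pos {pos}: ref {reference[pos]} -> alt {alt} ({alt_count}/{depth} reads)")
```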
In summary, bioinformatics approaches and tools tailored for high-throughput applications are essential to realize the full potential of these promising experimental techniques and are thus important for improving personalized medicine.
Research in bioinformatics at the LORIA (Nancy): high-performance algorithms and knowledge-based approaches for structural systems biology
By Marie-Dominique Devignes (biosketch, PDF), LORIA, Nancy
This talk will present some recent work and perspectives on bioinformatics research at the LORIA. The first part of the talk concerns one of the current challenges in computational biology, which is to predict how a pair of proteins fit together, or "dock", to form a protein complex. To tackle this problem, we have developed a novel FFT-based protein docking algorithm using spherical polar Fourier correlations of protein shape and electrostatic properties. This approach has performed well in several of the “CAPRI” blind docking experiments. In addition to correlating protein shapes, our approach can also be used to encode the electrostatic potential, ionisation potential, electron affinity, and polarisability of small molecules. This provides a fast way to perform shape-based and property-based virtual screening of chemical databases. This part of the talk will give an overview of using polar Fourier correlations for docking and virtual screening, and it will describe some recent work at the LORIA on mapping these calculations to modern graphics processors.*
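As a simplified illustration of the principle behind FFT-based docking, the sketch below correlates two voxelized shapes on a Cartesian grid so that all relative translations are scored with a single FFT correlation. The talk's method uses spherical polar Fourier correlations and also encodes electrostatic and chemical properties; the toy grids and the plain overlap score here are assumptions made purely to show the FFT trick.

```python
# Simplified, Cartesian-grid illustration of FFT-based docking scoring.
# Not the spherical polar Fourier formulation discussed in the talk; the
# "receptor" and "ligand" grids below are synthetic placeholders.
import numpy as np

N = 32
receptor = np.zeros((N, N, N))
ligand = np.zeros((N, N, N))
receptor[10:20, 10:20, 10:20] = 1.0   # toy receptor shape
ligand[2:6, 2:6, 2:6] = 1.0           # toy ligand shape

# Correlate the two grids over all translations via the convolution theorem:
# score[t] = sum_x receptor[x] * ligand[x - t]
score = np.real(np.fft.ifftn(np.fft.fftn(receptor) * np.conj(np.fft.fftn(ligand))))

best = np.unravel_index(np.argmax(score), score.shape)
print("best translation (grid units):", best, "overlap score:", score[best])
```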
The second part of the talk will present our vision of knowledge-based approaches in bioinformatics. Knowledge discovery in databases (KDD) can be considered as a process involving three main steps: (i) data preparation, i.e. integrating, cleaning, selecting and formatting the data; (ii) data mining, i.e. extracting regularities, patterns, classes, or rules (also called knowledge units) from the data; and (iii) interpretation, i.e. recognizing knowledge units that are new, valid, non-trivial, and reusable. Thus, KDD is an experimental, iterative process guided by an expert with domain knowledge. Applying KDD to biological data has become necessary due to the huge amount of data produced by today's high-throughput experimental platforms. However, KDD is still hampered by various difficulties at every step of the process. Some solutions will be presented on the basis of selected research projects currently under way at the LORIA.**
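For readers unfamiliar with the three KDD steps, the following generic sketch walks through them on a synthetic table of measurements: preparation (imputation and scaling), mining (here, a simple k-means clustering), and interpretation (inspecting the extracted units). The data and the choice of clustering are illustrative assumptions, not the LORIA projects themselves.

```python
# Generic sketch of the three KDD steps on a synthetic data table.
# The data, the mean imputation, and k-means as the mining step are
# illustrative assumptions only.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)

# (i) data preparation: assemble, clean, and normalise the raw measurements
raw = np.vstack([rng.normal(0, 1, (50, 10)), rng.normal(3, 1, (50, 10))])
raw[rng.random(raw.shape) < 0.02] = np.nan          # simulate missing values
col_means = np.nanmean(raw, axis=0)
cleaned = np.where(np.isnan(raw), col_means, raw)   # simple mean imputation
data = StandardScaler().fit_transform(cleaned)

# (ii) data mining: extract regularities -- here, two candidate sample classes
model = KMeans(n_clusters=2, n_init=10, random_state=1).fit(data)

# (iii) interpretation: a domain expert would inspect the extracted units
for label in np.unique(model.labels_):
    size = np.sum(model.labels_ == label)
    mean = model.cluster_centers_[label].mean()
    print(f"cluster {label}: {size} samples, centroid mean {mean:.2f}")
```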
In conclusion, high-performance algorithms and knowledge-based approaches bring new ways to enhance our understanding of biological systems at the structural level. This should be of benefit to many applications of systems biology in the therapeutic and biotechnological domains.
* HPASSB (High-Performance Algorithms for Structural Systems Biology), ANR Chaire d’Excellence, Dave Ritchie.
** MBI (Modeling Biomolecules and their Interactions) project, funded by the Region Lorraine, INRIA and European FEDER, coordinated by M.-D. Devignes. http://misn.loria.fr/spip.php?article2
Mining and Prioritizing Molecular Data and Processes for Integrative Systems Biomedicine
By Mario Albrecht (biosketch, online), Department of Computational Biology and Applied Algorithmics, Max Planck Institute for Informatics, Campus E1.4, 66123 Saarbrücken, Germany
Sophisticated bioinformatics methods are necessary to integrate, analyze, and visualize rapidly increasing amounts of molecular data for systems biomedicine. This talk will present novel computational approaches to gaining biological insight into cellular processes by mining large-scale datasets and networks. Possible solutions to current quality and prioritization challenges of integrative data exploration will be described.
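The abstract does not name specific methods, but one widely used flavour of network-based prioritization is a random walk with restart over an interaction network, ranking candidates by their proximity to known seed genes. The sketch below shows this idea on a made-up five-gene network; the genes, edges, and restart probability are illustrative assumptions only.

```python
# Illustrative only: random walk with restart on a tiny, made-up interaction
# network, a common strategy for prioritizing candidate genes near known seeds.
import numpy as np

genes = ["G1", "G2", "G3", "G4", "G5"]
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (1, 3)]      # hypothetical interactions

A = np.zeros((5, 5))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0
W = A / A.sum(axis=0, keepdims=True)                  # column-normalised transitions

restart = 0.3
p0 = np.array([1.0, 0, 0, 0, 0])                      # seed: G1 is a known disease gene
p = p0.copy()
for _ in range(100):                                  # iterate towards the steady state
    p = (1 - restart) * W @ p + restart * p0

for gene, score in sorted(zip(genes, p), key=lambda x: -x[1]):
    print(f"{gene}: {score:.3f}")
```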
Towards personalized medicine: From clinical trial data sharing and integration to targeted treatments
By Holger Stenzhorn (Uniklinikum Saarland)
Medicine is currently undergoing a revolutionary transformation of the very nature of healthcare from reactive to preventive: these changes, catalyzed by a novel systems approach to disease that focuses on integrated diagnosis, treatment and prevention in individuals, will shift existing approaches to medicine towards personalized, predictive treatments over the coming years.
To face this challenge, we propose the creation of an open, standards-compliant and modular framework of tools and services that includes the efficient and secure sharing and handling of large personalized data sets and enables demanding multiscale in-silico simulations. Important aspects here are privacy, non-discrimination, and access policies that maximize the protection of, and the benefit for, patients.
The developed tools and technologies will be validated within concrete, advanced clinical research settings: pilot cancer trials are selected based on clear research objectives, emphasizing the need for multilevel dataset integration. To sustain a self-supporting infrastructure, real-world use cases will be set up to highlight tangible results for clinicians.