
Bioinformatics

State-of-the-art data analysis beyond mere statistical significance

Running a statistical test and obtaining a significant result is only part of the story. Our bioinformatics analysis pipelines emphasize explainability, reproducibility, and robust statistical analysis and interpretation, and they operate on standardized data sets from multiple omics and non-omics techniques.

Flow cytometry

To complement the manual, expert-driven analysis of flow cytometry data performed in our labs, we run automated gating algorithms that identify cell populations in a hypothesis-free way. This approach simplifies the automated analysis of hundreds or thousands of samples while at the same time providing novel insights into rare cell populations that would otherwise go unnoticed.
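The core idea of hypothesis-free gating is to cluster events in marker space rather than drawing gates by hand. A minimal sketch of that idea, using a tiny k-means implementation on synthetic two-marker data (production pipelines use dedicated gating tools; the population locations and marker channels here are made up for illustration):

```python
import numpy as np

def auto_gate(events, k, n_iter=20):
    """Tiny k-means as a stand-in for automated gating: assign each event
    (cell) to one of k populations without hand-drawn gates."""
    # deterministic farthest-point initialisation of cluster centers
    centers = [events[0]]
    for _ in range(k - 1):
        d = np.min([np.linalg.norm(events - c, axis=1) for c in centers], axis=0)
        centers.append(events[d.argmax()])
    centers = np.array(centers)
    for _ in range(n_iter):
        # assign every event to its nearest center, then update centers
        d = np.linalg.norm(events[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = events[labels == j].mean(axis=0)
    return labels

# three synthetic populations in two marker channels (arbitrary units)
rng = np.random.default_rng(1)
events = np.vstack([rng.normal(loc, 0.3, size=(200, 2))
                    for loc in ([1, 1], [4, 1], [1, 4])])
labels = auto_gate(events, k=3)
```

Each event ends up with a population label; downstream steps would then characterize each cluster by its marker profile, including small clusters that manual gating might miss.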

Transcriptomics and Proteomics

We perform differential expression analysis, gene set testing, and co-expression network analysis on RNA-seq and proteomics data to extract as much insight as possible from valuable data. Our routine analysis combines standard pipelines with exploratory components that involve frequent feedback cycles between the data science team and the researcher who knows their data best.
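At its simplest, differential expression testing compares each gene between two groups and corrects the resulting p-values for multiple testing. A minimal sketch with a per-gene Welch t-test and Benjamini-Hochberg FDR correction on synthetic data (real pipelines use dedicated tools such as limma or DESeq2; the data here is simulated):

```python
import numpy as np
from scipy import stats

def differential_expression(group_a, group_b):
    """Per-gene Welch t-test with Benjamini-Hochberg FDR correction.
    Rows are genes, columns are samples."""
    t, p = stats.ttest_ind(group_a, group_b, axis=1, equal_var=False)
    m = len(p)
    order = np.argsort(p)
    ranked = p[order] * m / (np.arange(m) + 1)
    # enforce monotonicity of the adjusted p-values (step-up procedure)
    adj = np.minimum.accumulate(ranked[::-1])[::-1]
    padj = np.empty_like(adj)
    padj[order] = np.clip(adj, 0, 1)
    return p, padj

rng = np.random.default_rng(0)
n_genes, n_samples = 500, 6
a = rng.normal(0, 1, (n_genes, n_samples))
b = rng.normal(0, 1, (n_genes, n_samples))
b[0] += 8.0  # spike in one truly differential gene
p, padj = differential_expression(a, b)
```

The spiked gene should receive the smallest p-value and survive FDR correction, while the null genes do not.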

Metabolomics

Our pipelines are designed to operate on both targeted and untargeted metabolomics data. This includes mapping of raw machine readouts to metabolome databases, feature selection, and downstream statistical analysis to obtain interpretable, actionable results.
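The database-mapping step typically matches measured m/z peaks against reference masses within a ppm tolerance. A minimal sketch with a hand-written three-entry lexicon (the reference masses and tolerance below are illustrative, not a curated database):

```python
# Illustrative reference table: metabolite name -> monoisotopic mass (Da)
REFERENCE = {
    "glucose": 180.0634,
    "lactate": 90.0317,
    "citrate": 192.0270,
}

def annotate(mz_peaks, tolerance_ppm=10.0):
    """Match each measured m/z value to a reference metabolite
    whose mass lies within the given ppm tolerance."""
    hits = {}
    for mz in mz_peaks:
        for name, mass in REFERENCE.items():
            if abs(mz - mass) / mass * 1e6 <= tolerance_ppm:
                hits[mz] = name
    return hits

peaks = [180.0631, 150.0000, 90.0320]
annotations = annotate(peaks)
```

Two of the three peaks fall within tolerance of a reference mass; the unmatched peak would be carried forward as an unknown feature in an untargeted analysis.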

Digital Pathology

Our labs produce histologic images using H&E staining as well as chromogenic and fluorescence multiplex immunohistochemistry. Our digital pathology pipelines can process all of these modalities while respecting each method's unique strengths and weaknesses. For all images, we perform tissue segmentation, cell segmentation, and phenotyping of identified cells. This enables us to perform basic cell quantification for highly interpretable features, as well as the extraction of spatial features, such as nearest-neighbor analysis or the investigation of touching cells. If needed, custom object detection and feature extraction pipelines can be set up according to researcher specifications.
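Once cells are segmented and phenotyped, spatial features like nearest-neighbor distances reduce to geometric queries on cell centroids. A minimal sketch using a k-d tree (the coordinates and phenotype labels below are hypothetical stand-ins for real segmentation output):

```python
import numpy as np
from scipy.spatial import cKDTree

def nearest_neighbor_distances(cells_a, cells_b):
    """For every cell of phenotype A, the distance to the closest
    phenotype-B cell -- one of the spatial features described above."""
    tree = cKDTree(cells_b)
    dist, _ = tree.query(cells_a, k=1)
    return dist

# hypothetical cell centroids (in micrometres) from segmentation output
tumor = np.array([[0.0, 0.0], [10.0, 0.0]])
immune = np.array([[3.0, 4.0], [50.0, 50.0]])
d = nearest_neighbor_distances(tumor, immune)
```

Aggregates of these distances (median, fraction below a contact threshold) become interpretable per-sample features; a small distance threshold on the same query would flag touching cells.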

Electronic Health Records

To tap into the vast potential of manually written, unstructured clinical health records, we employ natural language processing (NLP) pipelines to identify relevant clinical concepts in the text and map them to well-defined clinical ontologies (ICD-10, SNOMED CT). This enables downstream search and automated processing, including the correlation of features with lab parameters in an automated and interpretable way.
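The essence of concept mapping is recognizing surface forms in free text and linking them to ontology codes. A deliberately simple dictionary-based sketch (real pipelines add synonym expansion, negation detection, and full SNOMED CT linking; the three-entry lexicon here is illustrative):

```python
import re

# Illustrative lexicon: surface form -> ICD-10 code
LEXICON = {
    "type 2 diabetes": "E11",
    "hypertension": "I10",
    "myocardial infarction": "I21",
}

def map_concepts(note):
    """Find lexicon terms in a free-text note (case-insensitive,
    whole-word) and return the matched (term, code) pairs."""
    found = []
    for term, code in LEXICON.items():
        if re.search(r"\b" + re.escape(term) + r"\b", note, re.IGNORECASE):
            found.append((term, code))
    return found

note = "Patient with known Hypertension, now presenting with myocardial infarction."
concepts = map_concepts(note)
```

Once notes are reduced to coded concepts like these, they can be joined against structured data such as lab parameters for downstream statistics.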

Data integration and joint analysis

When predictive models are desired, we combine multiple data sets to get a holistic view of the data. To avoid black-box machine learning models whose good predictions cannot be interpreted by humans, we apply both feature extraction and feature selection to each data set. Small, interpretable models combined with statistical analysis of the most relevant features let us combine the power of machine learning with the necessity of understanding every detail of our data.
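One simple, interpretable way to select features across combined data sets is to rank every feature by its absolute correlation with the outcome and keep only the top few. A sketch on two synthetic omics blocks (the block sizes, outcome model, and filter choice are illustrative assumptions, not our production method):

```python
import numpy as np

def select_features(blocks, y, k=3):
    """Stack several data-set blocks into one feature matrix, rank each
    feature by absolute Pearson correlation with the outcome y, and
    keep the top k -- a deliberately simple, interpretable filter."""
    X = np.hstack(blocks)                      # joint feature matrix
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    corr = (Xc * yc[:, None]).sum(axis=0) / (
        np.sqrt((Xc ** 2).sum(axis=0)) * np.sqrt((yc ** 2).sum()))
    top = np.argsort(-np.abs(corr))[:k]
    return X[:, top], top

rng = np.random.default_rng(0)
n = 100
omics_a = rng.normal(size=(n, 20))   # synthetic "transcriptomics" block
omics_b = rng.normal(size=(n, 10))   # synthetic "metabolomics" block
# outcome truly depends on feature 3 of block A and feature 5 of block B
y = 2 * omics_a[:, 3] - omics_b[:, 5] + rng.normal(scale=0.1, size=n)
X_small, top = select_features([omics_a, omics_b], y, k=2)
```

The filter recovers the two truly informative columns, and the resulting small feature set can feed a simple model whose behavior remains fully inspectable.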

All our bioinformatics pipelines are designed to work seamlessly with our in-house generated lab data. However, we also offer these services to third-party collaborators and project partners in a “bring-your-own-data” setting.

We are eager to hear what you want to do with your data!
