Paper-analyzer aims to facilitate knowledge extraction from scientific (biomedical) papers via Deep Learning (DL) models for Natural Language Processing (NLP). The core of Paper-analyzer is a Language Model (LM) built with a Transformer-like architecture and fine-tuned on scientific papers. The LM's objective is to predict the next word given the context. On top of the LM, we trained models for several downstream tasks, namely Named Entity Recognition (NER), Relation Extraction (RE), and Question Answering (QA), as successive steps toward the main goal: automatic knowledge extraction.
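To make the language-modeling objective concrete, here is a toy sketch of "predict the next word given the context". This is purely illustrative: the project's actual models are Transformer-based, whereas this counting-based bigram model and its tiny corpus are hypothetical stand-ins.

```python
from collections import Counter, defaultdict

def train_bigram_lm(corpus):
    # Count, for each word, which words follow it in the training text.
    counts = defaultdict(Counter)
    for sentence in corpus:
        tokens = sentence.lower().split()
        for prev, nxt in zip(tokens, tokens[1:]):
            counts[prev][nxt] += 1
    return counts

def predict_next(model, context_word):
    # Return the most frequent continuation observed after context_word.
    return model[context_word].most_common(1)[0][0]

# Hypothetical miniature "corpus" for illustration only.
corpus = [
    "the protein binds the receptor",
    "the protein inhibits the enzyme",
    "the protein binds the ligand",
]
model = train_bigram_lm(corpus)
prediction = predict_next(model, "protein")  # "binds" is seen most often
```

A real LM replaces the count table with a neural network and conditions on the whole preceding context rather than a single word, but the training signal is the same next-word objective.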
We implemented NER and RE as classifiers that assign classes to words or word tuples, and QA in its extractive form, where the answer to a question is a text span of the source document.
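The extractive QA formulation can be sketched as follows: a model head scores each token as a possible answer start and end, and decoding picks the span with the best combined score. This is a generic illustration of span decoding, not the project's code; the token sequence and scores below are made up.

```python
def best_span(start_scores, end_scores, max_len=15):
    # Pick (i, j) maximizing start_scores[i] + end_scores[j]
    # subject to i <= j and a maximum span length.
    best, best_score = (0, 0), float("-inf")
    for i, s in enumerate(start_scores):
        for j in range(i, min(i + max_len, len(end_scores))):
            score = s + end_scores[j]
            if score > best_score:
                best_score, best = score, (i, j)
    return best

# Hypothetical per-token scores for a short passage.
tokens = ["aspirin", "inhibits", "cox", "enzymes", "irreversibly"]
start = [0.1, 0.2, 3.0, 0.5, 0.3]
end = [0.0, 0.1, 0.4, 2.5, 0.6]
i, j = best_span(start, end)
answer = " ".join(tokens[i : j + 1])  # the extracted answer span
```

NER follows the same token-level pattern, except that each token receives an entity class label instead of start/end scores.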
We also experimented with generative models for paper summarization and sentence paraphrasing.
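Generative tasks like summarization and paraphrasing differ from the classifiers above in that the model produces text token by token. A minimal greedy decoding loop, with a hypothetical lookup table standing in for a trained decoder, looks like this:

```python
def greedy_decode(step_fn, start_token, max_steps=20, end_token="<eos>"):
    # Repeatedly ask the model (step_fn) for the highest-scoring next
    # token, stopping at the end marker or a length limit.
    output = [start_token]
    for _ in range(max_steps):
        nxt = step_fn(output)
        if nxt == end_token:
            break
        output.append(nxt)
    return output[1:]  # drop the start marker

# Hypothetical deterministic "decoder" for illustration only.
table = {"<bos>": "the", "the": "drug", "drug": "works", "works": "<eos>"}
result = greedy_decode(lambda out: table[out[-1]], "<bos>")
```

In practice, decoding conditions on the full input document (for summarization) or input sentence (for paraphrasing) and often uses beam search or sampling instead of the greedy choice shown here.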
The group is based at JetBrains.