Gene Regulation Network (GRN) task in BioNLP


In order to work on the patient-guideline event extraction part of the MUSE project [1], I was advised to build on the methods used by the KU Leuven team in the BioNLP workshop. They attended the workshop to develop and evaluate algorithms on the benchmark data set, and planned to apply the developed algorithms to the patient-guideline event extraction problem.

1 GRN task definition

The GRN task comprises two steps: first, extract formulas from the text; second, build the GRN from the extracted formulas. The algorithm for the second step was developed by the task organizers, so participants only have to work on the algorithm for the first step.

1.1 GRN annotations have three levels

  1. Text-bound entities are given in both the training and test sets. Unlike the GENIA task, the GRN task also provides trigger words and distinguishes the types gene, protein, gene family, etc. Gene, protein, gene family, etc. are called genic entities.
  2. Biomedical events and relations resemble the simple events of the GENIA task. However, the defined relations distinguish the direction of the roles: for example, Transcription_from and Transcription_by are defined as two separate relations, and Promoter_of and Master_of represent the knowledge more precisely. The argument types are strictly constrained by the entity types (gene, protein, etc.) defined at the first level. This level of annotation is called events and relations, and it does NOT contain recursive events.
  3. Interactions contain six types of relations: Binding, Transcription, Activation, Requirement, Inhibition and Regulation. The first two types represent mechanisms, the next three represent effects, and the last collects all other relations. Interactions can be recursive.
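The three annotation levels can be sketched as plain data structures. This is a hypothetical Python rendering for illustration only; the class and field names are my own and do not reproduce the official standoff format:

```python
from dataclasses import dataclass
from typing import Dict

@dataclass
class Entity:
    """Level 1: a text-bound genic entity or trigger word."""
    id: str
    type: str    # e.g. "Gene", "Protein", "GeneFamily" (illustrative names)
    start: int   # character offsets into the source text
    end: int
    text: str

@dataclass
class EventOrRelation:
    """Level 2: a non-recursive event or relation over entities."""
    id: str
    type: str             # e.g. "Transcription_by", "Promoter_of"
    args: Dict[str, str]  # role name -> entity id

@dataclass
class Interaction:
    """Level 3: may be recursive (agent/target can reference another Interaction)."""
    id: str
    type: str    # Binding, Transcription, Activation, Requirement, Inhibition, Regulation
    agent: str   # id of an Entity, EventOrRelation, or Interaction
    target: str
```

Level 3 being recursive simply means `agent`/`target` may hold the id of another Interaction rather than of an Entity.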

1.2 Generation of network

Though only the level-3 annotations and the network are submitted to the official evaluation, constructing the GRN requires inference with the level-2 annotations for interactions that do not link directly to genic entities.

  1. If the agent/target of an interaction is a genic named entity, the agent/target node is the gene identifier of the entity. If the entity has no gene identifier, it is not a genic name. In the GENIA task, some protein entities are sub-strings such as Il-1,2,3. Does GRN contain similar annotations? Are they ignored (2 and 3 have no gene identifier)?
  2. If the agent/target is an event, the node is the entity referenced by the event.
  3. If the agent/target is a relation, both of its arguments (agent and target) become nodes.
  4. If the agent/target is a promoter, the node is the argument linked by the Promoter_of or Master_of_promoter relation.
  5. Edges are ordered by a priority hierarchy, and edges with lower priority are removed. When both (A, Transcription, B) and (A, Regulation, B) exist, (A, Transcription, B) is kept.
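Rule 5 can be sketched as a small filter over the edge set. The numeric priority order below (mechanisms over effects over the catch-all Regulation) is my assumption based on the grouping in section 1.1, not something stated in the task description:

```python
# Assumed priority: mechanisms (0) > effects (1) > catch-all Regulation (2).
PRIORITY = {"Binding": 0, "Transcription": 0,
            "Activation": 1, "Requirement": 1, "Inhibition": 1,
            "Regulation": 2}

def filter_edges(edges):
    """Keep only the highest-priority edge for each (agent, target) pair.

    edges: iterable of (agent, interaction_type, target) triples.
    """
    best = {}
    for agent, etype, target in edges:
        key = (agent, target)
        # A lower PRIORITY value wins; later duplicates only replace
        # an existing edge if they are strictly higher priority.
        if key not in best or PRIORITY[etype] < PRIORITY[best[key]]:
            best[key] = etype
    return [(agent, etype, target) for (agent, target), etype in best.items()]
```

For example, `filter_edges([("A", "Regulation", "B"), ("A", "Transcription", "B")])` keeps only `("A", "Transcription", "B")`, matching the example in rule 5.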

2 KU Leuven method

2.1 Framework

The method uses the SVMLight implementation in the Shogun Machine Learning Toolbox, observing all pairs of genic entities within a sentence, with differential class weighting to deal with the data imbalance. Did they also work on extracting the level-2 annotations?
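The differential weighting can be illustrated with a class-weighted SVM. This is a minimal sketch using scikit-learn's `SVC` as a stand-in for the SVMLight/Shogun setup; the toy data and the weight values are my own, chosen only to mimic the imbalance between non-interacting and interacting entity pairs:

```python
from sklearn.svm import SVC

# Toy candidate-pair features: most pairs are negatives (no interaction),
# mirroring the imbalance among genic-entity pairs in a sentence.
X = [[0.1, 0.2], [0.2, 0.1], [0.15, 0.25], [0.05, 0.3],
     [0.9, 0.8], [0.85, 0.9]]
y = [0, 0, 0, 0, 1, 1]

# class_weight scales the misclassification penalty per class, so the
# rare positive class is not drowned out by the many negatives.
clf = SVC(kernel="linear", class_weight={0: 1.0, 1: 2.0})
clf.fit(X, y)
```

SVMLight exposes the same idea through its cost-factor option; the weight ratio is a hyperparameter to tune, not a value reported in the notes above.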


Entity features fent and pairwise features fextra. The Stanford parse tree was used. Dependency tree or phrase-structure tree?

  1. fent contains the base features and context features for all the words in the entity. Features are normalized by the number of words.
    1. Base features fbase:
      1. entity type
      2. Similarity scores against dictionary words, based on shared prefixes (details? stemming?).
      3. Part-of-speech tags produced by NLTK. Similarity scores?
      4. Location of words in the sentence, normalized to (0,1). A subspace of the two location dimensions of the two entities?
      5. Depth in the parse tree.
    2. Context features: a weighted combination of all other words in the sentence. It is a weighted sum of the fbase feature vectors of every word in the sentence, with weights α^(d_i(w, w_j)), where d_i(w, w_j) is the parse-tree distance from w to w_j in sentence i.
  2. fextra: the distance between the two entities in the Stanford parse tree, plus the location and count of Promoter entities.
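The context feature can be sketched as follows. For the sake of a runnable example, plain token distance stands in for the parse-tree distance d_i(w, w_j), and α = 0.5 is an arbitrary choice; both are assumptions, not values from the KU Leuven system:

```python
def context_features(fbase, target, alpha=0.5):
    """Weighted sum of the base feature vectors of all other words.

    fbase:  list of base feature vectors, one per word in the sentence
    target: index of the word w whose context feature is computed
    alpha:  decay base; the weight of word w_j is alpha ** d(w, w_j)
    """
    dim = len(fbase[0])
    ctx = [0.0] * dim
    for j, fj in enumerate(fbase):
        if j == target:
            continue
        # Token distance used as a stand-in for the parse-tree distance.
        weight = alpha ** abs(j - target)
        ctx = [c + weight * f for c, f in zip(ctx, fj)]
    return ctx
```

For `fbase = [[1.0], [2.0], [4.0]]` and `target = 0` this yields 0.5·2 + 0.25·4 = 2.0, so nearby words dominate the context vector.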



[1] Machine Understanding for interactive StorytElling (MUSE) project. Username: muse, password: Pa4MpPw@kul.

Author: Xiao LIU

Created: 2014-10-29 Wed 18:05

Emacs 24.3.1 (Org mode 8.2.10)