What do the BILOU tags mean in Named Entity Recognition?
Title pretty much sums up the question. I've noticed that in some papers people have referred to a BILOU encoding scheme for NER, as opposed to the typical BIO tagging scheme (such as this 2009 paper by Ratinov and Roth: http://cogcomp.cs.illinois.edu/page/publication_view/199).
From working with the 2003 CoNLL data I know that
B stands for 'beginning' (signifies beginning of an NE)
I stands for 'inside' (signifies that the word is inside an NE)
O stands for 'outside' (signifies that the word is just a regular word outside of an NE)
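For example, under BIO the sentence "Alex flew to Los Angeles" would be tagged (using PER and LOC entity types purely for illustration) as
Alex/B-PER, flew/O, to/O, Los/B-LOC, Angeles/I-LOC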
While I've been told that the words in BILOU stand for
B - 'beginning'
I - 'inside'
L - 'last'
O - 'outside'
U - 'unit'
I've also seen people reference two other tags:
E - 'end', used in place of the 'last' tag
S - 'singleton', used in place of the 'unit' tag
I'm pretty new to the NER literature, but I've been unable to find anything that clearly explains these tags. My question in particular relates to the difference between the 'last' and 'end' tags, and what the 'unit' tag stands for.
Solution 1:
Based on an issue and a patch in ClearTK, it seems like BILOU stands for "Beginning, Inside and Last tokens of multi-token chunks, Unit-length chunks and Outside" (emphasis added). For instance, the chunking denoted by brackets
(foo foo foo) (bar) no no no (bar bar)
can be encoded with BILOU as
B-foo, I-foo, L-foo, U-bar, O, O, O, B-bar, L-bar
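A minimal Python sketch of this encoding (the `chunks_to_bilou` function and its `(label, n_tokens)` input format are my own illustration, not ClearTK's API):

```python
def chunks_to_bilou(chunks):
    """Encode a chunked sequence as BILOU tags.

    `chunks` is a list of (label, n_tokens) pairs, where label None
    means the tokens are outside any chunk.
    """
    tags = []
    for label, n in chunks:
        if label is None:
            tags.extend(["O"] * n)                  # outside tokens
        elif n == 1:
            tags.append(f"U-{label}")               # unit-length chunk
        else:
            tags.append(f"B-{label}")               # beginning
            tags.extend([f"I-{label}"] * (n - 2))   # inside
            tags.append(f"L-{label}")               # last
    return tags

# The bracketed example above: (foo foo foo) (bar) no no no (bar bar)
print(chunks_to_bilou([("foo", 3), ("bar", 1), (None, 3), ("bar", 2)]))
# ['B-foo', 'I-foo', 'L-foo', 'U-bar', 'O', 'O', 'O', 'B-bar', 'L-bar']
```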
Solution 2:
I would like to share some experience comparing the BIO and BILOU schemes. My experiment was on one dataset only and may not be representative.
My dataset contains around 35 thousand short utterances (2-10 tokens), annotated with 11 different tags. In other words, there are 11 named entity types.
The features used include the word itself, left and right 2-grams, 1-5 character n-grams (except middle ones), shape features and so on. A few entities are gazetteer-backed as well.
I shuffled the dataset and split it 80/20 into training and testing parts. This process was repeated 5 times, and for each entity I recorded Precision, Recall and F1-measure. Performance was measured at the entity level, not at the token level as in the Ratinov & Roth (2009) paper.
The software I used to train the model is CRFsuite. I used the L-BFGS solver with c1=0 and c2=1.
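The answer doesn't show the training code; here is a minimal sketch using the sklearn-crfsuite wrapper around CRFsuite with the same solver and regularization (the toy features, labels and tags below are made up):

```python
import sklearn_crfsuite

# Toy data: each sentence is a list of per-token feature dicts,
# each tag sequence uses the BIO (or BILOU) scheme.
X_train = [[{"word": "book"}, {"word": "a"}, {"word": "flight"}]]
y_train = [["O", "O", "B-service"]]

# L-BFGS solver with c1=0 (no L1 penalty) and c2=1 (L2 penalty),
# as described above.
crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.0, c2=1.0,
                           max_iterations=100)
crf.fit(X_train, y_train)
print(crf.predict(X_train))
```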
First of all, the test results for the 5 folds are very similar. This means there is little variability from run to run, which is good. Second, the BIO scheme performed very similarly to the BILOU scheme. If there is any difference at all, it is perhaps at the third or fourth decimal place of Precision, Recall and F1-measure.
Conclusion: in my experiment the BILOU scheme is not better (but also not worse) than the BIO scheme.
Solution 3:
B = Beginning
I/M = Inside / Middle
L/E = Last / End
O = Outside
U/W/S = Unit-length / Whole / Singleton
So BILOU is the same as IOBES and BMEWO.
Cho et al. compare the performance of different annotation variants (IO, IB, IE, IOB, IOBES, etc.): https://www.academia.edu/12852833/Named_entity_recognition_with_multiple_segment_representations
There is also BMEWO+, which adds information about the surrounding word class to the Outside tokens (hence the "plus"). See details here: https://lingpipe-blog.com/2009/10/14/coding-chunkers-as-taggers-io-bio-bmewo-and-bmewo/
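Since all these schemes carry the same chunk information and differ only in how boundaries are marked, converting between them is mechanical. A minimal sketch of a BIO-to-BILOU conversion (the function name and the assumption of well-formed BIO input are mine):

```python
def bio_to_bilou(tags):
    """Convert well-formed BIO tags to BILOU by patching chunk-final tokens:
    B-X becomes U-X for one-token chunks, I-X becomes L-X at a chunk end."""
    bilou = list(tags)
    for i, tag in enumerate(bilou):
        if tag == "O":
            continue
        prefix, label = tag.split("-", 1)
        # Does the same chunk continue at the next position?
        nxt = bilou[i + 1] if i + 1 < len(bilou) else "O"
        chunk_continues = nxt == f"I-{label}"
        if prefix == "B" and not chunk_continues:
            bilou[i] = f"U-{label}"      # one-token chunk
        elif prefix == "I" and not chunk_continues:
            bilou[i] = f"L-{label}"      # last token of its chunk
    return bilou

print(bio_to_bilou(["B-LOC", "I-LOC", "O", "B-PER"]))
# ['B-LOC', 'L-LOC', 'O', 'U-PER']
```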
Solution 4:
This just gives more context to your tags, saying which part of an entity each token is.
BILOU Method/Scheme
| Tag       | Meaning                                  |
|-----------|------------------------------------------|
| B (Begin) | The first token of a multi-token entity  |
| I (In)    | An inner token of a multi-token entity   |
| L (Last)  | The final token of a multi-token entity  |
| U (Unit)  | A single-token entity                    |
| O (Out)   | A non-entity token                       |
BIOES
A more sophisticated annotation method that distinguishes the end of a named entity from single-token entities. It is called BIOES for Begin, Inside, Outside, End, Single.
IOB (e.g. CoNLL 2003)
IOB (or BIO) stands for Begin, Inside and Outside. Words tagged with O are outside of named entities.
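To make the difference concrete, the same tokens "New York and Paris" (with a LOC label assumed for illustration) come out under each scheme as
IOB: B-LOC, I-LOC, O, B-LOC
BIOES: B-LOC, E-LOC, O, S-LOC
BILOU: B-LOC, L-LOC, O, U-LOC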
For more detailed information, please go through the links below:
https://en.wikipedia.org/wiki/Inside%E2%80%93outside%E2%80%93beginning_(tagging)
https://towardsdatascience.com/deep-learning-for-ner-1-public-datasets-and-annotation-methods-8b1ad5e98caf