You can evaluate the performance of the NER model only if you checked the “Use test set”
and/or “CV, fold” checkbox(es) when creating the model. For more information on creating
a new NER model, see the
Building machine learning models
section. Once the model is built, you can conduct an error analysis to compare the
gold-standard annotations with the predicted ones (the annotations generated by the
model that you specified).
To perform error analysis: Double-click one of the .xmi files listed in the output folder of your choice on the corpus panel. This opens a new window showing the original text along with both the gold-standard and predicted annotations.
Please note that all named entities in both the gold-standard and predicted annotations are listed on the "Display Options" panel. You can choose which named entities are highlighted in the text and assign different colors to them, as described in the "Visualization of entity and relation types" section.
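Conceptually, the comparison performed during error analysis can be sketched in a few lines of code. The snippet below is an illustrative example only (it is not part of the tool or its API) and assumes each annotation is represented as a (start offset, end offset, entity type) tuple:

```python
# Illustrative sketch: comparing gold-standard and predicted entity
# annotations, each represented as a (start, end, type) tuple.
# These data structures are assumptions for illustration, not the tool's format.
from collections import Counter

def entity_errors(gold, predicted):
    """Return per-type counts of true positives, false positives,
    and false negatives under exact span-and-type matching."""
    gold_set, pred_set = set(gold), set(predicted)
    tp = Counter(t for (_, _, t) in gold_set & pred_set)   # matched exactly
    fn = Counter(t for (_, _, t) in gold_set - pred_set)   # missed by the model
    fp = Counter(t for (_, _, t) in pred_set - gold_set)   # spurious predictions
    return tp, fp, fn

# Hypothetical example annotations:
gold = [(0, 7, "Disease"), (15, 22, "Drug")]
pred = [(0, 7, "Disease"), (30, 35, "Drug")]
tp, fp, fn = entity_errors(gold, pred)
print(dict(tp), dict(fp), dict(fn))
# → {'Disease': 1} {'Drug': 1} {'Drug': 1}
```

Inspecting the false positives and false negatives per entity type in this way mirrors what you do visually when comparing the highlighted gold-standard and predicted spans in the viewer window.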