The code can be found in this repository. The labeler takes CoNLL as input, using the UPOS and DEPREL features.
Trained on `gold-conllu/sonar1_train.conll`.
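Since the labeler consumes CoNLL-U and only uses UPOS and DEPREL, its input reading can be sketched roughly as below. This is a minimal illustration, not the labeler's actual code; the column positions follow the standard CoNLL-U layout (ID, FORM, LEMMA, UPOS, XPOS, FEATS, HEAD, DEPREL, DEPS, MISC).

```python
def read_conllu(lines):
    """Yield sentences as lists of (form, upos, deprel) tuples."""
    sentence = []
    for line in lines:
        line = line.rstrip("\n")
        if line.startswith("#"):          # sentence-level comment lines
            continue
        if not line:                      # blank line terminates a sentence
            if sentence:
                yield sentence
                sentence = []
            continue
        cols = line.split("\t")
        if "-" in cols[0] or "." in cols[0]:  # skip multiword/empty tokens
            continue
        # FORM is column 2, UPOS column 4, DEPREL column 8 (1-based)
        sentence.append((cols[1], cols[3], cols[7]))
    if sentence:
        yield sentence

# Tiny invented example sentence in CoNLL-U format:
example = """\
# text = Dit is een test .
1\tDit\tdit\tPRON\t_\t_\t4\tnsubj\t_\t_
2\tis\tzijn\tAUX\t_\t_\t4\tcop\t_\t_
3\teen\teen\tDET\t_\t_\t4\tdet\t_\t_
4\ttest\ttest\tNOUN\t_\t_\t0\troot\t_\t_
5\t.\t.\tPUNCT\t_\t_\t4\tpunct\t_\t_
""".splitlines()

sentences = list(read_conllu(example))
```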
We evaluate in two ways:
- on gold POS/DEPREL data (`gold-conllu`),
- on stanfordnlp output (`stanfordnlp-conllu`).
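Comparing the two settings quantifies how much parser errors in the stanfordnlp output hurt the labeler. A rough sketch of measuring how often the predicted UPOS/DEPREL features agree with gold (assuming sentences represented as lists of `(form, upos, deprel)` tuples, as in the reading sketch above; the data here is invented):

```python
def feature_agreement(gold_sents, pred_sents):
    """Return (upos_accuracy, deprel_accuracy) over aligned tokens."""
    total = upos_ok = deprel_ok = 0
    for gold, pred in zip(gold_sents, pred_sents):
        for (_, g_upos, g_dep), (_, p_upos, p_dep) in zip(gold, pred):
            total += 1
            upos_ok += g_upos == p_upos
            deprel_ok += g_dep == p_dep
    return upos_ok / total, deprel_ok / total

# Invented toy data: one sentence, one DEPREL disagreement.
gold = [[("Dit", "PRON", "nsubj"), ("werkt", "VERB", "root")]]
pred = [[("Dit", "PRON", "obj"),   ("werkt", "VERB", "root")]]

upos_acc, deprel_acc = feature_agreement(gold, pred)
```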
Maybe not necessary.
The code can be found in this repository (note that this is an updated fork to use Python 3). It takes NAF as input, using features from the term, constituent and dependency layers (as produced by Alpino). It includes a trained model, but it is unclear what data this model was trained on.
As the system outputs NAF, we need to convert it to CoNLL in order to evaluate.
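The NAF-to-CoNLL conversion can be sketched as below. This is an illustrative sketch only: the element and attribute names (`wf`, `term`, `dep`, `rfunc`, `span`/`target`) are assumptions based on the typical NAF layer structure, and a real converter would need to handle the constituent layer and multi-token spans as well.

```python
import xml.etree.ElementTree as ET

# Invented minimal NAF fragment (two tokens, one dependency).
naf = """<NAF>
  <text>
    <wf id="w1">Dit</wf>
    <wf id="w2">werkt</wf>
  </text>
  <terms>
    <term id="t1" lemma="dit" pos="pron"><span><target id="w1"/></span></term>
    <term id="t2" lemma="werken" pos="verb"><span><target id="w2"/></span></term>
  </terms>
  <deps>
    <dep from="t2" to="t1" rfunc="su"/>
  </deps>
</NAF>"""

root = ET.fromstring(naf)
# word-form id -> surface form
words = {wf.get("id"): wf.text for wf in root.find("text")}
# dependent term id -> (head term id, relation)
head_rel = {d.get("to"): (d.get("from"), d.get("rfunc"))
            for d in root.find("deps")}
# term id -> 1-based token index, for numeric HEAD values
index = {t.get("id"): i for i, t in enumerate(root.find("terms"), start=1)}

rows = []
for term in root.find("terms"):
    tid = term.get("id")
    wid = term.find("span/target").get("id")
    head, rel = head_rel.get(tid, (None, "root"))  # no head -> root
    rows.append("\t".join([str(index[tid]), words[wid], term.get("lemma"),
                           term.get("pos"), str(index.get(head, 0)), rel]))
conll = "\n".join(rows)
```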
It needs the features coming out of Alpino, so we evaluate it with the data in `alpino-naf`.
Train and evaluate on the output of Alpino (`alpino-naf`). We cannot train on gold data, as we don't have the gold constituents (to be confirmed).
TODO: wait for the code and models to be released by the RUG. As it is an end-to-end system, we can train and evaluate directly on the gold CoNLL-U files.