The quality of your training data determines the quality of your model's predictions. It is therefore essential to run regular quality checks, especially before (re)training a model. For this purpose we created the “tasks” module, where annotated documents can be reviewed with ease. In this release we have made reviewing easier still by adding “misannotation hints”: suggestions for annotations that may be missing or redundant. We have also simplified annotation work itself: once a trained model is available, its predictions are applied to unlabelled documents as pre-annotations (“model assisted labelling”). Instead of annotating from scratch, the user only needs to correct and confirm these suggestions, which saves a significant amount of annotation time.
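To make the model-assisted labelling idea concrete, here is a minimal sketch of a pre-annotation pass. This is not the platform's actual API; the `Model`, `Annotation`, and `pre_annotate` names and the confidence threshold are illustrative assumptions:

```python
# Hypothetical sketch of model-assisted labelling: a trained model
# pre-annotates unlabelled documents so reviewers correct suggestions
# instead of annotating from scratch. All names here are illustrative,
# not the platform's real API.
from dataclasses import dataclass
from typing import Protocol


@dataclass
class Annotation:
    label: str         # entity or class name
    start: int         # character offset where the span begins
    end: int           # character offset where the span ends
    confidence: float  # model confidence, used to filter weak guesses


class Model(Protocol):
    def predict(self, text: str) -> list[Annotation]: ...


def pre_annotate(
    model: Model,
    documents: dict[str, str],
    min_confidence: float = 0.5,  # assumed cutoff, tune per use case
) -> dict[str, list[Annotation]]:
    """Attach model predictions to each unlabelled document as draft
    annotations, dropping low-confidence predictions so reviewers see
    fewer noisy suggestions."""
    drafts: dict[str, list[Annotation]] = {}
    for doc_id, text in documents.items():
        predictions = model.predict(text)
        drafts[doc_id] = [
            p for p in predictions if p.confidence >= min_confidence
        ]
    return drafts
```

A reviewer would then open each pre-annotated document in the tasks module and accept or correct the suggested spans, rather than starting from an empty document.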