About MAIA

An introduction to the Machine Learning Assisted Image Annotation method (MAIA).

The goal of the MAIA method is to enable you to annotate large image collections much faster than with a purely manual "traditional" approach. To speed up image annotation, MAIA uses machine learning methods that automatically process the images. You can find a detailed description of MAIA in the paper [1]. Here is a short overview of the four consecutive stages of MAIA.

The four stages of MAIA

In the first stage of MAIA (novelty detection), the method attempts to automatically find "interesting" objects in the images. It does so by assuming that interesting objects are rare and distinguishable from a rather uniform background. This works well on many deep-sea images but is not perfect for every case. The detected interesting objects are passed to the second stage as "training proposals".
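
The following sketch illustrates the general principle behind this kind of novelty detection: a compact model of the mostly uniform background is fitted to image patches, and patches that the model reconstructs poorly are treated as potentially interesting. This is a simplified illustration only; PCA serves as a stand-in for the autoencoder networks used by MAIA, and all names and values are chosen for the example.

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.feature_extraction.image import extract_patches_2d

    def novelty_scores(image, patch_size=(32, 32), n_components=16, n_patches=2000):
        # Sample patches from the image and flatten them to feature vectors.
        patches = extract_patches_2d(image, patch_size, max_patches=n_patches, random_state=0)
        flat = patches.reshape(len(patches), -1).astype(np.float32)

        # Fit a low-dimensional model of the (mostly uniform) background.
        # It reconstructs common background patches well but rare,
        # distinctive patches poorly.
        pca = PCA(n_components=n_components).fit(flat)
        reconstructed = pca.inverse_transform(pca.transform(flat))

        # High reconstruction error = the patch does not look like the background.
        errors = np.mean((flat - reconstructed) ** 2, axis=1)
        return patches, errors

    # Patches above a high error percentile would become training proposals.
    image = np.random.rand(512, 512)
    patches, errors = novelty_scores(image)
    proposals = patches[errors > np.percentile(errors, 99)]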

As the training proposals usually contain many instances that are not actually interesting objects, they are manually filtered (by you) in the second stage. In this stage, you select only the actually interesting objects from all proposals. These are used in the third stage.

In addition to the novelty detection of the original MAIA method, BIIGLE offers alternative ways to obtain training proposals. Read more in the articles on using existing annotations or knowledge transfer.

In the third stage, object detection (called "instance segmentation" in the MAIA paper [1]), the manually filtered or automatically obtained set of training proposals is used to train a machine learning model for the automatic detection of the selected interesting objects. The model is highly specialized for this task and can usually detect most (if not all) instances of the interesting objects in the images. In the tests reported in the MAIA paper, 84% of the interesting objects were detected on average [1]. The detections are passed on as "annotation candidates" to the fourth stage.
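
To give an idea of what training such a model involves, here is a minimal sketch of fine-tuning a Mask R-CNN instance segmentation model with torchvision for a single "interesting object" class. It is not the code used by BIIGLE; the model choice, hyperparameters and data handling are assumptions made for the example.

    import torch
    import torchvision
    from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
    from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

    def build_model(num_classes=2):  # background + "interesting object"
        # Start from a pre-trained Mask R-CNN and replace its heads so it
        # predicts boxes and masks for our classes only.
        model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
        in_features = model.roi_heads.box_predictor.cls_score.in_features
        model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
        in_channels = model.roi_heads.mask_predictor.conv5_mask.in_channels
        model.roi_heads.mask_predictor = MaskRCNNPredictor(in_channels, 256, num_classes)
        return model

    def train_step(model, images, targets, optimizer):
        # One gradient step. `images` is a list of image tensors; `targets` is a
        # list of dicts with "boxes", "labels" and "masks" built from the
        # training proposals.
        model.train()
        losses = model(images, targets)
        loss = sum(losses.values())
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()

    model = build_model()
    optimizer = torch.optim.SGD(model.parameters(), lr=0.005, momentum=0.9)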

As with the training proposals of the novelty detection, the annotation candidates can contain detections that are not actually interesting objects. In addition, the machine learning model only detects the objects and does not attempt to automatically assign labels to them. In the fourth stage, the annotation candidates are again manually filtered to select only the actually interesting objects. Furthermore, labels are manually attached to the selected candidates, which are subsequently transformed into actual annotations.
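
Conceptually, this step discards unselected candidates and converts the remaining, labeled ones into annotation records. A small sketch, using illustrative field names rather than BIIGLE's actual data model:

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class AnnotationCandidate:
        image_id: int
        points: list                    # e.g. [x, y, radius] for a circular detection
        selected: bool = False          # set during manual filtering
        label_id: Optional[int] = None  # set when a label is attached

    def candidates_to_annotations(candidates):
        # Only candidates that were both selected and labeled become annotations.
        return [
            {"image_id": c.image_id, "points": c.points, "label_id": c.label_id}
            for c in candidates
            if c.selected and c.label_id is not None
        ]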

While it is likely that some interesting objects are missed by MAIA, the process is much faster for large image collections than the purely manual approach. If you still need to make sure that all objects are annotated, you can use MAIA as a first step to annotate the majority of the objects and then add a manual sweep through the images to find objects that were missed. The objects that were detected using MAIA can serve as examples to identify the missed ones.

MAIA in BIIGLE

To create new annotations with MAIA in BIIGLE, project editors, experts or admins can start a new MAIA "job" for a volume of a project. To start a new MAIA job, click on the button in the sidebar of the volume overview. This will open up the MAIA overview for the volume, which lists any running or finished jobs, as well as a form to create a new MAIA job for the volume. New jobs can only be created when no other job is currently running for the volume.

The form to create a new MAIA job presents you with a choice between several methods to obtain training data (training proposals). Choose the one that best fits your use case. The form initially shows only the parameters that are most likely to be modified for each job. To show all available parameters, click on the button below the form. Quite a lot of parameters can be configured for a MAIA job. Although sensible defaults are set, careful configuration may be crucial for the quality of the resulting annotations. You can read more on the configuration parameters for novelty detection, existing annotations, knowledge transfer and object detection in the respective articles.
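
As a rough illustration of the kinds of settings involved, the sketch below groups some typical parameters by stage. The names and values are examples chosen for this illustration and do not necessarily match the field names or defaults in the BIIGLE form.

    # Illustrative configuration for a MAIA job (not BIIGLE's exact schema).
    maia_job_config = {
        "training_data_method": "novelty_detection",  # or "existing_annotations", "knowledge_transfer"
        "novelty_detection": {
            "clusters": 5,               # group visually similar images before training
            "patch_size": 39,            # size of the image patches fed to the network
            "threshold_percentile": 99,  # how extreme a reconstruction error must be
            "epochs": 100,               # training iterations over the sampled patches
            "stride": 5,                 # step size when scanning images for novel patches
        },
        "object_detection": {
            "epochs": 10,                # fine-tuning iterations for the detection model
        },
    }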

A MAIA job can run for many hours or even a day. Please choose your settings carefully before you submit a new job.

If novelty detection is chosen as the method to obtain training data, a MAIA job runs through the four consecutive stages outlined above. The first and the third stages perform automatic processing of the images. The second and the fourth stages require manual interaction by you. Once you have created a new job, it will immediately start the automatic processing of the first stage. BIIGLE will notify you when this stage is finished and the job is waiting for your manual interaction in the second stage. In the same way you are notified when the automatic processing of the third stage is finished. If existing annotations or knowledge transfer were chosen as the method to obtain training data, the job will directly proceed with the third stage, skipping the first two. You can change the way you receive new MAIA notifications in the notification settings of your user account.
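
The stages a job runs through can thus be summarized as a small branching rule. The sketch below uses illustrative identifiers, not BIIGLE's internal names.

    def stages_for(training_data_method):
        # Which MAIA stages a job passes through, depending on how the
        # training data is obtained.
        if training_data_method == "novelty_detection":
            return ["novelty_detection", "training_proposals",
                    "instance_segmentation", "annotation_candidates"]
        # "existing_annotations" and "knowledge_transfer" provide training data
        # directly, so the job skips the first two stages.
        return ["instance_segmentation", "annotation_candidates"]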

The overview page of a MAIA job shows a main content area and a sidebar with multiple tabs. The first tab shows general information about the job, including all the parameters that were used. The second and third tabs belong to the training proposals stage and are enabled once the job progresses to this stage. These tabs are visible only if novelty detection was chosen as the method to obtain training data. The fourth and fifth tabs belong to the annotation candidates stage and are enabled once the job progresses to this stage.

Continue reading about MAIA in the articles about the methods to obtain training data. You can start with the first method: novelty detection.

Further reading

  - The novelty detection method to obtain training proposals.
  - Using existing annotations to obtain training proposals.
  - Using knowledge transfer to obtain training proposals.
  - The object detection stage.

References

  1. Zurowietz, M., Langenkämper, D., Hosking, B., Ruhl, H. A., & Nattkemper, T. W. (2018). MAIA—A machine learning assisted image annotation method for environmental monitoring and exploration. PloS one, 13(11), e0207498. doi: 10.1371/journal.pone.0207498