Attention: Last day for emailing the code: September 23.
Nowadays, neural networks can detect, segment, and classify objects and people in images. However, training or adapting a network to a new object requires an expert to mark that object in new images. Projects like MIT's LabelMe help but do not really solve the problem: they only deal with photos, not medical or seismic images, and they offer only a straightforward segmentation tool based on drawing a polygon that represents the object boundary. There are more efficient UI strategies to perform this segmentation. Furthermore, LabelMe is a standalone application and could hardly be embedded in a project where several people work collaboratively to tag images and train networks; it would be better if it were part of a collaboration portal.
This assignment aims to improve some of these deficiencies.
One way would be to use SuperPixels to define object boundaries.
To illustrate this idea, consider the image below, taken from the article by Achanta et al.
Note that the borders of the objects in the figure coincide with the boundaries of the SuperPixels.
A simple user action would then suffice: telling which SuperPixels form the object.
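As a minimal sketch of that interaction (the label map below is made up for illustration; in practice it would come from a superpixel algorithm such as SLIC), the object mask is just the union of the superpixels the user selects:

```python
import numpy as np

# Hypothetical superpixel label map; each integer identifies one superpixel.
# A real map would be produced by a superpixel algorithm such as SLIC.
labels = np.array([
    [0, 0, 1, 1],
    [0, 2, 2, 1],
    [3, 3, 2, 4],
])

# SuperPixels the user clicked to mark the object.
selected = {1, 2}

# The object mask is simply the union of the selected superpixels.
mask = np.isin(labels, list(selected))
print(mask.astype(int))
```

Because the mask snaps to superpixel boundaries, one click per superpixel replaces the tedious polygon drawing.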
There are also many automatic segmentation algorithms based on SuperPixels; DBSCAN is one of them.
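As a hedged sketch of how DBSCAN could group superpixels, assuming scikit-learn's `DBSCAN` and hypothetical per-superpixel features (here, made-up mean RGB colors; label -1 marks noise):

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Hypothetical per-superpixel features: the mean RGB color of each superpixel.
# Two groups of similar colors plus one dissimilar superpixel.
features = np.array([
    [0.10, 0.10, 0.10],
    [0.12, 0.11, 0.10],
    [0.11, 0.12, 0.09],
    [0.90, 0.85, 0.88],
    [0.91, 0.86, 0.87],
    [0.50, 0.20, 0.70],  # does not match either group
])

# Superpixels whose features are within eps of a dense neighborhood are
# merged into one region; isolated ones are labeled -1 (noise).
region = DBSCAN(eps=0.05, min_samples=2).fit_predict(features)
print(region)
```

In a real pipeline the features could also include the superpixel centroid coordinates, so that only adjacent, similar-looking superpixels are merged.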
The idea here is that the file I/O and the segmentation algorithm should run on the server, while the correction happens in a web client.
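One way to sketch this split, assuming Flask on the server side (the `/segment` endpoint and its parameters are hypothetical, and the real handler would load the image and run the superpixel segmentation instead of echoing):

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/segment", methods=["POST"])
def segment():
    # In the real service this is where the server would read the image
    # file and run the superpixel segmentation; this stub only echoes
    # the requested parameters back to the client.
    params = request.get_json()
    return jsonify({"n_segments": params.get("n_segments", 100),
                    "status": "segmented"})

# The web client posts segmentation parameters and receives the result,
# which it can then display for interactive correction.
client = app.test_client()
resp = client.post("/segment", json={"n_segments": 200})
print(resp.get_json())
```

The heavy work (I/O, segmentation) stays server-side; the browser only sends parameters and corrections and renders the returned labels.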
Python has many libraries to handle images, videos, and SuperPixels; in particular, it has the OpenCV and VTK libs. Handling seismic data is a little more complicated, so I prepared a sample notebook that is available in this directory on the network. The directory also contains a SEGY and a DICOM file.