Image segmentation - artifacts on the output classification image #76

@HardRock4Life

Description

Hello!

I'm stuck on an image segmentation task.

I've preprocessed the images, then extracted patches like this:

[screenshots: preprocessing and patch-extraction code]

The CNN model's training runs fine:

[screenshot: training output]

For inference, I feed a normalized image to the input layer. The final result I get is the following:

[screenshots: output classification image showing artifacts]

As I understand it, the U-Net model is supposed to remove those artifacts in this piece of code?

def myModel(x):

  depth = 16
  
  # Encoding
  conv1   = _conv(x,        1*depth)         #  64 x 64 --> 32 x 32 (31 x 31)
  conv2   = _conv(conv1,    2*depth)         #  32 x 32 --> 16 x 16 (15 x 15)
  conv3   = _conv(conv2,    4*depth)         #  16 x 16 -->  8 x  8 ( 7 x  7)
  conv4   = _conv(conv3,    4*depth)         #   8 x  8 -->  4 x  4 ( 3 x  3)
  
  # Decoding (with skip connections)
  deconv1 = _dconv(conv4,           4*depth) #  4  x  4 -->  8 x  8 ( 5 x  5)
  deconv2 = _dconv(deconv1 + conv3, 2*depth) #  8  x  8 --> 16 x 16 ( 9 x  9)
  deconv3 = _dconv(deconv2 + conv2, 1*depth) # 16  x 16 --> 32 x 32 (17 x 17)
  deconv4 = _dconv(deconv3 + conv1, 1*depth) # 32  x 32 --> 64 x 64 (33 x 33)
  
  # Neurons for classes
  estimated = tf.layers.dense(inputs=deconv4, units=nclasses, activation=None)
  
  return estimated
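(The parenthesized sizes in the comments appear to be the "VALID"-padding alternatives for a 3 x 3, stride-2 convolution. A quick sanity check of those numbers, assuming TensorFlow's padding conventions; `conv_out_size` is just a helper for this note, not part of the model:)

```python
import math

def conv_out_size(n, k=3, s=2, padding="SAME"):
    """Spatial output size of a k x k convolution with stride s,
    following TensorFlow's "SAME" / "VALID" padding conventions."""
    if padding == "SAME":
        return math.ceil(n / s)   # e.g. 64 -> 32
    return (n - k) // s + 1       # "VALID", e.g. 64 -> 31
```

This reproduces both columns in the comments: 64 -> 32 with "SAME" padding and 64 -> 31 with "VALID".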

Or should it be done differently? Thank you!
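For context: since my inference is patch-based, I wonder whether the grid-like artifacts come from stitching independent per-patch predictions edge to edge. A minimal NumPy sketch of overlap-and-average blending (`blend_patches` is a hypothetical helper, not from my code):

```python
import numpy as np

def blend_patches(patches, coords, image_shape):
    """Average overlapping patch predictions into a full-size map.
    patches: list of (h, w) prediction arrays
    coords:  list of (row, col) top-left corners for each patch
    image_shape: (H, W) of the full output image"""
    acc = np.zeros(image_shape, dtype=np.float64)
    weight = np.zeros(image_shape, dtype=np.float64)
    for patch, (r, c) in zip(patches, coords):
        h, w = patch.shape
        acc[r:r + h, c:c + w] += patch
        weight[r:r + h, c:c + w] += 1.0
    # Avoid division by zero where no patch covered a pixel
    return acc / np.maximum(weight, 1.0)
```

Predicting overlapping patches and averaging them is a common way to suppress seams at patch borders.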

Labels: question (further information is requested)
