# BOX MODEL 3D PLANT CELL SOFTWARE
Large-scale quantitative study of morphogenesis in a multicellular organism entails an accurate estimation of the shape of all cells across multiple specimens. State-of-the-art light microscopes allow for such analysis by capturing the anatomy and development of plants and animals in terabytes of high-resolution volumetric images. With such microscopes now in routine use, segmentation of the resulting images has become a major bottleneck in the downstream analysis of large-scale imaging experiments. A few segmentation pipelines have been proposed (Fernandez et al., 2010; Stegmaier et al., 2016), but these either do not leverage recent developments in the field of computer vision or are difficult to use for non-experts.

With a few notable exceptions, such as the Brainbow experiments (Weissman and Pan, 2015), imaging cell shape during morphogenesis relies on staining of the plasma membrane with a fluorescent marker. Segmentation of cells is then performed based on their boundary prediction. In the early days of computer vision, boundaries were usually found by edge detection algorithms (Canny, 1986). More recently, a combination of edge detectors and other image filters was commonly used as input for a machine learning algorithm trained to detect boundaries (Lucchi et al., 2012). Currently, the most powerful boundary detectors are based on Convolutional Neural Networks (CNNs) (Long et al., 2015; Kokkinos, 2015; Xie and Tu, 2015). In particular, the U-Net architecture (Ronneberger et al., 2015) has demonstrated excellent performance on 2D biomedical images and has later been extended to process volumetric data (Çiçek et al., 2016).

Once the boundaries are found, the remaining pixels need to be grouped into objects delineated by the detected boundaries. For noisy, real-world microscopy data, this post-processing step still represents a challenge and has attracted a fair amount of attention from the computer vision community (Turaga et al., 2010; Nunez-Iglesias et al., 2014; Beier et al., 2017; Wolf et al., 2018; Funke et al., 2019a). If centroids ('seeds') of the objects are known or can be learned, the problem can be solved by the watershed algorithm (Couprie et al., 2011; Cerrone et al., 2019). For example, in Eschweiler et al., 2018 a 3D U-Net was trained to predict cell contours together with cell centroids as seeds for watershed in 3D confocal microscopy images. This method, however, suffers from the usual drawback of the watershed algorithm: misclassification of a single cell centroid results in sub-optimal seeding and leads to segmentation errors.

Recently, an approach combining the output of two neural networks with a watershed to detect individual cells showed promising results on segmentation of cells in 2D (Wang et al., 2019). Although this method can in principle be generalized to 3D images, the necessity to train two separate networks poses additional difficulty for non-experts.

While deep learning-based methods define the state of the art for all image segmentation problems, only a handful of software packages strive to make them accessible to non-expert users in biology (reviewed in ).
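To make the classical, pre-learning approach to boundary detection concrete, here is a minimal sketch of gradient-based edge detection with a Sobel filter, a simpler relative of the Canny detector cited above. The toy image is an illustrative assumption, not data from any of the cited works; only SciPy is assumed.

```python
import numpy as np
from scipy import ndimage as ndi

# Toy image: a single bright square on a dark background.
img = np.zeros((8, 8))
img[2:6, 2:6] = 1.0

# Gradient components along each axis, then the gradient magnitude.
gx = ndi.sobel(img, axis=0)
gy = ndi.sobel(img, axis=1)
magnitude = np.hypot(gx, gy)

# The magnitude responds along the square's border and is zero in the
# flat interior, which is what an edge detector exploits.
print(magnitude[2, 4] > 0)   # on the border: strong response
print(magnitude[4, 4] == 0)  # flat interior: no response
```

Canny's detector adds smoothing, non-maximum suppression, and hysteresis thresholding on top of such a gradient map, but the gradient step above is the core idea.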
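The simplest baseline for the grouping step described above, turning a boundary prediction into objects, is to threshold the boundary map and run connected components; the agglomeration methods cited in the text improve on exactly this. A sketch with a hand-made boundary map standing in for a CNN prediction (an illustrative assumption), using SciPy:

```python
import numpy as np
from scipy import ndimage as ndi

# Toy boundary-probability map in [0, 1]; 1.0 marks predicted membranes.
# In the pipeline above, this would be the output of a boundary detector.
pmap = np.zeros((7, 7))
pmap[:, 3] = 1.0                 # membrane splitting the field in two
pmap[0, :] = pmap[-1, :] = 1.0   # outer walls
pmap[:, 0] = pmap[:, -1] = 1.0

# Keep non-boundary pixels and group them into connected components.
foreground = pmap < 0.5
labels, n_objects = ndi.label(foreground)

print(n_objects)  # the membrane delineates two separate regions
```

On noisy real data this baseline fails wherever the predicted membrane has a gap, since a single missing boundary pixel merges two cells, which is why the post-processing literature cited above exists.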
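The seeded watershed discussed above can be illustrated in a few lines. Here the toy membrane image and hand-placed seeds stand in for the CNN boundary prediction and learned centroids described in the text; this is a sketch using SciPy's `watershed_ift`, not the implementation of any cited method.

```python
import numpy as np
from scipy import ndimage as ndi

# Toy 2D "membrane" image: two cells separated by a bright boundary ridge.
boundary = np.zeros((9, 9), dtype=np.uint8)
boundary[:, 4] = 255                     # membrane between the two cells
boundary[0, :] = boundary[-1, :] = 255   # outer walls
boundary[:, 0] = boundary[:, -1] = 255

# One seed ('centroid') per cell; in the methods above these are learned.
markers = np.zeros(boundary.shape, dtype=np.int16)
markers[4, 2] = 1   # seed for the left cell
markers[4, 6] = 2   # seed for the right cell

# Seeded watershed: flood from the markers with the boundary map as relief.
labels = ndi.watershed_ift(boundary, markers)

print(labels[4, 1], labels[4, 7])  # each interior inherits its seed label
```

The drawback noted in the text is visible here: if a seed were misplaced or missing (e.g. two seeds in the left cell), the flooding would split or merge regions accordingly, so segmentation quality hinges entirely on the seeds.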