This repository was archived by the owner on Feb 11, 2023. It is now read-only.

Commit fd248da: update Segm (#8)
* drop labels for train & statistic @segm
* draw: overlap image-segm
* outsource metric for classif. hyper-param
* visual train samples (label purity)
* parallelise label filter
* fix setup for installing
* fix loading input list
* remove LOO @segm
* minor renaming
* update contrib.
1 parent: 819daf1
32 files changed: +573 -402 lines

.shippable.yml

Lines changed: 3 additions & 1 deletion
@@ -62,7 +62,7 @@ script:
 
   # SEGMENTATION section
   - rm -r -f results && mkdir results
-  - python experiments_segmentation/run_compute_stat_annot_segm.py --visual
+  - python experiments_segmentation/run_compute_stat_annot_segm.py -a "data_images/drosophila_ovary_slice/annot_struct/*.png" -s "data_images/drosophila_ovary_slice/segm/*.png" --visual
   - python experiments_segmentation/run_segm_slic_model_graphcut.py -i "data_images/drosophila_disc/image/img_[5,6].jpg" -cfg ./experiments_segmentation/sample_config.json --visual
   - python experiments_segmentation/run_segm_slic_classif_graphcut.py -l data_images/drosophila_ovary_slice/list_imgs-annot-struct_short.csv -i "data_images/drosophila_ovary_slice/image/insitu41*.jpg" -cfg ./experiments_segmentation/sample_config.json --visual
 

@@ -96,3 +96,5 @@ after_success:
   - coverage xml -o $COVERAGE_REPORTS/coverage.xml
   - codecov # public repository on Travis CI
   - coverage report
+
+  - cd .. && python -c "import imsegm.descriptors"

.travis.yml

Lines changed: 1 addition & 0 deletions
@@ -54,3 +54,4 @@ after_success:
   - coverage xml
   - python-codacy-coverage -r coverage.xml
   - coverage report
+  - cd .. && python -c "import imsegm.descriptors"

README.md

Lines changed: 21 additions & 21 deletions
@@ -10,14 +10,14 @@
 
 ## Superpixel segmentation with GraphCut regularisation
 
-Image segmentation is widely used as an initial phase of many image processing tasks in computer vision and image analysis. Many recent segmentation methods use superpixels because they reduce the size of the segmentation problem by order of magnitude. Also, features on superpixels are much more robust than features on pixels only. We use spatial regularization on superpixels to make segmented regions more compact. The segmentation pipeline comprises (i) computation of superpixels; (ii) extraction of descriptors such as color and texture; (iii) soft classification, using a standard classifier for supervised learning, or the Gaussian Mixture Model for unsupervised learning; (iv) final segmentation using Graph Cut. We use this segmentation pipeline on real-world applications in medical imaging (see a sample [images](./images)). We also show that [unsupervised segmentation](./notebooks/segment-2d_slic-fts-model-gc.ipynb) is sufficient for some situations, and provides similar results to those obtained using [trained segmentation](notebooks/segment-2d_slic-fts-classif-gc.ipynb).
+Image segmentation is widely used as an initial phase of many image processing tasks in computer vision and image analysis. Many recent segmentation methods use superpixels because they reduce the size of the segmentation problem by order of magnitude. Also, features on superpixels are much more robust than features on pixels only. We use spatial regularization on superpixels to make segmented regions more compact. The segmentation pipeline comprises (i) computation of superpixels; (ii) extraction of descriptors such as color and texture; (iii) soft classification, using a standard classifier for supervised learning, or the Gaussian Mixture Model for unsupervised learning; (iv) final segmentation using Graph Cut. We use this segmentation pipeline on real-world applications in medical imaging (see a sample [images](./images)). We also show that [unsupervised segmentation](./notebooks/segment-2d_slic-fts-model-gc.ipynb) is sufficient for some situations, and provides similar results to those obtained using [trained segmentation](notebooks/segment-2d_slic-fts-classif-gc.ipynb).
 
 ![schema](figures/schema_slic-fts-clf-gc.jpg)
 
 **Sample ipython notebooks:**
 * [Supervised segmentation](notebooks/segment-2d_slic-fts-classif-gc.ipynb) requires training anottaion
 * [Unsupervised segmentation](notebooks/segment-2d_slic-fts-model-gc.ipynb) just asks for expected number of classes
-* **partially annotated images** with missing annotatio is marked by a negative number
+* **partially annotated images** with missing annotation is marked by a negative number
 
 **Illustration**
 

@@ -44,7 +44,7 @@ Borovec J., Kybic J., Nava R. (2017) **Detection and Localization of Drosophila
 
 ## Superpixel Region Growing with Shape prior
 
-Region growing is a classical image segmentation method based on hierarchical region aggregation using local similarity rules. Our proposed approach differs from standard region growing in three essential aspects. First, it works on the level of superpixels instead of pixels, which leads to a substantial speedup. Second, our method uses learned statistical shape properties which encourage growing leading to plausible shapes. In particular, we use ray features to describe the object boundary. Third, our method can segment multiple objects and ensure that the segmentations do not overlap. The problem is represented as an energy minimization and is solved either greedily, or iteratively using GraphCuts.
+Region growing is a classical image segmentation method based on hierarchical region aggregation using local similarity rules. Our proposed approach differs from standard region growing in three essential aspects. First, it works on the level of superpixels instead of pixels, which leads to a substantial speedup. Second, our method uses learned statistical shape properties which encourage growing leading to plausible shapes. In particular, we use ray features to describe the object boundary. Third, our method can segment multiple objects and ensure that the segmentations do not overlap. The problem is represented as energy minimization and is solved either greedily, or iteratively using GraphCuts.
 
 **Sample ipython notebooks:**
 * [General GraphCut](notebooks/egg_segment_graphcut.ipynb) from given centers and initial structure segmentation.

@@ -93,7 +93,7 @@ We have implemented cython version of some functions, especially computing descr
 ```bash
 python setup.py build_ext --inplace
 ```
-If loading of compiled descriptors in cython fails, it is automatically swapped to numpy which gives the same results, but it is significantly slower.
+If loading of compiled descriptors in `cython` fails, it is automatically swapped to `numpy` which gives the same results, but it is significantly slower.
 
 **Installation**
 

@@ -191,40 +191,40 @@ We utilize (un)supervised segmentation according to given training examples or s
 * For both experiment you can evaluate segmentation results.
 ```bash
 python experiments_segmentation/run_compute-stat_annot-segm.py \
-    -annot "./data_images/drosophila_ovary_slice/annot_struct/*.png" \
-    -segm "./results/experiment_segm-supervise_ovary/*.png" \
-    -img "./data_images/drosophila_ovary_slice/image/*.jpg" \
-    -out ./results/evaluation
+    -a "./data_images/drosophila_ovary_slice/annot_struct/*.png" \
+    -s "./results/experiment_segm-supervise_ovary/*.png" \
+    -i "./data_images/drosophila_ovary_slice/image/*.jpg" \
+    -o ./results/evaluation --visual
 ```
 ![vusial](figures/segm-visual_D03_sy04_100x.jpg)
 
 The previous two (un)segmentation accept [configuration file](experiments_segmentation/sample_config.json) (JSON) by parameter `-cfg` with some extra parameters which was not passed in arguments, for instance:
 ```json
 {
-  "slic_size": 35,
-  "slic_regul": 0.2,
-  "features": {"color_hsv": ["mean", "std", "eng"]},
-  "classif": "SVM",
-  "nb_classif_search": 150,
-  "gc_edge_type": "model",
-  "gc_regul": 3.0,
-  "run_LOO": false,
-  "run_LPO": true,
-  "cross_val": 0.1
+  "slic_size": 35,
+  "slic_regul": 0.2,
+  "features": {"color_hsv": ["mean", "std", "eng"]},
+  "classif": "SVM",
+  "nb_classif_search": 150,
+  "gc_edge_type": "model",
+  "gc_regul": 3.0,
+  "run_LOO": false,
+  "run_LPO": true,
+  "cross_val": 0.1
 }
 ```
 
 ### Center detection and ellipse fitting
 
-In general, the input is a formatted list (CSV file) of input images and annotations. Another option is set `-list none` and then the list is paired from given paths to images and annotations.
+In general, the input is a formatted list (CSV file) of input images and annotations. Another option is set `-list none` and then the list is paired with given paths to images and annotations.
 
 **Experiment sequence is following:**
 
 1. We can create the annotation completely manually or use following script which uses annotation of individual objects and create the zones automatically.
 ```bash
 python experiments_ovary_centres/run_create_annotation.py
 ```
-1. With zone annotation, we train a classifier for center candidate prediction. The annotation can be a CSV file with annotated centers as points, and the zone of positive examples is set uniformly as the circular neighborhood around these points. Another way (preferable) is to use annotated image with marked zones for positive, negative and neutral examples.
+1. With zone annotation, we train a classifier for center candidate prediction. The annotation can be a CSV file with annotated centers as points, and the zone of positive examples is set uniformly as the circular neighborhood around these points. Another way (preferable) is to use an annotated image with marked zones for positive, negative and neutral examples.
 ```bash
 python experiments_ovary_centres/run_center_candidate_training.py -list none \
     -segs "./data_images/drosophila_ovary_slice/segm/*.png" \

@@ -269,7 +269,7 @@ In general, the input is a formatted list (CSV file) of input images and annotat
 
 ![ellipse fitting](figures/insitu7544_ellipses.jpg)
 
-### Region growing with shape prior
+### Region growing with a shape prior
 
 In case you do not have estimated object centers, you can use [plugins](ij_macros) for landmarks import/export for [Fiji](http://fiji.sc/).
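The README changes above mention that partially annotated images mark missing annotation with a negative number. As a minimal sketch (not part of this commit) of how such partial labels can be filtered out before training a superpixel classifier; the array contents and the function name `filter_partial_annotation` are illustrative:

```python
import numpy as np

def filter_partial_annotation(features, labels):
    """Drop samples whose annotation is missing (negative label)."""
    labels = np.asarray(labels)
    mask = labels >= 0  # negative numbers mean "not annotated"
    return features[mask], labels[mask]

# superpixel descriptors and partial labels (-1 = unannotated)
feats = np.array([[0.1, 0.9], [0.4, 0.5], [0.8, 0.2], [0.7, 0.3]])
labels = np.array([0, -1, 1, 1])

train_x, train_y = filter_partial_annotation(feats, labels)
print(train_y.tolist())  # [0, 1, 1]
```

Masking keeps the unannotated superpixels out of training entirely, rather than guessing a label for them, which matches the "drop labels for train" item in the commit message.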

circle.yml

Lines changed: 1 addition & 1 deletion
@@ -48,7 +48,7 @@ test:
   - python handling_annotations/run_segm_annot_relabel.py -imgs "./data_images/drosophila_ovary_slice/center_levels/*.png" -out ./results/relabel_center_levels
 
   # SEGMENTATION section
-  - python experiments_segmentation/run_compute_stat_annot_segm.py --visual
+  - python experiments_segmentation/run_compute_stat_annot_segm.py -a "data_images/drosophila_ovary_slice/annot_struct/*.png" -s "data_images/drosophila_ovary_slice/segm/*.png" --visual
   - python experiments_segmentation/run_segm_slic_model_graphcut.py -i "data_images/drosophila_disc/image/img_[5,6].jpg" -cfg ./experiments_segmentation/sample_config.json --visual
   - python experiments_segmentation/run_segm_slic_classif_graphcut.py -l data_images/drosophila_ovary_slice/list_imgs-annot-struct_short.csv -i "data_images/drosophila_ovary_slice/image/insitu41*.jpg" -cfg ./experiments_segmentation/sample_config.json --visual
 

docs/CONTRIBUTING.md

Lines changed: 4 additions & 6 deletions
@@ -57,10 +57,8 @@ Here's the long and short of it:
 * Refer to array dimensions as (plane), row, column, not as x, y, z. See :ref:`Coordinate conventions <numpy-images-coordinate-conventions>` in the user guide for more information.
 * Functions should support all input image dtypes. Use utility functions such as ``img_as_float`` to help convert to an appropriate type. The output format can be whatever is most efficient. This allows us to string together several functions into a pipeline
 * Use ``Py_ssize_t`` as data type for all indexing, shape and size variables in C/C++ and Cython code.
-* Use relative module imports, i.e. ``from .._shared import xyz`` rather than ``from skimage._shared import xyz``.
 * Wrap Cython code in a pure Python function, which defines the API. This improves compatibility with code introspection tools, which are often not aware of Cython code.
-* For Cython functions, release the GIL whenever possible, using
-  ``with nogil:``.
+* For Cython functions, release the GIL whenever possible, using ``with nogil:``.
 
 
 ## Testing

@@ -76,12 +74,12 @@ the library is installed in development mode::
 ```
 Now, run all tests using::
 ```
-$ PYTHONPATH=. pytest pyImSegm
+$ pytest -v pyImSegm
 ```
 Use ``--doctest-modules`` to run doctests.
 For example, run all tests and all doctests using::
 ```
-$ PYTHONPATH=. pytest --doctest-modules --with-xunit --with-coverage pyImSegm
+$ pytest -v --doctest-modules --with-xunit --with-coverage pyImSegm
 ```
 
 ## Test coverage

@@ -92,7 +90,7 @@ To measure the test coverage, install `pytest-cov <http://pytest-cov.readthedocs
 ```
 $ coverage report
 ```
-This will print a report with one line for each file in `skimage`,
+This will print a report with one line for each file in `imsegm`,
 detailing the test coverage::
 ```
 Name   Stmts   Exec   Cover   Missing

experiments_ovary_centres/run_center_clustering.py

Lines changed: 8 additions & 8 deletions
@@ -47,15 +47,15 @@
     'DBSCAN_max_dist': 50,
     'DBSCAN_min_samples': 1,
 }
-PARAMS = run_train.CENTER_PARAMS
-PARAMS.update(CLUSTER_PARAMS)
-PARAMS.update({
-    'path_expt': os.path.join(PARAMS['path_output'],
-                              FOLDER_EXPERIMENT % PARAMS['name']),
+DEFAULT_PARAMS = run_train.CENTER_PARAMS
+DEFAULT_PARAMS.update(CLUSTER_PARAMS)
+DEFAULT_PARAMS.update({
+    'path_expt': os.path.join(DEFAULT_PARAMS['path_output'],
+                              FOLDER_EXPERIMENT % DEFAULT_PARAMS['name']),
     'path_images': os.path.join(run_train.PATH_IMAGES, 'image', '*.jpg'),
     'path_segms': os.path.join(run_train.PATH_IMAGES, 'segm', '*.png'),
-    'path_centers': os.path.join(PARAMS['path_output'],
-                                 FOLDER_EXPERIMENT % PARAMS['name'],
+    'path_centers': os.path.join(DEFAULT_PARAMS['path_output'],
+                                 FOLDER_EXPERIMENT % DEFAULT_PARAMS['name'],
                                  'candidates', '*.csv')
 })

@@ -227,5 +227,5 @@ def main(params):
 
 if __name__ == '__main__':
     logging.basicConfig(level=logging.DEBUG)
-    params = run_train.arg_parse_params(PARAMS)
+    params = run_train.arg_parse_params(DEFAULT_PARAMS)
     main(params)
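The rename from `PARAMS` to `DEFAULT_PARAMS` makes the role of this dict explicit: module-level defaults layered via `dict.update` and later overridden by CLI arguments. A self-contained sketch of the same pattern, with illustrative stand-in values (the real base dicts live in `run_train`); note this sketch copies the base dict, whereas the script updates `run_train.CENTER_PARAMS` in place:

```python
import os

# illustrative stand-ins for run_train.CENTER_PARAMS and CLUSTER_PARAMS
CENTER_PARAMS = {'path_output': 'results', 'name': 'ovary'}
CLUSTER_PARAMS = {'DBSCAN_max_dist': 50, 'DBSCAN_min_samples': 1}
FOLDER_EXPERIMENT = 'detect-centers-%s'

DEFAULT_PARAMS = dict(CENTER_PARAMS)  # copy, so the base dict stays untouched
DEFAULT_PARAMS.update(CLUSTER_PARAMS)
DEFAULT_PARAMS.update({
    # derived paths may reference values already merged above
    'path_expt': os.path.join(DEFAULT_PARAMS['path_output'],
                              FOLDER_EXPERIMENT % DEFAULT_PARAMS['name']),
})

print(DEFAULT_PARAMS['path_expt'])
```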

experiments_ovary_centres/run_center_evaluation.py

Lines changed: 6 additions & 6 deletions
@@ -53,18 +53,18 @@
 
 FOLDER_ANNOT = 'annot_user_stage-%s'
 FOLDER_ANNOT_VISUAL = 'annot_user_stage-%s___visual'
-PARAMS = run_train.CENTER_PARAMS
-PARAMS.update({
+DEFAULT_PARAMS = run_train.CENTER_PARAMS
+DEFAULT_PARAMS.update({
     'stages': [(1, 2, 3, 4, 5),
                (2, 3, 4, 5),
                (1, ), (2, ), (3, ), (4, ), (5, )],
     'path_list': '',
-    'path_centers': os.path.join(os.path.dirname(PARAMS['path_centers']),
+    'path_centers': os.path.join(os.path.dirname(DEFAULT_PARAMS['path_centers']),
                                  '*.csv'),
     'path_infofile': os.path.join(run_train.PATH_IMAGES,
                                   'info_ovary_images.txt'),
-    'path_expt': os.path.join(PARAMS['path_output'],
-                              run_detect.FOLDER_EXPERIMENT % PARAMS['name']),
+    'path_expt': os.path.join(DEFAULT_PARAMS['path_output'],
+                              run_detect.FOLDER_EXPERIMENT % DEFAULT_PARAMS['name']),
 })
 
 NAME_CSV_TRIPLES = run_train.NAME_CSV_TRIPLES

@@ -277,5 +277,5 @@ def main(params):
 
 if __name__ == '__main__':
     logging.basicConfig(level=logging.DEBUG)
-    params = run_train.arg_parse_params(PARAMS)
+    params = run_train.arg_parse_params(DEFAULT_PARAMS)
     main(params)

experiments_ovary_centres/run_center_prediction.py

Lines changed: 5 additions & 5 deletions
@@ -43,10 +43,10 @@
 FOLDER_EXPERIMENT = 'detect-centers-predict_%s'
 
 # This sampling only influnece the number of point to be evaluated in the image
-PARAMS = run_train.CENTER_PARAMS
-PARAMS.update(run_clust.CLUSTER_PARAMS)
-PARAMS['path_centers'] = os.path.join(PARAMS['path_output'],
-                                      run_train.FOLDER_EXPERIMENT % PARAMS['name'],
+DEFAULT_PARAMS = run_train.CENTER_PARAMS
+DEFAULT_PARAMS.update(run_clust.CLUSTER_PARAMS)
+DEFAULT_PARAMS['path_centers'] = os.path.join(DEFAULT_PARAMS['path_output'],
+                                              run_train.FOLDER_EXPERIMENT % DEFAULT_PARAMS['name'],
                                               'classifier_RandForest.pkl')
 
 

@@ -173,7 +173,7 @@ def main(params):
 
 if __name__ == '__main__':
     logging.basicConfig(level=logging.DEBUG)
-    params = run_train.arg_parse_params(PARAMS)
+    params = run_train.arg_parse_params(DEFAULT_PARAMS)
 
     params['path_classif'] = params['path_centers']
     assert os.path.isfile(params['path_classif']), \

experiments_ovary_detect/run_cut_segmented_objects.py

Lines changed: 10 additions & 13 deletions
@@ -44,7 +44,7 @@ def arg_parse_params(dict_paths):
     parser.add_argument('-imgs', '--path_image', type=str, required=False,
                         help='path to directory & name pattern for images',
                         default=dict_paths['image'])
-    parser.add_argument('-out', '--path_out', type=str, required=False,
+    parser.add_argument('-out', '--path_output', type=str, required=False,
                         help='path to the output directory',
                         default=dict_paths['output'])
     parser.add_argument('--padding', type=int, required=False,

@@ -55,19 +55,15 @@ def arg_parse_params(dict_paths):
                         help='using background color', default=None, nargs='+')
     parser.add_argument('--nb_jobs', type=int, required=False, default=NB_THREADS,
                         help='number of processes in parallel')
-    args = parser.parse_args()
+    args = vars(parser.parse_args())
     logging.info('ARG PARAMETERS: \n %s', repr(args))
-    dict_paths = {
-        'annot': tl_data.update_path(args.path_annot),
-        'image': tl_data.update_path(args.path_image),
-        'output': tl_data.update_path(args.path_out),
-    }
+    dict_paths = {k.split('_')[-1]:
+                      os.path.join(tl_data.update_path(os.path.dirname(args[k])),
+                                   os.path.basename(args[k]))
+                  for k in args if k.startswith('path_')}
     for k in dict_paths:
-        if dict_paths[k] == '':
-            continue
-        p = os.path.dirname(dict_paths[k]) \
-            if k in ['annot', 'image', 'output'] else dict_paths[k]
-        assert os.path.exists(p), 'missing (%s) "%s"' % (k, p)
+        assert os.path.exists(os.path.dirname(dict_paths[k])), \
+            'missing (%s) "%s"' % (k, os.path.dirname(dict_paths[k]))
     return dict_paths, args
 
 

@@ -126,4 +122,5 @@ def main(dict_paths, padding=0, use_mask=False, bg_color=None,
 if __name__ == '__main__':
     logging.basicConfig(level=logging.INFO)
     dict_paths, args = arg_parse_params(PATHS)
-    main(dict_paths, args.padding, args.mask, args.background, args.nb_jobs)
+    main(dict_paths, args['padding'], args['mask'],
+         args['background'], args['nb_jobs'])
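The refactor above swaps attribute access on the argparse `Namespace` for a plain dict via `vars()`, so path arguments can be collected generically by their `path_` prefix instead of being listed one by one. A runnable sketch of that idea with hypothetical argument names and no filesystem checks (unlike the script):

```python
import argparse

def parse_params(argv=None):
    """Parse CLI arguments into a plain dict so that path arguments
    can be collected generically by their 'path_' prefix."""
    parser = argparse.ArgumentParser()
    parser.add_argument('-imgs', '--path_image', default='images/*.png')
    parser.add_argument('-out', '--path_output', default='results')
    parser.add_argument('--padding', type=int, default=0)
    args = vars(parser.parse_args(argv))  # Namespace -> dict
    # map 'path_image' -> 'image', 'path_output' -> 'output', ...
    dict_paths = {k.split('_')[-1]: args[k]
                  for k in args if k.startswith('path_')}
    return dict_paths, args

dict_paths, args = parse_params(['-imgs', 'data/*.jpg', '--padding', '5'])
print(dict_paths)       # {'image': 'data/*.jpg', 'output': 'results'}
print(args['padding'])  # 5
```

Keeping the parsed arguments as a dict also matches the new `main(dict_paths, args['padding'], ...)` call, which no longer depends on namespace attribute names.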

experiments_ovary_detect/run_egg_swap_orientation.py

Lines changed: 2 additions & 2 deletions
@@ -30,7 +30,7 @@
                            'drosophila_ovary_slice')
 PATH_RESULTS = tl_data.update_path('results', absolute=True)
 SWAP_CONDITION = 'cc'
-PARAMS = {
+DEFAULT_PARAMS = {
     'path_images': os.path.join(PATH_IMAGES, 'image_cut-stage-2', '*.png'),
     'path_output': os.path.join(PATH_RESULTS, 'image_cut-stage-2'),
 }

@@ -129,5 +129,5 @@ def main(params):
 
 if __name__ == '__main__':
     logging.basicConfig(level=logging.INFO)
-    params = r_match.arg_parse_params(PARAMS)
+    params = r_match.arg_parse_params(DEFAULT_PARAMS)
     main(params)
