
MTO2 - Astronomical Source Detection Tool


MTO2 is detection and extraction software for photometric objects (e.g., LSB galaxies), representing and processing images on the max-tree data structure (Salembier et al.).

  • Background estimation: Robust constant and/or morphological background subtraction.
  • Detection: Detects faint, complex emission with statistical significance testing.
  • Deblending: Accurate deblending based on max-tree spatial attributes.
  • Cataloging: Precise parameter extraction.


Processing Pipeline

The procedure begins with Gaussian smoothing—regulated by the s_sigma parameter—to suppress small-scale noise. Next, a constant or morphological background is subtracted to improve source visibility.
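The smoothing and constant-background steps can be sketched in NumPy. This is a simplified illustration under stated assumptions, not MTO2's actual code; the function names are hypothetical, and the constant background here is a plain median rather than MTO2's robust estimator:

```python
import numpy as np

def gaussian_kernel1d(sigma, radius):
    # Normalized 1-D Gaussian; used separably along each axis.
    x = np.arange(-radius, radius + 1)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def smooth_and_subtract(image, s_sigma=2.0):
    # Separable Gaussian smoothing, mirroring the role of --s_sigma.
    radius = max(1, int(3 * s_sigma))
    k = gaussian_kernel1d(s_sigma, radius)
    padded = np.pad(image, radius, mode="reflect")
    smoothed = np.apply_along_axis(
        lambda r: np.convolve(r, k, mode="valid"), 1, padded)
    smoothed = np.apply_along_axis(
        lambda c: np.convolve(c, k, mode="valid"), 0, smoothed)
    # Constant background: a simple median stand-in for a robust estimate.
    background = np.median(smoothed)
    return smoothed - background
```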

A max-tree is constructed from the background-subtracted image, and a statistical test is applied to extract significant nodes based on their flux attributes.
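For intuition, the max-tree can be built with a union-find pass over pixels sorted by brightness. The sketch below is a minimal illustration of the Salembier-style structure on a 4-connected grid, not the Higra implementation MTO2 uses, and it omits the canonicalization step real implementations perform:

```python
import numpy as np

def max_tree_parents(image):
    """Return a (non-canonical) max-tree parent array over flattened pixels.
    Each pixel's parent is at most as bright as the pixel itself; the unique
    root corresponds to the dimmest level."""
    h, w = image.shape
    n = h * w
    f = image.ravel()
    order = np.argsort(f, kind="stable")[::-1]   # brightest first
    parent = np.full(n, -1, dtype=np.int64)
    zpar = np.full(n, -1, dtype=np.int64)        # union-find forest

    def find(i):
        # Path-halving union-find lookup.
        while zpar[i] != i:
            zpar[i] = zpar[zpar[i]]
            i = zpar[i]
        return i

    for p in order:
        parent[p] = p
        zpar[p] = p
        y, x = divmod(int(p), w)
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                q = ny * w + nx
                if zpar[q] != -1:                # neighbour already processed
                    r = find(q)
                    if r != p:                   # attach brighter component
                        parent[r] = p
                        zpar[r] = p
    return parent
```

Flux and area attributes (and hence the statistical test) can then be accumulated bottom-up along this parent relation.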

Optional Post-Processing

Arguments are available to refine the segmentation map:

  • move_factor: Adjusts isophotal boundaries by shifting components relative to background noise
  • area_ratio: Corrects deblending by evaluating area size relationships between nodes and their parents
  • G_fit: Gaussian-fit attribute filtering
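As a toy illustration of the area_ratio idea (a hypothetical helper, not MTO2's internal API): a child node whose area is nearly as large as its parent's likely belongs to the same object, so only sufficiently smaller children are kept as separate deblended components.

```python
def is_separate_component(node_area: int, parent_area: int,
                          area_ratio: float = 0.90) -> bool:
    """Hypothetical sketch of the --area_ratio criterion: treat a max-tree
    node as a separate deblended object only when its area is sufficiently
    smaller than its parent's."""
    return node_area / parent_area < area_ratio
```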

Figure: MTO2 processing pipeline workflow.

Installation

Tip

It is recommended to use an isolated Python virtual environment to avoid dependency conflicts. The simplest way is to use Python's built-in venv. If you already have another virtual environment active, be sure to deactivate it first.

Dependencies

The dependencies are listed in the ./requirements directory.

python3 -m venv ./venvs/mto2
source ./venvs/mto2/bin/activate
pip install -r ./requirements/requirements_base.txt
pip install -r ./requirements/requirements_torch.txt || pip install -r ./requirements/requirements_torch_fallback.txt

Minimal run

python mto2.py image.fits

Tuned run

python mto2.py image.fits \
    --s_sigma 1.6 \
    --move_factor 0.1 \
    --area_ratio 0.91 \
    --G_fit \
    --skip_reduction \
    --par_out \
    --background_mode const \
    --crop 10 20 10000 20000

Get started with a demo in Google Colab.

Command line arguments

| Option | Description | Type | Default | Range/Values |
|---|---|---|---|---|
| `--s_sigma` | Standard deviation of the smoothing kernel | float | 2.00 | ≥ 0 |
| `--move_factor` | Adjusts the spread of objects | float | 0.00 | ≥ 0 |
| `--area_ratio` | Adjusts deblending sensitivity | float | 0.90 | [0.0, 1.0) |
| `--par_out` | Extract and save parameters in .csv format | flag | - | - |
| `--G_fit` | Apply the Gaussian-fit attribute filter | flag | - | - |
| `--skip_reduction` | Run without background reduction | flag | - | - |
| `--background_mode` | Select constant or morphological background | choice | const | const, morph |
| `--crop` | Crop the image | int[4] | 0 0 -1 -1 | x_min y_min x_max y_max |
| `-h, --help` | Show the help message and exit | flag | - | - |

Output formatting

Output files (segmentation maps and catalogs) are automatically timestamped in ISO 8601 format to prevent overwriting, track analyses clearly, and make experiments reproducible. The following files are generated:

  • Segmentation maps: segmentation_map.fits and segmentation_map.png
  • Source catalogs: parameters.csv (when --par_out is enabled)
  • Background and reduction maps: background_map.fits and reduced.fits
  • Segmentation intensity calibration map: cali_base.fits
  • Run metadata: run_metadata.json containing run arguments and background mode information
  • Run tracking: Centralized your_runs.csv recording all runs and their status (particularly useful for automated pipelines)

The run_metadata.json file provides complete information about each run, including background mode, argument settings, and software version for full reproducibility.
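Because run_metadata.json records every argument, a previous run can be replayed programmatically. The helper below is a hypothetical sketch assuming the field names shown in the template, not part of MTO2 itself:

```python
import json

def command_from_metadata(metadata: dict) -> list:
    """Rebuild an mto2.py command line from a parsed run_metadata.json
    (hypothetical helper; field names follow the template below)."""
    cmd = ["python", "mto2.py", metadata["file_name"]]
    for key, value in metadata["arguments"].items():
        if value is True:                 # boolean flags such as --G_fit
            cmd.append("--" + key)
        elif isinstance(value, list):     # e.g. --crop takes four ints
            cmd.append("--" + key)
            cmd.extend(str(v) for v in value)
        elif value is not False:          # valued options
            cmd.extend(["--" + key, str(value)])
    return cmd

# Example with an inline metadata document:
meta = json.loads("""
{"file_name": "image.fits",
 "arguments": {"background_mode": "morph", "s_sigma": 1.6,
               "G_fit": true, "skip_reduction": false,
               "crop": [0, 0, -1, -1]}}
""")
cmd = command_from_metadata(meta)
```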

run_metadata.json template:

{
  "software": "MTO2",
  "version": "1.0.0",
  "time_stamp": "2025-10-09T12:14:45.580277",
  "file_name": "Your-Data-Name",
  "arguments": {
    "background_mode": "morph",
    "move_factor": 0.1,
    "area_ratio": 0.91,
    "s_sigma": 1.6,
    "G_fit": true,
    "skip_reduction": true,
    "crop": [
      3100,
      3600,
      4200,
      4800
    ]
  }
}

Citation

If you use MTO2 in your research, please cite the following paper:

@ARTICLE{10535192,
  author={Hashem Faezi, Mohammad and Peletier, Reynier and Wilkinson, Michael H. F.},
  journal={IEEE Access}, 
  title={Multi-Spectral Source-Segmentation Using Semantically-Informed Max-Trees}, 
  year={2024},
  volume={12},
  number={},
  pages={72288-72302},
  doi={10.1109/ACCESS.2024.3403309}
}

Acknowledgments

This software was developed for the Ph.D. thesis Faint Object Detection in Multidimensional Astronomical Data (Mohammad H. Faezi, 2026) at the Rijksuniversiteit Groningen, under the supervision of Dr. Michael Wilkinson, Prof. Dr. Reynier Peletier, and Prof. Dr. Nicolai Petkov.

MTO2 is built with the Higra Python package and builds on its example implementation of MTO: Astronomical object detection with the Max-Tree (MMTA 2016).

This implementation draws inspiration from Caroline Haigh's work (Teeninga et al.).


License

This project is licensed under the MIT License - see the LICENSE file for details.
