Microscopy Object Detection Application

A desktop application for detecting and segmenting organelles and membrane branching structures in microscopy images using YOLOv8-seg. It provides training, validation, and visualization tools built around pixel-accurate segmentation masks.

Features

  • 🎯 Object Detection & Segmentation: Real-time and batch detection with pixel-accurate segmentation masks
  • 🎓 Model Training: Fine-tune YOLOv8s-seg on custom microscopy datasets
  • 📊 Validation & Metrics: Comprehensive model validation with visualization
  • 💾 Database Storage: SQLite database for detection results and metadata
  • 📈 Visualization: Interactive plots and charts using pyqtgraph
  • 🖼️ Annotation Tool: Manual annotation interface (future feature)
  • 📤 Export: Export detection results to CSV, JSON, or Excel
  • ⚙️ Configuration: Flexible configuration management

Technology Stack

  • Python 3.8+
  • PySide6 for the desktop GUI
  • Ultralytics YOLOv8 (yolov8s-seg) for detection and segmentation
  • SQLite for storing detection results and metadata
  • pyqtgraph for interactive plots and charts

Prerequisites

  • Python 3.8 or higher
  • CUDA-capable GPU (recommended for training, optional for inference; see the quick check below)
  • 8GB RAM minimum
  • 2GB disk space for models and data
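
To confirm up front that PyTorch (installed as a dependency of ultralytics) can actually see your GPU, a quick check such as the following can be run:

import torch

# Report whether PyTorch can see a CUDA-capable GPU.
if torch.cuda.is_available():
    print("CUDA GPU available:", torch.cuda.get_device_name(0))
else:
    print("No CUDA GPU detected; training will fall back to the CPU (slower).")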

Installation

Option 1: Install from PyPI

pip install microscopy-object-detection

This installs the package and all of its dependencies.

Option 2: Install from Source

1. Clone the Repository

git clone <repository-url>
cd object_detection

2. Create Virtual Environment

# Linux/Mac
python3 -m venv venv
source venv/bin/activate

# Windows
python -m venv venv
venv\Scripts\activate

3. Install in Development Mode

# Install in editable mode with dev dependencies
pip install -e ".[dev]"

# Or install just the package
pip install .

4. Download Base Model

The application will automatically download the yolov8s-seg.pt segmentation model on first use, or you can download it manually:

# The model will be downloaded automatically by ultralytics on first use:
from ultralytics import YOLO
model = YOLO("yolov8s-seg.pt")  # downloads the weights if they are not already cached

# Or download manually from: https://github.com/ultralytics/assets/releases

Note: YOLOv8s-seg is a segmentation model that provides pixel-accurate masks for detected objects, enabling more precise analysis than standard bounding box detection.

Quick Start

1. Launch the Application

After installation, you can launch the application in any of the following ways:

Using the GUI launcher:

microscopy-detect-gui

Or using Python directly:

python -m microscopy_object_detection

If installed from source:

python main.py

2. Configure Image Repository

  1. Go to File → Settings
  2. Set the path to your image repository
  3. Click Save

3. Perform Detection

  1. Navigate to the Detection tab
  2. Select a model (default: yolov8s-seg.pt)
  3. Choose an image or folder
  4. Set confidence threshold
  5. Click Detect
  6. View results with segmentation masks overlaid
  7. Save results to database

4. Train Custom Model

  1. Navigate to the Training tab
  2. Prepare your dataset in YOLO format:
    dataset/
    ├── train/
    │   ├── images/
    │   └── labels/
    ├── val/
    │   ├── images/
    │   └── labels/
    └── data.yaml
    
  3. Select dataset YAML file
  4. Configure training parameters
  5. Click Start Training
  6. Monitor progress in real-time

5. View Results

  1. Navigate to the Results tab
  2. Browse detection history
  3. Filter by date, model, class, or confidence
  4. Export results to CSV/JSON/Excel

Project Structure

object_detection/
├── main.py                          # Application entry point
├── requirements.txt                 # Python dependencies
├── ARCHITECTURE.md                  # Detailed architecture documentation
├── IMPLEMENTATION_GUIDE.md          # Implementation specifications
├── README.md                        # This file
├── config/
│   └── app_config.yaml             # Application configuration
├── src/
│   ├── database/                   # Database operations
│   │   ├── db_manager.py           # Database manager
│   │   ├── models.py               # Data models
│   │   └── schema.sql              # Database schema
│   ├── model/                      # ML model operations
│   │   ├── yolo_wrapper.py         # YOLO wrapper
│   │   └── inference.py            # Inference engine
│   ├── gui/                        # User interface
│   │   ├── main_window.py          # Main window
│   │   ├── tabs/                   # Application tabs
│   │   ├── dialogs/                # Dialog windows
│   │   └── widgets/                # Custom widgets
│   └── utils/                      # Utility modules
│       ├── config_manager.py       # Configuration management
│       ├── logger.py               # Logging setup
│       └── file_utils.py           # File operations
├── data/                           # Data directory
│   ├── models/                     # Saved models
│   ├── datasets/                   # Training datasets
│   └── results/                    # Detection results
├── tests/                          # Unit tests
└── docs/                           # Documentation

Dataset Format

YOLO Format Structure

dataset/
├── train/
│   ├── images/
│   │   ├── img001.png
│   │   ├── img002.png
│   │   └── ...
│   └── labels/
│       ├── img001.txt
│       ├── img002.txt
│       └── ...
├── val/
│   ├── images/
│   └── labels/
├── test/
│   └── images/
└── data.yaml
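
Before training, it is worth verifying that every image has a matching label file. A minimal sketch of such a check (a standalone helper, not part of the package; it assumes the directory layout shown above):

from pathlib import Path

IMAGE_EXTENSIONS = {".jpg", ".jpeg", ".png", ".tif", ".tiff"}

def check_split(split_dir: Path) -> None:
    """Warn about images in <split>/images that have no matching file in <split>/labels."""
    images = [p for p in (split_dir / "images").iterdir() if p.suffix.lower() in IMAGE_EXTENSIONS]
    labels_dir = split_dir / "labels"
    missing = [p.name for p in images if not (labels_dir / f"{p.stem}.txt").exists()]
    if missing:
        print(f"{split_dir.name}: {len(missing)} image(s) without labels, e.g. {missing[:3]}")
    else:
        print(f"{split_dir.name}: all {len(images)} images have labels")

dataset_root = Path("dataset")
for split in ("train", "val"):
    check_split(dataset_root / split)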

Label Format

Each label file (.txt) contains one line per object. For bounding-box labels, the format is:

<class_id> <x_center> <y_center> <width> <height>

For segmentation training with YOLOv8-seg, each line instead lists the polygon vertices of the object's mask:

<class_id> <x1> <y1> <x2> <y2> ... <xn> <yn>

All coordinates are normalized to the 0-1 range.

Example img001.txt (bounding-box labels):

0 0.5 0.5 0.3 0.4
1 0.2 0.3 0.15 0.2
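
For reference, label lines can be parsed with a few lines of Python; this is an illustrative sketch rather than part of the application's API:

def parse_label_line(line):
    """Split one YOLO label line into a class id and its normalized coordinates."""
    parts = line.split()
    class_id = int(parts[0])
    coords = [float(value) for value in parts[1:]]
    if any(not 0.0 <= value <= 1.0 for value in coords):
        raise ValueError(f"coordinates must be normalized to 0-1: {line!r}")
    return class_id, coords

with open("dataset/train/labels/img001.txt") as handle:
    for line in handle:
        if line.strip():
            class_id, coords = parse_label_line(line)
            print(class_id, coords)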

data.yaml Configuration

path: /path/to/dataset
train: train/images
val: val/images
test: test/images  # optional

names:
  0: organelle
  1: membrane_branch

nc: 2

Database Schema

The application uses SQLite with the following main tables:

  • models: Stores trained model information and metrics
  • images: Stores image metadata and paths
  • detections: Stores detection results with bounding boxes and segmentation masks (polygon coordinates)
  • annotations: Stores manual annotations with optional segmentation masks (future feature)

See ARCHITECTURE.md for detailed schema information.
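
The database can also be inspected directly from Python using only the standard library's sqlite3 module; the path below follows the default in config/app_config.yaml:

import sqlite3

conn = sqlite3.connect("data/detections.db")
try:
    # List the tables defined in the SQLite database.
    rows = conn.execute(
        "SELECT name FROM sqlite_master WHERE type = 'table' ORDER BY name"
    ).fetchall()
    print("Tables:", [name for (name,) in rows])
finally:
    conn.close()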

Configuration

Application Configuration (config/app_config.yaml)

database:
  path: "data/detections.db"

image_repository:
  base_path: ""  # Set via GUI
  allowed_extensions: [".jpg", ".jpeg", ".png", ".tif", ".tiff"]

models:
  default_base_model: "yolov8s-seg.pt"
  models_directory: "data/models"

training:
  default_epochs: 100
  default_batch_size: 16
  default_imgsz: 640
  default_patience: 50

detection:
  default_confidence: 0.25
  max_batch_size: 100

visualization:
  bbox_colors:
    organelle: "#FF6B6B"
    membrane_branch: "#4ECDC4"
  bbox_thickness: 2
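
Inside the application, configuration is handled by src/utils/config_manager.py. Outside of it, the file can be read with PyYAML (a dependency of ultralytics); a minimal sketch:

import yaml

with open("config/app_config.yaml") as handle:
    config = yaml.safe_load(handle)

print("Database path:", config["database"]["path"])
print("Default model:", config["models"]["default_base_model"])
print("Default confidence:", config["detection"]["default_confidence"])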

Usage Examples

Training a Model

from src.model.yolo_wrapper import YOLOWrapper

# Initialize wrapper
yolo = YOLOWrapper("yolov8s-seg.pt")

# Train model
results = yolo.train(
    data_yaml="dataset/data.yaml",
    epochs=100,
    imgsz=640,
    batch=16,
    name="organelle_detector"
)

print(f"Training complete! mAP50: {results['metrics']['mAP50']}")

Batch Detection

from src.model.inference import InferenceEngine
from src.database.db_manager import DatabaseManager

# Initialize components
db = DatabaseManager("data/detections.db")
engine = InferenceEngine(
    model_path="data/models/best.pt",
    db_manager=db,
    model_id=1
)

# Detect objects in multiple images
results = engine.detect_batch(
    image_paths=["img1.jpg", "img2.jpg"],
    repository_root="/path/to/images",
    conf=0.25
)

print(f"Processed {len(results)} images")

Querying Detection Results

from src.database.db_manager import DatabaseManager

db = DatabaseManager("data/detections.db")

# Get all detections for a specific class
detections = db.get_detections(
    filters={'class_name': 'organelle'}
)

# Export to CSV
db.export_detections_to_csv(
    output_path="results.csv",
    filters={'confidence': 0.5}  # Only high-confidence detections
)

GUI Tabs

1. Detection Tab

  • Single image or batch detection
  • Real-time visualization
  • Adjustable confidence threshold
  • Save results to database

2. Training Tab

  • Dataset selection
  • Hyperparameter configuration
  • Real-time training progress
  • Loss and metric plots

3. Validation Tab

  • Model validation
  • Confusion matrix
  • Precision-Recall curves
  • Per-class metrics

4. Results Tab

  • Detection history browser
  • Advanced filtering
  • Statistics dashboard
  • Export functionality

5. Annotation Tab (Future)

  • Manual annotation tool
  • YOLO format export
  • Annotation verification

Development

Running Tests

# Run all tests
pytest

# Run with coverage
pytest --cov=src tests/

# Run specific test file
pytest tests/test_database.py

Code Style

# Format code
black src/ tests/

# Lint code
pylint src/

# Type checking
mypy src/

Building Documentation

# Generate API documentation
cd docs
make html

Troubleshooting

Common Issues

Issue: CUDA out of memory during training

Solution: Reduce batch size in training parameters or use CPU for training
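
For example, with the ultralytics API directly (parameter values are illustrative):

from ultralytics import YOLO

model = YOLO("yolov8s-seg.pt")

# Option A: reduce the batch size to lower GPU memory use
model.train(data="dataset/data.yaml", epochs=100, imgsz=640, batch=8)

# Option B: train on the CPU instead (much slower)
# model.train(data="dataset/data.yaml", epochs=100, imgsz=640, device="cpu")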

Issue: Model not found error

Solution: Ensure the yolov8s-seg.pt model has been downloaded. Run:

from ultralytics import YOLO
model = YOLO('yolov8s-seg.pt')  # Will auto-download

Issue: Database locked error

Solution: Close any other connections to the database file

Issue: Import errors

Solution: Ensure virtual environment is activated and dependencies are installed:

source venv/bin/activate  # Linux/Mac
pip install -r requirements.txt

Performance Tips

  1. GPU Acceleration: Use CUDA-capable GPU for training and inference
  2. Batch Processing: Process multiple images at once for efficiency
  3. Image Size: Use an appropriate input size (640 px by default) to balance speed and accuracy
  4. Confidence Threshold: Raise the confidence threshold to filter out false positives (see the sketch after this list)
  5. Database Maintenance: Keep the database size manageable so queries stay fast
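
The image-size and confidence settings map directly onto the Ultralytics prediction arguments; a minimal sketch using the ultralytics API rather than the application's wrapper:

from ultralytics import YOLO

model = YOLO("data/models/best.pt")  # or the base "yolov8s-seg.pt"

# imgsz trades accuracy for speed; conf discards low-confidence detections.
results = model.predict(source="img1.jpg", imgsz=640, conf=0.25)
for result in results:
    print(f"{len(result.boxes)} detections above the confidence threshold")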

Roadmap

Phase 1 (Core Features) ✅

  • Model training and validation
  • Real-time and batch detection
  • Results storage in SQLite
  • Basic visualization

Phase 2 (Enhanced Features) 🚧

  • Annotation tool implementation
  • Advanced filtering and search
  • Model comparison tools
  • Enhanced export options

Phase 3 (Advanced Features) 📋

  • Real-time camera detection
  • Multi-model ensemble
  • Cloud storage integration
  • Collaborative annotation

Contributing

Contributions are welcome! Please follow these steps:

  1. Fork the repository
  2. Create a feature branch (git checkout -b feature/amazing-feature)
  3. Commit your changes (git commit -m 'Add amazing feature')
  4. Push to the branch (git push origin feature/amazing-feature)
  5. Open a Pull Request

License

This project is licensed under the MIT License - see the LICENSE file for details.

Acknowledgments

Citation

If you use this application in your research, please cite:

@software{microscopy_object_detection,
  title = {Microscopy Object Detection Application},
  author = {Your Name},
  year = {2024},
  url = {https://github.com/yourusername/object_detection}
}

Contact

For questions, issues, or suggestions, please open an issue on the project repository.

Support

If you find this project helpful, please consider:

  • ⭐ Starring the repository
  • 🐛 Reporting bugs
  • 💡 Suggesting new features
  • 📖 Improving documentation

Happy detecting! 🔬🔍
