Making it an installable package and switching to segmentation mode

2025-12-05 15:51:16 +02:00
parent 9011276584
commit 310e0b2285
20 changed files with 667 additions and 56 deletions


@@ -2,11 +2,11 @@
 ## Project Overview
-A desktop application for detecting organelles and membrane branching structures in microscopy images using YOLOv8s, with comprehensive training, validation, and visualization capabilities.
+A desktop application for detecting and segmenting organelles and membrane branching structures in microscopy images using YOLOv8s-seg, with comprehensive training, validation, and visualization capabilities including pixel-accurate segmentation masks.
 ## Technology Stack
-- **ML Framework**: Ultralytics YOLOv8 (YOLOv8s.pt model)
+- **ML Framework**: Ultralytics YOLOv8 (YOLOv8s-seg.pt segmentation model)
 - **GUI Framework**: PySide6 (Qt6 for Python)
 - **Visualization**: pyqtgraph
 - **Database**: SQLite3
@@ -110,6 +110,7 @@ erDiagram
         float x_max
         float y_max
         float confidence
+        text segmentation_mask
         datetime detected_at
         json metadata
     }
@@ -122,6 +123,7 @@ erDiagram
         float y_min
         float x_max
         float y_max
+        text segmentation_mask
         string annotator
         datetime created_at
         boolean verified
@@ -139,7 +141,7 @@ Stores information about trained models and their versions.
 | model_name | TEXT | NOT NULL | User-friendly model name |
 | model_version | TEXT | NOT NULL | Version string (e.g., "v1.0") |
 | model_path | TEXT | NOT NULL | Path to model weights file |
-| base_model | TEXT | NOT NULL | Base model used (e.g., "yolov8s.pt") |
+| base_model | TEXT | NOT NULL | Base model used (e.g., "yolov8s-seg.pt") |
 | created_at | TIMESTAMP | DEFAULT CURRENT_TIMESTAMP | Model creation timestamp |
 | training_params | JSON | | Training hyperparameters |
 | metrics | JSON | | Validation metrics (mAP, precision, recall) |
@@ -159,7 +161,7 @@ Stores metadata about microscopy images.
 | checksum | TEXT | | MD5 hash for integrity verification |
 #### **detections** table
-Stores object detection results.
+Stores object detection results with optional segmentation masks.
 | Column | Type | Constraints | Description |
 |--------|------|-------------|-------------|
@@ -172,11 +174,12 @@ Stores object detection results.
 | x_max | REAL | NOT NULL | Bounding box right coordinate (normalized 0-1) |
 | y_max | REAL | NOT NULL | Bounding box bottom coordinate (normalized 0-1) |
 | confidence | REAL | NOT NULL | Detection confidence score (0-1) |
+| segmentation_mask | TEXT | | JSON array of polygon coordinates [[x1,y1], [x2,y2], ...] (normalized 0-1) |
 | detected_at | TIMESTAMP | DEFAULT CURRENT_TIMESTAMP | When detection was performed |
 | metadata | JSON | | Additional metadata (processing time, etc.) |
 #### **annotations** table
-Stores manual annotations for training data (future feature).
+Stores manual annotations for training data with optional segmentation masks (future feature).
 | Column | Type | Constraints | Description |
 |--------|------|-------------|-------------|
@@ -187,6 +190,7 @@ Stores manual annotations for training data (future feature).
 | y_min | REAL | NOT NULL | Bounding box top coordinate (normalized) |
 | x_max | REAL | NOT NULL | Bounding box right coordinate (normalized) |
 | y_max | REAL | NOT NULL | Bounding box bottom coordinate (normalized) |
+| segmentation_mask | TEXT | | JSON array of polygon coordinates [[x1,y1], [x2,y2], ...] (normalized 0-1) |
 | annotator | TEXT | | Name of person who created annotation |
 | created_at | TIMESTAMP | DEFAULT CURRENT_TIMESTAMP | Annotation timestamp |
 | verified | BOOLEAN | DEFAULT 0 | Whether annotation is verified |
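Both tables serialize the polygon into the `segmentation_mask` TEXT column as JSON. A minimal sketch of that round trip, using an in-memory table trimmed to the relevant columns (the values are invented):

```python
import json
import sqlite3

# In-memory stand-in for the detections table, trimmed to the relevant columns.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE detections ("
    "id INTEGER PRIMARY KEY, class_name TEXT, "
    "confidence REAL, segmentation_mask TEXT)"
)

# A normalized polygon stored as JSON text, matching the documented format.
polygon = [[0.12, 0.34], [0.56, 0.34], [0.56, 0.78], [0.12, 0.78]]
conn.execute(
    "INSERT INTO detections (class_name, confidence, segmentation_mask) "
    "VALUES (?, ?, ?)",
    ("organelle", 0.91, json.dumps(polygon)),
)

# Reading it back, the TEXT column decodes to the original list of [x, y] pairs.
row = conn.execute("SELECT class_name, segmentation_mask FROM detections").fetchone()
restored = json.loads(row[1])
print(restored == polygon)  # True
```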
@@ -245,8 +249,9 @@ graph TB
 ### Key Components
 #### 1. **YOLO Wrapper** ([`src/model/yolo_wrapper.py`](src/model/yolo_wrapper.py))
-Encapsulates YOLOv8 operations:
-- Load pre-trained YOLOv8s model
+Encapsulates YOLOv8-seg operations:
+- Load pre-trained YOLOv8s-seg segmentation model
+- Extract pixel-accurate segmentation masks
 - Fine-tune on custom microscopy dataset
 - Export trained models
 - Provide training progress callbacks
@@ -255,10 +260,10 @@ Encapsulates YOLOv8 operations:
 **Key Methods:**
 ```python
 class YOLOWrapper:
-    def __init__(self, model_path: str = "yolov8s.pt")
+    def __init__(self, model_path: str = "yolov8s-seg.pt")
     def train(self, data_yaml: str, epochs: int, callbacks: dict)
     def validate(self, data_yaml: str) -> dict
-    def predict(self, image_path: str, conf: float) -> list
+    def predict(self, image_path: str, conf: float) -> list  # Returns detections with segmentation masks
     def export_model(self, format: str, output_path: str)
 ```
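For the mask extraction step, Ultralytics exposes normalized polygons on prediction results as `results[0].masks.xyn` (one `(N, 2)` array per detected object). A sketch of converting that into the stored `[[x1, y1], ...]` format — the array below is a stand-in for real model output, and `mask_to_polygon` is an illustrative helper, not a method of the actual wrapper:

```python
import json
import numpy as np

def mask_to_polygon(xyn: np.ndarray, ndigits: int = 4) -> list:
    """Convert one normalized mask polygon (shape (N, 2), as found in
    ultralytics `masks.xyn`) into a plain [[x1, y1], [x2, y2], ...] list."""
    return [[round(float(x), ndigits), round(float(y), ndigits)] for x, y in xyn]

# Dummy array standing in for one entry of `masks.xyn` from a real prediction.
fake_xyn = np.array([[0.10, 0.20], [0.30, 0.20], [0.30, 0.40]])
polygon = mask_to_polygon(fake_xyn)
print(json.dumps(polygon))  # [[0.1, 0.2], [0.3, 0.2], [0.3, 0.4]]
```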
@@ -435,7 +440,7 @@ image_repository:
   allowed_extensions: [".jpg", ".jpeg", ".png", ".tif", ".tiff"]
 models:
-  default_base_model: "yolov8s.pt"
+  default_base_model: "yolov8s-seg.pt"
   models_directory: "data/models"
 training:

BUILD.md (new file)

@@ -0,0 +1,178 @@
# Building and Publishing Guide
This guide explains how to build and publish the microscopy-object-detection package.
## Prerequisites
```bash
pip install build twine
```
## Building the Package
### 1. Clean Previous Builds
```bash
rm -rf build/ dist/ *.egg-info
```
### 2. Build Distribution Archives
```bash
python -m build
```
This will create both wheel (`.whl`) and source distribution (`.tar.gz`) in the `dist/` directory.
### 3. Verify the Build
```bash
ls dist/
# Should show:
# microscopy_object_detection-1.0.0-py3-none-any.whl
# microscopy_object_detection-1.0.0.tar.gz
```
## Testing the Package Locally
### Install in Development Mode
```bash
pip install -e .
```
### Install from Built Package
```bash
pip install dist/microscopy_object_detection-1.0.0-py3-none-any.whl
```
### Test the Installation
```bash
# Test CLI
microscopy-detect --version
# Test GUI launcher
microscopy-detect-gui
```
## Publishing to PyPI
### 1. Configure PyPI Credentials
Create or update `~/.pypirc`:
```ini
[pypi]
username = __token__
password = pypi-YOUR-API-TOKEN-HERE
```
### 2. Upload to Test PyPI (Recommended First)
```bash
python -m twine upload --repository testpypi dist/*
```
Then test installation:
```bash
pip install --index-url https://test.pypi.org/simple/ microscopy-object-detection
```
### 3. Upload to PyPI
```bash
python -m twine upload dist/*
```
## Version Management
Update version in multiple files:
- `setup.py`: Update `version` parameter
- `pyproject.toml`: Update `version` field
- `src/__init__.py`: Update `__version__` variable
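Since the version lives in three places, a small consistency check can catch drift before a release; the patterns and helpers below are illustrative, not part of the package:

```python
import re
from pathlib import Path
from typing import Optional, Set

# Where each file declares its version (the regexes are assumptions about layout).
PATTERNS = {
    "setup.py": r'version\s*=\s*["\']([^"\']+)["\']',
    "pyproject.toml": r'^version\s*=\s*["\']([^"\']+)["\']',
    "src/__init__.py": r'__version__\s*=\s*["\']([^"\']+)["\']',
}

def extract_version(text: str, pattern: str) -> Optional[str]:
    """Return the first version string matched by pattern, if any."""
    match = re.search(pattern, text, re.MULTILINE)
    return match.group(1) if match else None

def check_versions(root: Path) -> Set[str]:
    """Collect distinct version strings; more than one means a mismatch."""
    found = set()
    for rel, pattern in PATTERNS.items():
        version = extract_version((root / rel).read_text(), pattern)
        if version:
            found.add(version)
    return found
```

Run against the project root before tagging a release; a result set with more than one entry means the files have drifted apart.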
## Git Tags
After publishing, tag the release:
```bash
git tag -a v1.0.0 -m "Release version 1.0.0"
git push origin v1.0.0
```
## Package Structure
The built package includes:
- All Python source files in `src/`
- Configuration files in `config/`
- Database schema file (`src/database/schema.sql`)
- Documentation files (README.md, LICENSE, etc.)
- Entry points for CLI and GUI
## Troubleshooting
### Import Errors
If you get import errors, ensure:
- All `__init__.py` files are present
- Package structure follows the setup configuration
- Dependencies are listed in `requirements.txt`
### Missing Files
If files are missing in the built package:
- Check `MANIFEST.in` includes the required patterns
- Check `pyproject.toml` package-data configuration
- Rebuild with `python -m build --no-isolation` for debugging
### Version Conflicts
If version conflicts occur:
- Ensure version is consistent across all files
- Clear build artifacts and rebuild
- Check for cached installations: `pip list | grep microscopy`
## CI/CD Integration
### GitHub Actions Example
```yaml
name: Build and Publish
on:
release:
types: [created]
jobs:
deploy:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- uses: actions/setup-python@v2
with:
python-version: '3.8'
- name: Install dependencies
run: |
pip install build twine
- name: Build package
run: python -m build
- name: Publish to PyPI
env:
TWINE_USERNAME: __token__
TWINE_PASSWORD: ${{ secrets.PYPI_TOKEN }}
run: twine upload dist/*
```
## Best Practices
1. **Version Bumping**: Use semantic versioning (MAJOR.MINOR.PATCH)
2. **Testing**: Always test on Test PyPI before publishing to PyPI
3. **Documentation**: Update README.md and CHANGELOG.md for each release
4. **Git Tags**: Tag releases in git for easy reference
5. **Dependencies**: Keep requirements.txt updated and specify version ranges
## Resources
- [Python Packaging Guide](https://packaging.python.org/)
- [setuptools Documentation](https://setuptools.pypa.io/)
- [PyPI Publishing Guide](https://packaging.python.org/tutorials/packaging-projects/)

LICENSE (new file)

@@ -0,0 +1,21 @@
MIT License
Copyright (c) 2024 Your Name
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

MANIFEST.in (new file)

@@ -0,0 +1,37 @@
# Include documentation files
include README.md
include LICENSE
include ARCHITECTURE.md
include IMPLEMENTATION_GUIDE.md
include QUICKSTART.md
include PLAN_SUMMARY.md
# Include requirements
include requirements.txt
# Include configuration files
recursive-include config *.yaml
recursive-include config *.yml
# Include database schema
recursive-include src/database *.sql
# Include tests
recursive-include tests *.py
# Exclude compiled Python files
global-exclude *.pyc
global-exclude *.pyo
global-exclude __pycache__
global-exclude *.so
global-exclude .DS_Store
# Exclude git and IDE files
global-exclude .git*
global-exclude .vscode
global-exclude .idea
# Exclude build artifacts
prune build
prune dist
prune *.egg-info


@@ -38,7 +38,7 @@ This will install:
- OpenCV and Pillow (image processing) - OpenCV and Pillow (image processing)
- And other dependencies - And other dependencies
**Note:** The first run will automatically download the YOLOv8s.pt model (~22MB). **Note:** The first run will automatically download the YOLOv8s-seg.pt segmentation model (~23MB).
 ### 4. Verify Installation
@@ -84,11 +84,11 @@ In the Settings dialog:
 ### Single Image Detection
 1. Go to the **Detection** tab
-2. Select a model from the dropdown (default: Base Model yolov8s.pt)
+2. Select a model from the dropdown (default: Base Model yolov8s-seg.pt)
 3. Adjust confidence threshold with the slider
 4. Click "Detect Single Image"
 5. Select an image file
-6. View results in the results panel
+6. View results with segmentation masks overlaid on the image
 ### Batch Detection
@@ -108,9 +108,18 @@ Detection results include:
 - **Class names**: Types of objects detected (e.g., organelle, membrane_branch)
 - **Confidence scores**: Detection confidence (0-1)
 - **Bounding boxes**: Object locations (stored in database)
+- **Segmentation masks**: Pixel-accurate polygon coordinates for each detected object
 All results are stored in the SQLite database at [`data/detections.db`](data/detections.db).
+### Segmentation Visualization
+The application automatically displays segmentation masks when available:
+- Semi-transparent colored overlay (30% opacity) showing the exact shape of detected objects
+- Polygon contours outlining each segmentation
+- Color-coded by object class
+- Toggleable in future versions
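The 30%-opacity overlay described above is plain alpha compositing. A minimal numpy sketch of the blend (the boolean mask is hand-made here; in the application it would come from rasterizing the stored polygon, e.g. with OpenCV's `fillPoly`):

```python
import numpy as np

def overlay_mask(image: np.ndarray, mask: np.ndarray, color, alpha: float = 0.3) -> np.ndarray:
    """Blend a class color over the masked pixels at the given opacity,
    leaving unmasked pixels untouched."""
    out = image.astype(np.float64)
    color = np.asarray(color, dtype=np.float64)
    out[mask] = (1.0 - alpha) * out[mask] + alpha * color
    return out.astype(np.uint8)

# Toy 4x4 RGB image; the mask covers the top-left quadrant.
img = np.full((4, 4, 3), 200, dtype=np.uint8)
mask = np.zeros((4, 4), dtype=bool)
mask[:2, :2] = True
blended = overlay_mask(img, mask, color=(255, 0, 0))
print(blended[0, 0])  # 0.7 * 200 + 0.3 * (255, 0, 0) -> [216 140 140]
print(blended[3, 3])  # unmasked pixel unchanged -> [200 200 200]
```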
 ## Database
 The application uses SQLite to store:
@@ -176,7 +185,7 @@ sudo apt-get install libxcb-xinerama0
 ### Detection Not Working
 **No models available**
-- The base YOLOv8s model will be downloaded automatically on first use
+- The base YOLOv8s-seg segmentation model will be downloaded automatically on first use
 - Make sure you have an internet connection for the first run
 **Images not found**


@@ -1,6 +1,6 @@
 # Microscopy Object Detection Application
-A desktop application for detecting organelles and membrane branching structures in microscopy images using YOLOv8, featuring comprehensive training, validation, and visualization capabilities.
+A desktop application for detecting and segmenting organelles and membrane branching structures in microscopy images using YOLOv8-seg, featuring comprehensive training, validation, and visualization capabilities with pixel-accurate segmentation masks.
 ![Python](https://img.shields.io/badge/python-3.8+-blue.svg)
 ![PySide6](https://img.shields.io/badge/PySide6-6.5+-green.svg)
@@ -8,8 +8,8 @@ A desktop application for detecting organelles and membrane branching structures
 ## Features
-- **🎯 Object Detection**: Real-time and batch detection of microscopy objects
-- **🎓 Model Training**: Fine-tune YOLOv8s on custom microscopy datasets
+- **🎯 Object Detection & Segmentation**: Real-time and batch detection with pixel-accurate segmentation masks
+- **🎓 Model Training**: Fine-tune YOLOv8s-seg on custom microscopy datasets
 - **📊 Validation & Metrics**: Comprehensive model validation with visualization
 - **💾 Database Storage**: SQLite database for detection results and metadata
 - **📈 Visualization**: Interactive plots and charts using pyqtgraph
@@ -34,14 +34,24 @@ A desktop application for detecting organelles and membrane branching structures
 ## Installation
-### 1. Clone the Repository
+### Option 1: Install from PyPI (Recommended)
+```bash
+pip install microscopy-object-detection
+```
+This will install the package and all its dependencies.
+### Option 2: Install from Source
+#### 1. Clone the Repository
 ```bash
 git clone <repository-url>
 cd object_detection
 ```
-### 2. Create Virtual Environment
+#### 2. Create Virtual Environment
 ```bash
 # Linux/Mac
@@ -53,25 +63,44 @@ python -m venv venv
 venv\Scripts\activate
 ```
-### 3. Install Dependencies
+#### 3. Install in Development Mode
 ```bash
-pip install -r requirements.txt
+# Install in editable mode with dev dependencies
+pip install -e ".[dev]"
+# Or install just the package
+pip install .
 ```
 ### 4. Download Base Model
-The application will automatically download the YOLOv8s.pt model on first use, or you can download it manually:
+The application will automatically download the YOLOv8s-seg.pt segmentation model on first use, or you can download it manually:
 ```bash
 # The model will be downloaded automatically by ultralytics
 # Or download manually from: https://github.com/ultralytics/assets/releases
 ```
+**Note:** YOLOv8s-seg is a segmentation model that provides pixel-accurate masks for detected objects, enabling more precise analysis than standard bounding box detection.
 ## Quick Start
 ### 1. Launch the Application
+After installation, you can launch the application in several ways:
+**Using the GUI launcher:**
+```bash
+microscopy-detect-gui
+```
+**Or using Python directly:**
+```bash
+python -m microscopy_object_detection
+```
+**If installed from source:**
 ```bash
 python main.py
 ```
@@ -85,11 +114,12 @@ python main.py
 ### 3. Perform Detection
 1. Navigate to the **Detection** tab
-2. Select a model (default: yolov8s.pt)
+2. Select a model (default: yolov8s-seg.pt)
 3. Choose an image or folder
 4. Set confidence threshold
 5. Click **Detect**
-6. View results and save to database
+6. View results with segmentation masks overlaid
+7. Save results to database
 ### 4. Train Custom Model
@@ -212,8 +242,8 @@ The application uses SQLite with the following main tables:
 - **models**: Stores trained model information and metrics
 - **images**: Stores image metadata and paths
-- **detections**: Stores detection results with bounding boxes
-- **annotations**: Stores manual annotations (future feature)
+- **detections**: Stores detection results with bounding boxes and segmentation masks (polygon coordinates)
+- **annotations**: Stores manual annotations with optional segmentation masks (future feature)
 See [`ARCHITECTURE.md`](ARCHITECTURE.md) for detailed schema information.
@@ -230,7 +260,7 @@ image_repository:
   allowed_extensions: [".jpg", ".jpeg", ".png", ".tif", ".tiff"]
 models:
-  default_base_model: "yolov8s.pt"
+  default_base_model: "yolov8s-seg.pt"
   models_directory: "data/models"
 training:
@@ -258,7 +288,7 @@ visualization:
 from src.model.yolo_wrapper import YOLOWrapper
 # Initialize wrapper
-yolo = YOLOWrapper("yolov8s.pt")
+yolo = YOLOWrapper("yolov8s-seg.pt")
 # Train model
 results = yolo.train(
@@ -393,10 +423,10 @@ make html
 **Issue**: Model not found error
-**Solution**: Ensure YOLOv8s.pt is downloaded. Run:
+**Solution**: Ensure YOLOv8s-seg.pt is downloaded. Run:
 ```python
 from ultralytics import YOLO
-model = YOLO('yolov8s.pt')  # Will auto-download
+model = YOLO('yolov8s-seg.pt')  # Will auto-download
 ```


@@ -10,7 +10,7 @@ image_repository:
     - .tiff
     - .bmp
 models:
-  default_base_model: yolov8s.pt
+  default_base_model: yolov8s-seg.pt
   models_directory: data/models
 training:
   default_epochs: 100


@@ -6,12 +6,13 @@ Main entry point for the application.
 import sys
 from pathlib import Path
-# Add src directory to path
+# Add src directory to path for development mode
 sys.path.insert(0, str(Path(__file__).parent))
 from PySide6.QtWidgets import QApplication
 from PySide6.QtCore import Qt
+from src import __version__
 from src.gui.main_window import MainWindow
 from src.utils.logger import setup_logging
 from src.utils.config_manager import ConfigManager
@@ -37,7 +38,7 @@ def main():
     app = QApplication(sys.argv)
     app.setApplicationName("Microscopy Object Detection")
     app.setOrganizationName("MicroscopyLab")
-    app.setApplicationVersion("1.0.0")
+    app.setApplicationVersion(__version__)
     # Set application style
     app.setStyle("Fusion")

pyproject.toml (new file)

@@ -0,0 +1,103 @@
[build-system]
requires = ["setuptools>=45", "wheel", "setuptools_scm[toml]>=6.2"]
build-backend = "setuptools.build_meta"
[project]
name = "microscopy-object-detection"
version = "1.0.0"
description = "Desktop application for detecting and segmenting organelles in microscopy images using YOLOv8-seg"
readme = "README.md"
requires-python = ">=3.8"
license = { text = "MIT" }
authors = [{ name = "Your Name", email = "your.email@example.com" }]
keywords = [
"microscopy",
"yolov8",
"object-detection",
"segmentation",
"computer-vision",
"deep-learning",
]
classifiers = [
"Development Status :: 4 - Beta",
"Intended Audience :: Science/Research",
"Topic :: Scientific/Engineering :: Image Recognition",
"Topic :: Scientific/Engineering :: Bio-Informatics",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Operating System :: OS Independent",
]
dependencies = [
"ultralytics>=8.0.0",
"PySide6>=6.5.0",
"pyqtgraph>=0.13.0",
"numpy>=1.24.0",
"opencv-python>=4.8.0",
"Pillow>=10.0.0",
"PyYAML>=6.0",
"pandas>=2.0.0",
"openpyxl>=3.1.0",
]
[project.optional-dependencies]
dev = [
"pytest>=7.0.0",
"pytest-cov>=4.0.0",
"black>=23.0.0",
"pylint>=2.17.0",
"mypy>=1.0.0",
]
[project.urls]
Homepage = "https://github.com/yourusername/object_detection"
Documentation = "https://github.com/yourusername/object_detection/blob/main/README.md"
Repository = "https://github.com/yourusername/object_detection"
"Bug Tracker" = "https://github.com/yourusername/object_detection/issues"
[project.scripts]
microscopy-detect = "src.cli:main"
[project.gui-scripts]
microscopy-detect-gui = "main:main"
[tool.setuptools]
package-dir = { "" = "." }
packages = [
"src",
"src.database",
"src.model",
"src.gui",
"src.gui.tabs",
"src.gui.dialogs",
"src.gui.widgets",
"src.utils",
]
[tool.setuptools.package-data]
src = ["database/*.sql"]
"" = ["config/*.yaml"]
[tool.black]
line-length = 88
target-version = ['py38', 'py39', 'py310', 'py311']
include = '\.pyi?$'
[tool.pylint.messages_control]
max-line-length = 88
[tool.mypy]
python_version = "3.8"
warn_return_any = true
warn_unused_configs = true
disallow_untyped_defs = false
[tool.pytest.ini_options]
testpaths = ["tests"]
python_files = ["test_*.py"]
python_functions = ["test_*"]
addopts = "-v --cov=src --cov-report=term-missing"

setup.py (new file)

@@ -0,0 +1,56 @@
"""Setup script for Microscopy Object Detection Application."""
from setuptools import setup, find_packages
from pathlib import Path
# Read the contents of README file
this_directory = Path(__file__).parent
long_description = (this_directory / "README.md").read_text(encoding="utf-8")
# Read requirements
requirements = (this_directory / "requirements.txt").read_text().splitlines()
requirements = [
    req.strip() for req in requirements if req.strip() and not req.startswith("#")
]
setup(
    name="microscopy-object-detection",
    version="1.0.0",
    author="Your Name",
    author_email="your.email@example.com",
    description="Desktop application for detecting and segmenting organelles in microscopy images using YOLOv8-seg",
    long_description=long_description,
    long_description_content_type="text/markdown",
    url="https://github.com/yourusername/object_detection",
    packages=find_packages(exclude=["tests", "tests.*", "docs"]),
    include_package_data=True,
    install_requires=requirements,
    python_requires=">=3.8",
    classifiers=[
        "Development Status :: 4 - Beta",
        "Intended Audience :: Science/Research",
        "Topic :: Scientific/Engineering :: Image Recognition",
        "Topic :: Scientific/Engineering :: Bio-Informatics",
        "License :: OSI Approved :: MIT License",
        "Programming Language :: Python :: 3",
        "Programming Language :: Python :: 3.8",
        "Programming Language :: Python :: 3.9",
        "Programming Language :: Python :: 3.10",
        "Programming Language :: Python :: 3.11",
        "Operating System :: OS Independent",
    ],
    entry_points={
        "console_scripts": [
            "microscopy-detect=src.cli:main",
        ],
        "gui_scripts": [
            "microscopy-detect-gui=main:main",
        ],
    },
    keywords="microscopy yolov8 object-detection segmentation computer-vision deep-learning",
    project_urls={
        "Bug Reports": "https://github.com/yourusername/object_detection/issues",
        "Source": "https://github.com/yourusername/object_detection",
        "Documentation": "https://github.com/yourusername/object_detection/blob/main/README.md",
    },
)


@@ -0,0 +1,19 @@
"""
Microscopy Object Detection Application
A desktop application for detecting and segmenting organelles and membrane
branching structures in microscopy images using YOLOv8-seg.
"""
__version__ = "1.0.0"
__author__ = "Your Name"
__email__ = "your.email@example.com"
__license__ = "MIT"
# Package metadata
__all__ = [
    "__version__",
    "__author__",
    "__email__",
    "__license__",
]

src/cli.py (new file)

@@ -0,0 +1,61 @@
"""
Command-line interface for microscopy object detection application.
"""
import sys
import argparse
from pathlib import Path
from src import __version__
def main():
    """Main CLI entry point."""
    parser = argparse.ArgumentParser(
        description="Microscopy Object Detection Application - CLI Interface",
        formatter_class=argparse.RawDescriptionHelpFormatter,
        epilog="""
Examples:
  # Launch GUI
  microscopy-detect-gui

  # Show version
  microscopy-detect --version

  # Get help
  microscopy-detect --help
""",
    )
    parser.add_argument(
        "--version",
        action="version",
        version=f"microscopy-object-detection {__version__}",
    )
    parser.add_argument(
        "--gui",
        action="store_true",
        help="Launch the GUI application (same as microscopy-detect-gui)",
    )
    args = parser.parse_args()

    if args.gui:
        # Launch GUI
        try:
            from main import main as gui_main

            gui_main()
        except Exception as e:
            print(f"Error launching GUI: {e}", file=sys.stderr)
            sys.exit(1)
    else:
        # Show help if no arguments provided
        parser.print_help()
        print("\nTo launch the GUI, use: microscopy-detect-gui")
    return 0


if __name__ == "__main__":
    sys.exit(main())


@@ -6,7 +6,7 @@ Handles all database operations including CRUD operations, queries, and exports.
 import sqlite3
 import json
 from datetime import datetime
-from typing import List, Dict, Optional, Tuple, Any
+from typing import List, Dict, Optional, Tuple, Any, Union
 from pathlib import Path
 import csv
 import hashlib
@@ -56,7 +56,7 @@ class DatabaseManager:
         model_name: str,
         model_version: str,
         model_path: str,
-        base_model: str = "yolov8s.pt",
+        base_model: str = "yolov8s-seg.pt",
         training_params: Optional[Dict] = None,
         metrics: Optional[Dict] = None,
     ) -> int:
@@ -243,6 +243,7 @@ class DatabaseManager:
         class_name: str,
         bbox: Tuple[float, float, float, float],  # (x_min, y_min, x_max, y_max)
         confidence: float,
+        segmentation_mask: Optional[List[List[float]]] = None,
         metadata: Optional[Dict] = None,
     ) -> int:
         """
@@ -254,6 +255,7 @@ class DatabaseManager:
             class_name: Detected object class
             bbox: Bounding box coordinates (normalized 0-1)
             confidence: Detection confidence score
+            segmentation_mask: Polygon coordinates for segmentation [[x1,y1], [x2,y2], ...]
             metadata: Additional metadata

         Returns:
@@ -265,8 +267,8 @@ class DatabaseManager:
         x_min, y_min, x_max, y_max = bbox
         cursor.execute(
             """
-            INSERT INTO detections (image_id, model_id, class_name, x_min, y_min, x_max, y_max, confidence, metadata)
-            VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?)
+            INSERT INTO detections (image_id, model_id, class_name, x_min, y_min, x_max, y_max, confidence, segmentation_mask, metadata)
+            VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
             """,
             (
                 image_id,
@@ -277,6 +279,7 @@ class DatabaseManager:
                 x_max,
                 y_max,
                 confidence,
+                json.dumps(segmentation_mask) if segmentation_mask else None,
                 json.dumps(metadata) if metadata else None,
             ),
         )
@@ -302,8 +305,8 @@ class DatabaseManager:
             bbox = det["bbox"]
             cursor.execute(
                 """
-                INSERT INTO detections (image_id, model_id, class_name, x_min, y_min, x_max, y_max, confidence, metadata)
-                VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?)
+                INSERT INTO detections (image_id, model_id, class_name, x_min, y_min, x_max, y_max, confidence, segmentation_mask, metadata)
+                VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
                 """,
                 (
                     det["image_id"],
det["image_id"], det["image_id"],
@@ -314,6 +317,11 @@ class DatabaseManager:
                     bbox[2],
                     bbox[3],
                     det["confidence"],
+                    (
+                        json.dumps(det.get("segmentation_mask"))
+                        if det.get("segmentation_mask")
+                        else None
+                    ),
                     (
                         json.dumps(det.get("metadata"))
                         if det.get("metadata")
@@ -385,9 +393,11 @@ class DatabaseManager:
        detections = []
        for row in cursor.fetchall():
            det = dict(row)
            # Parse JSON fields
            if det.get("metadata"):
                det["metadata"] = json.loads(det["metadata"])
            if det.get("segmentation_mask"):
                det["segmentation_mask"] = json.loads(det["segmentation_mask"])
            detections.append(det)

        return detections
@@ -538,6 +548,7 @@ class DatabaseManager:
"x_max", "x_max",
"y_max", "y_max",
"confidence", "confidence",
"segmentation_mask",
"detected_at", "detected_at",
] ]
writer = csv.DictWriter(csvfile, fieldnames=fieldnames) writer = csv.DictWriter(csvfile, fieldnames=fieldnames)
@@ -545,6 +556,11 @@ class DatabaseManager:
            for det in detections:
                row = {k: det[k] for k in fieldnames if k in det}
                # Convert segmentation mask list to JSON string for CSV
                if row.get("segmentation_mask") and isinstance(
                    row["segmentation_mask"], list
                ):
                    row["segmentation_mask"] = json.dumps(row["segmentation_mask"])
                writer.writerow(row)

        return True
@@ -580,6 +596,7 @@ class DatabaseManager:
        class_name: str,
        bbox: Tuple[float, float, float, float],
        annotator: str,
        segmentation_mask: Optional[List[List[float]]] = None,
        verified: bool = False,
    ) -> int:
        """Add manual annotation."""
@@ -589,10 +606,20 @@ class DatabaseManager:
        x_min, y_min, x_max, y_max = bbox
        cursor.execute(
            """
            INSERT INTO annotations (image_id, class_name, x_min, y_min, x_max, y_max, segmentation_mask, annotator, verified)
            VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?)
            """,
            (
                image_id,
                class_name,
                x_min,
                y_min,
                x_max,
                y_max,
                json.dumps(segmentation_mask) if segmentation_mask else None,
                annotator,
                verified,
            ),
        )
        conn.commit()
        return cursor.lastrowid

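The storage convention introduced above — write the polygon as a JSON string (or `NULL` when absent) on insert, parse it back on read — round-trips cleanly. A minimal sketch with a made-up mask:

```python
import json

# Hypothetical polygon mask, normalized to [0, 1] as in the schema.
mask = [[0.10, 0.20], [0.35, 0.22], [0.30, 0.48]]

# Storage side: serialize to a JSON string for the TEXT column,
# or store None when no mask was produced.
stored = json.dumps(mask) if mask else None

# Retrieval side: parse the string back into nested lists.
restored = json.loads(stored) if stored else None
assert restored == mask
```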

@@ -5,7 +5,7 @@ These dataclasses represent the database entities.
from dataclasses import dataclass
from datetime import datetime
from typing import Optional, Dict, Tuple, List


@dataclass
@@ -46,6 +46,9 @@ class Detection:
    class_name: str
    bbox: Tuple[float, float, float, float]  # (x_min, y_min, x_max, y_max)
    confidence: float
    segmentation_mask: Optional[
        List[List[float]]
    ]  # List of polygon coordinates [[x1,y1], [x2,y2], ...]
    detected_at: datetime
    metadata: Optional[Dict]
@@ -58,6 +61,9 @@ class Annotation:
    image_id: int
    class_name: str
    bbox: Tuple[float, float, float, float]  # (x_min, y_min, x_max, y_max)
    segmentation_mask: Optional[
        List[List[float]]
    ]  # List of polygon coordinates [[x1,y1], [x2,y2], ...]
    annotator: str
    created_at: datetime
    verified: bool

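The extended entity can be exercised as in this trimmed sketch — only the fields shown in the hunk above are included, and all values are hypothetical:

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Dict, List, Optional, Tuple


@dataclass
class Detection:  # trimmed sketch of the entity above
    class_name: str
    bbox: Tuple[float, float, float, float]
    confidence: float
    segmentation_mask: Optional[List[List[float]]]
    detected_at: datetime
    metadata: Optional[Dict]


det = Detection(
    class_name="organelle",
    bbox=(0.1, 0.2, 0.4, 0.5),
    confidence=0.9,
    segmentation_mask=[[0.1, 0.2], [0.4, 0.2], [0.3, 0.5]],
    detected_at=datetime(2025, 1, 1),
    metadata=None,
)
assert det.segmentation_mask[0] == [0.1, 0.2]
```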

@@ -37,6 +37,7 @@ CREATE TABLE IF NOT EXISTS detections (
    x_max REAL NOT NULL CHECK(x_max >= 0 AND x_max <= 1),
    y_max REAL NOT NULL CHECK(y_max >= 0 AND y_max <= 1),
    confidence REAL NOT NULL CHECK(confidence >= 0 AND confidence <= 1),
    segmentation_mask TEXT,  -- JSON string of polygon coordinates [[x1,y1], [x2,y2], ...]
    detected_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    metadata TEXT,  -- JSON string for additional metadata
    FOREIGN KEY (image_id) REFERENCES images (id) ON DELETE CASCADE,
@@ -52,6 +53,7 @@ CREATE TABLE IF NOT EXISTS annotations (
    y_min REAL NOT NULL CHECK(y_min >= 0 AND y_min <= 1),
    x_max REAL NOT NULL CHECK(x_max >= 0 AND x_max <= 1),
    y_max REAL NOT NULL CHECK(y_max >= 0 AND y_max <= 1),
    segmentation_mask TEXT,  -- JSON string of polygon coordinates [[x1,y1], [x2,y2], ...]
    annotator TEXT,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    verified BOOLEAN DEFAULT 0,

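The new `segmentation_mask TEXT` column behaves as an opaque JSON payload at the SQL level. A minimal stand-in using only `sqlite3` and `json` from the standard library (table trimmed to the relevant columns, values hypothetical):

```python
import json
import sqlite3

# In-memory stand-in for the detections table above (other columns omitted).
conn = sqlite3.connect(":memory:")
conn.execute(
    """
    CREATE TABLE detections (
        id INTEGER PRIMARY KEY,
        confidence REAL NOT NULL CHECK(confidence >= 0 AND confidence <= 1),
        segmentation_mask TEXT  -- JSON string of polygon coordinates
    )
    """
)

mask = [[0.1, 0.2], [0.4, 0.2], [0.3, 0.5]]
conn.execute(
    "INSERT INTO detections (confidence, segmentation_mask) VALUES (?, ?)",
    (0.87, json.dumps(mask)),
)

# SQLite hands the TEXT back unchanged; json.loads restores the polygon.
row = conn.execute(
    "SELECT confidence, segmentation_mask FROM detections"
).fetchone()
assert json.loads(row[1]) == mask
conn.close()
```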

@@ -121,7 +121,7 @@ class ConfigDialog(QDialog):
        models_layout.addRow("Models Directory:", self.models_dir_edit)

        self.base_model_edit = QLineEdit()
        self.base_model_edit.setPlaceholderText("yolov8s-seg.pt")
        models_layout.addRow("Default Base Model:", self.base_model_edit)

        models_group.setLayout(models_layout)
@@ -232,7 +232,7 @@ class ConfigDialog(QDialog):
            self.config_manager.get("models.models_directory", "data/models")
        )
        self.base_model_edit.setText(
            self.config_manager.get("models.default_base_model", "yolov8s-seg.pt")
        )

        # Training settings


@@ -159,7 +159,7 @@ class DetectionTab(QWidget):
        # Add base model option
        base_model = self.config_manager.get(
            "models.default_base_model", "yolov8s-seg.pt"
        )
        self.model_combo.addItem(
            f"Base Model ({base_model})", {"id": 0, "path": base_model}
@@ -256,7 +256,7 @@ class DetectionTab(QWidget):
        if model_id == 0:
            # Create database entry for base model
            base_model = self.config_manager.get(
                "models.default_base_model", "yolov8s-seg.pt"
            )
            model_id = self.db_manager.add_model(
                model_name="Base Model",


@@ -87,6 +87,7 @@ class InferenceEngine:
                "class_name": det["class_name"],
                "bbox": tuple(bbox_normalized),
                "confidence": det["confidence"],
                "segmentation_mask": det.get("segmentation_mask"),
                "metadata": {"class_id": det["class_id"]},
            }
            detection_records.append(record)
@@ -160,6 +161,7 @@ class InferenceEngine:
        conf: float = 0.25,
        bbox_thickness: int = 2,
        bbox_colors: Optional[Dict[str, str]] = None,
        draw_masks: bool = True,
    ) -> tuple:
        """
        Detect objects and return annotated image.
@@ -169,6 +171,7 @@ class InferenceEngine:
            conf: Confidence threshold
            bbox_thickness: Thickness of bounding boxes
            bbox_colors: Dictionary mapping class names to hex colors
            draw_masks: Whether to draw segmentation masks (if available)

        Returns:
            Tuple of (detections, annotated_image_array)
@@ -189,12 +192,8 @@ class InferenceEngine:
            bbox_colors = {}
        default_color = self._hex_to_bgr(bbox_colors.get("default", "#00FF00"))

        # Draw detections
        for det in detections:
            # Get color for this class
            class_name = det["class_name"]
            color_hex = bbox_colors.get(
@@ -202,7 +201,33 @@ class InferenceEngine:
            )
            color = self._hex_to_bgr(color_hex)

            # Draw segmentation mask if available and requested
            if draw_masks and det.get("segmentation_mask"):
                mask_normalized = det["segmentation_mask"]
                if mask_normalized and len(mask_normalized) > 0:
                    # Convert normalized coordinates to absolute pixels
                    mask_points = np.array(
                        [
                            [int(pt[0] * width), int(pt[1] * height)]
                            for pt in mask_normalized
                        ],
                        dtype=np.int32,
                    )
                    # Create a semi-transparent overlay
                    overlay = img.copy()
                    cv2.fillPoly(overlay, [mask_points], color)
                    # Blend with original image (30% opacity)
                    cv2.addWeighted(overlay, 0.3, img, 0.7, 0, img)
                    # Draw mask contour
                    cv2.polylines(img, [mask_points], True, color, bbox_thickness)

            # Get absolute coordinates for bounding box
            bbox_abs = det["bbox_absolute"]
            x1, y1, x2, y2 = [int(v) for v in bbox_abs]

            # Draw bounding box
            cv2.rectangle(img, (x1, y1), (x2, y2), color, bbox_thickness)

            # Prepare label

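The normalized-to-pixel conversion that precedes `cv2.fillPoly` can be checked without OpenCV. A pure-Python sketch, assuming a hypothetical 640×480 image:

```python
# Assumed image size for illustration.
width, height = 640, 480

# Normalized polygon as stored in the database (made-up values).
mask_normalized = [[0.25, 0.5], [0.75, 0.5], [0.5, 0.9]]

# Same scaling the drawing code performs before building the np.int32 array.
mask_points = [
    [int(pt[0] * width), int(pt[1] * height)] for pt in mask_normalized
]
assert mask_points == [[160, 240], [480, 240], [320, 432]]
```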

@@ -16,7 +16,7 @@ logger = get_logger(__name__)


class YOLOWrapper:
    """Wrapper for YOLOv8 model operations."""

    def __init__(self, model_path: str = "yolov8s-seg.pt"):
        """
        Initialize YOLO model.
@@ -282,6 +282,10 @@ class YOLOWrapper:
        boxes = result.boxes
        image_path = str(result.path)
        orig_shape = result.orig_shape  # (height, width)
        height, width = orig_shape

        # Check if this is a segmentation model with masks
        has_masks = hasattr(result, "masks") and result.masks is not None

        for i in range(len(boxes)):
            # Get normalized coordinates
@@ -299,6 +303,33 @@ class YOLOWrapper:
                    float(v) for v in boxes.xyxy[i].cpu().numpy()
                ],  # Absolute pixels
            }
            # Extract segmentation mask if available
            if has_masks:
                try:
                    # Get the mask for this detection
                    mask_data = result.masks.xy[i]  # Polygon in absolute pixels

                    # Convert to normalized coordinates
                    if len(mask_data) > 0:
                        mask_normalized = []
                        for point in mask_data:
                            x_norm = float(point[0]) / width
                            y_norm = float(point[1]) / height
                            mask_normalized.append([x_norm, y_norm])
                        detection["segmentation_mask"] = mask_normalized
                    else:
                        detection["segmentation_mask"] = None
                except Exception as mask_error:
                    logger.warning(
                        f"Error extracting mask for detection {i}: {mask_error}"
                    )
                    detection["segmentation_mask"] = None
            else:
                detection["segmentation_mask"] = None

            detections.append(detection)

        return detections

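The extraction above performs the inverse conversion: `result.masks.xy[i]` yields absolute-pixel vertices, which are divided by the image dimensions to get the normalized form that goes into the database. A self-contained sketch with made-up points:

```python
# result.orig_shape is (height, width); values assumed for illustration.
height, width = 480, 640

# Absolute-pixel polygon as Ultralytics would return it (hypothetical).
mask_data = [(160.0, 240.0), (480.0, 240.0), (320.0, 432.0)]

# Divide each coordinate by the matching image dimension.
mask_normalized = [
    [float(x) / width, float(y) / height] for x, y in mask_data
]
assert mask_normalized == [[0.25, 0.5], [0.75, 0.5], [0.5, 0.9]]
```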

@@ -56,7 +56,7 @@ class ConfigManager:
            ],
        },
        "models": {
            "default_base_model": "yolov8s-seg.pt",
            "models_directory": "data/models",
        },
        "training": {