Implementation Guide - Microscopy Object Detection Application

This guide provides detailed implementation specifications for each component of the application.

Table of Contents

  1. Development Setup
  2. Database Implementation
  3. Model Wrapper Implementation
  4. GUI Components
  5. Testing Strategy
  6. Configuration Files
  7. Deployment Checklist

Development Setup

Prerequisites

# Python 3.8 or higher
python3 --version

# pip package manager
pip3 --version

# Git for version control
git --version

Project Initialization

Step 1: Create virtual environment

cd /home/martin/code/object_detection
python3 -m venv venv
source venv/bin/activate  # On Linux/Mac
# or
venv\Scripts\activate  # On Windows

Step 2: Create requirements.txt

# Core ML and Detection
ultralytics>=8.0.0
torch>=2.0.0
torchvision>=0.15.0

# GUI Framework
PySide6>=6.5.0
pyqtgraph>=0.13.0

# Image Processing
opencv-python>=4.8.0
Pillow>=10.0.0
numpy>=1.24.0

# Database
sqlalchemy>=2.0.0

# Data Export
pandas>=2.0.0
openpyxl>=3.1.0

# Configuration
pyyaml>=6.0

# Testing
pytest>=7.4.0
pytest-qt>=4.2.0
pytest-cov>=4.1.0

# Development
black>=23.0.0
pylint>=2.17.0
mypy>=1.4.0

Step 3: Install dependencies

pip install -r requirements.txt

Directory Structure Creation

Create the complete directory structure:

mkdir -p src/{database,model,gui/{tabs,dialogs,widgets},utils}
mkdir -p config data/{models,datasets,results} tests docs logs
touch src/__init__.py
touch src/database/__init__.py
touch src/model/__init__.py
touch src/gui/__init__.py
touch src/gui/tabs/__init__.py
touch src/gui/dialogs/__init__.py
touch src/gui/widgets/__init__.py
touch src/utils/__init__.py
touch tests/__init__.py

Database Implementation

1. Database Schema (src/database/schema.sql)

-- Models table: stores trained model information
CREATE TABLE IF NOT EXISTS models (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    model_name TEXT NOT NULL,
    model_version TEXT NOT NULL,
    model_path TEXT NOT NULL,
    base_model TEXT NOT NULL DEFAULT 'yolov8s.pt',
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    training_params TEXT,  -- JSON string
    metrics TEXT,          -- JSON string
    UNIQUE(model_name, model_version)
);

-- Images table: stores image metadata
CREATE TABLE IF NOT EXISTS images (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    relative_path TEXT NOT NULL UNIQUE,
    filename TEXT NOT NULL,
    width INTEGER,
    height INTEGER,
    captured_at TIMESTAMP,
    added_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    checksum TEXT
);

-- Detections table: stores detection results
CREATE TABLE IF NOT EXISTS detections (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    image_id INTEGER NOT NULL,
    model_id INTEGER NOT NULL,
    class_name TEXT NOT NULL,
    x_min REAL NOT NULL CHECK(x_min >= 0 AND x_min <= 1),
    y_min REAL NOT NULL CHECK(y_min >= 0 AND y_min <= 1),
    x_max REAL NOT NULL CHECK(x_max >= 0 AND x_max <= 1),
    y_max REAL NOT NULL CHECK(y_max >= 0 AND y_max <= 1),
    confidence REAL NOT NULL CHECK(confidence >= 0 AND confidence <= 1),
    detected_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    metadata TEXT,  -- JSON string
    FOREIGN KEY (image_id) REFERENCES images (id) ON DELETE CASCADE,
    FOREIGN KEY (model_id) REFERENCES models (id) ON DELETE CASCADE
);

-- Annotations table: stores manual annotations (future feature)
CREATE TABLE IF NOT EXISTS annotations (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    image_id INTEGER NOT NULL,
    class_name TEXT NOT NULL,
    x_min REAL NOT NULL CHECK(x_min >= 0 AND x_min <= 1),
    y_min REAL NOT NULL CHECK(y_min >= 0 AND y_min <= 1),
    x_max REAL NOT NULL CHECK(x_max >= 0 AND x_max <= 1),
    y_max REAL NOT NULL CHECK(y_max >= 0 AND y_max <= 1),
    annotator TEXT,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    verified BOOLEAN DEFAULT 0,
    FOREIGN KEY (image_id) REFERENCES images (id) ON DELETE CASCADE
);

-- Create indexes for performance
CREATE INDEX IF NOT EXISTS idx_detections_image_id ON detections(image_id);
CREATE INDEX IF NOT EXISTS idx_detections_model_id ON detections(model_id);
CREATE INDEX IF NOT EXISTS idx_detections_class_name ON detections(class_name);
CREATE INDEX IF NOT EXISTS idx_detections_detected_at ON detections(detected_at);
CREATE INDEX IF NOT EXISTS idx_images_relative_path ON images(relative_path);
CREATE INDEX IF NOT EXISTS idx_annotations_image_id ON annotations(image_id);

2. Database Manager (src/database/db_manager.py)

Key Components:

import sqlite3
import json
from datetime import datetime
from typing import List, Dict, Optional, Tuple
from pathlib import Path
import hashlib


class DatabaseManager:
    """Manages all database operations for the application."""
    
    def __init__(self, db_path: str = "data/detections.db"):
        """
        Initialize database manager.
        
        Args:
            db_path: Path to SQLite database file
        """
        self.db_path = db_path
        self._ensure_database_exists()
    
    def _ensure_database_exists(self) -> None:
        """Create database and tables if they don't exist."""
        # Implementation: Read schema.sql and execute
        pass
    
    def get_connection(self) -> sqlite3.Connection:
        """Get database connection with proper settings."""
        conn = sqlite3.connect(self.db_path)
        conn.row_factory = sqlite3.Row  # Enable column access by name
        conn.execute("PRAGMA foreign_keys = ON")  # Enable foreign keys
        return conn
    
    # ==================== Model Operations ====================
    
    def add_model(
        self,
        model_name: str,
        model_version: str,
        model_path: str,
        base_model: str = "yolov8s.pt",
        training_params: Optional[Dict] = None,
        metrics: Optional[Dict] = None
    ) -> int:
        """
        Add a new model to the database.
        
        Args:
            model_name: Name of the model
            model_version: Version string
            model_path: Path to model weights file
            base_model: Base model used for training
            training_params: Dictionary of training parameters
            metrics: Dictionary of validation metrics
            
        Returns:
            ID of the inserted model
        """
        pass
    
    def get_models(self, filters: Optional[Dict] = None) -> List[Dict]:
        """
        Retrieve models from database.
        
        Args:
            filters: Optional filters (e.g., {'model_name': 'my_model'})
            
        Returns:
            List of model dictionaries
        """
        pass
    
    def get_model_by_id(self, model_id: int) -> Optional[Dict]:
        """Get model by ID."""
        pass
    
    # ==================== Image Operations ====================
    
    def add_image(
        self,
        relative_path: str,
        filename: str,
        width: int,
        height: int,
        captured_at: Optional[datetime] = None
    ) -> int:
        """
        Add a new image to the database.
        
        Args:
            relative_path: Path relative to image repository
            filename: Image filename
            width: Image width in pixels
            height: Image height in pixels
            captured_at: When image was captured (if known)
            
        Returns:
            ID of the inserted image
        """
        pass
    
    def get_image_by_path(self, relative_path: str) -> Optional[Dict]:
        """Get image by relative path."""
        pass
    
    def get_or_create_image(
        self,
        relative_path: str,
        filename: str,
        width: int,
        height: int
    ) -> int:
        """Get existing image or create new one."""
        pass
    
    # ==================== Detection Operations ====================
    
    def add_detection(
        self,
        image_id: int,
        model_id: int,
        class_name: str,
        bbox: Tuple[float, float, float, float],  # (x_min, y_min, x_max, y_max)
        confidence: float,
        metadata: Optional[Dict] = None
    ) -> int:
        """
        Add a new detection to the database.
        
        Args:
            image_id: ID of the image
            model_id: ID of the model used
            class_name: Detected object class
            bbox: Bounding box coordinates (normalized 0-1)
            confidence: Detection confidence score
            metadata: Additional metadata
            
        Returns:
            ID of the inserted detection
        """
        pass
    
    def add_detections_batch(self, detections: List[Dict]) -> int:
        """
        Add multiple detections in a single transaction.
        
        Args:
            detections: List of detection dictionaries
            
        Returns:
            Number of detections inserted
        """
        pass
    
    def get_detections(
        self,
        filters: Optional[Dict] = None,
        limit: Optional[int] = None,
        offset: int = 0
    ) -> List[Dict]:
        """
        Retrieve detections from database.
        
        Args:
            filters: Optional filters for querying
            limit: Maximum number of results
            offset: Number of results to skip
            
        Returns:
            List of detection dictionaries with joined data
        """
        pass
    
    def get_detections_for_image(
        self,
        image_id: int,
        model_id: Optional[int] = None
    ) -> List[Dict]:
        """Get all detections for a specific image."""
        pass
    
    def delete_detections_for_model(self, model_id: int) -> int:
        """Delete all detections for a specific model."""
        pass
    
    # ==================== Statistics Operations ====================
    
    def get_detection_statistics(
        self,
        start_date: Optional[datetime] = None,
        end_date: Optional[datetime] = None
    ) -> Dict:
        """
        Get detection statistics for a date range.
        
        Returns:
            Dictionary with statistics (count by class, confidence distribution, etc.)
        """
        pass
    
    def get_class_distribution(self, model_id: Optional[int] = None) -> Dict[str, int]:
        """Get count of detections per class."""
        pass
    
    # ==================== Export Operations ====================
    
    def export_detections_to_csv(
        self,
        output_path: str,
        filters: Optional[Dict] = None
    ) -> bool:
        """Export detections to CSV file."""
        pass
    
    def export_detections_to_json(
        self,
        output_path: str,
        filters: Optional[Dict] = None
    ) -> bool:
        """Export detections to JSON file."""
        pass
    
    # ==================== Annotation Operations ====================
    
    def add_annotation(
        self,
        image_id: int,
        class_name: str,
        bbox: Tuple[float, float, float, float],
        annotator: str,
        verified: bool = False
    ) -> int:
        """Add manual annotation."""
        pass
    
    def get_annotations_for_image(self, image_id: int) -> List[Dict]:
        """Get all annotations for an image."""
        pass

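Most of the methods above are intentionally left as stubs. As a reference point, a minimal sketch of how _ensure_database_exists and add_detection could be filled in is shown below; it assumes schema.sql sits next to db_manager.py and follows the JSON-in-TEXT convention from the schema, and both methods would live inside the DatabaseManager class.

    def _ensure_database_exists(self) -> None:
        """Create database and tables if they don't exist."""
        # Make sure the data/ directory exists before SQLite creates the file
        Path(self.db_path).parent.mkdir(parents=True, exist_ok=True)
        schema_sql = (Path(__file__).parent / "schema.sql").read_text()
        conn = self.get_connection()
        try:
            conn.executescript(schema_sql)  # idempotent: schema uses IF NOT EXISTS
            conn.commit()
        finally:
            conn.close()

    def add_detection(
        self,
        image_id: int,
        model_id: int,
        class_name: str,
        bbox: Tuple[float, float, float, float],
        confidence: float,
        metadata: Optional[Dict] = None
    ) -> int:
        """Insert one detection row and return its ID."""
        conn = self.get_connection()
        try:
            cursor = conn.execute(
                "INSERT INTO detections "
                "(image_id, model_id, class_name, x_min, y_min, x_max, y_max, confidence, metadata) "
                "VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?)",
                (image_id, model_id, class_name, *bbox, confidence,
                 json.dumps(metadata) if metadata else None),
            )
            conn.commit()
            return cursor.lastrowid
        finally:
            conn.close()
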
3. Data Models (src/database/models.py)

from dataclasses import dataclass
from datetime import datetime
from typing import Optional, Dict, Tuple


@dataclass
class Model:
    """Represents a trained model."""
    id: Optional[int]
    model_name: str
    model_version: str
    model_path: str
    base_model: str
    created_at: datetime
    training_params: Optional[Dict]
    metrics: Optional[Dict]


@dataclass
class Image:
    """Represents an image in the database."""
    id: Optional[int]
    relative_path: str
    filename: str
    width: int
    height: int
    captured_at: Optional[datetime]
    added_at: datetime
    checksum: Optional[str]


@dataclass
class Detection:
    """Represents a detection result."""
    id: Optional[int]
    image_id: int
    model_id: int
    class_name: str
    bbox: Tuple[float, float, float, float]  # (x_min, y_min, x_max, y_max)
    confidence: float
    detected_at: datetime
    metadata: Optional[Dict]


@dataclass
class Annotation:
    """Represents a manual annotation."""
    id: Optional[int]
    image_id: int
    class_name: str
    bbox: Tuple[float, float, float, float]
    annotator: str
    created_at: datetime
    verified: bool

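Because get_connection enables sqlite3.Row, query results can be mapped onto these dataclasses with a small helper. The sketch below is illustrative only (the detection_from_row name is not part of the specification); it assumes timestamps are stored in SQLite's default CURRENT_TIMESTAMP text format.

import json
from datetime import datetime
from typing import Mapping


def detection_from_row(row: Mapping) -> Detection:
    """Map a detections-table row (sqlite3.Row or dict) onto the Detection dataclass."""
    return Detection(
        id=row["id"],
        image_id=row["image_id"],
        model_id=row["model_id"],
        class_name=row["class_name"],
        bbox=(row["x_min"], row["y_min"], row["x_max"], row["y_max"]),
        confidence=row["confidence"],
        detected_at=datetime.fromisoformat(row["detected_at"]),
        metadata=json.loads(row["metadata"]) if row["metadata"] else None,
    )
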
Model Wrapper Implementation

YOLO Wrapper (src/model/yolo_wrapper.py)

from ultralytics import YOLO
from pathlib import Path
from typing import Optional, List, Dict, Callable
import torch


class YOLOWrapper:
    """Wrapper for YOLOv8 model operations."""
    
    def __init__(self, model_path: str = "yolov8s.pt"):
        """
        Initialize YOLO model.
        
        Args:
            model_path: Path to model weights (.pt file)
        """
        self.model_path = model_path
        self.model = None
        self.device = "cuda" if torch.cuda.is_available() else "cpu"
    
    def load_model(self) -> None:
        """Load YOLO model from path."""
        self.model = YOLO(self.model_path)
        self.model.to(self.device)
    
    def train(
        self,
        data_yaml: str,
        epochs: int = 100,
        imgsz: int = 640,
        batch: int = 16,
        patience: int = 50,
        save_dir: str = "data/models",
        name: str = "custom_model",
        callbacks: Optional[Dict[str, Callable]] = None,
        **kwargs
    ) -> Dict:
        """
        Train the YOLO model.
        
        Args:
            data_yaml: Path to data.yaml configuration file
            epochs: Number of training epochs
            imgsz: Input image size
            batch: Batch size
            patience: Early stopping patience
            save_dir: Directory to save trained model
            name: Name for the training run
            callbacks: Dictionary of callback functions
            **kwargs: Additional training arguments
            
        Returns:
            Dictionary with training results
        """
        if self.model is None:
            self.load_model()
        
        # Train the model
        results = self.model.train(
            data=data_yaml,
            epochs=epochs,
            imgsz=imgsz,
            batch=batch,
            patience=patience,
            project=save_dir,
            name=name,
            device=self.device,
            **kwargs
        )
        
        return self._format_training_results(results)
    
    def validate(self, data_yaml: str, **kwargs) -> Dict:
        """
        Validate the model.
        
        Args:
            data_yaml: Path to data.yaml configuration file
            **kwargs: Additional validation arguments
            
        Returns:
            Dictionary with validation metrics
        """
        if self.model is None:
            self.load_model()
        
        results = self.model.val(data=data_yaml, **kwargs)
        return self._format_validation_results(results)
    
    def predict(
        self,
        source: str,
        conf: float = 0.25,
        iou: float = 0.45,
        save: bool = False,
        **kwargs
    ) -> List[Dict]:
        """
        Perform inference on image(s).
        
        Args:
            source: Path to image or directory
            conf: Confidence threshold
            iou: IoU threshold for NMS
            save: Whether to save annotated images
            **kwargs: Additional prediction arguments
            
        Returns:
            List of detection dictionaries
        """
        if self.model is None:
            self.load_model()
        
        results = self.model.predict(
            source=source,
            conf=conf,
            iou=iou,
            save=save,
            device=self.device,
            **kwargs
        )
        
        return self._format_prediction_results(results)
    
    def export(
        self,
        format: str = "onnx",
        output_path: Optional[str] = None
    ) -> str:
        """
        Export model to different format.
        
        Args:
            format: Export format (onnx, torchscript, etc.)
            output_path: Path for exported model
            
        Returns:
            Path to exported model
        """
        if self.model is None:
            self.load_model()
        
        export_path = self.model.export(format=format)
        return str(export_path)
    
    def _format_training_results(self, results) -> Dict:
        """Format training results into dictionary."""
        return {
            'final_epoch': results.epoch,
            'metrics': {
                'mAP50': float(results.results_dict.get('metrics/mAP50(B)', 0)),
                'mAP50-95': float(results.results_dict.get('metrics/mAP50-95(B)', 0)),
                'precision': float(results.results_dict.get('metrics/precision(B)', 0)),
                'recall': float(results.results_dict.get('metrics/recall(B)', 0)),
            },
            'best_model_path': str(results.save_dir / 'weights' / 'best.pt')
        }
    
    def _format_validation_results(self, results) -> Dict:
        """Format validation results into dictionary."""
        return {
            'mAP50': float(results.box.map50),
            'mAP50-95': float(results.box.map),
            'precision': float(results.box.mp),
            'recall': float(results.box.mr),
            'class_metrics': {
                name: {
                    'ap': float(ap),
                    'precision': float(p),
                    'recall': float(r)
                }
                for name, ap, p, r in zip(
                    results.names.values(),
                    results.box.ap,
                    results.box.p,
                    results.box.r
                )
            }
        }
    
    def _format_prediction_results(self, results) -> List[Dict]:
        """Format prediction results into list of dictionaries."""
        detections = []
        
        for result in results:
            boxes = result.boxes
            
            for i in range(len(boxes)):
                detection = {
                    'image_path': str(result.path),
                    'class_id': int(boxes.cls[i]),
                    'class_name': result.names[int(boxes.cls[i])],
                    'confidence': float(boxes.conf[i]),
                    'bbox': boxes.xywhn[i].tolist(),  # Normalized [x_center, y_center, width, height]
                    'bbox_xyxy': boxes.xyxy[i].tolist(),  # Absolute [x1, y1, x2, y2]
                }
                detections.append(detection)
        
        return detections
    
    @staticmethod
    def convert_bbox_format(
        bbox: List[float],
        format_from: str = "xywh",
        format_to: str = "xyxy"
    ) -> List[float]:
        """
        Convert bounding box between formats.
        
        Formats:
        - xywh: [x_center, y_center, width, height]
        - xyxy: [x_min, y_min, x_max, y_max]
        """
        if format_from == "xywh" and format_to == "xyxy":
            x, y, w, h = bbox
            return [x - w/2, y - h/2, x + w/2, y + h/2]
        elif format_from == "xyxy" and format_to == "xywh":
            x1, y1, x2, y2 = bbox
            return [(x1 + x2)/2, (y1 + y2)/2, x2 - x1, y2 - y1]
        else:
            return bbox

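For orientation, a typical use of the wrapper might look like the following; the dataset path, run name, and sample image are placeholders, not part of the project.

from src.model.yolo_wrapper import YOLOWrapper

wrapper = YOLOWrapper("yolov8s.pt")
wrapper.load_model()

# Fine-tune on a custom dataset described by a data.yaml file
# (see the Dataset Config example later in this guide)
results = wrapper.train(
    data_yaml="data/datasets/example/data.yaml",  # placeholder path
    epochs=50,
    name="example_run",
)
print(results["best_model_path"], results["metrics"]["mAP50"])

# Run inference on a single image with a lower confidence threshold
detections = wrapper.predict("data/datasets/example/sample.png", conf=0.2)
for det in detections:
    print(det["class_name"], det["confidence"], det["bbox_xyxy"])
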
Inference Engine (src/model/inference.py)

from typing import List, Dict, Optional, Callable
from pathlib import Path
import cv2
from PIL import Image

from .yolo_wrapper import YOLOWrapper
from ..database.db_manager import DatabaseManager


class InferenceEngine:
    """Handles detection inference and result storage."""
    
    def __init__(
        self,
        model_path: str,
        db_manager: DatabaseManager,
        model_id: int
    ):
        """
        Initialize inference engine.
        
        Args:
            model_path: Path to YOLO model weights
            db_manager: Database manager instance
            model_id: ID of the model in database
        """
        self.yolo = YOLOWrapper(model_path)
        self.yolo.load_model()
        self.db_manager = db_manager
        self.model_id = model_id
    
    def detect_single(
        self,
        image_path: str,
        relative_path: str,
        conf: float = 0.25,
        save_to_db: bool = True
    ) -> Dict:
        """
        Detect objects in a single image.
        
        Args:
            image_path: Absolute path to image file
            relative_path: Relative path from repository root
            conf: Confidence threshold
            save_to_db: Whether to save results to database
            
        Returns:
            Dictionary with detection results
        """
        # Get image dimensions
        img = Image.open(image_path)
        width, height = img.size
        
        # Perform detection
        detections = self.yolo.predict(image_path, conf=conf)
        
        # Add/get image in database
        image_id = self.db_manager.get_or_create_image(
            relative_path=relative_path,
            filename=Path(image_path).name,
            width=width,
            height=height
        )
        
        # Save detections to database
        if save_to_db and detections:
            detection_records = []
            for det in detections:
                # Convert bbox to xyxy normalized format
                bbox_xyxy = YOLOWrapper.convert_bbox_format(
                    det['bbox'], 'xywh', 'xyxy'
                )
                
                record = {
                    'image_id': image_id,
                    'model_id': self.model_id,
                    'class_name': det['class_name'],
                    'bbox': tuple(bbox_xyxy),
                    'confidence': det['confidence'],
                    'metadata': {'class_id': det['class_id']}
                }
                detection_records.append(record)
            
            self.db_manager.add_detections_batch(detection_records)
        
        return {
            'image_path': image_path,
            'image_id': image_id,
            'detections': detections,
            'count': len(detections)
        }
    
    def detect_batch(
        self,
        image_paths: List[str],
        repository_root: str,
        conf: float = 0.25,
        progress_callback: Optional[Callable[[int, int, str], None]] = None
    ) -> List[Dict]:
        """
        Detect objects in multiple images.
        
        Args:
            image_paths: List of absolute image paths
            repository_root: Root directory for relative paths
            conf: Confidence threshold
            progress_callback: Optional callback(current, total, message)
            
        Returns:
            List of detection result dictionaries
        """
        results = []
        total = len(image_paths)
        
        for i, image_path in enumerate(image_paths, 1):
            # Calculate relative path
            rel_path = str(Path(image_path).relative_to(repository_root))
            
            # Perform detection
            result = self.detect_single(image_path, rel_path, conf)
            results.append(result)
            
            # Update progress
            if progress_callback:
                progress_callback(i, total, f"Processed {rel_path}")
        
        return results
    
    def detect_with_visualization(
        self,
        image_path: str,
        conf: float = 0.25
    ) -> tuple:
        """
        Detect objects and return annotated image.
        
        Returns:
            Tuple of (detections, annotated_image_array)
        """
        detections = self.yolo.predict(image_path, conf=conf, save=False)
        
        # Load image
        img = cv2.imread(image_path)
        img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
        
        # Draw bounding boxes
        for det in detections:
            x1, y1, x2, y2 = [int(v) for v in det['bbox_xyxy']]
            label = f"{det['class_name']} {det['confidence']:.2f}"
            
            # Draw box
            cv2.rectangle(img, (x1, y1), (x2, y2), (255, 0, 0), 2)
            
            # Draw label background
            (label_w, label_h), _ = cv2.getTextSize(
                label, cv2.FONT_HERSHEY_SIMPLEX, 0.5, 1
            )
            cv2.rectangle(
                img, (x1, y1 - label_h - 5), (x1 + label_w, y1), (255, 0, 0), -1
            )
            
            # Draw label text
            cv2.putText(
                img, label, (x1, y1 - 5),
                cv2.FONT_HERSHEY_SIMPLEX, 0.5, (255, 255, 255), 1
            )
        
        return detections, img

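A batch run through the engine could be wired up as sketched below; the model ID, file paths, and the print-based progress callback are illustrative only (the GUI would emit a Qt signal instead of printing).

from src.database.db_manager import DatabaseManager
from src.model.inference import InferenceEngine

db = DatabaseManager("data/detections.db")
engine = InferenceEngine(
    model_path="data/models/example_run/weights/best.pt",  # placeholder path
    db_manager=db,
    model_id=1,  # ID previously returned by db.add_model(...)
)

def report(current: int, total: int, message: str) -> None:
    """Console progress callback matching the (current, total, message) signature."""
    print(f"[{current}/{total}] {message}")

results = engine.detect_batch(
    image_paths=["/abs/path/repo/sample_001.png"],  # placeholder
    repository_root="/abs/path/repo",
    conf=0.25,
    progress_callback=report,
)
print(sum(r["count"] for r in results), "detections stored")
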
GUI Components

Main Application (main.py)

import sys
from PySide6.QtWidgets import QApplication
from src.gui.main_window import MainWindow
from src.utils.logger import setup_logging


def main():
    """Application entry point."""
    # Setup logging
    setup_logging()
    
    # Create Qt application
    app = QApplication(sys.argv)
    app.setApplicationName("Microscopy Object Detection")
    app.setOrganizationName("YourOrganization")
    
    # Create and show main window
    window = MainWindow()
    window.show()
    
    # Run application
    sys.exit(app.exec())


if __name__ == "__main__":
    main()

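main.py imports setup_logging from src/utils/logger.py, which is not specified elsewhere in this guide. A minimal sketch consistent with the logging settings in app_config.yaml could look like this:

# src/utils/logger.py (sketch)
import logging
from pathlib import Path


def setup_logging(
    level: str = "INFO",
    log_file: str = "logs/app.log",
    fmt: str = "%(asctime)s - %(name)s - %(levelname)s - %(message)s",
) -> None:
    """Configure root logging to both the console and a log file."""
    Path(log_file).parent.mkdir(parents=True, exist_ok=True)
    logging.basicConfig(
        level=getattr(logging, level.upper(), logging.INFO),
        format=fmt,
        handlers=[logging.StreamHandler(), logging.FileHandler(log_file)],
    )
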
Main Window (src/gui/main_window.py)

from PySide6.QtWidgets import (
    QMainWindow, QTabWidget, QMenuBar, QMenu,
    QStatusBar, QMessageBox
)
from PySide6.QtCore import Qt
from PySide6.QtGui import QAction

from .tabs.training_tab import TrainingTab
from .tabs.validation_tab import ValidationTab
from .tabs.detection_tab import DetectionTab
from .tabs.results_tab import ResultsTab
from .tabs.annotation_tab import AnnotationTab
from .dialogs.config_dialog import ConfigDialog
from ..database.db_manager import DatabaseManager
from ..utils.config_manager import ConfigManager


class MainWindow(QMainWindow):
    """Main application window."""
    
    def __init__(self):
        super().__init__()
        
        # Initialize managers
        self.config_manager = ConfigManager()
        self.db_manager = DatabaseManager(
            self.config_manager.get('database.path', 'data/detections.db')
        )
        
        # Setup UI
        self.setWindowTitle("Microscopy Object Detection")
        self.setMinimumSize(1200, 800)
        
        self._create_menu_bar()
        self._create_tab_widget()
        self._create_status_bar()
    
    def _create_menu_bar(self):
        """Create application menu bar."""
        menubar = self.menuBar()
        
        # File menu
        file_menu = menubar.addMenu("&File")
        
        settings_action = QAction("&Settings", self)
        settings_action.triggered.connect(self._show_settings)
        file_menu.addAction(settings_action)
        
        file_menu.addSeparator()
        
        exit_action = QAction("E&xit", self)
        exit_action.setShortcut("Ctrl+Q")
        exit_action.triggered.connect(self.close)
        file_menu.addAction(exit_action)
        
        # Tools menu
        tools_menu = menubar.addMenu("&Tools")
        
        # Help menu
        help_menu = menubar.addMenu("&Help")
        
        about_action = QAction("&About", self)
        about_action.triggered.connect(self._show_about)
        help_menu.addAction(about_action)
    
    def _create_tab_widget(self):
        """Create main tab widget with all tabs."""
        self.tab_widget = QTabWidget()
        
        # Create tabs
        self.training_tab = TrainingTab(self.db_manager, self.config_manager)
        self.validation_tab = ValidationTab(self.db_manager, self.config_manager)
        self.detection_tab = DetectionTab(self.db_manager, self.config_manager)
        self.results_tab = ResultsTab(self.db_manager, self.config_manager)
        self.annotation_tab = AnnotationTab(self.db_manager, self.config_manager)
        
        # Add tabs
        self.tab_widget.addTab(self.detection_tab, "Detection")
        self.tab_widget.addTab(self.training_tab, "Training")
        self.tab_widget.addTab(self.validation_tab, "Validation")
        self.tab_widget.addTab(self.results_tab, "Results")
        self.tab_widget.addTab(self.annotation_tab, "Annotation")
        
        self.setCentralWidget(self.tab_widget)
    
    def _create_status_bar(self):
        """Create status bar."""
        self.status_bar = QStatusBar()
        self.setStatusBar(self.status_bar)
        self.status_bar.showMessage("Ready")
    
    def _show_settings(self):
        """Show settings dialog."""
        dialog = ConfigDialog(self.config_manager, self)
        if dialog.exec():
            self._apply_settings()
    
    def _apply_settings(self):
        """Apply changed settings."""
        # Reload configuration in all tabs
        pass
    
    def _show_about(self):
        """Show about dialog."""
        QMessageBox.about(
            self,
            "About",
            "Microscopy Object Detection Application\n\n"
            "Version 1.0\n\n"
            "Powered by YOLOv8 and PySide6"
        )

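MainWindow also relies on ConfigManager (src/utils/config_manager.py), which this guide does not spell out. A minimal sketch that loads config/app_config.yaml and supports the dotted-key lookups used above (e.g. config_manager.get('database.path', ...)) could be:

# src/utils/config_manager.py (sketch)
from pathlib import Path
from typing import Any
import yaml


class ConfigManager:
    """Loads config/app_config.yaml and exposes dotted-key access."""

    def __init__(self, config_path: str = "config/app_config.yaml"):
        self.config_path = Path(config_path)
        self._config = {}
        if self.config_path.exists():
            self._config = yaml.safe_load(self.config_path.read_text()) or {}

    def get(self, key: str, default: Any = None) -> Any:
        """Return the value for a dotted key such as 'database.path'."""
        node = self._config
        for part in key.split("."):
            if not isinstance(node, dict) or part not in node:
                return default
            node = node[part]
        return node

    def set(self, key: str, value: Any) -> None:
        """Set a dotted key in memory (persisting to disk is left to the implementation)."""
        node = self._config
        parts = key.split(".")
        for part in parts[:-1]:
            node = node.setdefault(part, {})
        node[parts[-1]] = value
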
Tab Structure

Each tab should follow this pattern:

from PySide6.QtWidgets import QWidget, QVBoxLayout
from ..database.db_manager import DatabaseManager
from ..utils.config_manager import ConfigManager


class TabName(QWidget):
    """Tab description."""
    
    def __init__(
        self,
        db_manager: DatabaseManager,
        config_manager: ConfigManager,
        parent=None
    ):
        super().__init__(parent)
        self.db_manager = db_manager
        self.config_manager = config_manager
        
        self._setup_ui()
        self._connect_signals()
    
    def _setup_ui(self):
        """Setup user interface."""
        layout = QVBoxLayout()
        # Add widgets
        self.setLayout(layout)
    
    def _connect_signals(self):
        """Connect signals and slots."""
        pass

Testing Strategy

Unit Tests Example (tests/test_database.py)

import pytest
from src.database.db_manager import DatabaseManager
import tempfile
import os


@pytest.fixture
def db_manager():
    """Create temporary database for testing."""
    fd, path = tempfile.mkstemp(suffix='.db')
    os.close(fd)
    
    manager = DatabaseManager(path)
    yield manager
    
    os.unlink(path)


def test_add_model(db_manager):
    """Test adding a model to database."""
    model_id = db_manager.add_model(
        model_name="test_model",
        model_version="v1.0",
        model_path="/path/to/model.pt",
        base_model="yolov8s.pt"
    )
    
    assert model_id > 0
    
    model = db_manager.get_model_by_id(model_id)
    assert model['model_name'] == "test_model"
    assert model['model_version'] == "v1.0"


def test_add_detection(db_manager):
    """Test adding detection with foreign key constraints."""
    # First add model and image
    model_id = db_manager.add_model(
        "test", "v1", "/path", "yolov8s.pt"
    )
    image_id = db_manager.add_image(
        "test.jpg", "test.jpg", 1024, 768
    )
    
    # Add detection
    det_id = db_manager.add_detection(
        image_id=image_id,
        model_id=model_id,
        class_name="organelle",
        bbox=(0.1, 0.2, 0.3, 0.4),
        confidence=0.95
    )
    
    assert det_id > 0

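A follow-up test can exercise the query side of the same API. The sketch below assumes get_detections_for_image returns plain dictionaries, as specified in the DatabaseManager section:

def test_get_detections_for_image(db_manager):
    """Test that stored detections can be read back for an image."""
    model_id = db_manager.add_model("test", "v1", "/path", "yolov8s.pt")
    image_id = db_manager.add_image("test.jpg", "test.jpg", 1024, 768)
    db_manager.add_detection(
        image_id=image_id,
        model_id=model_id,
        class_name="membrane_branch",
        bbox=(0.5, 0.5, 0.6, 0.6),
        confidence=0.8
    )

    detections = db_manager.get_detections_for_image(image_id)
    assert len(detections) == 1
    assert detections[0]["class_name"] == "membrane_branch"
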
Configuration Files

Application Config (config/app_config.yaml)

database:
  path: "data/detections.db"

image_repository:
  base_path: ""
  allowed_extensions:
    - ".jpg"
    - ".jpeg"
    - ".png"
    - ".tif"
    - ".tiff"

models:
  default_base_model: "yolov8s.pt"
  models_directory: "data/models"

training:
  default_epochs: 100
  default_batch_size: 16
  default_imgsz: 640
  default_patience: 50
  default_lr0: 0.01

detection:
  default_confidence: 0.25
  default_iou: 0.45
  max_batch_size: 100

visualization:
  bbox_colors:
    organelle: "#FF6B6B"
    membrane_branch: "#4ECDC4"
  bbox_thickness: 2
  font_size: 12

export:
  formats:
    - csv
    - json
    - excel
  default_format: "csv"

logging:
  level: "INFO"
  file: "logs/app.log"
  format: "%(asctime)s - %(name)s - %(levelname)s - %(message)s"

Dataset Config Example (data.yaml)

# YOLOv8 dataset configuration
path: /path/to/dataset  # Root directory
train: train/images     # Training images relative to path
val: val/images         # Validation images relative to path
test: test/images       # Test images (optional)

# Classes
names:
  0: organelle
  1: membrane_branch

# Number of classes
nc: 2

Deployment Checklist

  • Install all dependencies from requirements.txt
  • Create necessary directories (data/, logs/, config/)
  • Initialize database with schema
  • Download the yolov8s.pt base model
  • Configure app_config.yaml
  • Set image repository path
  • Test database operations
  • Test model loading and inference
  • Run unit tests
  • Build application icon and resources
  • Create user documentation
  • Package application (PyInstaller or similar; see the sketch below)

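For the packaging step, an untested starting point using PyInstaller's Python API is sketched below; the bundle name and options are placeholders, and data files (data/, config/) and hidden imports usually need project-specific tuning.

# package.py (sketch) - invokes PyInstaller through its Python API
import PyInstaller.__main__

PyInstaller.__main__.run([
    "main.py",
    "--name", "microscopy-object-detection",  # placeholder bundle name
    "--windowed",   # no console window for the GUI
    "--onedir",     # one-folder bundle so data/ and config/ can ship alongside
])
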
This implementation guide provides detailed specifications for building each component of the application. The actual implementation in Code mode will follow these specifications to create a fully functional microscopy object detection system.