# Quick Start Guide

This guide will help you get the Microscopy Object Detection Application up and running quickly.

## Prerequisites

- Python 3.8 or higher
- pip package manager
- (Optional) CUDA-capable GPU for faster training and inference

## Installation

### 1. Clone or Navigate to Project Directory

```bash
cd /home/martin/code/object_detection
```

### 2. Create Virtual Environment

```bash
python3 -m venv venv
source venv/bin/activate  # On Linux/Mac
# or
venv\Scripts\activate     # On Windows
```

### 3. Install Dependencies

```bash
pip install -r requirements.txt
```

This will install:

- ultralytics (YOLOv8)
- PySide6 (GUI framework)
- pyqtgraph (visualization)
- OpenCV and Pillow (image processing)
- And other dependencies

**Note:** The first run will automatically download the YOLOv8s-seg.pt segmentation model (~23MB).

### 4. Verify Installation

```bash
python -c "import PySide6; import ultralytics; print('Installation successful!')"
```

## First Run

### Launch the Application

```bash
python main.py
```

The application window should open with 5 tabs:

- **Detection**: For running object detection
- **Training**: For training custom models (placeholder)
- **Validation**: For validating models (placeholder)
- **Results**: For viewing detection history (placeholder)
- **Annotation**: Future feature (placeholder)

## Configuration

### Set Up Image Repository

1. Go to **File → Settings**
2. Under the "General" tab, click "Browse..." next to "Base Path"
3. Select your microscopy images directory
4. Click "Save"

This tells the application where your images are stored.

### Adjust Detection Settings

In the Settings dialog:

- **Detection tab**: Adjust the default confidence threshold (default: 0.25)
- **Training tab**: Configure default training parameters
- All settings are saved to [`config/app_config.yaml`](config/app_config.yaml)

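
The confidence slider simply discards detections whose score falls below the threshold. As an illustration of what the setting does (the detection records below are hypothetical examples, not the application's actual data structures):

```python
# Illustration of confidence-threshold filtering.
# These detection records are made-up examples, not the app's real output format.
detections = [
    {"class": "organelle", "confidence": 0.91},
    {"class": "membrane_branch", "confidence": 0.40},
    {"class": "organelle", "confidence": 0.12},
]

def filter_by_confidence(dets, threshold=0.25):
    """Keep only detections at or above the confidence threshold."""
    return [d for d in dets if d["confidence"] >= threshold]

kept = filter_by_confidence(detections, threshold=0.25)
print(len(kept))  # the 0.12 detection is dropped at the default 0.25 threshold
```

Raising the threshold trades recall for precision: fewer false positives, but faint objects may be missed.
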
## Running Detection

### Single Image Detection

1. Go to the **Detection** tab
2. Select a model from the dropdown (default: Base Model yolov8s-seg.pt)
3. Adjust the confidence threshold with the slider
4. Click "Detect Single Image"
5. Select an image file
6. View results with segmentation masks overlaid on the image

### Batch Detection

1. Go to the **Detection** tab
2. Select a model
3. Click "Detect Batch (Folder)"
4. Select a folder containing images
5. Confirm the number of images to process
6. Wait for processing to complete
7. Results are automatically saved to the database

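
Under the hood, batch detection amounts to collecting every supported image file in the chosen folder and running inference on each. A sketch of that collection step (the extension list is an assumption; the application's actual list may differ):

```python
from pathlib import Path

# Hypothetical sketch of the folder scan behind "Detect Batch (Folder)".
# The extension set is an assumption, not the app's definitive list.
SUPPORTED_EXTS = {".jpg", ".jpeg", ".png", ".tif", ".tiff", ".bmp"}

def collect_images(folder):
    """Return supported image files in a folder, sorted for stable ordering."""
    return sorted(
        p for p in Path(folder).iterdir()
        if p.suffix.lower() in SUPPORTED_EXTS
    )
```

Note the `p.suffix.lower()` call: it makes the scan case-insensitive, so `.TIF` files are picked up alongside `.tif`.
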
## Understanding the Results

Detection results include:

- **Image path**: Location of the processed image
- **Detections**: Number of objects found
- **Class names**: Types of objects detected (e.g., organelle, membrane_branch)
- **Confidence scores**: Detection confidence (0-1)
- **Bounding boxes**: Object locations (stored in database)
- **Segmentation masks**: Pixel-accurate polygon coordinates for each detected object

All results are stored in the SQLite database at [`data/detections.db`](data/detections.db).

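
Because masks are stored as polygon coordinates, downstream measurements such as object area follow directly from the saved points. A small sketch using the shoelace formula (the square polygon here is made up for illustration):

```python
def polygon_area(points):
    """Area of a simple polygon given as [(x, y), ...], via the shoelace formula."""
    n = len(points)
    area = 0.0
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]  # wrap around to close the polygon
        area += x1 * y2 - x2 * y1
    return abs(area) / 2.0

# A hypothetical 10x10 square mask polygon: area is 100 square pixels.
square = [(0, 0), (10, 0), (10, 10), (0, 10)]
print(polygon_area(square))  # 100.0
```

The same coordinates can feed perimeter, centroid, or shape-descriptor calculations without re-running the model.
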
### Segmentation Visualization

The application automatically displays segmentation masks when available:

- Semi-transparent colored overlay (30% opacity) showing the exact shape of detected objects
- Polygon contours outlining each segmentation
- Color-coded by object class
- Toggle-able in future versions

## Database

The application uses SQLite to store:

- **Models**: Information about trained models
- **Images**: Metadata about processed images
- **Detections**: All detection results with bounding boxes
- **Annotations**: Manual annotations (future feature)

### View Database Statistics

Go to **Tools → Database Statistics** to see:

- Total number of detections
- Detections per class
- Average confidence scores

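
The same statistics can also be pulled straight from the SQLite file with a few SQL queries. A sketch, assuming a hypothetical `detections` table with `class_name` and `confidence` columns (the application's actual schema may differ; check it first):

```python
import sqlite3

def detection_stats(db_path="data/detections.db"):
    """Per-class counts and mean confidence.

    Assumes a hypothetical detections(class_name, confidence) table;
    verify against the real schema before relying on this.
    """
    con = sqlite3.connect(db_path)
    try:
        rows = con.execute(
            "SELECT class_name, COUNT(*), AVG(confidence) "
            "FROM detections GROUP BY class_name"
        ).fetchall()
    finally:
        con.close()
    return {name: {"count": n, "avg_conf": avg} for name, n, avg in rows}
```

This is handy for quick checks from a notebook or script without launching the GUI.
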
## Next Steps

### Training Custom Models (Coming Soon)

The Training tab will allow you to:

1. Select a YOLO-format dataset
2. Configure training parameters
3. Monitor training progress
4. Save trained models

### Preparing Your Dataset

For training, you'll need data in YOLO format:

```
dataset/
├── train/
│   ├── images/
│   │   ├── img001.png
│   │   └── ...
│   └── labels/
│       ├── img001.txt
│       └── ...
├── val/
│   ├── images/
│   └── labels/
└── data.yaml
```

See [`README.md`](README.md) for detailed dataset format information.

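
Before training, it is worth checking that every image has a matching label file. A quick sanity check over the layout above (paths follow the tree shown; adjust as needed for your dataset):

```python
from pathlib import Path

def find_unlabeled_images(split_dir):
    """Return image filenames under split_dir/images that have no
    matching .txt file under split_dir/labels."""
    split = Path(split_dir)
    labels = {p.stem for p in (split / "labels").glob("*.txt")}
    return sorted(
        p.name for p in (split / "images").iterdir()
        if p.stem not in labels
    )
```

Run it on `dataset/train` and `dataset/val`; an empty list means every image is labeled.
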
## Troubleshooting

### Application Won't Start

**Error: Module not found**

```bash
# Make sure the virtual environment is activated
source venv/bin/activate
pip install -r requirements.txt
```

**Error: Qt platform plugin**

```bash
# Install system Qt dependencies (Linux)
sudo apt-get install libxcb-xinerama0
```

### Detection Not Working

**No models available**

- The base YOLOv8s-seg segmentation model will be downloaded automatically on first use
- Make sure you have an internet connection for the first run

**Images not found**

- Verify the image repository path in Settings
- Check that image files have supported extensions (.jpg, .png, .tif, etc.)

### Performance Issues

**Slow detection**

- Use a GPU if available (CUDA will be auto-detected)
- Reduce batch size in settings
- Process fewer images at once

**Out of memory**

- Reduce batch size
- Use CPU instead of GPU for large images
- Process images sequentially rather than in batch

## File Locations

- **Database**: `data/detections.db`
- **Configuration**: `config/app_config.yaml`
- **Logs**: `logs/app.log`
- **Models**: `data/models/`
- **Results**: Stored in database

## Command Line Tips

### Check GPU Availability

```bash
python -c "import torch; print(f'CUDA available: {torch.cuda.is_available()}')"
```

### View Logs

```bash
tail -f logs/app.log
```

### Backup Database

```bash
cp data/detections.db data/detections_backup_$(date +%Y%m%d).db
```

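
A plain `cp` is fine while the application is closed. If you want a consistent copy while the app may be writing, Python's built-in `sqlite3` online backup API takes a safe snapshot:

```python
import sqlite3

def backup_db(src="data/detections.db", dst="data/detections_backup.db"):
    """Copy a SQLite database via the online backup API, which is safe
    even if another connection is writing to the source."""
    source = sqlite3.connect(src)
    target = sqlite3.connect(dst)
    try:
        source.backup(target)
    finally:
        source.close()
        target.close()
```

The backup API copies the database page by page inside SQLite itself, so the destination is always a valid database file.
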
## Getting Help

- Read the full documentation in [`README.md`](README.md)
- Check architecture details in [`ARCHITECTURE.md`](ARCHITECTURE.md)
- Review the implementation guide in [`IMPLEMENTATION_GUIDE.md`](IMPLEMENTATION_GUIDE.md)
- Check logs in `logs/app.log` for error messages

## What's Implemented

✅ **Core Infrastructure**

- Project structure and configuration
- Database schema and operations
- YOLO model wrapper
- Inference engine with batch processing
- Configuration management
- Logging system

✅ **GUI Components**

- Main window with menu bar
- Settings dialog
- Detection tab with single/batch detection
- Placeholder tabs for future features

🚧 **In Progress / Planned**

- Training tab implementation
- Validation tab with metrics
- Results browser and visualization
- Annotation tool
- Advanced filtering and export
- Real-time camera detection

---

**Happy detecting! 🔬🔍**

For questions or issues, please refer to the documentation or check the application logs.