QGIS Plugin for GeoAI¶
A QGIS plugin that brings geoai models into dockable panels (Moondream VLM, segmentation training/inference, SamGeo), so you can keep QGIS as your main workspace while experimenting with GeoAI.
Quick Start¶
- Create a Pixi project and install the dependencies.
- Install the QGIS plugin from the QGIS Plugin Manager.
- Enable the GeoAI plugin in QGIS.
- Restart QGIS.
- Open a GeoAI toolbar panel and try the sample datasets below.
Video Tutorials¶
Installation Tutorial¶
You can follow this video tutorial to install the GeoAI QGIS Plugin on Linux/Windows:
Usage Tutorial¶
Check out this short video demo and full video tutorial on how to use the GeoAI plugin in QGIS.
Requirements¶
- QGIS 3.28 or later
- Python 3.10+ (Pixi recommended)
- PyTorch (CUDA if you want GPU acceleration)
- geoai and samgeo packages
Features¶
Each tool lives inside a dockable panel that can be attached to either side of the QGIS interface, so you can keep layers, maps, and models visible simultaneously.
Moondream Vision-Language Model Panel¶
- Caption: Generate descriptions of geospatial imagery (short, normal, or long)
- Query: Ask questions about images using natural language
- Detect: Detect and locate objects with bounding boxes
- Point: Locate specific objects with point markers
Segmentation Panel (Combined Training & Inference)¶
- Tab 1 - Create Training Data: Export GeoTIFF tiles from raster and vector data
- Tab 2 - Train Models: Train custom segmentation models (U-Net, DeepLabV3+, FPN, etc.)
- Tab 3 - Run Inference: Apply trained models to new imagery and vectorize results. Vector outputs can optionally be smoothed or simplified for immediate use in GIS workflows.
SamGeo Panel (Segment Anything Model)¶
- Model Tab: Load SAM models (SAM1, SAM2, or SAM3) with configurable backend and device settings
- Text Tab: Segment objects using text prompts (e.g., "tree", "building", "road")
- Interactive Tab: Segment using point prompts (foreground/background) or box prompts drawn on the map
- Batch Tab: Process multiple points interactively or from vector files/layers
- Output Tab: Save results as raster (GeoTIFF) or vector (GeoPackage, Shapefile) with optional regularization (orthogonalize polygons, filter by minimum area)
GPU Memory Management¶
- Clear GPU Memory: Release GPU memory and clear CUDA cache for all loaded models
Installation¶
1. Set up the environment¶
Installing the GeoAI QGIS plugin can be challenging because of the complicated PyTorch/CUDA dependencies. Conda or mamba may take a long time to resolve them, and pip may fail to install them properly. We recommend using Pixi to install the dependencies and avoid these issues.
1) Install Pixi¶
Linux/macOS (bash/zsh)¶
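The upstream Pixi install script is fetched with curl and piped to the shell (see https://pixi.sh for the current one-liner):

```bash
curl -fsSL https://pixi.sh/install.sh | bash
```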
Close and re-open your terminal (or reload your shell) so pixi is on your PATH. Then confirm:
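A version check is enough to confirm the binary is on your PATH:

```bash
pixi --version
```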
Windows (PowerShell)¶
Open PowerShell (preferably as a normal user, Admin not required), then run:
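The upstream Windows installer is a one-line PowerShell command (see https://pixi.sh for the current version):

```powershell
powershell -ExecutionPolicy ByPass -c "irm -useb https://pixi.sh/install.ps1 | iex"
```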
Close and re-open PowerShell, then confirm:
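As on Linux/macOS, a version check confirms the install:

```powershell
pixi --version
```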
2) Create a Pixi project¶
Navigate to a directory where you want to create the project and run:
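Since the rest of this guide refers to a project directory named geo, the commands are presumably along the lines of:

```bash
pixi init geo
cd geo
```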
3) Configure pixi.toml¶
Open `pixi.toml` in the `geo` directory and replace its contents with one of the following, depending on your system.
If you have an NVIDIA GPU with CUDA, run `nvidia-smi` to check the CUDA version.
- For GPU with CUDA 12.x:
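The original manifest is not reproduced here; a minimal sketch along these lines should work, assuming the geoai-py and segment-geospatial packages from PyPI, QGIS and PyTorch from conda-forge, and a Linux or Windows machine (the official docs may pin different versions):

```toml
[project]
name = "geo"
channels = ["conda-forge"]
platforms = ["linux-64", "win-64"]

[system-requirements]
cuda = "12"

[dependencies]
python = "3.12.*"
qgis = "*"
pytorch-gpu = "*"

[pypi-dependencies]
geoai-py = "*"
segment-geospatial = "*"
```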
- For GPU with CUDA 13.x:
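Presumably the same manifest with the CUDA system requirement raised, e.g.:

```toml
[system-requirements]
cuda = "13"
```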
- For CPU:
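For CPU-only machines, the sketch drops the CUDA system requirement and uses the CPU PyTorch build:

```toml
[project]
name = "geo"
channels = ["conda-forge"]
platforms = ["linux-64", "win-64", "osx-arm64"]

[dependencies]
python = "3.12.*"
qgis = "*"
pytorch-cpu = "*"

[pypi-dependencies]
geoai-py = "*"
segment-geospatial = "*"
```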
4) Install the environment¶
From the geo folder:
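A single command resolves and installs everything declared in `pixi.toml`:

```bash
pixi install
```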
This step may take several minutes on first install depending on your internet connection and system.
5) Verify PyTorch + CUDA¶
If you have an NVIDIA GPU with CUDA, run the following command to verify the PyTorch + CUDA installation:
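A small inline Python check along these lines prints the values shown below (the exact command in the original docs may differ slightly):

```bash
pixi run python -c "import torch; print('PyTorch:', torch.__version__); print('CUDA available:', torch.cuda.is_available()); print('GPU:', torch.cuda.get_device_name(0) if torch.cuda.is_available() else 'none')"
```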
Expected output should be like this:
PyTorch: 2.9.1
CUDA available: True
GPU: NVIDIA RTX 4090
If CUDA is False, check:
- `nvidia-smi` works in PowerShell
- NVIDIA driver is up to date
Request access to SAM 3¶
To use SAM 3, you will need to request access by filling out the request form on Hugging Face at https://huggingface.co/facebook/sam3. Once your request has been approved, run the following command in the terminal to authenticate:
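Authentication is typically done with the Hugging Face CLI (assuming huggingface_hub is available in the Pixi environment):

```bash
pixi run huggingface-cli login
```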
After authentication, you can download the SAM 3 model from Hugging Face:
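One way to fetch the checkpoint ahead of time is with the same CLI (the plugin may also download it automatically the first time the model is loaded):

```bash
pixi run huggingface-cli download facebook/sam3
```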
Important Note: SAM 3 currently requires an NVIDIA GPU with CUDA support. On a CPU-only system you will not be able to use SAM 3 and will get an error message like this: `Failed to load model: Torch not compiled with CUDA enabled`.
2. Install the QGIS plugin¶
Option A — use QGIS Plugin Manager (recommended):
GeoAI is available as a QGIS plugin in the official QGIS plugin repository. To install:
- Launch QGIS: `pixi run qgis`
- Go to `Plugins` → `Manage and Install Plugins...`
- Switch to the `All` tab, search for `GeoAI`, select it, and click `Install Plugin`
If an error dialog appears after installing the plugin, click Close to dismiss it. Next, in the QGIS Plugin Manager on the Installed tab, toggle the checkbox next to the GeoAI plugin to enable it. If the error dialog appears again, close it once more and restart QGIS to reload the plugin. After restarting, the GeoAI plugin should appear in the QGIS toolbar.
Option B — use the helper script:
The helper script links/copies the plugin into your active QGIS profile. Re-run it after pulling updates; it also provides a matching command to remove the plugin.
Option C — manual copy:
- Copy the `qgis_plugin` folder to your QGIS plugins directory:
    - Linux: `~/.local/share/QGIS/QGIS3/profiles/default/python/plugins/`
    - Windows: `C:\Users\<username>\AppData\Roaming\QGIS\QGIS3\profiles\default\python\plugins\`
    - macOS: `~/Library/Application Support/QGIS/QGIS3/profiles/default/python/plugins/`
3. Enable in QGIS¶
Launch QGIS: `pixi run qgis`
QGIS → Plugins → Manage and Install Plugins... → enable GeoAI. After updates, toggle the plugin off/on or restart QGIS to reload.
Usage¶
Moondream Vision-Language Model¶
Sample dataset: parking_lot.tif
Steps:
- Click the Moondream button in the GeoAI toolbar (or `GeoAI` menu → `Moondream VLM`)
- Load a Moondream model (default: vikhyatk/moondream2)
- Select a raster layer or browse for an image file
- Choose a mode:
    - Caption: Generate a description of the image
    - Query: Ask a question about the image
    - Detect: Detect objects by type (e.g., "building", "car")
    - Point: Locate specific objects
- Click "Run"
- Results are displayed and optionally added to the map. You can drag the panel to any side of QGIS to keep it out of the way while browsing results. Save the output table or vector layer if you want to reuse detections later.
Segmentation Panel (Create Data, Train, Inference)¶
Sample datasets:
Steps:
- Download the sample datasets (links above) or prepare your own imagery/vector labels. Store them in a folder that is accessible to the Pixi project.
- Click the Segmentation button in the GeoAI toolbar (or `GeoAI` menu → `Segmentation`)
- Use the tabs at the top of the panel to switch between:
    - Create Training Data: Select input raster and vector labels, configure tile size and stride, and export tiles to a directory.
    - Train Model: Select the images and labels directories, choose model architecture (U-Net, DeepLabV3+, etc.), configure training parameters, and start training.
    - Run Inference: Select input raster layer or file, specify the trained model path, configure inference parameters, run inference, and optionally vectorize the results.
SamGeo Panel (Segment Anything Model)¶
Sample dataset:
Steps:
- Click the SamGeo button in the GeoAI toolbar (or `GeoAI` menu → `SamGeo`)
- In the Model tab:
    - Select the SAM model version (SamGeo3/SAM3, SamGeo2/SAM2, or SamGeo/SAM1)
    - Configure backend (meta or transformers) and device (auto, cuda, cpu)
    - Click "Load Model" to initialize the model
- Select a raster layer or browse for an image file and click "Set Image"
- Choose a segmentation method:
    - Text Tab: Enter text prompts describing objects to segment (e.g., "tree, building")
    - Interactive Tab:
        - Click "Add Foreground Points" or "Add Background Points" and click on the map
        - Or click "Draw Box" and drag a rectangle on the map
        - Click "Segment by Points" or "Segment by Box"
    - Batch Tab: Add multiple points interactively or load from a vector file/layer
- In the Output tab:
    - Select output format (Raster GeoTIFF, Vector GeoPackage, or Vector Shapefile)
    - For vector output, optionally enable regularization:
        - Check "Regularize polygons (orthogonalize)"
        - Set Epsilon (simplification tolerance) and Min Area (filter small polygons)
    - Click "Save Masks" to export results
Clear GPU Memory¶
Click the GPU button in the GeoAI toolbar to release GPU memory from all loaded models (Moondream, SamGeo, etc.) and clear CUDA cache. Use this frequently when switching between large models to prevent out-of-memory errors.
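For context, this kind of cleanup is roughly equivalent to the following PyTorch calls (an illustration, not the plugin's exact code):

```python
import gc
import torch

# Illustrative sketch: release released-model objects, then free cached CUDA memory.
def clear_gpu_memory():
    gc.collect()                  # drop unreachable Python objects (e.g., unloaded models)
    if torch.cuda.is_available():
        torch.cuda.empty_cache()  # return cached CUDA memory blocks to the driver
```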
Plugin Update Checker¶
Go to the GeoAI menu → Check for Updates... to see if a newer version of the GeoAI plugin is available. Click the Check for Updates button to fetch the latest version info from GitHub. If an update is found, click the Download and Install Update button to download and install it automatically, then restart QGIS to apply the update.
Supported Model Architectures (Segmentation)¶
The QGIS plugin supports all model architectures provided by the Segmentation Models PyTorch library (see the short example after the encoder list below), including:
- U-Net
- U-Net++
- DeepLabV3
- DeepLabV3+
- FPN (Feature Pyramid Network)
- PSPNet
- LinkNet
- MANet
- PAN
- UperNet
- SegFormer
- DPT
Supported Encoders (Segmentation)¶
- ResNet (34, 50, 101, 152)
- EfficientNet (b0-b4)
- MobileNetV2
- VGG (16, 19)
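These combinations mirror what the segmentation_models_pytorch package exposes; as a rough illustration (not the plugin's internal code), a supported architecture/encoder pair can be instantiated like this:

```python
import segmentation_models_pytorch as smp

# U-Net with a ResNet34 encoder, one of the supported combinations listed above.
model = smp.Unet(
    encoder_name="resnet34",     # any supported encoder, e.g. "efficientnet-b0"
    encoder_weights="imagenet",  # pretrained encoder weights
    in_channels=3,               # number of input bands
    classes=2,                   # number of output classes (e.g. background + target)
)
```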
Supported SAM Models (SamGeo)¶
- SamGeo3 (SAM3): Latest version with text prompts, point prompts, and box prompts
- SamGeo2 (SAM2): Improved version with better performance
- SamGeo (SAM1): Original Segment Anything Model
Troubleshooting¶
- Plugin missing after install: confirm the plugin folder exists in your QGIS profile path and that you restarted QGIS.
- CUDA OOM: use the GPU button to clear cache, lower batch sizes, or switch to CPU for smaller runs.
- Model download failures: check network/firewall, then retry loading models from the panel.
License¶
MIT License - see LICENSE for details.