mirror of https://github.com/FoggedLens/deflock-app.git (synced 2026-02-12 16:52:51 +00:00)
Fix changesets not getting closed, other updates to queue mechanism
85 DEVELOPER.md
@@ -182,20 +182,47 @@ class AddNodeSession {

**Why no delete session:**

Deletions don't need position dragging or tag editing - they just need confirmation and queuing. A session would add complexity without benefit.

### 3. Upload Queue System
### 3. Upload Queue System & Three-Stage Upload Process

**Design principles:**

- **Operation-agnostic**: Same queue handles create/modify/delete
- **Offline-capable**: Queue persists between app sessions
- **Visual feedback**: Each operation type has distinct UI state
- **Error recovery**: Retry mechanism with exponential backoff
- **Three explicit stages**: Create changeset → Upload node → Close changeset
- **Operation-agnostic**: Same queue handles create/modify/delete/extract
- **Offline-capable**: Queue persists between app sessions
- **Visual feedback**: Each operation type and stage has distinct UI state
- **Stage-specific error recovery**: Appropriate retry logic for each of the 3 stages
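
The stage names used below map directly onto the `UploadState` enum this commit adds to `lib/models/pending_upload.dart`:

```dart
enum UploadState {
  pending,            // Not started yet
  creatingChangeset,  // Stage 1: creating the changeset
  uploading,          // Stage 2: node operation (create/modify/delete)
  closingChangeset,   // Stage 3: closing the changeset
  error,              // Upload failed (needs user retry) OR changeset not found
  complete            // Everything done
}
```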

**Queue workflow:**

1. User action (add/edit/delete) → `PendingUpload` created

**Three-stage upload workflow:**

1. **Stage 1 - Create Changeset**: Generate changeset XML and create on OSM
   - Retries: Up to 3 attempts with 20s delays
   - Failures: Reset to pending for full retry
2. **Stage 2 - Node Operation**: Create/modify/delete the surveillance node
   - Retries: Up to 3 attempts with 20s delays
   - Failures: Close orphaned changeset, then retry from stage 1
3. **Stage 3 - Close Changeset**: Close the changeset to finalize
   - Retries: Exponential backoff up to 59 minutes
   - Failures: OSM auto-closes after 60 minutes, so we eventually give up

**Queue processing workflow:**

1. User action (add/edit/delete) → `PendingUpload` created with `UploadState.pending`
2. Immediate visual feedback (cache updated with temp markers)
3. Background uploader processes queue when online
3. Background uploader processes queue when online:
   - **Pending** → Create changeset → **CreatingChangeset** → **Uploading**
   - **Uploading** → Upload node → **ClosingChangeset**
   - **ClosingChangeset** → Close changeset → **Complete**
4. Success → cache updated with real data, temp markers removed
5. Failure → error state, retry available
5. Failures → appropriate retry logic based on which stage failed
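
In code, the background processor effectively dispatches each queued item on its current state. A simplified sketch (the wrapper name and call signatures are illustrative; the three `_process*` methods are the real ones in `lib/state/upload_queue_state.dart`):

```dart
// Illustrative dispatch only - the real queue loop also handles retry timing,
// exponential backoff, the 59-minute window, and simulate mode.
Future<void> processItem(PendingUpload item) async {
  switch (item.uploadState) {
    case UploadState.pending:
      await _processCreateChangeset(item);   // Stage 1: create changeset
      break;
    case UploadState.uploading:
      await _processNodeOperation(item);     // Stage 2: create/modify/delete node
      break;
    case UploadState.closingChangeset:
      await _processChangesetClose(item);    // Stage 3: close changeset
      break;
    case UploadState.creatingChangeset:      // stage 1 already in flight
    case UploadState.error:                  // waits for user retry
    case UploadState.complete:               // nothing left to do
      break;
  }
}
```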

**Why three explicit stages:**

The previous implementation conflated changeset creation + node operation as one step, making error handling unclear. The new approach:

- **Tracks which stage failed**: Users see exactly what went wrong
- **Handles step 2 failures correctly**: Node operation failures now properly close orphaned changesets
- **Provides clear UI feedback**: "Creating changeset...", "Uploading...", "Closing changeset..."
- **Enables appropriate retry logic**: Different stages have different retry needs

**Stage-specific error handling:**

- **Stage 1 failure**: Simple retry (no cleanup needed)
- **Stage 2 failure**: Close orphaned changeset, then retry from stage 1
- **Stage 3 failure**: Keep retrying with exponential backoff (most important for OSM data integrity)
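
The stage 2 path is the one the old implementation got wrong. A hedged sketch of the cleanup it now performs (the function name is illustrative; `closeChangeset` and `clearError` are real methods from this commit):

```dart
// Illustrative: the node operation failed after the changeset was already created,
// so close the orphaned changeset and reset the item to retry from stage 1.
Future<void> handleNodeOperationFailure(Uploader uploader, PendingUpload item) async {
  final csId = item.changesetId;
  if (csId != null) {
    // Best effort - OSM would auto-close the changeset after 60 minutes anyway.
    await uploader.closeChangeset(csId);
  }
  item.clearError(); // back to UploadState.pending with counters and changesetId cleared
}
```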

**Why immediate visual feedback:**

Users expect instant response to their actions. By immediately updating the cache with temporary markers (e.g., `_pending_deletion`), the UI stays responsive while the actual API calls happen in the background.
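
A minimal sketch of the marker round trip (the tag value and the `cache.update()` call are assumptions; `removePendingDeletionMarker` is the real `NodeCache` method added in this commit):

```dart
// Assumes NodeCache and OsmNode are in scope; cache.update() and the 'true' value
// are illustrative - only the _pending_deletion key and removePendingDeletionMarker
// come from the actual code.
void showPendingDeletion(NodeCache cache, OsmNode node) {
  cache.update(OsmNode(
    id: node.id,
    coord: node.coord,
    tags: {...node.tags, '_pending_deletion': 'true'}, // internal marker, never uploaded to OSM
    isConstrained: node.isConstrained,
  ));
}

void cancelPendingDeletion(NodeCache cache, int nodeId) {
  cache.removePendingDeletionMarker(nodeId); // strips the marker so the node renders normally again
}
```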
@@ -259,7 +286,39 @@ These are internal app tags, not OSM tags. The underscore prefix makes this expl

**Why this approach:**

Dense urban areas (SF, NYC) with many profiles enabled can easily exceed both 50k node limits and 25s timeouts. Splitting reduces query complexity while surgical error detection avoids unnecessary API load from network issues.

### 6. Offline vs Online Mode Behavior
### 6. Uploader Service Architecture (Refactored v1.5.3)

**Three-method approach:**

The `Uploader` class now provides three distinct methods matching the OSM API workflow:

```dart
// Step 1: Create changeset
Future<UploadResult> createChangeset(PendingUpload p) async

// Step 2: Perform node operation (create/modify/delete/extract)
Future<UploadResult> performNodeOperation(PendingUpload p, String changesetId) async

// Step 3: Close changeset
Future<UploadResult> closeChangeset(String changesetId) async
```

**Simplified UploadResult:**

Replaced complex boolean flags with simple success/failure:

```dart
UploadResult.success({changesetId, nodeId})  // Operation succeeded
UploadResult.failure({errorMessage, ...})    // Operation failed with details
```

**Legacy compatibility:**

The `upload()` method still exists for simulate mode and backwards compatibility, but now internally calls the three-step methods in sequence.
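
In outline (the wrapper name here is illustrative and the flow is simplified from the real `upload()`; error propagation and the orphaned-changeset close are handled slightly differently in the actual code):

```dart
Future<UploadResult> uploadAllStages(Uploader uploader, PendingUpload p) async {
  final created = await uploader.createChangeset(p);            // Stage 1
  if (!created.success) return created;
  final csId = created.changesetId!;

  final node = await uploader.performNodeOperation(p, csId);    // Stage 2
  if (!node.success) return node;  // result carries csId so the caller can close it

  final closed = await uploader.closeChangeset(csId);           // Stage 3
  return closed.success
      ? UploadResult.success(changesetId: csId, nodeId: node.nodeId)
      : closed;
}
```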

**Why this architecture:**

- **Brutalist simplicity**: Each method does exactly one thing
- **Clear failure points**: No confusion about which step failed
- **Easier testing**: Each stage can be unit tested independently
- **Better error messages**: Specific failure context for each stage

### 7. Offline vs Online Mode Behavior

**Mode combinations:**

```
@@ -272,7 +331,7 @@ Sandbox + Offline → No nodes (cache is production data)

**Why sandbox + offline = no nodes:**

Local cache contains production data. Showing production nodes in sandbox mode would be confusing and could lead to users trying to edit production nodes with sandbox credentials.

### 7. Proximity Alerts & Background Monitoring
### 8. Proximity Alerts & Background Monitoring

**Design approach:**

- **Simple cooldown system**: In-memory tracking to prevent notification spam

@@ -285,7 +344,7 @@ Local cache contains production data. Showing production nodes in sandbox mode w

- Simple RecentAlert tracking prevents duplicate notifications
- Visual callback system for in-app alerts when app is active

### 8. Compass Indicator & North Lock
### 9. Compass Indicator & North Lock

**Purpose**: Visual compass showing map orientation with optional north-lock functionality

@@ -309,7 +368,7 @@ Local cache contains production data. Showing production nodes in sandbox mode w

**Why separate from follow mode:**

Users often want to follow their location while keeping the map oriented north. Previous "north up" follow mode was confusing because it didn't actually keep north up. This separation provides clear, predictable behavior.

### 9. Network Status Indicator (Simplified in v1.5.2+)
### 10. Network Status Indicator (Simplified in v1.5.2+)

**Purpose**: Show loading and error states for surveillance data fetching only
125 UPLOAD_REFACTOR_SUMMARY.md Normal file
@@ -0,0 +1,125 @@

# Upload System Refactor - v1.5.3

## Overview

Refactored the upload queue processing and OSM submission logic to properly handle the three distinct phases of OSM node operations, fixing the core issue where step 2 failures (node operations) weren't handled correctly.

## Problem Analysis

The previous implementation incorrectly treated OSM interaction as a 2-step process:

1. ~~Open changeset + submit node~~ (conflated)
2. Close changeset

But OSM actually requires 3 distinct steps:

1. **Create changeset**
2. **Perform node operation** (create/modify/delete)
3. **Close changeset**
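
At the HTTP level these map to three OSM API calls (paths match `lib/services/uploader.dart`; the host, auth headers, and XML bodies below are illustrative):

```dart
import 'package:http/http.dart' as http;

Future<void> threeStageUpload(Uri host, Map<String, String> authHeaders,
    String changesetXml, String nodeXml) async {
  // 1. Create changeset - the response body is the new changeset id as plain text
  final cs = await http.put(host.replace(path: '/api/0.6/changeset/create'),
      headers: authHeaders, body: changesetXml);
  final changesetId = cs.body.trim();

  // 2. Node operation (create shown; modify is PUT /api/0.6/node/{id},
  //    delete is DELETE /api/0.6/node/{id})
  final node = await http.put(host.replace(path: '/api/0.6/node/create'),
      headers: authHeaders, body: nodeXml);
  print('created node ${node.body.trim()} in changeset $changesetId');

  // 3. Close the changeset to finalize the edit
  await http.put(host.replace(path: '/api/0.6/changeset/$changesetId/close'),
      headers: authHeaders);
}
```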

### Issues Fixed:

- **Step 2 failure handling**: Node operation failures now properly close orphaned changesets and retry appropriately
- **State confusion**: Users now see exactly which of the 3 stages is happening or failed
- **Error tracking**: Each stage has appropriate retry logic and error messages
- **UI clarity**: Displays "Creating changeset...", "Uploading...", "Closing changeset..." with progress info

## Changes Made

### 1. Uploader Service (`lib/services/uploader.dart`)

- **Simplified UploadResult**: Replaced complex boolean flags with simple `success/failure` pattern
- **Three explicit methods**:
  - `createChangeset(PendingUpload)` → Returns changeset ID
  - `performNodeOperation(PendingUpload, changesetId)` → Returns node ID
  - `closeChangeset(changesetId)` → Returns success/failure
- **Legacy compatibility**: `upload()` method still exists for simulate mode
- **Better error context**: Each method provides specific error messages for its stage
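
The simplified result type, condensed from `lib/services/uploader.dart` (legacy compatibility getters omitted):

```dart
class UploadResult {
  final bool success;
  final String? errorMessage;
  final String? changesetId;    // set by createChangeset; also set on node-op failure so the caller can close it
  final int? nodeId;            // set by performNodeOperation
  final bool changesetNotFound; // special-cased 404 while closing (triggers a full retry)

  UploadResult.success({this.changesetId, this.nodeId})
      : success = true, errorMessage = null, changesetNotFound = false;

  UploadResult.failure({
    required this.errorMessage,
    this.changesetNotFound = false,
    this.changesetId,
    this.nodeId,
  }) : success = false;
}
```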

### 2. Upload Queue State (`lib/state/upload_queue_state.dart`)

- **Three processing methods**:
  - `_processCreateChangeset()` - Stage 1
  - `_processNodeOperation()` - Stage 2
  - `_processChangesetClose()` - Stage 3
- **Proper state transitions**: Clear progression through `pending` → `creatingChangeset` → `uploading` → `closingChangeset` → `complete`
- **Stage-specific retry logic**:
  - Stage 1 failure: Simple retry (no cleanup)
  - Stage 2 failure: Close orphaned changeset, retry from stage 1
  - Stage 3 failure: Exponential backoff up to 59 minutes
- **Simulate mode support**: All three stages work in simulate mode

### 3. Upload Queue UI (`lib/screens/upload_queue_screen.dart`)

- **Enhanced status display**: Shows retry attempts and time remaining (only when changeset close has failed)
- **Better error visibility**: Tap error icon to see detailed failure messages
- **Stage progression**: Clear visual feedback for each of the 3 stages
- **Cleaner progress display**: Time countdown only shows when there have been changeset close issues

### 4. Cache Cleanup (`lib/state/upload_queue_state.dart`, `lib/services/node_cache.dart`)

- **Fixed orphaned pending nodes**: Removing or clearing queue items now properly cleans up temporary cache markers
- **Operation-specific cleanup**:
  - **Creates**: Remove temporary nodes with `_pending_upload` markers
  - **Edits**: Remove temp nodes + `_pending_edit` markers from originals
  - **Deletes**: Remove `_pending_deletion` markers from originals
  - **Extracts**: Remove temp extracted nodes (leave originals unchanged)
- **Added NodeCache methods**: `removePendingDeletionMarker()` for deletion cancellation cleanup
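
Sketched as a dispatch (only `removePendingDeletionMarker` is shown as a real call; the create/edit/extract branches are summarized in comments because their cleanup helpers live inside `UploadQueueState`):

```dart
void cleanupTempMarkers(NodeCache cache, PendingUpload item) {
  switch (item.operation) {
    case UploadOperation.create:
      // remove the temporary node that carried the _pending_upload marker
      break;
    case UploadOperation.modify:
      // remove the temp node and strip the _pending_edit marker from the original
      break;
    case UploadOperation.delete:
      if (item.originalNodeId != null) {
        cache.removePendingDeletionMarker(item.originalNodeId!); // restore normal display
      }
      break;
    case UploadOperation.extract:
      // remove the temp extracted node; the original node is left untouched
      break;
  }
}
```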

### 5. Documentation Updates

- **DEVELOPER.md**: Added detailed explanation of the three-stage architecture
- **Changelog**: Updated v1.5.3 release notes to highlight the fix
- **Code comments**: Improved throughout for clarity

## Architecture Benefits

### Brutalist Code Principles Applied:

1. **Explicit over implicit**: Three methods instead of one complex method
2. **Simple error handling**: Success/failure instead of multiple boolean flags
3. **Clear responsibilities**: Each method does exactly one thing
4. **Minimal state complexity**: Straightforward state machine progression

### User Experience Improvements:

- **Transparent progress**: Users see exactly what stage is happening
- **Better error messages**: Specific context about which stage failed
- **Proper retry behavior**: Stage 2 failures no longer leave orphaned changesets
- **Time awareness**: Countdown shows when OSM will auto-close changesets

### Maintainability Gains:

- **Easier debugging**: Each stage can be tested independently
- **Clear failure points**: No confusion about which step failed
- **Simpler testing**: Individual stages are unit-testable
- **Future extensibility**: Easy to add new upload operations or modify stages

## Refined Retry Logic (Post-Testing Updates)

After initial testing feedback, the retry logic was refined to properly handle the 59-minute changeset window:

### Three-Phase Retry Strategy:

- **Phase 1 (Create Changeset)**: Up to 3 attempts with 20s delays → Error state (user retry required)
- **Phase 2 (Submit Node)**: Unlimited attempts within the 59-minute window → Error if time expires
- **Phase 3 (Close Changeset)**: Unlimited attempts within the 59-minute window → Auto-complete if time expires (trust OSM auto-close)
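
The window checks that drive phases 2 and 3, condensed from `PendingUpload` in this commit and shown here without the surrounding class (note that `nodeOperationCompletedAt` is, despite its name, set when changeset creation succeeds):

```dart
// kChangesetAutoCloseTimeout is Duration(minutes: 59) in dev_config.dart
bool get hasChangesetExpired {
  if (nodeOperationCompletedAt == null) return false;
  return DateTime.now().difference(nodeOperationCompletedAt!) >= kChangesetAutoCloseTimeout;
}

Duration? get timeUntilAutoClose {
  if (nodeOperationCompletedAt == null) return null;
  final remaining = kChangesetAutoCloseTimeout - DateTime.now().difference(nodeOperationCompletedAt!);
  return remaining.isNegative ? Duration.zero : remaining;
}
```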

### Key Behavioral Changes:

- **59-minute timer starts** when changeset creation succeeds (not when node operation completes)
- **Node submission failures** retry indefinitely within the 59-minute window
- **Changeset close failures** retry indefinitely but never error out (always eventually complete)
- **UI countdown** only shows when there have been failures in phases 2 or 3
- **Proper error messages**: "Failed to create changeset after 3 attempts" vs "Could not submit node within 59 minutes"

## Testing Recommendations

When testing this refactor:

1. **Normal uploads**: Verify all three stages show proper progression
2. **Network interruption**:
   - Test failure at each stage individually
   - Verify orphaned changesets are properly closed
   - Check retry logic works appropriately
3. **Error handling**:
   - Tap error icons to see detailed messages
   - Verify different error types show stage-specific context
4. **Simulate mode**: Confirm all three stages work in simulate mode
5. **Queue management**: Verify queue continues processing when individual items fail
6. **Changeset closing**: Test that changeset close retries work with exponential backoff

## Rollback Plan

If issues are discovered, the legacy `upload()` method can be restored by:

1. Reverting `_processCreateChangeset()` to call `up.upload(item)` directly
2. Removing `_processNodeOperation()` and `_processChangesetClose()` calls
3. This would restore the old 2-stage behavior while keeping the UI improvements

---

The core fix addresses the main issue identified: **step 2 failures (node operations) are now properly tracked and handled with appropriate cleanup and retry logic**.
@@ -1,4 +1,21 @@
{
  "1.5.3": {
    "content": [
      "• MAJOR: Proper three-stage upload process - uploads now correctly track changeset creation, node operation, and changeset closing as separate steps",
      "• NEW: Enhanced upload error handling - failures in each stage (create changeset, upload node, close changeset) are now handled appropriately",
      "• NEW: Improved upload status display - shows 'Creating changeset...', 'Uploading...', and 'Closing changeset...' with time remaining for changeset close",
      "• NEW: Error message details - tap the error icon (!) on failed uploads to see exactly what went wrong and at which stage",
      "• IMPROVED: Proper 59-minute changeset window handling - node submission and changeset closing share the same timer from successful changeset creation",
      "• IMPROVED: Step 2 failures (node operations) retry indefinitely within the 59-minute window instead of giving up after 3 attempts",
      "• IMPROVED: Changeset close retry logic - continues trying for up to 59 minutes, then trusts OSM auto-close (never errors out once node is submitted)",
      "• IMPROVED: Orphaned changeset cleanup - if network fails after changeset creation but before node upload, we properly close the changeset before retrying",
      "• IMPROVED: Upload queue processing is now more robust and continues when individual items encounter errors",
      "• IMPROVED: Real-time UI updates for all upload status changes, attempt counts, and time remaining (only shows countdown when changeset close has failed)",
      "• FIXED: Queue processing no longer gets stuck when individual uploads fail",
      "• FIXED: Orphaned pending nodes - removing queue items or clearing the queue now properly removes temporary markers from the map",
      "• IMPROVED: Better debugging support throughout the upload pipeline for easier troubleshooting"
    ]
  },
  "1.5.2": {
    "content": [
      "• MAJOR: Fixed severe tile loading performance issue - eliminated expensive filesystem searches on every tile request",
@@ -478,6 +478,14 @@ class AppState extends ChangeNotifier {
    await _settingsState.setNetworkStatusIndicatorEnabled(enabled);
  }

  /// Migrate upload queue to new two-stage changeset system (v1.5.3)
  Future<void> migrateUploadQueueToTwoStageSystem() async {
    // Migration is handled automatically in PendingUpload.fromJson via _migrateFromLegacyFields
    // This method triggers a queue reload to apply migrations
    await _uploadQueueState.reloadQueue();
    debugPrint('[AppState] Upload queue migration completed');
  }

  /// Set suspected location minimum distance from real nodes
  Future<void> setSuspectedLocationMinDistance(int distance) async {
    await _settingsState.setSuspectedLocationMinDistance(distance);
@@ -53,11 +53,18 @@ double topPositionWithSafeArea(double baseTop, EdgeInsets safeArea) {
const String kClientName = 'DeFlock';
// Note: Version is now dynamically retrieved from VersionService

// Upload and changeset configuration
const Duration kUploadHttpTimeout = Duration(seconds: 30); // HTTP request timeout for uploads
const Duration kChangesetCloseInitialRetryDelay = Duration(seconds: 10);
const Duration kChangesetCloseMaxRetryDelay = Duration(minutes: 5); // Cap at 5 minutes
const Duration kChangesetAutoCloseTimeout = Duration(minutes: 59); // Give up and trust OSM auto-close
const double kChangesetCloseBackoffMultiplier = 2.0;
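// With these values the changeset-close retry delays run 10s, 20s, 40s, 80s, 160s, then stay capped
// at 5 minutes (see PendingUpload.nextChangesetCloseRetryDelay), all within the 59-minute window.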

// Suspected locations CSV URL
const String kSuspectedLocationsCsvUrl = 'https://alprwatch.org/suspected-locations/deflock-latest.csv';

// Development/testing features - set to false for production builds
const bool kEnableDevelopmentModes = false; // Set to false to hide sandbox/simulate modes and force production mode
const bool kEnableDevelopmentModes = true; // Set to false to hide sandbox/simulate modes and force production mode

// Navigation features - set to false to hide navigation UI elements while in development
const bool kEnableNavigationFeatures = kEnableDevelopmentModes; // Hide navigation until fully implemented
@@ -193,7 +193,7 @@
|
||||
"queueCleared": "Warteschlange geleert",
|
||||
"uploadQueueTitle": "Upload-Warteschlange ({} Elemente)",
|
||||
"queueIsEmpty": "Warteschlange ist leer",
|
||||
"cameraWithIndex": "Kamera {}",
|
||||
"itemWithIndex": "Objekt {}",
|
||||
"error": " (Fehler)",
|
||||
"completing": " (Wird abgeschlossen...)",
|
||||
"destination": "Ziel: {}",
|
||||
@@ -203,7 +203,10 @@
|
||||
"attempts": "Versuche: {}",
|
||||
"uploadFailedRetry": "Upload fehlgeschlagen. Zum Wiederholen antippen.",
|
||||
"retryUpload": "Upload wiederholen",
|
||||
"clearAll": "Alle Löschen"
|
||||
"clearAll": "Alle Löschen",
|
||||
"errorDetails": "Fehlerdetails",
|
||||
"uploading": " (Uploading...)",
|
||||
"closingChangeset": " (Changeset schließen...)"
|
||||
},
|
||||
"tileProviders": {
|
||||
"title": "Kachel-Anbieter",
|
||||
|
||||
@@ -225,7 +225,7 @@
|
||||
"queueCleared": "Queue cleared",
|
||||
"uploadQueueTitle": "Upload Queue ({} items)",
|
||||
"queueIsEmpty": "Queue is empty",
|
||||
"cameraWithIndex": "Camera {}",
|
||||
"itemWithIndex": "Item {}",
|
||||
"error": " (Error)",
|
||||
"completing": " (Completing...)",
|
||||
"destination": "Dest: {}",
|
||||
@@ -235,7 +235,11 @@
|
||||
"attempts": "Attempts: {}",
|
||||
"uploadFailedRetry": "Upload failed. Tap retry to try again.",
|
||||
"retryUpload": "Retry upload",
|
||||
"clearAll": "Clear All"
|
||||
"clearAll": "Clear All",
|
||||
"errorDetails": "Error Details",
|
||||
"creatingChangeset": " (Creating changeset...)",
|
||||
"uploading": " (Uploading...)",
|
||||
"closingChangeset": " (Closing changeset...)"
|
||||
},
|
||||
"tileProviders": {
|
||||
"title": "Tile Providers",
|
||||
|
||||
@@ -225,7 +225,7 @@
|
||||
"queueCleared": "Cola limpiada",
|
||||
"uploadQueueTitle": "Cola de Subida ({} elementos)",
|
||||
"queueIsEmpty": "La cola está vacía",
|
||||
"cameraWithIndex": "Cámara {}",
|
||||
"itemWithIndex": "Elemento {}",
|
||||
"error": " (Error)",
|
||||
"completing": " (Completando...)",
|
||||
"destination": "Dest: {}",
|
||||
@@ -235,7 +235,10 @@
|
||||
"attempts": "Intentos: {}",
|
||||
"uploadFailedRetry": "Subida falló. Toque reintentar para intentar de nuevo.",
|
||||
"retryUpload": "Reintentar subida",
|
||||
"clearAll": "Limpiar Todo"
|
||||
"clearAll": "Limpiar Todo",
|
||||
"errorDetails": "Detalles del Error",
|
||||
"uploading": " (Subiendo...)",
|
||||
"closingChangeset": " (Cerrando changeset...)"
|
||||
},
|
||||
"tileProviders": {
|
||||
"title": "Proveedores de Tiles",
|
||||
|
||||
@@ -225,7 +225,7 @@
|
||||
"queueCleared": "File vidée",
|
||||
"uploadQueueTitle": "File de Téléchargement ({} éléments)",
|
||||
"queueIsEmpty": "La file est vide",
|
||||
"cameraWithIndex": "Caméra {}",
|
||||
"itemWithIndex": "Élément {}",
|
||||
"error": " (Erreur)",
|
||||
"completing": " (Finalisation...)",
|
||||
"destination": "Dest: {}",
|
||||
@@ -235,7 +235,10 @@
|
||||
"attempts": "Tentatives: {}",
|
||||
"uploadFailedRetry": "Téléchargement échoué. Appuyer pour réessayer.",
|
||||
"retryUpload": "Réessayer téléchargement",
|
||||
"clearAll": "Tout Vider"
|
||||
"clearAll": "Tout Vider",
|
||||
"errorDetails": "Détails de l'Erreur",
|
||||
"uploading": " (Téléchargement...)",
|
||||
"closingChangeset": " (Fermeture du changeset...)"
|
||||
},
|
||||
"tileProviders": {
|
||||
"title": "Fournisseurs de Tuiles",
|
||||
|
||||
@@ -225,7 +225,7 @@
|
||||
"queueCleared": "Coda pulita",
|
||||
"uploadQueueTitle": "Coda Upload ({} elementi)",
|
||||
"queueIsEmpty": "La coda è vuota",
|
||||
"cameraWithIndex": "Telecamera {}",
|
||||
"itemWithIndex": "Elemento {}",
|
||||
"error": " (Errore)",
|
||||
"completing": " (Completamento...)",
|
||||
"destination": "Dest: {}",
|
||||
@@ -235,7 +235,10 @@
|
||||
"attempts": "Tentativi: {}",
|
||||
"uploadFailedRetry": "Upload fallito. Tocca riprova per tentare di nuovo.",
|
||||
"retryUpload": "Riprova upload",
|
||||
"clearAll": "Pulisci Tutto"
|
||||
"clearAll": "Pulisci Tutto",
|
||||
"errorDetails": "Dettagli dell'Errore",
|
||||
"uploading": " (Caricamento...)",
|
||||
"closingChangeset": " (Chiusura changeset...)"
|
||||
},
|
||||
"tileProviders": {
|
||||
"title": "Fornitori di Tile",
|
||||
|
||||
@@ -225,7 +225,7 @@
|
||||
"queueCleared": "Fila limpa",
|
||||
"uploadQueueTitle": "Fila de Upload ({} itens)",
|
||||
"queueIsEmpty": "A fila está vazia",
|
||||
"cameraWithIndex": "Câmera {}",
|
||||
"itemWithIndex": "Item {}",
|
||||
"error": " (Erro)",
|
||||
"completing": " (Completando...)",
|
||||
"destination": "Dest: {}",
|
||||
@@ -235,7 +235,10 @@
|
||||
"attempts": "Tentativas: {}",
|
||||
"uploadFailedRetry": "Upload falhou. Toque em tentar novamente para tentar novamente.",
|
||||
"retryUpload": "Tentar upload novamente",
|
||||
"clearAll": "Limpar Tudo"
|
||||
"clearAll": "Limpar Tudo",
|
||||
"errorDetails": "Detalhes do Erro",
|
||||
"uploading": " (Enviando...)",
|
||||
"closingChangeset": " (Fechando changeset...)"
|
||||
},
|
||||
"tileProviders": {
|
||||
"title": "Provedores de Tiles",
|
||||
|
||||
@@ -225,7 +225,7 @@
|
||||
"queueCleared": "队列已清空",
|
||||
"uploadQueueTitle": "上传队列({} 项)",
|
||||
"queueIsEmpty": "队列为空",
|
||||
"cameraWithIndex": "摄像头 {}",
|
||||
"itemWithIndex": "项目 {}",
|
||||
"error": "(错误)",
|
||||
"completing": "(完成中...)",
|
||||
"destination": "目标:{}",
|
||||
@@ -235,7 +235,10 @@
|
||||
"attempts": "尝试次数:{}",
|
||||
"uploadFailedRetry": "上传失败。点击重试再次尝试。",
|
||||
"retryUpload": "重试上传",
|
||||
"clearAll": "全部清空"
|
||||
"clearAll": "全部清空",
|
||||
"errorDetails": "错误详情",
|
||||
"uploading": " (上传中...)",
|
||||
"closingChangeset": " (关闭变更集...)"
|
||||
},
|
||||
"tileProviders": {
|
||||
"title": "瓦片提供商",
|
||||
|
||||
@@ -1,10 +1,21 @@
|
||||
import 'dart:math' as math;
|
||||
import 'package:latlong2/latlong.dart';
|
||||
import 'node_profile.dart';
|
||||
import 'operator_profile.dart';
|
||||
import '../state/settings_state.dart';
|
||||
import '../dev_config.dart';
|
||||
|
||||
enum UploadOperation { create, modify, delete, extract }
|
||||
|
||||
enum UploadState {
|
||||
pending, // Not started yet
|
||||
creatingChangeset, // Creating changeset
|
||||
uploading, // Node operation (create/modify/delete)
|
||||
closingChangeset, // Closing changeset
|
||||
error, // Upload failed (needs user retry) OR changeset not found
|
||||
complete // Everything done
|
||||
}
|
||||
|
||||
class PendingUpload {
|
||||
final LatLng coord;
|
||||
final dynamic direction; // Can be double or String for multiple directions
|
||||
@@ -15,8 +26,16 @@ class PendingUpload {
|
||||
final int? originalNodeId; // If this is modify/delete, the ID of the original OSM node
|
||||
int? submittedNodeId; // The actual node ID returned by OSM after successful submission
|
||||
int attempts;
|
||||
bool error;
|
||||
bool completing; // True when upload succeeded but item is showing checkmark briefly
|
||||
bool error; // DEPRECATED: Use uploadState instead
|
||||
String? errorMessage; // Detailed error message for debugging
|
||||
bool completing; // DEPRECATED: Use uploadState instead
|
||||
UploadState uploadState; // Current state in the upload pipeline
|
||||
String? changesetId; // ID of changeset that needs closing
|
||||
DateTime? nodeOperationCompletedAt; // When node operation completed (start of 59-minute countdown)
|
||||
int changesetCloseAttempts; // Number of changeset close attempts
|
||||
DateTime? lastChangesetCloseAttemptAt; // When we last tried to close changeset (for retry timing)
|
||||
int nodeSubmissionAttempts; // Number of node submission attempts (separate from overall attempts)
|
||||
DateTime? lastNodeSubmissionAttemptAt; // When we last tried to submit node (for retry timing)
|
||||
|
||||
PendingUpload({
|
||||
required this.coord,
|
||||
@@ -29,7 +48,15 @@ class PendingUpload {
|
||||
this.submittedNodeId,
|
||||
this.attempts = 0,
|
||||
this.error = false,
|
||||
this.errorMessage,
|
||||
this.completing = false,
|
||||
this.uploadState = UploadState.pending,
|
||||
this.changesetId,
|
||||
this.nodeOperationCompletedAt,
|
||||
this.changesetCloseAttempts = 0,
|
||||
this.lastChangesetCloseAttemptAt,
|
||||
this.nodeSubmissionAttempts = 0,
|
||||
this.lastNodeSubmissionAttemptAt,
|
||||
}) : assert(
|
||||
(operation == UploadOperation.create && originalNodeId == null) ||
|
||||
(operation == UploadOperation.create) || (originalNodeId != null),
|
||||
@@ -48,6 +75,53 @@ class PendingUpload {
|
||||
|
||||
// True if this is an extract operation (new node with tags from constrained node)
|
||||
bool get isExtraction => operation == UploadOperation.extract;
|
||||
|
||||
// New state-based helpers
|
||||
bool get needsUserRetry => uploadState == UploadState.error;
|
||||
bool get isActivelyProcessing => uploadState == UploadState.creatingChangeset || uploadState == UploadState.uploading || uploadState == UploadState.closingChangeset;
|
||||
bool get isComplete => uploadState == UploadState.complete;
|
||||
bool get isPending => uploadState == UploadState.pending;
|
||||
bool get isCreatingChangeset => uploadState == UploadState.creatingChangeset;
|
||||
bool get isUploading => uploadState == UploadState.uploading;
|
||||
bool get isClosingChangeset => uploadState == UploadState.closingChangeset;
|
||||
|
||||
// Calculate time until OSM auto-closes changeset (for UI display)
|
||||
// This uses nodeOperationCompletedAt (when changeset was created) as the reference
|
||||
Duration? get timeUntilAutoClose {
|
||||
if (nodeOperationCompletedAt == null) return null;
|
||||
final elapsed = DateTime.now().difference(nodeOperationCompletedAt!);
|
||||
final remaining = kChangesetAutoCloseTimeout - elapsed;
|
||||
return remaining.isNegative ? Duration.zero : remaining;
|
||||
}
|
||||
|
||||
// Check if the 59-minute window has expired (for phases 2 & 3)
|
||||
// This uses nodeOperationCompletedAt (when changeset was created) as the reference
|
||||
bool get hasChangesetExpired {
|
||||
if (nodeOperationCompletedAt == null) return false;
|
||||
return DateTime.now().difference(nodeOperationCompletedAt!) >= kChangesetAutoCloseTimeout;
|
||||
}
|
||||
|
||||
// Legacy method name for backward compatibility
|
||||
bool get shouldGiveUpOnChangeset => hasChangesetExpired;
|
||||
|
||||
// Calculate next retry delay for changeset close using exponential backoff
|
||||
Duration get nextChangesetCloseRetryDelay {
|
||||
final delay = Duration(
|
||||
milliseconds: (kChangesetCloseInitialRetryDelay.inMilliseconds *
|
||||
math.pow(kChangesetCloseBackoffMultiplier, changesetCloseAttempts)).round()
|
||||
);
|
||||
return delay > kChangesetCloseMaxRetryDelay
|
||||
? kChangesetCloseMaxRetryDelay
|
||||
: delay;
|
||||
}
|
||||
|
||||
// Check if it's time to retry changeset close
|
||||
bool get isReadyForChangesetCloseRetry {
|
||||
if (lastChangesetCloseAttemptAt == null) return true; // First attempt
|
||||
|
||||
final nextRetryTime = lastChangesetCloseAttemptAt!.add(nextChangesetCloseRetryDelay);
|
||||
return DateTime.now().isAfter(nextRetryTime);
|
||||
}
|
||||
|
||||
// Get display name for the upload destination
|
||||
String get uploadModeDisplayName {
|
||||
@@ -61,6 +135,88 @@ class PendingUpload {
|
||||
}
|
||||
}
|
||||
|
||||
// Set error state with detailed message
|
||||
void setError(String message) {
|
||||
error = true; // Keep for backward compatibility
|
||||
uploadState = UploadState.error;
|
||||
errorMessage = message;
|
||||
}
|
||||
|
||||
// Clear error state
|
||||
void clearError() {
|
||||
error = false; // Keep for backward compatibility
|
||||
uploadState = UploadState.pending;
|
||||
errorMessage = null;
|
||||
attempts = 0;
|
||||
changesetCloseAttempts = 0;
|
||||
changesetId = null;
|
||||
nodeOperationCompletedAt = null;
|
||||
lastChangesetCloseAttemptAt = null;
|
||||
nodeSubmissionAttempts = 0;
|
||||
lastNodeSubmissionAttemptAt = null;
|
||||
}
|
||||
|
||||
// Mark as creating changeset
|
||||
void markAsCreatingChangeset() {
|
||||
uploadState = UploadState.creatingChangeset;
|
||||
error = false;
|
||||
completing = false;
|
||||
errorMessage = null;
|
||||
}
|
||||
|
||||
// Mark changeset created, start node operation
|
||||
void markChangesetCreated(String csId) {
|
||||
uploadState = UploadState.uploading;
|
||||
changesetId = csId;
|
||||
nodeOperationCompletedAt = DateTime.now(); // Track when changeset was created for 59-minute timeout
|
||||
}
|
||||
|
||||
// Mark node operation as complete, start changeset close phase
|
||||
void markNodeOperationComplete() {
|
||||
uploadState = UploadState.closingChangeset;
|
||||
changesetCloseAttempts = 0;
|
||||
// Note: nodeSubmissionAttempts preserved for debugging/stats
|
||||
}
|
||||
|
||||
// Mark entire upload as complete
|
||||
void markAsComplete() {
|
||||
uploadState = UploadState.complete;
|
||||
completing = true; // Keep for UI compatibility
|
||||
error = false;
|
||||
errorMessage = null;
|
||||
}
|
||||
|
||||
// Increment changeset close attempt counter and record attempt time
|
||||
void incrementChangesetCloseAttempts() {
|
||||
changesetCloseAttempts++;
|
||||
lastChangesetCloseAttemptAt = DateTime.now();
|
||||
}
|
||||
|
||||
// Increment node submission attempt counter and record attempt time
|
||||
void incrementNodeSubmissionAttempts() {
|
||||
nodeSubmissionAttempts++;
|
||||
lastNodeSubmissionAttemptAt = DateTime.now();
|
||||
}
|
||||
|
||||
// Calculate next retry delay for node submission using exponential backoff
|
||||
Duration get nextNodeSubmissionRetryDelay {
|
||||
final delay = Duration(
|
||||
milliseconds: (kChangesetCloseInitialRetryDelay.inMilliseconds *
|
||||
math.pow(kChangesetCloseBackoffMultiplier, nodeSubmissionAttempts)).round()
|
||||
);
|
||||
return delay > kChangesetCloseMaxRetryDelay
|
||||
? kChangesetCloseMaxRetryDelay
|
||||
: delay;
|
||||
}
|
||||
|
||||
// Check if it's time to retry node submission
|
||||
bool get isReadyForNodeSubmissionRetry {
|
||||
if (lastNodeSubmissionAttemptAt == null) return true; // First attempt
|
||||
|
||||
final nextRetryTime = lastNodeSubmissionAttemptAt!.add(nextNodeSubmissionRetryDelay);
|
||||
return DateTime.now().isAfter(nextRetryTime);
|
||||
}
|
||||
|
||||
// Get combined tags from node profile and operator profile
|
||||
Map<String, String> getCombinedTags() {
|
||||
// Deletions don't need tags
|
||||
@@ -101,7 +257,15 @@ class PendingUpload {
|
||||
'submittedNodeId': submittedNodeId,
|
||||
'attempts': attempts,
|
||||
'error': error,
|
||||
'errorMessage': errorMessage,
|
||||
'completing': completing,
|
||||
'uploadState': uploadState.index,
|
||||
'changesetId': changesetId,
|
||||
'nodeOperationCompletedAt': nodeOperationCompletedAt?.millisecondsSinceEpoch,
|
||||
'changesetCloseAttempts': changesetCloseAttempts,
|
||||
'lastChangesetCloseAttemptAt': lastChangesetCloseAttemptAt?.millisecondsSinceEpoch,
|
||||
'nodeSubmissionAttempts': nodeSubmissionAttempts,
|
||||
'lastNodeSubmissionAttemptAt': lastNodeSubmissionAttemptAt?.millisecondsSinceEpoch,
|
||||
};
|
||||
|
||||
factory PendingUpload.fromJson(Map<String, dynamic> j) => PendingUpload(
|
||||
@@ -123,7 +287,33 @@ class PendingUpload {
|
||||
submittedNodeId: j['submittedNodeId'],
|
||||
attempts: j['attempts'] ?? 0,
|
||||
error: j['error'] ?? false,
|
||||
errorMessage: j['errorMessage'], // Can be null for legacy entries
|
||||
completing: j['completing'] ?? false, // Default to false for legacy entries
|
||||
uploadState: j['uploadState'] != null
|
||||
? UploadState.values[j['uploadState']]
|
||||
: _migrateFromLegacyFields(j), // Migrate from legacy error/completing fields
|
||||
changesetId: j['changesetId'],
|
||||
nodeOperationCompletedAt: j['nodeOperationCompletedAt'] != null
|
||||
? DateTime.fromMillisecondsSinceEpoch(j['nodeOperationCompletedAt'])
|
||||
: null,
|
||||
changesetCloseAttempts: j['changesetCloseAttempts'] ?? 0,
|
||||
lastChangesetCloseAttemptAt: j['lastChangesetCloseAttemptAt'] != null
|
||||
? DateTime.fromMillisecondsSinceEpoch(j['lastChangesetCloseAttemptAt'])
|
||||
: null,
|
||||
nodeSubmissionAttempts: j['nodeSubmissionAttempts'] ?? 0,
|
||||
lastNodeSubmissionAttemptAt: j['lastNodeSubmissionAttemptAt'] != null
|
||||
? DateTime.fromMillisecondsSinceEpoch(j['lastNodeSubmissionAttemptAt'])
|
||||
: null,
|
||||
);
|
||||
|
||||
// Helper to migrate legacy queue items to new state system
|
||||
static UploadState _migrateFromLegacyFields(Map<String, dynamic> j) {
|
||||
final error = j['error'] ?? false;
|
||||
final completing = j['completing'] ?? false;
|
||||
|
||||
if (completing) return UploadState.complete;
|
||||
if (error) return UploadState.error;
|
||||
return UploadState.pending;
|
||||
}
|
||||
}
|
||||
|
||||
|
||||
@@ -1,12 +1,34 @@
|
||||
import 'package:flutter/material.dart';
|
||||
import 'package:provider/provider.dart';
|
||||
import '../app_state.dart';
|
||||
import '../models/pending_upload.dart';
|
||||
import '../services/localization_service.dart';
|
||||
import '../state/settings_state.dart';
|
||||
|
||||
class UploadQueueScreen extends StatelessWidget {
|
||||
const UploadQueueScreen({super.key});
|
||||
|
||||
void _showErrorDialog(BuildContext context, PendingUpload upload, LocalizationService locService) {
|
||||
showDialog(
|
||||
context: context,
|
||||
builder: (context) => AlertDialog(
|
||||
title: Text(locService.t('queue.errorDetails')),
|
||||
content: SingleChildScrollView(
|
||||
child: Text(
|
||||
upload.errorMessage ?? 'Unknown error',
|
||||
style: Theme.of(context).textTheme.bodyMedium,
|
||||
),
|
||||
),
|
||||
actions: [
|
||||
TextButton(
|
||||
onPressed: () => Navigator.pop(context),
|
||||
child: Text(locService.ok),
|
||||
),
|
||||
],
|
||||
),
|
||||
);
|
||||
}
|
||||
|
||||
String _getUploadModeDisplayName(UploadMode mode) {
|
||||
final locService = LocalizationService.instance;
|
||||
switch (mode) {
|
||||
@@ -19,6 +41,39 @@ class UploadQueueScreen extends StatelessWidget {
|
||||
}
|
||||
}
|
||||
|
||||
String _getUploadStateText(PendingUpload upload, LocalizationService locService) {
|
||||
switch (upload.uploadState) {
|
||||
case UploadState.pending:
|
||||
return upload.attempts > 0 ? ' (Retry ${upload.attempts + 1})' : '';
|
||||
case UploadState.creatingChangeset:
|
||||
return locService.t('queue.creatingChangeset');
|
||||
case UploadState.uploading:
|
||||
// Only show time remaining and attempt count if there have been node submission failures
|
||||
if (upload.nodeSubmissionAttempts > 0) {
|
||||
final timeLeft = upload.timeUntilAutoClose;
|
||||
if (timeLeft != null && timeLeft.inMinutes > 0) {
|
||||
return '${locService.t('queue.uploading')} (${upload.nodeSubmissionAttempts} attempts, ${timeLeft.inMinutes}m left)';
|
||||
} else {
|
||||
return '${locService.t('queue.uploading')} (${upload.nodeSubmissionAttempts} attempts)';
|
||||
}
|
||||
}
|
||||
return locService.t('queue.uploading');
|
||||
case UploadState.closingChangeset:
|
||||
// Only show time remaining if we've had changeset close failures
|
||||
if (upload.changesetCloseAttempts > 0) {
|
||||
final timeLeft = upload.timeUntilAutoClose;
|
||||
if (timeLeft != null && timeLeft.inMinutes > 0) {
|
||||
return '${locService.t('queue.closingChangeset')} (${timeLeft.inMinutes}m left)';
|
||||
}
|
||||
}
|
||||
return locService.t('queue.closingChangeset');
|
||||
case UploadState.error:
|
||||
return locService.t('queue.error');
|
||||
case UploadState.complete:
|
||||
return locService.t('queue.completing');
|
||||
}
|
||||
}
|
||||
|
||||
Color _getUploadModeColor(UploadMode mode) {
|
||||
switch (mode) {
|
||||
case UploadMode.production:
|
||||
@@ -130,16 +185,23 @@ class UploadQueueScreen extends StatelessWidget {
|
||||
return Card(
|
||||
margin: const EdgeInsets.only(bottom: 8),
|
||||
child: ListTile(
|
||||
leading: Icon(
|
||||
upload.error ? Icons.error : Icons.camera_alt,
|
||||
color: upload.error
|
||||
? Colors.red
|
||||
: _getUploadModeColor(upload.uploadMode),
|
||||
),
|
||||
leading: upload.uploadState == UploadState.error
|
||||
? GestureDetector(
|
||||
onTap: () {
|
||||
_showErrorDialog(context, upload, locService);
|
||||
},
|
||||
child: Icon(
|
||||
Icons.error,
|
||||
color: Colors.red,
|
||||
),
|
||||
)
|
||||
: Icon(
|
||||
Icons.camera_alt,
|
||||
color: _getUploadModeColor(upload.uploadMode),
|
||||
),
|
||||
title: Text(
|
||||
locService.t('queue.cameraWithIndex', params: [(index + 1).toString()]) +
|
||||
(upload.error ? locService.t('queue.error') : "") +
|
||||
(upload.completing ? locService.t('queue.completing') : "")
|
||||
locService.t('queue.itemWithIndex', params: [(index + 1).toString()]) +
|
||||
_getUploadStateText(upload, locService)
|
||||
),
|
||||
subtitle: Text(
|
||||
locService.t('queue.destination', params: [_getUploadModeDisplayName(upload.uploadMode)]) + '\n' +
|
||||
@@ -151,12 +213,12 @@ class UploadQueueScreen extends StatelessWidget {
|
||||
: upload.direction.round().toString()
|
||||
]) + '\n' +
|
||||
locService.t('queue.attempts', params: [upload.attempts.toString()]) +
|
||||
(upload.error ? "\n${locService.t('queue.uploadFailedRetry')}" : "")
|
||||
(upload.uploadState == UploadState.error ? "\n${locService.t('queue.uploadFailedRetry')}" : "")
|
||||
),
|
||||
trailing: Row(
|
||||
mainAxisSize: MainAxisSize.min,
|
||||
children: [
|
||||
if (upload.error && !upload.completing)
|
||||
if (upload.uploadState == UploadState.error)
|
||||
IconButton(
|
||||
icon: const Icon(Icons.refresh),
|
||||
color: Colors.orange,
|
||||
@@ -165,7 +227,7 @@ class UploadQueueScreen extends StatelessWidget {
|
||||
appState.retryUpload(upload);
|
||||
},
|
||||
),
|
||||
if (upload.completing)
|
||||
if (upload.uploadState == UploadState.complete)
|
||||
const Icon(Icons.check_circle, color: Colors.green)
|
||||
else
|
||||
IconButton(
|
||||
|
||||
@@ -203,6 +203,10 @@ class ChangelogService {
|
||||
versionsNeedingMigration.add('1.3.1');
|
||||
}
|
||||
|
||||
if (needsMigration(lastSeenVersion, currentVersion, '1.5.3')) {
|
||||
versionsNeedingMigration.add('1.5.3');
|
||||
}
|
||||
|
||||
// Future versions can be added here
|
||||
// if (needsMigration(lastSeenVersion, currentVersion, '2.0.0')) {
|
||||
// versionsNeedingMigration.add('2.0.0');
|
||||
@@ -268,6 +272,12 @@ class ChangelogService {
|
||||
debugPrint('[ChangelogService] 1.3.1 migration completed: enabled network status indicator');
|
||||
break;
|
||||
|
||||
case '1.5.3':
|
||||
// Migrate upload queue to new two-stage changeset system
|
||||
await appState.migrateUploadQueueToTwoStageSystem();
|
||||
debugPrint('[ChangelogService] 1.5.3 migration completed: migrated upload queue to two-stage system');
|
||||
break;
|
||||
|
||||
// Future migrations can be added here
|
||||
// case '2.0.0':
|
||||
// await appState.doSomethingNew();
|
||||
|
||||
@@ -66,6 +66,22 @@ class NodeCache {
|
||||
}
|
||||
}
|
||||
|
||||
/// Remove the _pending_deletion marker from a specific node (when deletion is cancelled)
|
||||
void removePendingDeletionMarker(int nodeId) {
|
||||
final node = _nodes[nodeId];
|
||||
if (node != null && node.tags.containsKey('_pending_deletion')) {
|
||||
final cleanTags = Map<String, String>.from(node.tags);
|
||||
cleanTags.remove('_pending_deletion');
|
||||
|
||||
_nodes[nodeId] = OsmNode(
|
||||
id: node.id,
|
||||
coord: node.coord,
|
||||
tags: cleanTags,
|
||||
isConstrained: node.isConstrained, // Preserve constraint information
|
||||
);
|
||||
}
|
||||
}
|
||||
|
||||
/// Remove a node by ID from the cache (used for successful deletions)
|
||||
void removeNodeById(int nodeId) {
|
||||
if (_nodes.remove(nodeId) != null) {
|
||||
|
||||
@@ -1,29 +1,60 @@
|
||||
import 'dart:async';
|
||||
import 'package:flutter/foundation.dart';
|
||||
import 'package:http/http.dart' as http;
|
||||
|
||||
import '../models/pending_upload.dart';
|
||||
import '../dev_config.dart';
|
||||
import '../state/settings_state.dart';
|
||||
import 'version_service.dart';
|
||||
import '../app_state.dart';
|
||||
|
||||
class UploadResult {
|
||||
final bool success;
|
||||
final String? errorMessage;
|
||||
final String? changesetId; // For changeset creation results
|
||||
final int? nodeId; // For node operation results
|
||||
final bool changesetNotFound; // Special flag for 404 case during close
|
||||
|
||||
UploadResult.success({
|
||||
this.changesetId,
|
||||
this.nodeId,
|
||||
}) : success = true, errorMessage = null, changesetNotFound = false;
|
||||
|
||||
UploadResult.failure({
|
||||
required this.errorMessage,
|
||||
this.changesetNotFound = false,
|
||||
this.changesetId,
|
||||
this.nodeId,
|
||||
}) : success = false;
|
||||
|
||||
// Legacy compatibility for simulate mode and full upload method
|
||||
bool get isFullySuccessful => success;
|
||||
bool get changesetCreateSuccess => success;
|
||||
bool get nodeOperationSuccess => success;
|
||||
bool get changesetCloseSuccess => success;
|
||||
bool get hasOrphanedChangeset => changesetId != null && !success;
|
||||
}
|
||||
|
||||
class Uploader {
|
||||
Uploader(this.accessToken, this.onSuccess, {this.uploadMode = UploadMode.production});
|
||||
Uploader(this.accessToken, this.onSuccess, this.onError, {this.uploadMode = UploadMode.production});
|
||||
|
||||
final String accessToken;
|
||||
final void Function(int nodeId) onSuccess;
|
||||
final void Function(String errorMessage) onError;
|
||||
final UploadMode uploadMode;
|
||||
|
||||
Future<bool> upload(PendingUpload p) async {
|
||||
// Create changeset (step 1 of 3)
|
||||
Future<UploadResult> createChangeset(PendingUpload p) async {
|
||||
try {
|
||||
print('Uploader: Starting upload for node at ${p.coord.latitude}, ${p.coord.longitude}');
|
||||
debugPrint('[Uploader] Creating changeset for ${p.operation.name} operation...');
|
||||
|
||||
// Safety check: create, modify, and extract operations MUST have profiles
|
||||
if ((p.operation == UploadOperation.create || p.operation == UploadOperation.modify || p.operation == UploadOperation.extract) && p.profile == null) {
|
||||
print('Uploader: ERROR - ${p.operation.name} operation attempted without profile data');
|
||||
return false;
|
||||
final errorMsg = 'Missing profile data for ${p.operation.name} operation';
|
||||
debugPrint('[Uploader] ERROR - $errorMsg');
|
||||
return UploadResult.failure(errorMessage: errorMsg);
|
||||
}
|
||||
|
||||
// 1. open changeset
|
||||
// Generate changeset XML
|
||||
String action;
|
||||
switch (p.operation) {
|
||||
case UploadOperation.create:
|
||||
@@ -39,7 +70,7 @@ class Uploader {
|
||||
action = 'Extract';
|
||||
break;
|
||||
}
|
||||
// Generate appropriate comment based on operation type
|
||||
|
||||
final profileName = p.profile?.name ?? 'surveillance';
|
||||
final csXml = '''
|
||||
<osm>
|
||||
@@ -48,17 +79,38 @@ class Uploader {
|
||||
<tag k="comment" v="$action $profileName surveillance node"/>
|
||||
</changeset>
|
||||
</osm>''';
|
||||
print('Uploader: Creating changeset...');
|
||||
|
||||
debugPrint('[Uploader] Creating changeset...');
|
||||
final csResp = await _put('/api/0.6/changeset/create', csXml);
|
||||
print('Uploader: Changeset response: ${csResp.statusCode} - ${csResp.body}');
|
||||
debugPrint('[Uploader] Changeset response: ${csResp.statusCode} - ${csResp.body}');
|
||||
|
||||
if (csResp.statusCode != 200) {
|
||||
print('Uploader: Failed to create changeset');
|
||||
return false;
|
||||
final errorMsg = 'Failed to create changeset: HTTP ${csResp.statusCode} - ${csResp.body}';
|
||||
debugPrint('[Uploader] $errorMsg');
|
||||
return UploadResult.failure(errorMessage: errorMsg);
|
||||
}
|
||||
|
||||
final csId = csResp.body.trim();
|
||||
print('Uploader: Created changeset ID: $csId');
|
||||
debugPrint('[Uploader] Created changeset ID: $csId');
|
||||
|
||||
return UploadResult.success(changesetId: csId);
|
||||
|
||||
} on TimeoutException catch (e) {
|
||||
final errorMsg = 'Changeset creation timed out after ${kUploadHttpTimeout.inSeconds}s: $e';
|
||||
debugPrint('[Uploader] $errorMsg');
|
||||
return UploadResult.failure(errorMessage: errorMsg);
|
||||
} catch (e) {
|
||||
final errorMsg = 'Changeset creation failed with unexpected error: $e';
|
||||
debugPrint('[Uploader] $errorMsg');
|
||||
return UploadResult.failure(errorMessage: errorMsg);
|
||||
}
|
||||
}
|
||||
|
||||
// 2. create, update, or delete node
|
||||
// Perform node operation (step 2 of 3)
|
||||
Future<UploadResult> performNodeOperation(PendingUpload p, String changesetId) async {
|
||||
try {
|
||||
debugPrint('[Uploader] Performing ${p.operation.name} operation with changeset $changesetId');
|
||||
|
||||
final http.Response nodeResp;
|
||||
final String nodeId;
|
||||
|
||||
@@ -70,34 +122,36 @@ class Uploader {
|
||||
'<tag k="${e.key}" v="${e.value}"/>').join('\n ');
|
||||
final nodeXml = '''
|
||||
<osm>
|
||||
<node changeset="$csId" lat="${p.coord.latitude}" lon="${p.coord.longitude}">
|
||||
<node changeset="$changesetId" lat="${p.coord.latitude}" lon="${p.coord.longitude}">
|
||||
$tagsXml
|
||||
</node>
|
||||
</osm>''';
|
||||
print('Uploader: Creating new node...');
|
||||
debugPrint('[Uploader] Creating new node...');
|
||||
nodeResp = await _put('/api/0.6/node/create', nodeXml);
|
||||
nodeId = nodeResp.body.trim();
|
||||
break;
|
||||
|
||||
case UploadOperation.modify:
|
||||
// First, fetch the current node to get its version
|
||||
print('Uploader: Fetching current node ${p.originalNodeId} to get version...');
|
||||
debugPrint('[Uploader] Fetching current node ${p.originalNodeId} to get version...');
|
||||
final currentNodeResp = await _get('/api/0.6/node/${p.originalNodeId}');
|
||||
print('Uploader: Current node response: ${currentNodeResp.statusCode}');
|
||||
debugPrint('[Uploader] Current node response: ${currentNodeResp.statusCode}');
|
||||
if (currentNodeResp.statusCode != 200) {
|
||||
print('Uploader: Failed to fetch current node');
|
||||
return false;
|
||||
final errorMsg = 'Failed to fetch node ${p.originalNodeId}: HTTP ${currentNodeResp.statusCode} - ${currentNodeResp.body}';
|
||||
debugPrint('[Uploader] $errorMsg');
|
||||
return UploadResult.failure(errorMessage: errorMsg, changesetId: changesetId);
|
||||
}
|
||||
|
||||
// Parse version from the response XML
|
||||
final currentNodeXml = currentNodeResp.body;
|
||||
final versionMatch = RegExp(r'version="(\d+)"').firstMatch(currentNodeXml);
|
||||
if (versionMatch == null) {
|
||||
print('Uploader: Could not parse version from current node XML');
|
||||
return false;
|
||||
final errorMsg = 'Could not parse version from node XML: ${currentNodeXml.length > 200 ? currentNodeXml.substring(0, 200) + "..." : currentNodeXml}';
|
||||
debugPrint('[Uploader] $errorMsg');
|
||||
return UploadResult.failure(errorMessage: errorMsg, changesetId: changesetId);
|
||||
}
|
||||
final currentVersion = versionMatch.group(1)!;
|
||||
print('Uploader: Current node version: $currentVersion');
|
||||
debugPrint('[Uploader] Current node version: $currentVersion');
|
||||
|
||||
// Update existing node with version
|
||||
final mergedTags = p.getCombinedTags();
|
||||
@@ -105,86 +159,181 @@ class Uploader {
|
||||
'<tag k="${e.key}" v="${e.value}"/>').join('\n ');
|
||||
final nodeXml = '''
|
||||
<osm>
|
||||
<node changeset="$csId" id="${p.originalNodeId}" version="$currentVersion" lat="${p.coord.latitude}" lon="${p.coord.longitude}">
|
||||
<node changeset="$changesetId" id="${p.originalNodeId}" version="$currentVersion" lat="${p.coord.latitude}" lon="${p.coord.longitude}">
|
||||
$tagsXml
|
||||
</node>
|
||||
</osm>''';
|
||||
print('Uploader: Updating node ${p.originalNodeId}...');
|
||||
debugPrint('[Uploader] Updating node ${p.originalNodeId}...');
|
||||
nodeResp = await _put('/api/0.6/node/${p.originalNodeId}', nodeXml);
|
||||
nodeId = p.originalNodeId.toString();
|
||||
break;
|
||||
|
||||
case UploadOperation.delete:
|
||||
// First, fetch the current node to get its version and coordinates
|
||||
print('Uploader: Fetching current node ${p.originalNodeId} for deletion...');
|
||||
// First, fetch the current node to get its version
|
||||
debugPrint('[Uploader] Fetching current node ${p.originalNodeId} for deletion...');
|
||||
final currentNodeResp = await _get('/api/0.6/node/${p.originalNodeId}');
|
||||
print('Uploader: Current node response: ${currentNodeResp.statusCode}');
|
||||
debugPrint('[Uploader] Current node response: ${currentNodeResp.statusCode}');
|
||||
if (currentNodeResp.statusCode != 200) {
|
||||
print('Uploader: Failed to fetch current node');
|
||||
return false;
|
||||
final errorMsg = 'Failed to fetch node ${p.originalNodeId} for deletion: HTTP ${currentNodeResp.statusCode} - ${currentNodeResp.body}';
|
||||
debugPrint('[Uploader] $errorMsg');
|
||||
return UploadResult.failure(errorMessage: errorMsg, changesetId: changesetId);
|
||||
}
|
||||
|
||||
// Parse version and tags from the response XML
|
||||
// Parse version from the response XML
|
||||
final currentNodeXml = currentNodeResp.body;
|
||||
final versionMatch = RegExp(r'version="(\d+)"').firstMatch(currentNodeXml);
|
||||
if (versionMatch == null) {
|
||||
print('Uploader: Could not parse version from current node XML');
|
||||
return false;
|
||||
final errorMsg = 'Could not parse version from node XML for deletion: ${currentNodeXml.length > 200 ? currentNodeXml.substring(0, 200) + "..." : currentNodeXml}';
|
||||
debugPrint('[Uploader] $errorMsg');
|
||||
return UploadResult.failure(errorMessage: errorMsg, changesetId: changesetId);
|
||||
}
|
||||
final currentVersion = versionMatch.group(1)!;
|
||||
print('Uploader: Current node version: $currentVersion');
|
||||
debugPrint('[Uploader] Current node version: $currentVersion');
|
||||
|
||||
// Delete node - OSM requires current tags and coordinates
|
||||
// Delete node - OSM requires current coordinates but empty tags
|
||||
final nodeXml = '''
|
||||
<osm>
|
||||
<node changeset="$csId" id="${p.originalNodeId}" version="$currentVersion" lat="${p.coord.latitude}" lon="${p.coord.longitude}">
|
||||
<node changeset="$changesetId" id="${p.originalNodeId}" version="$currentVersion" lat="${p.coord.latitude}" lon="${p.coord.longitude}">
|
||||
</node>
|
||||
</osm>''';
|
||||
print('Uploader: Deleting node ${p.originalNodeId}...');
|
||||
debugPrint('[Uploader] Deleting node ${p.originalNodeId}...');
|
||||
nodeResp = await _delete('/api/0.6/node/${p.originalNodeId}', nodeXml);
|
||||
nodeId = p.originalNodeId.toString();
|
||||
break;
|
||||
|
||||
case UploadOperation.extract:
|
||||
// Extract creates a new node with tags from the original node
|
||||
// The new node is created at the session's target coordinates
|
||||
final mergedTags = p.getCombinedTags();
|
||||
final tagsXml = mergedTags.entries.map((e) =>
|
||||
'<tag k="${e.key}" v="${e.value}"/>').join('\n ');
|
||||
final nodeXml = '''
|
||||
<osm>
|
||||
<node changeset="$csId" lat="${p.coord.latitude}" lon="${p.coord.longitude}">
|
||||
<node changeset="$changesetId" lat="${p.coord.latitude}" lon="${p.coord.longitude}">
|
||||
$tagsXml
|
||||
</node>
|
||||
</osm>''';
|
||||
print('Uploader: Extracting node from ${p.originalNodeId} to create new node...');
|
||||
debugPrint('[Uploader] Extracting node from ${p.originalNodeId} to create new node...');
|
||||
nodeResp = await _put('/api/0.6/node/create', nodeXml);
|
||||
nodeId = nodeResp.body.trim();
|
||||
break;
|
||||
}
|
||||
|
||||
print('Uploader: Node response: ${nodeResp.statusCode} - ${nodeResp.body}');
|
||||
debugPrint('[Uploader] Node response: ${nodeResp.statusCode} - ${nodeResp.body}');
|
||||
if (nodeResp.statusCode != 200) {
|
||||
print('Uploader: Failed to ${p.operation.name} node');
|
||||
return false;
|
||||
final errorMsg = 'Failed to ${p.operation.name} node: HTTP ${nodeResp.statusCode} - ${nodeResp.body}';
|
||||
debugPrint('[Uploader] $errorMsg');
|
||||
// Note: changeset is included so caller knows to close it
|
||||
return UploadResult.failure(errorMessage: errorMsg, changesetId: changesetId);
|
||||
}
|
||||
print('Uploader: ${p.operation.name.capitalize()} node ID: $nodeId');
|
||||
|
||||
// 3. close changeset
|
||||
print('Uploader: Closing changeset...');
|
||||
final closeResp = await _put('/api/0.6/changeset/$csId/close', '');
|
||||
print('Uploader: Close response: ${closeResp.statusCode}');
|
||||
|
||||
print('Uploader: Upload successful!');
|
||||
|
||||
final nodeIdInt = int.parse(nodeId);
|
||||
debugPrint('[Uploader] ${p.operation.name.capitalize()} node ID: $nodeIdInt');
|
||||
|
||||
// Notify success callback for immediate UI feedback
|
||||
onSuccess(nodeIdInt);
|
||||
return true;
|
||||
|
||||
return UploadResult.success(nodeId: nodeIdInt);
|
||||
|
||||
} on TimeoutException catch (e) {
|
||||
final errorMsg = 'Node operation timed out after ${kUploadHttpTimeout.inSeconds}s: $e';
|
||||
debugPrint('[Uploader] $errorMsg');
|
||||
return UploadResult.failure(errorMessage: errorMsg, changesetId: changesetId);
|
||||
} catch (e) {
|
||||
print('Uploader: Upload failed with error: $e');
|
||||
return false;
|
||||
final errorMsg = 'Node operation failed with unexpected error: $e';
|
||||
debugPrint('[Uploader] $errorMsg');
|
||||
return UploadResult.failure(errorMessage: errorMsg, changesetId: changesetId);
|
||||
}
|
||||
}

  // Close changeset (step 3 of 3)
  Future<UploadResult> closeChangeset(String changesetId) async {
    try {
      debugPrint('[Uploader] Closing changeset $changesetId...');
      final closeResp = await _put('/api/0.6/changeset/$changesetId/close', '');
      debugPrint('[Uploader] Close response: ${closeResp.statusCode} - ${closeResp.body}');

      switch (closeResp.statusCode) {
        case 200:
          debugPrint('[Uploader] Changeset closed successfully');
          return UploadResult.success();

        case 409:
          // Conflict - check if changeset is already closed
          if (closeResp.body.toLowerCase().contains('already closed') ||
              closeResp.body.toLowerCase().contains('closed at')) {
            debugPrint('[Uploader] Changeset already closed');
            return UploadResult.success();
          } else {
            // Other conflict - keep retrying
            final errorMsg = 'Changeset close conflict: HTTP ${closeResp.statusCode} - ${closeResp.body}';
            return UploadResult.failure(errorMessage: errorMsg);
          }

        case 404:
          // Changeset not found - this suggests the upload may not have worked
          debugPrint('[Uploader] Changeset not found - marking for full retry');
          return UploadResult.failure(
            errorMessage: 'Changeset not found: HTTP 404',
            changesetNotFound: true,
          );

        default:
          // Other errors - keep retrying
          final errorMsg = 'Failed to close changeset $changesetId: HTTP ${closeResp.statusCode} - ${closeResp.body}';
          return UploadResult.failure(errorMessage: errorMsg);
      }
    } on TimeoutException catch (e) {
      final errorMsg = 'Changeset close timed out after ${kUploadHttpTimeout.inSeconds}s: $e';
      return UploadResult.failure(errorMessage: errorMsg);
    } catch (e) {
      final errorMsg = 'Changeset close failed with unexpected error: $e';
      return UploadResult.failure(errorMessage: errorMsg);
    }
  }

  // Legacy full upload method (primarily for simulate mode compatibility)
  Future<UploadResult> upload(PendingUpload p) async {
    debugPrint('[Uploader] Starting full upload for ${p.operation.name} at ${p.coord.latitude}, ${p.coord.longitude}');

    // Step 1: Create changeset
    final createResult = await createChangeset(p);
    if (!createResult.success) {
      onError(createResult.errorMessage!);
      return createResult;
    }

    final changesetId = createResult.changesetId!;

    // Step 2: Perform node operation
    final nodeResult = await performNodeOperation(p, changesetId);
    if (!nodeResult.success) {
      onError(nodeResult.errorMessage!);
      // Note: nodeResult includes changesetId for caller to close if needed
      return nodeResult;
    }

    // Step 3: Close changeset
    final closeResult = await closeChangeset(changesetId);
    if (!closeResult.success) {
      // Node operation succeeded but changeset close failed
      // Don't call onError since node operation worked
      debugPrint('[Uploader] Node operation succeeded but changeset close failed');
      return UploadResult.failure(
        errorMessage: closeResult.errorMessage,
        changesetNotFound: closeResult.changesetNotFound,
        changesetId: changesetId,
        nodeId: nodeResult.nodeId,
      );
    }

    // All steps successful
    debugPrint('[Uploader] Full upload completed successfully');
    return UploadResult.success(
      changesetId: changesetId,
      nodeId: nodeResult.nodeId,
    );
  }
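
`UploadResult` is used throughout the three stage methods above but is not part of this hunk. A minimal sketch consistent with how it is constructed here (factory names and fields are inferred from the calls above; the real class lives elsewhere in the repo and may differ):

```dart
// Sketch only - inferred from usage in this diff, not the actual source.
class UploadResult {
  final bool success;
  final int? nodeId;            // set when the node operation succeeded
  final String? changesetId;    // returned so the caller can close an orphaned changeset
  final bool changesetNotFound; // HTTP 404 on close - triggers a full retry
  final String? errorMessage;

  UploadResult._({
    required this.success,
    this.nodeId,
    this.changesetId,
    this.changesetNotFound = false,
    this.errorMessage,
  });

  factory UploadResult.success({int? nodeId, String? changesetId}) =>
      UploadResult._(success: true, nodeId: nodeId, changesetId: changesetId);

  factory UploadResult.failure({
    String? errorMessage,
    String? changesetId,
    int? nodeId,
    bool changesetNotFound = false,
  }) =>
      UploadResult._(
        success: false,
        errorMessage: errorMessage,
        changesetId: changesetId,
        nodeId: nodeId,
        changesetNotFound: changesetNotFound,
      );
}
```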

  String get _host {
    switch (uploadMode) {
      case UploadMode.sandbox:

@@ -198,25 +347,25 @@ class Uploader {

  Future<http.Response> _get(String path) => http.get(
        Uri.https(_host, path),
        headers: _headers,
      ).timeout(kUploadHttpTimeout);

  Future<http.Response> _post(String path, String body) => http.post(
        Uri.https(_host, path),
        headers: _headers,
        body: body,
      ).timeout(kUploadHttpTimeout);

  Future<http.Response> _put(String path, String body) => http.put(
        Uri.https(_host, path),
        headers: _headers,
        body: body,
      ).timeout(kUploadHttpTimeout);

  Future<http.Response> _delete(String path, String body) => http.delete(
        Uri.https(_host, path),
        headers: _headers,
        body: body,
      ).timeout(kUploadHttpTimeout);

  Map<String, String> get _headers => {
        'Authorization': 'Bearer $accessToken',
@@ -228,5 +377,4 @@ extension StringExtension on String {
  String capitalize() {
    return "${this[0].toUpperCase()}${substring(1)}";
  }
}
@@ -172,19 +172,33 @@ class UploadQueueState extends ChangeNotifier {
  }

  void clearQueue() {
    // Clean up all pending nodes from cache before clearing queue
    for (final upload in _queue) {
      _cleanupPendingNodeFromCache(upload);
    }

    _queue.clear();
    _saveQueue();

    // Notify node provider to update the map
    CameraProviderWithCache.instance.notifyListeners();
    notifyListeners();
  }

  void removeFromQueue(PendingUpload upload) {
    // Clean up pending node from cache before removing from queue
    _cleanupPendingNodeFromCache(upload);

    _queue.remove(upload);
    _saveQueue();

    // Notify node provider to update the map
    CameraProviderWithCache.instance.notifyListeners();
    notifyListeners();
  }

  void retryUpload(PendingUpload upload) {
    upload.clearError();
    upload.attempts = 0;
    _saveQueue();
    notifyListeners();
@@ -208,53 +222,283 @@ class UploadQueueState extends ChangeNotifier {
        return;
      }

      // Find next item to process based on state
      final pendingItems = _queue.where((pu) => pu.uploadState == UploadState.pending).toList();
      final creatingChangesetItems = _queue.where((pu) => pu.uploadState == UploadState.creatingChangeset).toList();
      final uploadingItems = _queue.where((pu) => pu.uploadState == UploadState.uploading).toList();
      final closingItems = _queue.where((pu) => pu.uploadState == UploadState.closingChangeset).toList();

      // Process any expired items
      for (final uploadingItem in uploadingItems) {
        if (uploadingItem.hasChangesetExpired) {
          debugPrint('[UploadQueue] Changeset expired during node submission - marking as failed');
          uploadingItem.setError('Could not submit node within 59 minutes - changeset expired');
          _saveQueue();
          notifyListeners();
        }
      }

      for (final closingItem in closingItems) {
        if (closingItem.hasChangesetExpired) {
          debugPrint('[UploadQueue] Changeset expired during close - trusting OSM auto-close (node was submitted successfully)');
          _markAsCompleting(closingItem, submittedNodeId: closingItem.submittedNodeId!);
          // Continue processing loop - don't return here
        }
      }

      // Find next item to process (process in stage order)
      PendingUpload? item;
      if (pendingItems.isNotEmpty) {
        item = pendingItems.first;
      } else if (creatingChangesetItems.isNotEmpty) {
        // Already in progress, skip
        return;
      } else if (uploadingItems.isNotEmpty) {
        // Check if any uploading items are ready for retry
        final readyToRetry = uploadingItems.where((ui) =>
          !ui.hasChangesetExpired && ui.isReadyForNodeSubmissionRetry
        ).toList();

        if (readyToRetry.isNotEmpty) {
          item = readyToRetry.first;
        }
      } else {
        // No active items, check if any changeset close items are ready for retry
        final readyToRetry = closingItems.where((ci) =>
          !ci.hasChangesetExpired && ci.isReadyForChangesetCloseRetry
        ).toList();

        if (readyToRetry.isNotEmpty) {
          item = readyToRetry.first;
        }
      }

      if (item == null) {
        // No items ready for processing - check if queue is effectively empty
        final hasActiveItems = _queue.any((pu) =>
          pu.uploadState == UploadState.pending ||
          pu.uploadState == UploadState.creatingChangeset ||
          (pu.uploadState == UploadState.uploading && !pu.hasChangesetExpired) ||
          (pu.uploadState == UploadState.closingChangeset && !pu.hasChangesetExpired)
        );

        if (!hasActiveItems) {
          debugPrint('[UploadQueue] No active items remaining, stopping uploader');
          _uploadTimer?.cancel();
        }
        return; // Nothing to process right now
      }

      // Retrieve access after every tick (accounts for re-login)
      final access = await getAccessToken();
      if (access == null) return; // not logged in

      debugPrint('[UploadQueue] Processing item in state: ${item.uploadState} with uploadMode: ${item.uploadMode}');

      if (item.uploadState == UploadState.pending) {
        await _processCreateChangeset(item, access);
      } else if (item.uploadState == UploadState.creatingChangeset) {
        // Already in progress, skip (shouldn't happen due to filtering above)
        debugPrint('[UploadQueue] Changeset creation already in progress, skipping');
        return;
      } else if (item.uploadState == UploadState.uploading) {
        await _processNodeOperation(item, access);
      } else if (item.uploadState == UploadState.closingChangeset) {
        await _processChangesetClose(item, access);
      }
    });
  }
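
The `UploadState` enum driving this dispatch is not shown in the diff. A sketch of the state machine implied by the code above; any value beyond the four referenced here is an assumption:

```dart
// Sketch only - the real enum lives in the PendingUpload model and may differ.
enum UploadState {
  pending,            // queued locally, stage 1 not started
  creatingChangeset,  // stage 1 in flight
  uploading,          // stage 2: changeset open, node submission pending or retrying
  closingChangeset,   // stage 3: node submitted, changeset still open
  complete,           // all stages done (or OSM auto-close trusted)
}
```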

  // Process changeset creation (step 1 of 3)
  Future<void> _processCreateChangeset(PendingUpload item, String access) async {
    item.markAsCreatingChangeset();
    _saveQueue();
    notifyListeners(); // Show "Creating changeset..." immediately

    if (item.uploadMode == UploadMode.simulate) {
      // Simulate successful upload without calling real API
      debugPrint('[UploadQueue] Simulating changeset creation (no real API call)');
      await Future.delayed(const Duration(milliseconds: 500)); // Simulate network delay

      // Move to node operation phase
      item.markChangesetCreated('simulate-changeset-${DateTime.now().millisecondsSinceEpoch}');
      _saveQueue();
      notifyListeners();
      return;
    }

    // Real changeset creation
    debugPrint('[UploadQueue] Creating changeset for ${item.operation.name} operation');
    final up = Uploader(access, (nodeId) {}, (errorMessage) {}, uploadMode: item.uploadMode);
    final result = await up.createChangeset(item);

    if (result.success) {
      // Changeset created successfully - move to node operation phase
      debugPrint('[UploadQueue] Changeset ${result.changesetId} created successfully');
      item.markChangesetCreated(result.changesetId!);
      _saveQueue();
      notifyListeners(); // Show "Uploading node..." next
    } else {
      // Changeset creation failed
      item.attempts++;
      _saveQueue();
      notifyListeners(); // Show attempt count immediately

      if (item.attempts >= 3) {
        item.setError(result.errorMessage ?? 'Changeset creation failed after 3 attempts');
        _saveQueue();
        notifyListeners(); // Show error state immediately
      } else {
        // Reset to pending for retry
        item.uploadState = UploadState.pending;
        _saveQueue();
        notifyListeners(); // Show pending state for retry
        await Future.delayed(const Duration(seconds: 20));
      }
    }
  }
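
`Uploader.createChangeset` itself is not in this hunk. For orientation, stage 1 boils down to a PUT against the OSM 0.6 changeset API that returns the new changeset id as plain text. A rough sketch; the tag values and helper signature are illustrative, not the app's actual ones:

```dart
import 'package:http/http.dart' as http;

// Illustrative sketch of stage 1; the real createChangeset builds its own
// comment/created_by tags and goes through the Uploader's _put helper.
Future<String> createChangesetSketch(
    Future<http.Response> Function(String path, String body) put) async {
  const changesetXml = '''
<osm>
  <changeset>
    <tag k="created_by" v="example-client"/>
    <tag k="comment" v="Survey a surveillance camera"/>
  </changeset>
</osm>''';
  final resp = await put('/api/0.6/changeset/create', changesetXml);
  if (resp.statusCode != 200) {
    throw Exception('Changeset creation failed: HTTP ${resp.statusCode}');
  }
  return resp.body.trim(); // body is just the numeric changeset id
}
```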

  // Process node operation (step 2 of 3)
  Future<void> _processNodeOperation(PendingUpload item, String access) async {
    if (item.changesetId == null) {
      debugPrint('[UploadQueue] ERROR: No changeset ID for node operation');
      item.setError('Missing changeset ID for node operation');
      _saveQueue();
      notifyListeners();
      return;
    }

    // Check if 59-minute window has expired
    if (item.hasChangesetExpired) {
      debugPrint('[UploadQueue] Changeset expired, could not submit node within 59 minutes');
      item.setError('Could not submit node within 59 minutes - changeset expired');
      _saveQueue();
      notifyListeners();
      return;
    }

    debugPrint('[UploadQueue] Processing node operation with changeset ${item.changesetId} (attempt ${item.nodeSubmissionAttempts + 1})');

    if (item.uploadMode == UploadMode.simulate) {
      // Simulate successful node operation without calling real API
      debugPrint('[UploadQueue] Simulating node operation (no real API call)');
      await Future.delayed(const Duration(milliseconds: 500)); // Simulate network delay

      // Store simulated node ID and move to changeset close phase
      item.submittedNodeId = DateTime.now().millisecondsSinceEpoch;
      item.markNodeOperationComplete();
      _saveQueue();
      notifyListeners();
      return;
    }

    // Real node operation
    final up = Uploader(access, (nodeId) {
      // This callback is called when node operation succeeds
      item.submittedNodeId = nodeId;
    }, (errorMessage) {
      // Error handling is done below
    }, uploadMode: item.uploadMode);

    final result = await up.performNodeOperation(item, item.changesetId!);

    item.incrementNodeSubmissionAttempts(); // Record this attempt
    _saveQueue();
    notifyListeners(); // Show attempt count immediately

    if (result.success) {
      // Node operation succeeded - move to changeset close phase
      debugPrint('[UploadQueue] Node operation succeeded after ${item.nodeSubmissionAttempts} attempts, node ID: ${result.nodeId}');
      item.submittedNodeId = result.nodeId;
      item.markNodeOperationComplete();
      _saveQueue();
      notifyListeners(); // Show "Closing changeset..." next
    } else {
      // Node operation failed - will retry within 59-minute window
      debugPrint('[UploadQueue] Node operation failed (attempt ${item.nodeSubmissionAttempts}): ${result.errorMessage}');

      // Check if we have time for another retry
      if (item.hasChangesetExpired) {
        debugPrint('[UploadQueue] Changeset expired during retry, marking as failed');
        item.setError('Could not submit node within 59 minutes - ${result.errorMessage}');
        _saveQueue();
        notifyListeners();
      } else {
        // Still have time, will retry after backoff delay
        final nextDelay = item.nextNodeSubmissionRetryDelay;
        final timeLeft = item.timeUntilAutoClose;
        debugPrint('[UploadQueue] Will retry node submission in ${nextDelay}, ${timeLeft?.inMinutes}m remaining');
        // No state change needed - attempt count was already updated above
      }
    }
  }

  // Process changeset close operation (step 3 of 3)
  Future<void> _processChangesetClose(PendingUpload item, String access) async {
    if (item.changesetId == null) {
      debugPrint('[UploadQueue] ERROR: No changeset ID for closing');
      item.setError('Missing changeset ID');
      _saveQueue();
      notifyListeners();
      return;
    }

    // Check if 59-minute window has expired - if so, mark as complete (trust OSM auto-close)
    if (item.hasChangesetExpired) {
      debugPrint('[UploadQueue] Changeset expired - trusting OSM auto-close (node was submitted successfully)');
      _markAsCompleting(item, submittedNodeId: item.submittedNodeId!);
      return;
    }

    debugPrint('[UploadQueue] Attempting to close changeset ${item.changesetId} (attempt ${item.changesetCloseAttempts + 1})');

    if (item.uploadMode == UploadMode.simulate) {
      // Simulate successful changeset close without calling real API
      debugPrint('[UploadQueue] Simulating changeset close (no real API call)');
      await Future.delayed(const Duration(milliseconds: 300)); // Simulate network delay

      // Mark as complete
      _markAsCompleting(item, submittedNodeId: item.submittedNodeId!);
      return;
    }

    // Real changeset close
    final up = Uploader(access, (nodeId) {}, (errorMessage) {}, uploadMode: item.uploadMode);
    final result = await up.closeChangeset(item.changesetId!);

    item.incrementChangesetCloseAttempts(); // This records the attempt time
    _saveQueue();
    notifyListeners(); // Show attempt count immediately

    if (result.success) {
      // Changeset closed successfully
      debugPrint('[UploadQueue] Changeset close succeeded after ${item.changesetCloseAttempts} attempts');
      _markAsCompleting(item, submittedNodeId: item.submittedNodeId!);
      // _markAsCompleting handles its own save/notify
    } else if (result.changesetNotFound) {
      // Changeset not found - this suggests the upload may not have worked, start over with full retry
      debugPrint('[UploadQueue] Changeset not found during close, marking for full retry');
      item.setError(result.errorMessage ?? 'Changeset not found');
      _saveQueue();
      notifyListeners(); // Show error state immediately
    } else {
      // Changeset close failed - will retry after exponential backoff delay
      // Note: This will NEVER error out - will keep trying until 59-minute window expires
      final nextDelay = item.nextChangesetCloseRetryDelay;
      final timeLeft = item.timeUntilAutoClose;
      debugPrint('[UploadQueue] Changeset close failed (attempt ${item.changesetCloseAttempts}), will retry in ${nextDelay}, ${timeLeft?.inMinutes}m remaining');
      debugPrint('[UploadQueue] Error: ${result.errorMessage}');
      // No additional state change needed - attempt count was already updated above
    }
  }
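
The timing getters used above (`hasChangesetExpired`, `isReadyForChangesetCloseRetry`, `nextChangesetCloseRetryDelay`, `timeUntilAutoClose`) belong to the `PendingUpload` model, which is not part of this hunk. One way they could be implemented, matching the 59-minute window and exponential backoff described in DEVELOPER.md; the field names here are assumptions:

```dart
import 'dart:math' as math;

// Sketch only - field names (changesetCreatedAt, lastCloseAttemptAt) are
// assumptions; the real PendingUpload model may track timing differently.
class RetryTimingSketch {
  DateTime? changesetCreatedAt;  // recorded when stage 1 succeeds
  DateTime? lastCloseAttemptAt;  // recorded by incrementChangesetCloseAttempts()
  int changesetCloseAttempts = 0;

  // OSM auto-closes idle changesets after 60 minutes; stop trying at 59.
  Duration? get timeUntilAutoClose {
    if (changesetCreatedAt == null) return null;
    final deadline = changesetCreatedAt!.add(const Duration(minutes: 59));
    final left = deadline.difference(DateTime.now());
    return left.isNegative ? Duration.zero : left;
  }

  bool get hasChangesetExpired => timeUntilAutoClose == Duration.zero;

  // Exponential backoff: 20s, 40s, 80s, ... capped by the remaining window.
  Duration get nextChangesetCloseRetryDelay {
    final exponent = math.min(changesetCloseAttempts, 6);
    final base = Duration(seconds: 20 * (1 << exponent));
    final remaining = timeUntilAutoClose ?? base;
    return base < remaining ? base : remaining;
  }

  bool get isReadyForChangesetCloseRetry =>
      lastCloseAttemptAt == null ||
      DateTime.now().difference(lastCloseAttemptAt!) >= nextChangesetCloseRetryDelay;
}
```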

  void stopUploader() {
    _uploadTimer?.cancel();
  }

  // Mark an item as completing (shows checkmark) and schedule removal after 1 second
  void _markAsCompleting(PendingUpload item, {int? submittedNodeId, int? simulatedNodeId}) {
    item.markAsComplete();

    // Store the submitted node ID for cleanup purposes
    if (submittedNodeId != null) {

@@ -357,6 +601,28 @@ class UploadQueueState extends ChangeNotifier {
    return '${start.round()}-${end.round()}';
  }
  // Clean up pending nodes from cache when queue items are deleted/cleared
  void _cleanupPendingNodeFromCache(PendingUpload upload) {
    if (upload.isDeletion) {
      // For deletions: remove the _pending_deletion marker from the original node
      if (upload.originalNodeId != null) {
        NodeCache.instance.removePendingDeletionMarker(upload.originalNodeId!);
      }
    } else if (upload.isEdit) {
      // For edits: remove both the temp node and the _pending_edit marker from original
      NodeCache.instance.removeTempNodesByCoordinate(upload.coord);
      if (upload.originalNodeId != null) {
        NodeCache.instance.removePendingEditMarker(upload.originalNodeId!);
      }
    } else if (upload.operation == UploadOperation.extract) {
      // For extracts: remove the temp node (leave original unchanged)
      NodeCache.instance.removeTempNodesByCoordinate(upload.coord);
    } else {
      // For creates: remove the temp node
      NodeCache.instance.removeTempNodesByCoordinate(upload.coord);
    }
  }

  // ---------- Queue persistence ----------
  Future<void> _saveQueue() async {
    final prefs = await SharedPreferences.getInstance();

@@ -374,6 +640,12 @@ class UploadQueueState extends ChangeNotifier {
      ..addAll(list.map((e) => PendingUpload.fromJson(e)));
  }

  // Public method for migration purposes
  Future<void> reloadQueue() async {
    await _loadQueue();
    notifyListeners();
  }

  @override
  void dispose() {
    _uploadTimer?.cancel();
@@ -1,7 +1,7 @@
name: deflockapp
description: Map public surveillance infrastructure with OpenStreetMap
publish_to: "none"
version: 1.5.3+24 # The thing after the + is the version code, incremented with each release

environment:
  sdk: ">=3.5.0 <4.0.0" # oauth2_client 4.x needs Dart 3.5+