DysonNetwork Drive - Persistent Task System
A comprehensive, production-ready generic task system with support for file uploads, background operations, real-time progress tracking, and dynamic notifications powered by RingService.
When using with the Gateway, use the /drive to replace /api.
The realtime messages are from the websocket gateway.
🚀 Features
Core Task Features
- Generic Task System: Support for various background operations beyond file uploads
- Resumable Uploads: Pause and resume uploads across app restarts
- Chunked Uploads: Efficient large file handling with configurable chunk sizes
- Progress Persistence: Task state survives server restarts and network interruptions
- Duplicate Detection: Automatic detection of already uploaded files via hash checking
- Quota Management: Integration with user quota and billing systems
- Pool-based Storage: Support for multiple storage pools with different policies
Real-Time Features
- Live Progress Updates: WebSocket-based real-time progress tracking for all task types
- Task Lifecycle Notifications: Instant notifications for task creation, progress, completion, and failure
- Failure Alerts: Immediate notification of task failures with error details
- Push Notifications: Cross-platform push notifications for mobile/desktop
- Smart Throttling: Optimized update frequency to prevent network spam
Management Features
- Task Listing: Comprehensive API for listing and filtering all task types
- Task Statistics: Detailed analytics and usage statistics for all operations
- Cleanup Operations: Automatic and manual cleanup of failed/stale tasks
- Ownership Verification: Secure access control for all operations
- Detailed Task Info: Rich metadata including progress, parameters, and results
- Task Lifecycle Management: Full control over task states (pause, resume, cancel)
📋 Table of Contents
- Quick Start
- API Reference
- WebSocket Events
- Database Schema
- Configuration
- Usage Examples
- Error Handling
- Performance
- Security
- Troubleshooting
🚀 Quick Start
1. Create Upload Task
POST /api/files/upload/create
Content-Type: application/json
{
"fileName": "large-video.mp4",
"fileSize": 1073741824,
"contentType": "video/mp4",
"poolId": "550e8400-e29b-41d4-a716-446655440000",
"chunkSize": 8388608
}
Response:
{
"taskId": "abc123def456ghi789",
"chunkSize": 8388608,
"chunksCount": 128
}
2. Upload Chunks
POST /api/files/upload/chunk/abc123def456ghi789/0
Content-Type: multipart/form-data
(chunk data as form file)
3. Complete Upload
POST /api/files/upload/complete/abc123def456ghi789
📚 API Reference
Upload Task Management
POST /api/files/upload/create
Creates a new resumable upload task.
Request Body:
{
"fileName": "string",
"fileSize": "long",
"contentType": "string",
"poolId": "uuid",
"bundleId": "uuid",
"chunkSize": "long",
"encryptPassword": "string",
"expiredAt": "datetime",
"hash": "string"
}
Field Descriptions:
fileName: Required - Name of the filefileSize: Required - Size in bytescontentType: Required - MIME typepoolId: Optional - Storage pool IDbundleId: Optional - File bundle IDchunkSize: Optional - Chunk size (default: 5MB)encryptPassword: Optional - Encryption passwordexpiredAt: Optional - Expiration datehash: Required - File hash for deduplication
Response:
{
"fileExists": false,
"taskId": "string",
"chunkSize": 5242880,
"chunksCount": 10
}
POST /api/files/upload/chunk/{taskId}/{chunkIndex}
Uploads a specific chunk of the file.
Parameters:
taskId: Upload task identifierchunkIndex: Zero-based chunk index
Request: Multipart form data with chunk file
Response: 200 OK or 409 Conflict (chunk already uploaded)
POST /api/files/upload/complete/{taskId}
Completes the upload and processes the file.
Response: CloudFile object with file metadata
Task Information & Management
GET /api/files/upload/tasks
Lists user's upload tasks with filtering and pagination.
Query Parameters:
status: Filter by status (InProgress,Completed,Failed,Expired)sortBy: Sort field (filename,filesize,createdAt,updatedAt,lastActivity)sortDescending: Sort direction (default:true)offset: Pagination offset (default:0)limit: Page size (default:50)
Response Headers:
X-Total: Total number of tasks matching filters
GET /api/files/upload/progress/{taskId}
Gets current progress for a specific task.
GET /api/files/upload/resume/{taskId}
Gets task information needed to resume an interrupted upload.
DELETE /api/files/upload/task/{taskId}
Cancels an upload task and cleans up resources.
GET /api/files/upload/tasks/{taskId}/details
Gets comprehensive details about a specific task including:
- Full task metadata
- Pool and bundle information
- Estimated time remaining
- Current upload speed
GET /api/files/upload/stats
Gets upload statistics for the current user.
Response:
{
"totalTasks": 25,
"inProgressTasks": 3,
"completedTasks": 20,
"failedTasks": 1,
"expiredTasks": 1,
"totalUploadedBytes": 5368709120,
"averageProgress": 67.5,
"recentActivity": []
}
DELETE /api/files/upload/tasks/cleanup
Cleans up all failed and expired tasks for the current user.
GET /api/files/upload/tasks/recent?limit=10
Gets the most recent upload tasks.
🔌 WebSocket Events
The system sends real-time updates via WebSocket using RingService. Connect to the WebSocket endpoint and listen for task-related events.
Event Types
task.created
Sent when a new task is created.
{
"type": "task.created",
"data": {
"taskId": "task123",
"name": "Upload File",
"type": "FileUpload",
"createdAt": "2025-11-09T02:00:00Z"
}
}
task.progress
Sent when task progress changes significantly (every 5% or major milestones).
{
"type": "task.progress",
"data": {
"taskId": "task123",
"name": "Upload File",
"type": "FileUpload",
"progress": 67.5,
"status": "InProgress",
"lastActivity": "2025-11-09T02:05:00Z"
}
}
task.completed
Sent when a task completes successfully.
{
"type": "task.completed",
"data": {
"taskId": "task123",
"name": "Upload File",
"type": "FileUpload",
"completedAt": "2025-11-09T02:10:00Z",
"results": {
"fileId": "file456",
"fileName": "document.pdf",
"fileSize": 10485760
}
}
}
task.failed
Sent when a task fails.
{
"type": "task.failed",
"data": {
"taskId": "task123",
"name": "Upload File",
"type": "FileUpload",
"failedAt": "2025-11-09T02:15:00Z",
"errorMessage": "File processing failed: invalid format"
}
}
Client Integration Example
// WebSocket connection
const ws = new WebSocket('wss://api.dysonnetwork.com/ws');
// Authentication (implement based on your auth system)
ws.onopen = () => {
ws.send(JSON.stringify({
type: 'auth',
token: 'your-jwt-token'
}));
};
// Handle task events
ws.onmessage = (event) => {
const packet = JSON.parse(event.data);
switch (packet.type) {
case 'task.progress':
updateProgressBar(packet.data);
break;
case 'task.completed':
showSuccessNotification(packet.data);
break;
case 'task.failed':
showErrorNotification(packet.data);
break;
}
};
function updateProgressBar(data) {
const progressBar = document.getElementById(`progress-${data.taskId}`);
if (progressBar) {
progressBar.style.width = `${data.progress}%`;
progressBar.textContent = `${data.progress.toFixed(1)}%`;
}
}
Note on Upload-Specific Notifications
The system also includes upload-specific notifications (upload.progress, upload.completed, upload.failed) for backward compatibility. However, for new implementations, it's recommended to use the generic task notifications as they provide the same functionality with less object allocation overhead. Since users are typically in the foreground during upload operations, the generic task notifications provide sufficient progress visibility.
🗄️ Database Schema
upload_tasks Table
CREATE TABLE upload_tasks (
id UUID PRIMARY KEY,
task_id VARCHAR NOT NULL UNIQUE,
file_name VARCHAR NOT NULL,
file_size BIGINT NOT NULL,
content_type VARCHAR NOT NULL,
chunk_size BIGINT NOT NULL,
chunks_count INTEGER NOT NULL,
chunks_uploaded INTEGER NOT NULL DEFAULT 0,
pool_id UUID NOT NULL,
bundle_id UUID,
encrypt_password VARCHAR,
expired_at TIMESTAMPTZ,
hash VARCHAR NOT NULL,
account_id UUID NOT NULL,
status INTEGER NOT NULL DEFAULT 0,
uploaded_chunks JSONB NOT NULL DEFAULT '[]'::jsonb,
last_activity TIMESTAMPTZ NOT NULL,
created_at TIMESTAMPTZ NOT NULL,
updated_at TIMESTAMPTZ NOT NULL,
deleted_at TIMESTAMPTZ
);
-- Indexes for performance
CREATE INDEX idx_upload_tasks_account_id ON upload_tasks(account_id);
CREATE INDEX idx_upload_tasks_status ON upload_tasks(status);
CREATE INDEX idx_upload_tasks_last_activity ON upload_tasks(last_activity);
CREATE INDEX idx_upload_tasks_hash ON upload_tasks(hash);
Status Enum Values
0: InProgress1: Completed2: Failed3: Expired
⚙️ Configuration
Environment Variables
# Storage configuration
STORAGE_UPLOADS_PATH=/tmp/uploads
STORAGE_PREFERRED_REMOTE=550e8400-e29b-41d4-a716-446655440000
# Chunk size settings
UPLOAD_DEFAULT_CHUNK_SIZE=5242880 # 5MB
UPLOAD_MAX_CHUNK_SIZE=16777216 # 16MB
# Cleanup settings
UPLOAD_STALE_THRESHOLD_HOURS=24
UPLOAD_CLEANUP_INTERVAL_MINUTES=60
# Cache settings
UPLOAD_CACHE_DURATION_MINUTES=30
Dependency Injection
// In Program.cs or Startup.cs
builder.Services.AddScoped<PersistentTaskService>();
builder.Services.AddSingleton<RingService.RingServiceClient>(sp => {
// Configure gRPC client for RingService
var channel = GrpcChannel.ForAddress("https://ring-service:50051");
return new RingService.RingServiceClient(channel);
});
💡 Usage Examples
Basic Upload Flow
class UploadManager {
constructor() {
this.ws = new WebSocket('wss://api.dysonnetwork.com/ws');
this.tasks = new Map();
}
async uploadFile(file, poolId) {
// 1. Create upload task
const taskResponse = await fetch('/api/files/upload/create', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({
fileName: file.name,
fileSize: file.size,
contentType: file.type,
poolId: poolId,
hash: await this.calculateHash(file)
})
});
const task = await taskResponse.json();
if (task.fileExists) {
return task.file; // File already exists
}
// 2. Upload chunks
const chunks = this.splitFileIntoChunks(file, task.chunkSize);
for (let i = 0; i < chunks.length; i++) {
await this.uploadChunk(task.taskId, i, chunks[i]);
}
// 3. Complete upload
const result = await fetch(`/api/files/upload/complete/${task.taskId}`, {
method: 'POST'
});
return await result.json();
}
async uploadChunk(taskId, chunkIndex, chunkData) {
const formData = new FormData();
formData.append('chunk', chunkData);
const response = await fetch(`/api/files/upload/chunk/${taskId}/${chunkIndex}`, {
method: 'POST',
body: formData
});
if (response.status === 409) {
// Chunk already uploaded, skip
return;
}
if (!response.ok) {
throw new Error(`Upload failed: ${response.statusText}`);
}
}
splitFileIntoChunks(file, chunkSize) {
const chunks = [];
for (let offset = 0; offset < file.size; offset += chunkSize) {
chunks.push(file.slice(offset, offset + chunkSize));
}
return chunks;
}
async calculateHash(file) {
// Implement file hashing (SHA-256 recommended)
const buffer = await file.arrayBuffer();
const hashBuffer = await crypto.subtle.digest('SHA-256', buffer);
return Array.from(new Uint8Array(hashBuffer))
.map(b => b.toString(16).padStart(2, '0'))
.join('');
}
}
Resume Interrupted Upload
async resumeUpload(taskId) {
// Get task information
const resumeResponse = await fetch(`/api/files/upload/resume/${taskId}`);
const taskInfo = await resumeResponse.json();
// Get uploaded chunks
const uploadedChunks = new Set(taskInfo.uploadedChunks);
// Upload missing chunks
for (let i = 0; i < taskInfo.chunksCount; i++) {
if (!uploadedChunks.has(i)) {
await this.uploadChunk(taskId, i, this.getChunkData(i));
}
}
// Complete upload
await fetch(`/api/files/upload/complete/${taskId}`, {
method: 'POST'
});
}
Monitor Upload Progress
function setupProgressMonitoring(taskId) {
// Listen for WebSocket progress events
this.ws.addEventListener('message', (event) => {
const packet = JSON.parse(event.data);
if (packet.type === 'upload.progress' && packet.data.taskId === taskId) {
updateProgressUI(packet.data);
}
});
}
function updateProgressUI(progressData) {
const progressBar = document.getElementById('upload-progress');
const progressText = document.getElementById('progress-text');
const speedText = document.getElementById('upload-speed');
progressBar.style.width = `${progressData.progress}%`;
progressText.textContent = `${progressData.progress.toFixed(1)}%`;
// Calculate speed if we have timing data
if (this.lastProgress) {
const timeDiff = Date.now() - this.lastUpdate;
const progressDiff = progressData.progress - this.lastProgress.progress;
const speed = (progressDiff / 100) * (progressData.fileSize / 1024 / 1024) / (timeDiff / 1000);
speedText.textContent = `${speed.toFixed(1)} MB/s`;
}
this.lastProgress = progressData;
this.lastUpdate = Date.now();
}
🚨 Error Handling
Common Error Codes
400 Bad Request: Invalid request parameters401 Unauthorized: Authentication required403 Forbidden: Insufficient permissions or quota exceeded404 Not Found: Task or resource not found409 Conflict: Chunk already uploaded (resumable upload)413 Payload Too Large: File exceeds size limits429 Too Many Requests: Rate limit exceeded
Error Response Format
{
"code": "UPLOAD_FAILED",
"message": "Failed to complete file upload",
"status": 500,
"details": {
"taskId": "abc123def456",
"error": "File processing failed: invalid format"
}
}
Handling Upload Failures
try {
const result = await completeUpload(taskId);
showSuccess(result);
} catch (error) {
if (error.status === 500) {
// Server error, can retry
showRetryButton(taskId);
} else if (error.status === 403) {
// Permission/quota error
showQuotaExceeded();
} else {
// Other error
showGenericError(error.message);
}
}
⚡ Performance
Optimizations
- Chunked Uploads: Reduces memory usage for large files
- Progress Throttling: Prevents WebSocket spam during fast uploads
- Caching Layer: Redis-based caching for task metadata
- Database Indexing: Optimized queries for task listing and filtering
- Async Processing: Non-blocking I/O operations throughout
Benchmarks
- Small Files (< 10MB): ~2-5 seconds total upload time
- Large Files (1GB+): Maintains consistent throughput
- Concurrent Uploads: Supports 100+ simultaneous uploads per server
- WebSocket Updates: < 10ms latency for progress notifications
Scaling Considerations
- Horizontal Scaling: Stateless design supports multiple instances
- Load Balancing: Session affinity not required for uploads
- Storage Backend: Compatible with S3, local storage, and distributed systems
- Database: PostgreSQL with connection pooling recommended
🔒 Security
Authentication & Authorization
- JWT Tokens: All endpoints require valid authentication
- Ownership Verification: Users can only access their own tasks
- Permission Checks: Integration with role-based access control
- Rate Limiting: Built-in protection against abuse
Data Protection
- Encryption Support: Optional client-side encryption
- Secure Storage: Files stored with proper access controls
- Hash Verification: Integrity checking via SHA-256 hashes
- Audit Logging: Comprehensive logging of all operations
Network Security
- HTTPS Only: All communications encrypted in transit
- CORS Configuration: Proper cross-origin resource sharing
- Input Validation: Comprehensive validation of all inputs
- SQL Injection Prevention: Parameterized queries throughout
🔧 Troubleshooting
Common Issues
Upload Stuck at 99%
Problem: Final chunk fails to upload or process Solution: Check server logs, verify file integrity, retry completion
WebSocket Not Connecting
Problem: Real-time updates not working Solution: Check WebSocket server configuration, verify client authentication
Progress Not Updating
Problem: UI not reflecting upload progress Solution: Verify WebSocket connection, check for JavaScript errors
Upload Fails with 403
Problem: Permission denied errors Solution: Check user permissions, quota limits, and pool access
Debug Mode
Enable detailed logging by setting environment variable:
LOG_LEVEL=DysonNetwork.Drive.Storage:Debug
Health Checks
Monitor system health via:
GET /health/uploads
Returns status of upload service, database connectivity, and queue lengths.
📞 Support
For issues and questions:
- Check the troubleshooting section above
- Review server logs for error details
- Verify client implementation against examples
- Contact the development team with specific error messages
📝 Changelog
Version 1.0.0
- Initial release with resumable uploads
- Real-time progress tracking via WebSocket
- Push notification integration
- Comprehensive task management APIs
- Automatic cleanup and quota management
🎯 Generic Task System (v2.0)
The upload system has been extended with a powerful generic task framework that supports various types of background operations beyond just file uploads.
Supported Task Types
File Operations
- FileUpload: Resumable file uploads (original functionality)
- FileMove: Move files between storage pools or bundles
- FileCompress: Compress multiple files into archives
- FileDecompress: Extract compressed archives
- FileEncrypt: Encrypt files with passwords
- FileDecrypt: Decrypt encrypted files
Bulk Operations
- BulkOperation: Custom bulk operations on multiple files
- StorageMigration: Migrate files between storage pools
- FileConversion: Convert files between formats
Custom Operations
- Custom: Extensible framework for custom task types
Task Architecture
Core Classes
// Base task class with common functionality
public class PersistentTask : ModelBase
{
public Guid Id { get; set; }
public string TaskId { get; set; } = null!;
public string Name { get; set; } = null!;
public string? Description { get; set; }
public TaskType Type { get; set; }
public TaskStatus Status { get; set; }
public Guid AccountId { get; set; }
public double Progress { get; set; }
public Dictionary<string, object?> Parameters { get; set; } = new();
public Dictionary<string, object?> Results { get; set; } = new();
public string? ErrorMessage { get; set; }
public Instant LastActivity { get; set; }
public int Priority { get; set; }
public long? EstimatedDurationSeconds { get; set; }
}
// Specialized task implementations
public class FileMoveTask : PersistentTask
{
public FileMoveTask() { Type = TaskType.FileMove; Name = "Move Files"; }
public List<string> FileIds { get; set; } = new();
public Guid TargetPoolId { get; set; }
public Guid? TargetBundleId { get; set; }
public int FilesProcessed { get; set; }
}
public class FileCompressTask : PersistentTask
{
public FileCompressTask() { Type = TaskType.FileCompress; Name = "Compress Files"; }
public List<string> FileIds { get; set; } = new();
public string CompressionFormat { get; set; } = "zip";
public int CompressionLevel { get; set; } = 6;
public string? OutputFileName { get; set; }
public int FilesProcessed { get; set; }
public string? ResultFileId { get; set; }
}
Task Service
public class PersistentTaskService(
AppDatabase db,
ICacheService cache,
ILogger<PersistentTaskService> logger,
RingService.RingServiceClient ringService
)
{
// Create any type of task
public async Task<T> CreateTaskAsync<T>(T task) where T : PersistentTask
// Update progress with automatic notifications
public async Task UpdateTaskProgressAsync(string taskId, double progress, string? statusMessage = null)
// Mark tasks as completed/failed with results
public async Task MarkTaskCompletedAsync(string taskId, Dictionary<string, object?>? results = null)
public async Task MarkTaskFailedAsync(string taskId, string? errorMessage = null)
// Task lifecycle management
public async Task PauseTaskAsync(string taskId)
public async Task ResumeTaskAsync(string taskId)
public async Task CancelTaskAsync(string taskId)
// Query tasks with filtering and pagination
public async Task<(List<PersistentTask> Items, int TotalCount)> GetUserTasksAsync(
Guid accountId,
TaskType? type = null,
TaskStatus? status = null,
string? sortBy = "lastActivity",
bool sortDescending = true,
int offset = 0,
int limit = 50
)
}
Real-Time Task Notifications
All task operations send WebSocket notifications via RingService using the shared GrpcTypeHelper for consistent JSON serialization:
Task Created
{
"type": "task.created",
"data": {
"taskId": "task123",
"name": "Compress Files",
"type": "FileCompress",
"createdAt": "2025-11-09T02:00:00Z"
}
}
Task Progress
{
"type": "task.progress",
"data": {
"taskId": "task123",
"name": "Compress Files",
"type": "FileCompress",
"progress": 67.5,
"status": "InProgress",
"lastActivity": "2025-11-09T02:05:00Z"
}
}
Task Completed
{
"type": "task.completed",
"data": {
"taskId": "task123",
"name": "Compress Files",
"type": "FileCompress",
"completedAt": "2025-11-09T02:10:00Z",
"results": {
"resultFileId": "file456",
"compressedSize": 10485760,
"compressionRatio": 0.75
}
}
}
Usage Examples
Create a File Compression Task
var compressTask = new FileCompressTask
{
Name = "Compress Project Files",
Description = "Compress all project files into a ZIP archive",
AccountId = userId,
FileIds = new List<string> { "file1", "file2", "file3" },
CompressionFormat = "zip",
CompressionLevel = 9,
OutputFileName = "project-backup.zip"
};
var createdTask = await taskService.CreateTaskAsync(compressTask);
// Task ID: createdTask.TaskId
Monitor Task Progress
// WebSocket monitoring
ws.onmessage = (event) => {
const packet = JSON.parse(event.data);
if (packet.type === 'task.progress') {
const { taskId, progress, name } = packet.data;
updateTaskProgress(taskId, progress, name);
} else if (packet.type === 'task.completed') {
const { taskId, results } = packet.data;
handleTaskCompletion(taskId, results);
}
};
Bulk File Operations
var bulkTask = new BulkOperationTask
{
Name = "Bulk Delete Old Files",
OperationType = "delete",
TargetIds = fileIds,
OperationParameters = new Dictionary<string, object?> {
{ "olderThanDays", 30 },
{ "confirm", true }
}
};
await taskService.CreateTaskAsync(bulkTask);
Task Status Management
Tasks support multiple statuses:
- Pending: Queued for execution
- InProgress: Currently executing
- Paused: Temporarily suspended
- Completed: Successfully finished
- Failed: Execution failed
- Cancelled: Manually cancelled
- Expired: Timed out or expired
Available Service Methods
Based on the PersistentTaskService implementation, the following methods are available:
Core Task Operations
CreateTaskAsync<T>(T task): Creates any type of persistent taskGetTaskAsync<T>(string taskId): Retrieves a task by ID with cachingUpdateTaskProgressAsync(string taskId, double progress, string? statusMessage): Updates task progress with automatic notificationsMarkTaskCompletedAsync(string taskId, Dictionary<string, object?>? results): Marks task as completed with optional resultsMarkTaskFailedAsync(string taskId, string? errorMessage): Marks task as failed with error messagePauseTaskAsync(string taskId): Pauses an in-progress taskResumeTaskAsync(string taskId): Resumes a paused taskCancelTaskAsync(string taskId): Cancels a task
Task Querying & Statistics
GetUserTasksAsync(): Gets tasks for a user with filtering and paginationGetUserTaskStatsAsync(Guid accountId): Gets comprehensive task statisticsCleanupOldTasksAsync(Guid accountId, Duration maxAge): Cleans up old completed/failed tasks
Upload-Specific Operations
CreateUploadTaskAsync(): Creates a new persistent upload taskGetUploadTaskAsync(string taskId): Gets an existing upload taskUpdateChunkProgressAsync(string taskId, int chunkIndex): Updates chunk upload progressIsChunkUploadedAsync(string taskId, int chunkIndex): Checks if a chunk has been uploadedGetUploadProgressAsync(string taskId): Gets upload progress as percentageGetUserUploadTasksAsync(): Gets user upload tasks with filteringGetUserUploadStatsAsync(Guid accountId): Gets upload statistics for a userCleanupUserFailedTasksAsync(Guid accountId): Cleans up failed upload tasksGetRecentUserTasksAsync(Guid accountId, int limit): Gets recent upload tasks
Priority System
Tasks can be assigned priorities (0-100, higher = more important) to control execution order in background processing.
Automatic Cleanup
Old completed/failed tasks are automatically cleaned up after 30 days to prevent database bloat.
Extensibility
The task system is designed to be easily extensible:
// Create custom task types
public class CustomProcessingTask : PersistentTask
{
public CustomProcessingTask()
{
Type = TaskType.Custom;
Name = "Custom Processing";
}
public string CustomParameter
{
get => Parameters.GetValueOrDefault("customParam") as string ?? "";
set => Parameters["customParam"] = value;
}
public object? CustomResult
{
get => Results.GetValueOrDefault("customResult");
set => Results["customResult"] = value;
}
}
Database Schema Extensions
The task system uses JSONB columns for flexible parameter and result storage:
-- Extended tasks table
ALTER TABLE tasks ADD COLUMN priority INTEGER DEFAULT 0;
ALTER TABLE tasks ADD COLUMN estimated_duration_seconds BIGINT;
ALTER TABLE tasks ADD COLUMN started_at TIMESTAMPTZ;
ALTER TABLE tasks ADD COLUMN completed_at TIMESTAMPTZ;
-- Indexes for performance
CREATE INDEX idx_tasks_type ON tasks(type);
CREATE INDEX idx_tasks_status ON tasks(status);
CREATE INDEX idx_tasks_priority ON tasks(priority);
CREATE INDEX idx_tasks_account_type ON tasks(account_id, type);
Migration Notes
The system maintains backward compatibility with existing upload tasks while adding the new generic framework. Existing PersistentUploadTask entities continue to work unchanged.
Note: This system is designed for production use and includes comprehensive error handling, security measures, and performance optimizations. Always test thoroughly in your environment before deploying to production.