first scratches

@@ -1,160 +0,0 @@
# Migration Guide: File-based Queue to Celery+Redis

This guide explains how to migrate from the file-based queue system to the new Celery+Redis based system for handling download tasks.

## Benefits of the New System

1. **Improved Reliability**: Redis provides reliable persistence for task state
2. **Better Scalability**: Celery workers can be scaled across multiple machines
3. **Enhanced Monitoring**: Built-in tools for monitoring task status and health
4. **Resource Efficiency**: Celery's worker pool is more efficient than Python threads
5. **Cleaner Code**: Separates concerns between queue management and download logic

## Prerequisites

- Redis server (3.0+) installed and running
- Python 3.7+ (same as the main application)
- Required Python packages:
  - celery>=5.3.6
  - redis>=5.0.1
  - flask-celery-helper>=1.1.0
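The three packages above correspond to the `requirements-celery.txt` file installed in the next section; a minimal version of that file would simply pin them:

```text
celery>=5.3.6
redis>=5.0.1
flask-celery-helper>=1.1.0
```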
## Installation

1. Install Redis:

   ```bash
   # For Debian/Ubuntu
   sudo apt-get install redis-server

   # For Arch Linux
   sudo pacman -S redis

   # For macOS
   brew install redis
   ```

2. Start Redis server:

   ```bash
   sudo systemctl start redis
   # or
   redis-server
   ```

3. Install required Python packages:

   ```bash
   pip install -r requirements-celery.txt
   ```
## Configuration

1. Set the Redis URL in environment variables (optional):

   ```bash
   export REDIS_URL=redis://localhost:6379/0
   export REDIS_BACKEND=redis://localhost:6379/0
   ```

2. Adjust `config/main.json` as needed:

   ```json
   {
     "maxConcurrentDownloads": 3,
     "maxRetries": 3,
     "retryDelaySeconds": 5,
     "retry_delay_increase": 5
   }
   ```
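The retry settings above suggest a linearly increasing backoff. The guide does not spell out the exact formula, so treat this as a hedged reading of `retryDelaySeconds` and `retry_delay_increase`:

```python
def retry_schedule(max_retries: int, base_delay: int, delay_increase: int) -> list:
    """Compute the wait (in seconds) before each retry attempt.

    Assumes the delay grows linearly: base, base + increase, base + 2*increase, ...
    """
    return [base_delay + attempt * delay_increase for attempt in range(max_retries)]

# With the example config (maxRetries=3, retryDelaySeconds=5, retry_delay_increase=5):
print(retry_schedule(3, 5, 5))  # → [5, 10, 15]
```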
## Starting the Worker

To start the Celery worker:

```bash
python celery_worker.py
```

This will start the worker with the configured maximum concurrent downloads.
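The contents of `celery_worker.py` are not shown in this guide; one hedged reading is a thin wrapper that runs the same worker command used elsewhere in this document (`entrypoint.sh` and the supervisord configs). Building that command can be sketched as:

```python
import os

def build_worker_argv(concurrency=None):
    """Build the argv for a Celery worker, mirroring the entrypoint.sh invocation.

    Falls back to the MAX_CONCURRENT_DL environment variable (default 3),
    matching the docker-compose setup shown later in this document.
    """
    if concurrency is None:
        concurrency = int(os.environ.get("MAX_CONCURRENT_DL", "3"))
    return [
        "celery",
        "-A", "routes.utils.celery_tasks.celery_app",
        "worker",
        "--loglevel=info",
        "--concurrency=%d" % concurrency,
        "-Q", "downloads",
    ]

if __name__ == "__main__":
    # In the real wrapper this would exec the command, e.g. os.execvp(argv[0], argv)
    print(" ".join(build_worker_argv()))
```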
## Monitoring

You can monitor tasks using Flower, a web-based Celery monitoring tool:

```bash
pip install flower
celery -A routes.utils.celery_tasks.celery_app flower
```

Then access the dashboard at http://localhost:5555.
## Transitioning from File-based Queue

The API endpoints (`/api/prgs/*`) have been updated to be backward compatible: they work with both the old .prg file system and the new Celery-based system, which allows for a smooth transition.

1. During the transition, both systems can run in parallel
2. New download requests will use the Celery task system
3. Old .prg files remain accessible via the same API
4. Once all old tasks have completed, the .prg file handling code can be removed
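The backward-compatible lookup can be dispatched on the shape of the identifier. The real endpoint logic is not shown here, so this is a hypothetical sketch with injected backends:

```python
def is_legacy_prg(identifier):
    """Old-style identifiers are .prg filenames; everything else is a Celery task ID."""
    return identifier.endswith(".prg")

def get_status(identifier, read_prg_file, get_celery_status):
    """Route a status lookup to the correct backend during the transition."""
    if is_legacy_prg(identifier):
        return read_prg_file(identifier)
    return get_celery_status(identifier)
```

For example, `get_status("download_42.prg", ...)` would hit the legacy reader, while a Celery task ID would hit Redis.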
## Modifying Downloader Functions

If you need to add a new downloader function, make these changes:

1. Update the utility module (e.g., track.py) to accept a `progress_callback` parameter
2. Use `progress_callback` to report progress, as shown in the example
3. Create a new Celery task in `routes/utils/celery_tasks.py`

Example of implementing a callback in your downloader function:
```python
def download_track(service="", url="", progress_callback=None, **kwargs):
    """Download a track with progress reporting."""

    # Create a no-op callback if none was provided
    if progress_callback is None:
        progress_callback = lambda update: None

    # Placeholder metadata; the real function derives these from the service/url
    track_name = kwargs.get("track_name", "")
    artist_name = kwargs.get("artist_name", "")

    # Report initializing status
    progress_callback({
        "status": "initializing",
        "type": "track",
        "song": track_name,
        "artist": artist_name,
    })

    # Report download start
    progress_callback({
        "status": "downloading",
        "type": "track",
        "song": track_name,
        "artist": artist_name,
    })

    # Report real-time progress
    progress_callback({
        "status": "real_time",
        "type": "track",
        "song": track_name,
        "artist": artist_name,
        "percentage": 0.5,  # 50% complete
    })

    # Report completion
    progress_callback({
        "status": "done",
        "type": "track",
        "song": track_name,
        "artist": artist_name,
    })
```
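A quick way to exercise the callback contract is to collect the reported events in a list. This self-contained sketch simulates the status sequence a downloader would emit:

```python
events = []

def collecting_callback(update):
    """Record every progress update for later inspection (e.g., in a test)."""
    events.append(update)

# Simulate the sequence of statuses a downloader reports
for status in ("initializing", "downloading", "real_time", "done"):
    collecting_callback({"status": status, "type": "track"})

print([e["status"] for e in events])  # → ['initializing', 'downloading', 'real_time', 'done']
```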
## API Endpoints

The API endpoints remain unchanged to maintain compatibility with the frontend:

- `GET /api/prgs/<task_id>` - Get task/file status (works with both task IDs and old .prg filenames)
- `DELETE /api/prgs/delete/<task_id>` - Delete a task/file
- `GET /api/prgs/list` - List all tasks and files
- `POST /api/prgs/retry/<task_id>` - Retry a failed task
- `POST /api/prgs/cancel/<task_id>` - Cancel a running task
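A client would typically poll the status endpoint until a terminal state is reached. This sketch fakes the HTTP layer with an injected fetch function; the status values are taken from the progress-callback examples in this guide:

```python
def poll_until_terminal(task_id, fetch_status, max_polls=100):
    """Poll a status-fetching function until the task reports done or error.

    fetch_status stands in for a GET to /api/prgs/<task_id>.
    """
    for _ in range(max_polls):
        status = fetch_status(task_id)
        if status.get("status") in ("done", "error"):
            return status
    raise TimeoutError("task %s did not finish within %d polls" % (task_id, max_polls))

# Fake backend: two in-progress responses, then done
responses = iter([{"status": "downloading"}, {"status": "real_time"}, {"status": "done"}])
result = poll_until_terminal("abc123", lambda _tid: next(responses))
print(result)  # → {'status': 'done'}
```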
## Error Handling

Errors in Celery tasks are automatically captured and stored in Redis. The task status is set to "error" and includes the error message and traceback. Failed tasks can be retried via the `/api/prgs/retry/<task_id>` endpoint.
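Capturing an exception into a status record of that shape might look like the following. The exact field names in the stored record are an assumption, chosen to mirror the progress-callback dictionaries above:

```python
import traceback

def error_status(task_exception):
    """Build the 'error' status record described above from a caught exception.

    Must be called from inside the except block so the traceback is still active.
    """
    return {
        "status": "error",
        "message": str(task_exception),
        "traceback": traceback.format_exc(),
    }

try:
    raise RuntimeError("download failed")
except RuntimeError as exc:
    record = error_status(exc)

print(record["status"], record["message"])  # → error download failed
```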
Dockerfile (12 changes)
@@ -7,6 +7,8 @@ WORKDIR /app
# Install system dependencies
RUN apt-get update && apt-get install -y --no-install-recommends \
    build-essential \
    gosu \
    redis-server \
    && apt-get clean \
    && rm -rf /var/lib/apt/lists/*

@@ -22,5 +24,11 @@ COPY . .
# Create necessary directories
RUN mkdir -p downloads config creds

# Default command (overridden in docker-compose.yml)
CMD ["python", "app.py"]
# Make entrypoint script executable
RUN chmod +x entrypoint.sh

# Set entrypoint to our script
ENTRYPOINT ["/app/entrypoint.sh"]

# Default command (empty as entrypoint will handle the default behavior)
CMD []
@@ -1,23 +0,0 @@
[program:spotizerr-celery]
command=/path/to/python /path/to/spotizerr/celery_worker.py
directory=/path/to/spotizerr
user=username
numprocs=1
stdout_logfile=/path/to/spotizerr/logs/celery_worker.log
stderr_logfile=/path/to/spotizerr/logs/celery_worker_error.log
autostart=true
autorestart=true
startsecs=10
priority=999
stopasgroup=true
killasgroup=true
environment=REDIS_URL="redis://localhost:6379/0",REDIS_BACKEND="redis://localhost:6379/0"

; How to set this up in supervisord:
; 1. Copy this file to /etc/supervisor/conf.d/ (adjust the path as needed for your system)
; 2. Replace /path/to/python with the actual python path (e.g., /usr/bin/python3)
; 3. Replace /path/to/spotizerr with the actual path to your spotizerr installation
; 4. Replace username with the actual username that should run the process
; 5. Create the logs directory: mkdir -p /path/to/spotizerr/logs
; 6. Run: sudo supervisorctl reread && sudo supervisorctl update
; 7. Check status: sudo supervisorctl status spotizerr-celery
@@ -1,6 +1,16 @@
name: spotizerr

services:
  redis:
    image: redis:alpine
    container_name: spotizerr-redis
    restart: unless-stopped
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 10s
      timeout: 5s
      retries: 3

  spotizerr:
    volumes:
      - ./creds:/app/creds
@@ -9,7 +19,15 @@ services:
    ports:
      - 7171:7171
    image: cooldockerizer93/spotizerr
    container_name: spotizerr-app
    restart: unless-stopped
    depends_on:
      redis:
        condition: service_healthy
    environment:
      - PUID=1000 # Replace with your desired user ID | Remove both if you want to run as root (not recommended; may result in unreadable files)
      - PGID=1000 # Replace with your desired group ID | The user must have write permissions on the volume mapped to /app/downloads
      - UMASK=0022 # Optional: sets the default file permissions for newly created files within the container
      - MAX_CONCURRENT_DL=3 # Optional: set the number of concurrent downloads allowed
      - REDIS_URL=redis://redis:6379/0
      - REDIS_BACKEND=redis://redis:6379/0
entrypoint.sh (38 changes) — Normal file → Executable file
@@ -6,10 +6,40 @@ if [ -n "${UMASK}" ]; then
    umask "${UMASK}"
fi

# Function to start the application
start_application() {
    # Start Flask app in the background
    echo "Starting Flask application..."
    python app.py &

    # Wait a moment for Flask to initialize
    sleep 2

    # Start Celery worker
    echo "Starting Celery worker..."
    celery -A routes.utils.celery_tasks.celery_app worker --loglevel=info --concurrency=${MAX_CONCURRENT_DL:-3} -Q downloads &

    # Keep the script running
    wait
}

# Check if a custom command was provided
if [ $# -gt 0 ]; then
    # Custom command provided, use it instead of the default app startup
    RUN_COMMAND="$@"
else
    # No custom command, use the default application startup
    RUN_COMMAND="start_application"
fi

# Check if both PUID and PGID are unset
if [ -z "${PUID}" ] && [ -z "${PGID}" ]; then
    # Run as root directly
    if [ $# -gt 0 ]; then
        exec "$@"
    else
        start_application
    fi
else
    # Verify both PUID and PGID are set
    if [ -z "${PUID}" ] || [ -z "${PGID}" ]; then
@@ -19,7 +49,11 @@ else

    # Check for root user request
    if [ "${PUID}" -eq 0 ] && [ "${PGID}" -eq 0 ]; then
        if [ $# -gt 0 ]; then
            exec "$@"
        else
            start_application
        fi
    else
        # Check if the group with the specified GID already exists
        if getent group "${PGID}" >/dev/null; then
@@ -45,6 +79,10 @@ else
        chown -R "${USER_NAME}:${GROUP_NAME}" /app || true

        # Run as the specified user
        if [ $# -gt 0 ]; then
            exec gosu "${USER_NAME}" "$@"
        else
            exec gosu "${USER_NAME}" bash -c "$(declare -f start_application); start_application"
        fi
    fi
fi
@@ -1,3 +0,0 @@
celery==5.3.6
redis==5.0.1
flask-celery-helper==1.1.0

@@ -43,3 +43,6 @@ websocket-client==1.5.1
websockets==14.2
Werkzeug==3.1.3
zeroconf==0.62.0
celery==5.3.6
redis==5.0.1
flask-celery-helper==1.1.0
start_app.sh (15 changes)
@@ -1,15 +0,0 @@
#!/bin/bash

# Start Flask app in the background
echo "Starting Flask application..."
python app.py &

# Wait a moment for Flask to initialize
sleep 2

# Start Celery worker
echo "Starting Celery worker..."
celery -A routes.utils.celery_tasks.celery_app worker --loglevel=info --concurrency=${MAX_CONCURRENT_DL:-3} -Q downloads &

# Keep the script running
wait
@@ -1,19 +0,0 @@
[program:spotizerr_flask]
directory=/home/xoconoch/coding/spotizerr
command=python app.py
autostart=true
autorestart=true
stderr_logfile=/var/log/spotizerr/flask.err.log
stdout_logfile=/var/log/spotizerr/flask.out.log

[program:spotizerr_celery]
directory=/home/xoconoch/coding/spotizerr
command=celery -A routes.utils.celery_tasks.celery_app worker --loglevel=info --concurrency=%(ENV_MAX_CONCURRENT_DL)s -Q downloads
environment=MAX_CONCURRENT_DL=3
autostart=true
autorestart=true
stderr_logfile=/var/log/spotizerr/celery.err.log
stdout_logfile=/var/log/spotizerr/celery.out.log

[group:spotizerr]
programs=spotizerr_flask,spotizerr_celery