Merge pull request #157 from Xoconoch/dev

2.3.0
This commit is contained in:
Xoconoch
2025-06-05 23:06:50 +02:00
committed by GitHub
18 changed files with 1760 additions and 1650 deletions

README.md

@@ -61,114 +61,9 @@ Access at: `http://localhost:7171`
## Configuration
### Initial Setup
1. Access settings via the gear icon
2. Switch between service tabs (Spotify/Deezer)
3. Enter credentials using the form
4. Configure active accounts in settings
### Spotify Setup
_Note: If you want Spotify-only mode, just keep the "Download fallback" setting disabled and don't add Deezer credentials. Deezer-only mode is not, and will not be, supported, since there is already a much better tool for that called "Deemix"._
### Spotify Credentials Setup
First, create a Spotify credentials file using the third-party `librespot-auth` tool. This step must be performed on a PC/laptop that has the Spotify desktop app installed.
---
#### For Linux (using Docker)
1. Clone the `librespot-auth` repository:
```shell
git clone --depth 1 https://github.com/dspearson/librespot-auth.git
```
2. Build the repository using the Rust Docker image:
```shell
docker run --rm -v "$(pwd)/librespot-auth":/app -w /app rust:latest cargo build --release
```
3. Run the built binary:
```shell
./librespot-auth/target/release/librespot-auth --name "mySpotifyAccount1" --class=computer
```
---
#### For Windows (using Docker)
1. Clone the `librespot-auth` repository:
```shell
git clone --depth 1 https://github.com/dspearson/librespot-auth.git
```
2. Build the repository using a Windows-targeted Rust Docker image ([why a different image?](https://github.com/jscharnitzke/rust-build-windows)):
```shell
docker run --rm -v "${pwd}/librespot-auth:/app" -w "/app" jscharnitzke/rust-build-windows --release
```
3. Run the built binary:
```shell
.\librespot-auth\target\x86_64-pc-windows-gnu\release\librespot-auth.exe --name "mySpotifyAccount1" --class=computer
```
---
#### For Apple Silicon (macOS)
1. Clone the `librespot-auth` repository:
```shell
git clone --depth 1 https://github.com/dspearson/librespot-auth.git
```
2. Install Rust using Homebrew:
```shell
brew install rust
```
3. Build `librespot-auth` for Apple Silicon:
```shell
cd librespot-auth
cargo build --target=aarch64-apple-darwin --release
```
4. Run the built binary:
```shell
./target/aarch64-apple-darwin/release/librespot-auth --name "mySpotifyAccount1" --class=computer
```
---
- Now open the Spotify app
- Click on the "Connect to a device" icon
- Under the "Select Another Device" section, click "mySpotifyAccount1"
- This utility will create a `credentials.json` file
This file has the following format:
```json
{"username": "long text", "auth_type": 1, "auth_data": "even longer text"}
```
The important fields are "username" and "auth_data"; they map to the "username" and "credentials" fields respectively when adding or editing Spotify credentials in Spotizerr.
In the terminal, you can print these fields directly using `jq`:
```shell
jq -r '.username, .auth_data' credentials.json
```
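Equivalently, a short Python snippet can pull the two values out of `credentials.json` (the helper name here is illustrative, not part of Spotizerr):

```python
import json

def extract_spotify_fields(path="credentials.json"):
    """Read librespot-auth's credentials.json and return the two values
    Spotizerr asks for: 'username' and 'auth_data' (the latter goes into
    the "credentials" field)."""
    with open(path) as f:
        data = json.load(f)
    return data["username"], data["auth_data"]

if __name__ == "__main__":
    username, auth_data = extract_spotify_fields()
    print(f"username:    {username}")
    print(f"credentials: {auth_data}")
```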
### Spotify Developer Setup
In order for searching to work, you need to set up your own Spotify Developer application:
1. Visit the [Spotify Developer Dashboard](https://developer.spotify.com/dashboard/)
2. Log in with your Spotify account
3. Click "Create app"
4. Fill in:
- App name (e.g., "My Spotizerr App")
- App description
- Redirect URI: `http://127.0.0.1:7171/callback` (or your custom domain if exposed)
- Check the Developer Terms agreement box
5. Click "Create"
6. On your app page, note your "Client ID"
7. Click "Show client secret" to reveal your "Client Secret"
8. Add these credentials in Spotizerr's settings page under the Spotify service section
Spotify is VERY petty, so, to simplify this part of the setup, another tool was created; see [spotizerr-auth](https://github.com/Xoconoch/spotizerr-auth)
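To sanity-check a Client ID/Client Secret pair outside Spotizerr, you can use the standard OAuth client-credentials flow against Spotify's documented token endpoint. The sketch below only builds the request (no network call is made):

```python
import base64

TOKEN_URL = "https://accounts.spotify.com/api/token"

def build_token_request(client_id: str, client_secret: str):
    """Build the OAuth client-credentials request Spotify expects:
    a Basic auth header carrying base64(client_id:client_secret) and a
    grant_type=client_credentials form body."""
    raw = f"{client_id}:{client_secret}".encode()
    headers = {
        "Authorization": "Basic " + base64.b64encode(raw).decode(),
        "Content-Type": "application/x-www-form-urlencoded",
    }
    body = {"grant_type": "client_credentials"}
    return TOKEN_URL, headers, body
```

POSTing this with any HTTP client should return a `200` with an `access_token` if the pair is valid.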
### Deezer ARL Setup
@@ -266,4 +161,4 @@ Copy that value and paste it into the corresponding setting in Spotizerr
# Acknowledgements
- This project is based on the amazing [deezspot library](https://github.com/jakiepari/deezspot), although their creators are in no way related with Spotizerr, they still deserve credit
- This project was inspired by the amazing [deezspot library](https://github.com/jakiepari/deezspot); although its creators are in no way affiliated with Spotizerr, they still deserve credit.


@@ -2,4 +2,4 @@ waitress==3.0.2
celery==5.5.3
Flask==3.1.1
flask_cors==6.0.0
deezspot-spotizerr==1.5.2
deezspot-spotizerr==1.7.0


@@ -4,45 +4,106 @@ from routes.utils.credentials import (
list_credentials,
create_credential,
delete_credential,
edit_credential
edit_credential,
init_credentials_db,
# Import new utility functions for global Spotify API creds
_get_global_spotify_api_creds,
save_global_spotify_api_creds
)
from pathlib import Path
import logging
logger = logging.getLogger(__name__)
credentials_bp = Blueprint('credentials', __name__)
# Initialize the database and tables when the blueprint is loaded
init_credentials_db()
@credentials_bp.route('/spotify_api_config', methods=['GET', 'PUT'])
def handle_spotify_api_config():
"""Handles GET and PUT requests for the global Spotify API client_id and client_secret."""
try:
if request.method == 'GET':
client_id, client_secret = _get_global_spotify_api_creds()
if client_id is not None and client_secret is not None:
return jsonify({"client_id": client_id, "client_secret": client_secret}), 200
else:
# If search.json exists but is empty/incomplete, or doesn't exist
return jsonify({
"warning": "Global Spotify API credentials are not fully configured or file is missing.",
"client_id": client_id or "",
"client_secret": client_secret or ""
}), 200
elif request.method == 'PUT':
data = request.get_json()
if not data or 'client_id' not in data or 'client_secret' not in data:
return jsonify({"error": "Request body must contain 'client_id' and 'client_secret'"}), 400
client_id = data['client_id']
client_secret = data['client_secret']
if not isinstance(client_id, str) or not isinstance(client_secret, str):
return jsonify({"error": "'client_id' and 'client_secret' must be strings"}), 400
if save_global_spotify_api_creds(client_id, client_secret):
return jsonify({"message": "Global Spotify API credentials updated successfully."}), 200
else:
return jsonify({"error": "Failed to save global Spotify API credentials."}), 500
except Exception as e:
logger.error(f"Error in /spotify_api_config: {e}", exc_info=True)
return jsonify({"error": f"An unexpected error occurred: {str(e)}"}), 500
@credentials_bp.route('/<service>', methods=['GET'])
def handle_list_credentials(service):
try:
if service not in ['spotify', 'deezer']:
return jsonify({"error": "Invalid service. Must be 'spotify' or 'deezer'"}), 400
return jsonify(list_credentials(service))
except ValueError as e:
except ValueError as e: # Should not happen with service check above
return jsonify({"error": str(e)}), 400
except Exception as e:
return jsonify({"error": str(e)}), 500
logger.error(f"Error listing credentials for {service}: {e}", exc_info=True)
return jsonify({"error": f"An unexpected error occurred: {str(e)}"}), 500
@credentials_bp.route('/<service>/<name>', methods=['GET', 'POST', 'PUT', 'DELETE'])
def handle_single_credential(service, name):
try:
# Get credential type from query parameters, default to 'credentials'
cred_type = request.args.get('type', 'credentials')
if cred_type not in ['credentials', 'search']:
return jsonify({"error": "Invalid credential type. Must be 'credentials' or 'search'"}), 400
if service not in ['spotify', 'deezer']:
return jsonify({"error": "Invalid service. Must be 'spotify' or 'deezer'"}), 400
# cred_type logic is removed for Spotify as API keys are global.
# For Deezer, it's always 'credentials' type implicitly.
if request.method == 'GET':
return jsonify(get_credential(service, name, cred_type))
# get_credential for Spotify now only returns region and blob_file_path
return jsonify(get_credential(service, name))
elif request.method == 'POST':
data = request.get_json()
create_credential(service, name, data, cred_type)
return jsonify({"message": f"{cred_type.capitalize()} credential created successfully"}), 201
if not data:
return jsonify({"error": "Request body cannot be empty."}), 400
# create_credential for Spotify now expects 'region' and 'blob_content'
# For Deezer, it expects 'arl' and 'region'
# Validation is handled within create_credential utility function
result = create_credential(service, name, data)
return jsonify({"message": f"Credential for '{name}' ({service}) created successfully.", "details": result}), 201
elif request.method == 'PUT':
data = request.get_json()
edit_credential(service, name, data, cred_type)
return jsonify({"message": f"{cred_type.capitalize()} credential updated successfully"})
if not data:
return jsonify({"error": "Request body cannot be empty."}), 400
# edit_credential for Spotify now handles updates to 'region', 'blob_content'
# For Deezer, 'arl', 'region'
result = edit_credential(service, name, data)
return jsonify({"message": f"Credential for '{name}' ({service}) updated successfully.", "details": result})
elif request.method == 'DELETE':
delete_credential(service, name, cred_type if cred_type != 'credentials' else None)
return jsonify({"message": f"{cred_type.capitalize()} credential deleted successfully"})
# delete_credential for Spotify also handles deleting the blob directory
result = delete_credential(service, name)
return jsonify({"message": f"Credential for '{name}' ({service}) deleted successfully.", "details": result})
except (ValueError, FileNotFoundError, FileExistsError) as e:
status_code = 400
@@ -50,66 +111,76 @@ def handle_single_credential(service, name):
status_code = 404
elif isinstance(e, FileExistsError):
status_code = 409
logger.warning(f"Client error in /<{service}>/<{name}>: {str(e)}")
return jsonify({"error": str(e)}), status_code
except Exception as e:
return jsonify({"error": str(e)}), 500
logger.error(f"Server error in /<{service}>/<{name}>: {e}", exc_info=True)
return jsonify({"error": f"An unexpected error occurred: {str(e)}"}), 500
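The new exception-to-status mapping in this handler can be summarized as a small pure function (a sketch for illustration; `status_for_exception` is not a name in the codebase):

```python
def status_for_exception(exc: Exception) -> int:
    """Map credential-handler exceptions to HTTP status codes, mirroring
    the route above: missing -> 404, duplicate -> 409, bad input -> 400,
    anything else -> 500."""
    if isinstance(exc, FileNotFoundError):
        return 404
    if isinstance(exc, FileExistsError):
        return 409
    if isinstance(exc, ValueError):
        return 400
    return 500
```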
@credentials_bp.route('/search/<service>/<name>', methods=['GET', 'POST', 'PUT'])
def handle_search_credential(service, name):
"""Special route specifically for search credentials"""
try:
if request.method == 'GET':
return jsonify(get_credential(service, name, 'search'))
elif request.method in ['POST', 'PUT']:
data = request.get_json()
# Validate required fields
if not data.get('client_id') or not data.get('client_secret'):
return jsonify({"error": "Both client_id and client_secret are required"}), 400
# For POST, first check if the credentials directory exists
if request.method == 'POST' and not any(Path(f'./data/{service}/{name}').glob('*.json')):
return jsonify({"error": f"Account '{name}' doesn't exist. Create it first."}), 404
# Create or update search credentials
method_func = create_credential if request.method == 'POST' else edit_credential
method_func(service, name, data, 'search')
action = "created" if request.method == 'POST' else "updated"
return jsonify({"message": f"Search credentials {action} successfully"})
except (ValueError, FileNotFoundError) as e:
status_code = 400 if isinstance(e, ValueError) else 404
return jsonify({"error": str(e)}), status_code
except Exception as e:
return jsonify({"error": str(e)}), 500
# The '/search/<service>/<name>' route is now obsolete for Spotify and has been removed.
@credentials_bp.route('/all/<service>', methods=['GET'])
def handle_all_credentials(service):
"""Lists all credentials for a given service. For Spotify, API keys are global and not listed per account."""
try:
credentials = []
for name in list_credentials(service):
# For each credential, get both the main credentials and search credentials if they exist
cred_data = {
"name": name,
"credentials": get_credential(service, name, 'credentials')
}
if service not in ['spotify', 'deezer']:
return jsonify({"error": "Invalid service. Must be 'spotify' or 'deezer'"}), 400
# For Spotify accounts, also try to get search credentials
if service == 'spotify':
try:
search_creds = get_credential(service, name, 'search')
if search_creds: # Only add if not empty
cred_data["search"] = search_creds
except:
pass # Ignore errors if search.json doesn't exist
credentials.append(cred_data)
credentials_list = []
account_names = list_credentials(service) # This lists names from DB
for name in account_names:
try:
# get_credential for Spotify returns region and blob_file_path.
# For Deezer, it returns arl and region.
account_data = get_credential(service, name)
# We don't add global Spotify API keys here as they are separate
credentials_list.append({"name": name, "details": account_data})
except FileNotFoundError:
logger.warning(f"Credential name '{name}' listed for service '{service}' but not found by get_credential. Skipping.")
except Exception as e_inner:
logger.error(f"Error fetching details for credential '{name}' ({service}): {e_inner}", exc_info=True)
credentials_list.append({"name": name, "error": f"Could not retrieve details: {str(e_inner)}"})
return jsonify(credentials)
except (ValueError, FileNotFoundError) as e:
status_code = 400 if isinstance(e, ValueError) else 404
return jsonify({"error": str(e)}), status_code
return jsonify(credentials_list)
except Exception as e:
return jsonify({"error": str(e)}), 500
logger.error(f"Error in /all/{service}: {e}", exc_info=True)
return jsonify({"error": f"An unexpected error occurred: {str(e)}"}), 500
@credentials_bp.route('/markets', methods=['GET'])
def handle_markets():
"""
Returns a list of unique market regions for Deezer and Spotify accounts.
"""
try:
deezer_regions = set()
spotify_regions = set()
# Process Deezer accounts
deezer_account_names = list_credentials('deezer')
for name in deezer_account_names:
try:
account_data = get_credential('deezer', name)
if account_data and 'region' in account_data and account_data['region']:
deezer_regions.add(account_data['region'])
except Exception as e:
logger.warning(f"Could not retrieve region for deezer account {name}: {e}")
# Process Spotify accounts
spotify_account_names = list_credentials('spotify')
for name in spotify_account_names:
try:
account_data = get_credential('spotify', name)
if account_data and 'region' in account_data and account_data['region']:
spotify_regions.add(account_data['region'])
except Exception as e:
logger.warning(f"Could not retrieve region for spotify account {name}: {e}")
return jsonify({
"deezer": sorted(list(deezer_regions)),
"spotify": sorted(list(spotify_regions))
}), 200
except Exception as e:
logger.error(f"Error in /markets: {e}", exc_info=True)
return jsonify({"error": f"An unexpected error occurred: {str(e)}"}), 500
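Stripped of the per-service plumbing, the aggregation done by `/markets` reduces to collecting non-empty regions and returning them sorted and de-duplicated; a minimal sketch (helper name is illustrative):

```python
def unique_regions(accounts):
    """Collect the non-empty 'region' values from a list of account
    dicts and return them sorted and de-duplicated, as /markets does
    for each service."""
    regions = {acc["region"] for acc in accounts if acc.get("region")}
    return sorted(regions)
```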


@@ -4,6 +4,8 @@ import traceback
from deezspot.spotloader import SpoLogin
from deezspot.deezloader import DeeLogin
from pathlib import Path
from routes.utils.credentials import get_credential, _get_global_spotify_api_creds, get_spotify_blob_path
from routes.utils.celery_config import get_config_params
def download_album(
url,
@@ -28,88 +30,49 @@ def download_album(
is_spotify_url = 'open.spotify.com' in url.lower()
is_deezer_url = 'deezer.com' in url.lower()
# Determine service exclusively from URL
service = ''
if is_spotify_url:
service = 'spotify'
elif is_deezer_url:
service = 'deezer'
else:
# If URL can't be detected, raise an error
error_msg = "Invalid URL: Must be from open.spotify.com or deezer.com"
print(f"ERROR: {error_msg}")
raise ValueError(error_msg)
print(f"DEBUG: album.py - URL detection: is_spotify_url={is_spotify_url}, is_deezer_url={is_deezer_url}")
print(f"DEBUG: album.py - Service determined from URL: {service}")
print(f"DEBUG: album.py - Credentials: main={main}, fallback={fallback}")
# Load Spotify client credentials if available
spotify_client_id = None
spotify_client_secret = None
# Smartly determine where to look for Spotify search credentials
if service == 'spotify' and fallback:
# If fallback is enabled, use the fallback account for Spotify search credentials
search_creds_path = Path(f'./data/creds/spotify/{fallback}/search.json')
print(f"DEBUG: Using Spotify search credentials from fallback: {search_creds_path}")
else:
# Otherwise use the main account for Spotify search credentials
search_creds_path = Path(f'./data/creds/spotify/{main}/search.json')
print(f"DEBUG: Using Spotify search credentials from main: {search_creds_path}")
if search_creds_path.exists():
try:
with open(search_creds_path, 'r') as f:
search_creds = json.load(f)
spotify_client_id = search_creds.get('client_id')
spotify_client_secret = search_creds.get('client_secret')
print(f"DEBUG: Loaded Spotify client credentials successfully")
except Exception as e:
print(f"Error loading Spotify search credentials: {e}")
# For Spotify URLs: check if fallback is enabled, if so use the fallback logic,
# otherwise download directly from Spotify
print(f"DEBUG: album.py - Credentials provided: main_account_name='{main}', fallback_account_name='{fallback}'")
# Get global Spotify API credentials
global_spotify_client_id, global_spotify_client_secret = _get_global_spotify_api_creds()
if not global_spotify_client_id or not global_spotify_client_secret:
warning_msg = "WARN: album.py - Global Spotify client_id/secret not found in search.json. Spotify operations will likely fail."
print(warning_msg)
if service == 'spotify':
if fallback:
if quality is None:
quality = 'FLAC'
if fall_quality is None:
fall_quality = 'HIGH'
if fallback: # Fallback is a Deezer account name for a Spotify URL
if quality is None: quality = 'FLAC' # Deezer quality for first attempt
if fall_quality is None: fall_quality = 'HIGH' # Spotify quality for fallback (if Deezer fails)
# First attempt: use DeeLogin's download_albumspo with the 'main' (Deezer credentials)
deezer_error = None
try:
# Load Deezer credentials from 'main' under deezer directory
deezer_creds_dir = os.path.join('./data/creds/deezer', main)
deezer_creds_path = os.path.abspath(os.path.join(deezer_creds_dir, 'credentials.json'))
# Attempt 1: Deezer via download_albumspo (using 'fallback' as Deezer account name)
print(f"DEBUG: album.py - Spotify URL. Attempt 1: Deezer (account: {fallback})")
deezer_fallback_creds = get_credential('deezer', fallback)
arl = deezer_fallback_creds.get('arl')
if not arl:
raise ValueError(f"ARL not found for Deezer account '{fallback}'.")
# DEBUG: Print Deezer credential paths being used
print(f"DEBUG: Looking for Deezer credentials at:")
print(f"DEBUG: deezer_creds_dir = {deezer_creds_dir}")
print(f"DEBUG: deezer_creds_path = {deezer_creds_path}")
print(f"DEBUG: Directory exists = {os.path.exists(deezer_creds_dir)}")
print(f"DEBUG: Credentials file exists = {os.path.exists(deezer_creds_path)}")
# List available directories to compare
print(f"DEBUG: Available Deezer credential directories:")
for dir_name in os.listdir('./data/creds/deezer'):
print(f"DEBUG: ./data/creds/deezer/{dir_name}")
with open(deezer_creds_path, 'r') as f:
deezer_creds = json.load(f)
# Initialize DeeLogin with Deezer credentials and Spotify client credentials
dl = DeeLogin(
arl=deezer_creds.get('arl', ''),
spotify_client_id=spotify_client_id,
spotify_client_secret=spotify_client_secret,
arl=arl,
spotify_client_id=global_spotify_client_id,
spotify_client_secret=global_spotify_client_secret,
progress_callback=progress_callback
)
print(f"DEBUG: Starting album download using Deezer credentials (download_albumspo)")
# Download using download_albumspo; pass real_time_dl accordingly and the custom formatting
dl.download_albumspo(
link_album=url,
link_album=url, # Spotify URL
output_dir="./downloads",
quality_download=quality,
quality_download=quality, # Deezer quality
recursive_quality=True,
recursive_download=False,
not_interface=False,
@@ -124,35 +87,32 @@ def download_album(
convert_to=convert_to,
bitrate=bitrate
)
print(f"DEBUG: Album download completed successfully using Deezer credentials")
print(f"DEBUG: album.py - Album download via Deezer (account: {fallback}) successful for Spotify URL.")
except Exception as e:
deezer_error = e
# Immediately report the Deezer error
print(f"ERROR: Deezer album download attempt failed: {e}")
print(f"ERROR: album.py - Deezer attempt (account: {fallback}) for Spotify URL failed: {e}")
traceback.print_exc()
print("Attempting Spotify fallback...")
print(f"DEBUG: album.py - Attempting Spotify direct download (account: {main} for blob)...")
# Load fallback Spotify credentials and attempt download
# Attempt 2: Spotify direct via download_album (using 'main' as Spotify account for blob)
try:
spo_creds_dir = os.path.join('./data/creds/spotify', fallback)
spo_creds_path = os.path.abspath(os.path.join(spo_creds_dir, 'credentials.json'))
print(f"DEBUG: Using Spotify fallback credentials from: {spo_creds_path}")
print(f"DEBUG: Fallback credentials exist: {os.path.exists(spo_creds_path)}")
# We've already loaded the Spotify client credentials above based on fallback
if not global_spotify_client_id or not global_spotify_client_secret:
raise ValueError("Global Spotify API credentials (client_id/secret) not configured for Spotify download.")
blob_file_path = get_spotify_blob_path(main)
if not blob_file_path or not blob_file_path.exists():
raise FileNotFoundError(f"Spotify credentials blob file not found or path is invalid for account '{main}'. Path: {str(blob_file_path)}")
spo = SpoLogin(
credentials_path=spo_creds_path,
spotify_client_id=spotify_client_id,
spotify_client_secret=spotify_client_secret,
credentials_path=str(blob_file_path), # Ensure it's a string
spotify_client_id=global_spotify_client_id,
spotify_client_secret=global_spotify_client_secret,
progress_callback=progress_callback
)
print(f"DEBUG: Starting album download using Spotify fallback credentials")
spo.download_album(
link_album=url,
link_album=url, # Spotify URL
output_dir="./downloads",
quality_download=fall_quality,
quality_download=fall_quality, # Spotify quality
recursive_quality=True,
recursive_download=False,
not_interface=False,
@@ -168,34 +128,34 @@ def download_album(
convert_to=convert_to,
bitrate=bitrate
)
print(f"DEBUG: Album download completed successfully using Spotify fallback")
print(f"DEBUG: album.py - Spotify direct download (account: {main} for blob) successful.")
except Exception as e2:
# If fallback also fails, raise an error indicating both attempts failed
print(f"ERROR: Spotify fallback also failed: {e2}")
print(f"ERROR: album.py - Spotify direct download (account: {main} for blob) also failed: {e2}")
raise RuntimeError(
f"Both main (Deezer) and fallback (Spotify) attempts failed. "
f"Both Deezer attempt (account: {fallback}) and Spotify direct (account: {main} for blob) failed. "
f"Deezer error: {deezer_error}, Spotify error: {e2}"
) from e2
else:
# Original behavior: use Spotify main
if quality is None:
quality = 'HIGH'
creds_dir = os.path.join('./data/creds/spotify', main)
credentials_path = os.path.abspath(os.path.join(creds_dir, 'credentials.json'))
print(f"DEBUG: Using Spotify main credentials from: {credentials_path}")
print(f"DEBUG: Credentials exist: {os.path.exists(credentials_path)}")
# Spotify URL, no fallback. Direct Spotify download using 'main' (Spotify account for blob)
if quality is None: quality = 'HIGH' # Default Spotify quality
print(f"DEBUG: album.py - Spotify URL, no fallback. Direct download with Spotify account (for blob): {main}")
if not global_spotify_client_id or not global_spotify_client_secret:
raise ValueError("Global Spotify API credentials (client_id/secret) not configured for Spotify download.")
blob_file_path = get_spotify_blob_path(main)
if not blob_file_path or not blob_file_path.exists():
raise FileNotFoundError(f"Spotify credentials blob file not found or path is invalid for account '{main}'. Path: {str(blob_file_path)}")
spo = SpoLogin(
credentials_path=credentials_path,
spotify_client_id=spotify_client_id,
spotify_client_secret=spotify_client_secret,
credentials_path=str(blob_file_path), # Ensure it's a string
spotify_client_id=global_spotify_client_id,
spotify_client_secret=global_spotify_client_secret,
progress_callback=progress_callback
)
print(f"DEBUG: Starting album download using Spotify main credentials")
spo.download_album(
link_album=url,
output_dir="./downloads",
quality_download=quality,
quality_download=quality,
recursive_quality=True,
recursive_download=False,
not_interface=False,
@@ -211,27 +171,24 @@ def download_album(
convert_to=convert_to,
bitrate=bitrate
)
print(f"DEBUG: Album download completed successfully using Spotify main")
# For Deezer URLs: download directly from Deezer
print(f"DEBUG: album.py - Direct Spotify download (account: {main} for blob) successful.")
elif service == 'deezer':
if quality is None:
quality = 'FLAC'
# Existing code remains the same, ignoring fallback
creds_dir = os.path.join('./data/creds/deezer', main)
creds_path = os.path.abspath(os.path.join(creds_dir, 'credentials.json'))
print(f"DEBUG: Using Deezer credentials from: {creds_path}")
print(f"DEBUG: Credentials exist: {os.path.exists(creds_path)}")
with open(creds_path, 'r') as f:
creds = json.load(f)
# Deezer URL. Direct Deezer download using 'main' (Deezer account name for ARL)
if quality is None: quality = 'FLAC' # Default Deezer quality
print(f"DEBUG: album.py - Deezer URL. Direct download with Deezer account: {main}")
deezer_main_creds = get_credential('deezer', main) # For ARL
arl = deezer_main_creds.get('arl')
if not arl:
raise ValueError(f"ARL not found for Deezer account '{main}'.")
dl = DeeLogin(
arl=creds.get('arl', ''),
spotify_client_id=spotify_client_id,
spotify_client_secret=spotify_client_secret,
arl=arl, # Account specific ARL
spotify_client_id=global_spotify_client_id, # Global Spotify keys
spotify_client_secret=global_spotify_client_secret, # Global Spotify keys
progress_callback=progress_callback
)
print(f"DEBUG: Starting album download using Deezer credentials (download_albumdee)")
dl.download_albumdee(
dl.download_albumdee( # Deezer URL, download via Deezer
link_album=url,
output_dir="./downloads",
quality_download=quality,
@@ -248,9 +205,10 @@ def download_album(
convert_to=convert_to,
bitrate=bitrate
)
print(f"DEBUG: Album download completed successfully using Deezer direct")
print(f"DEBUG: album.py - Direct Deezer download (account: {main}) successful.")
else:
raise ValueError(f"Unsupported service: {service}")
# Should be caught by initial service check, but as a safeguard
raise ValueError(f"Unsupported service determined: {service}")
except Exception as e:
print(f"ERROR: Album download failed with exception: {e}")
traceback.print_exc()
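The Deezer-then-Spotify fallback above follows a generic two-attempt pattern; a sketch with illustrative names (`primary`/`fallback` are any zero-argument callables, not functions from this module):

```python
def download_with_fallback(primary, fallback):
    """Try the primary downloader; if it raises, try the fallback and,
    if that also fails, raise a RuntimeError naming both causes, as the
    Spotify-URL path in download_album does."""
    try:
        return primary()
    except Exception as first_err:
        try:
            return fallback()
        except Exception as second_err:
            raise RuntimeError(
                f"Both attempts failed. First: {first_err}, second: {second_err}"
            ) from second_err
```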


@@ -6,6 +6,7 @@ import logging
from flask import Blueprint, Response, request, url_for
from routes.utils.celery_queue_manager import download_queue_manager, get_config_params
from routes.utils.get_info import get_spotify_info
from routes.utils.credentials import get_credential, _get_global_spotify_api_creds
from routes.utils.celery_tasks import get_last_task_status, ProgressState
from deezspot.easy_spoty import Spo
@@ -19,36 +20,42 @@ def log_json(message_dict):
print(json.dumps(message_dict))
def get_artist_discography(url, main, album_type='album,single,compilation,appears_on', progress_callback=None):
def get_artist_discography(url, main_spotify_account_name, album_type='album,single,compilation,appears_on', progress_callback=None):
"""
Validate the URL, extract the artist ID, and retrieve the discography.
Uses global Spotify API client_id/secret for Spo initialization.
Args:
url (str): Spotify artist URL.
main_spotify_account_name (str): Name of the Spotify account (for context/logging, not API keys for Spo.__init__).
album_type (str): Types of albums to fetch.
progress_callback: Optional callback for progress.
"""
if not url:
log_json({"status": "error", "message": "No artist URL provided."})
raise ValueError("No artist URL provided.")
# This will raise an exception if the link is invalid.
link_is_valid(link=url)
link_is_valid(link=url) # This will raise an exception if the link is invalid.
# Initialize Spotify API with credentials
spotify_client_id = None
spotify_client_secret = None
search_creds_path = Path(f'./data/creds/spotify/{main}/search.json')
if search_creds_path.exists():
try:
with open(search_creds_path, 'r') as f:
search_creds = json.load(f)
spotify_client_id = search_creds.get('client_id')
spotify_client_secret = search_creds.get('client_secret')
except Exception as e:
log_json({"status": "error", "message": f"Error loading Spotify search credentials: {e}"})
raise
client_id, client_secret = _get_global_spotify_api_creds()
if not client_id or not client_secret:
log_json({"status": "error", "message": "Global Spotify API client_id or client_secret not configured."})
raise ValueError("Global Spotify API credentials are not configured.")
# Initialize the Spotify client with credentials
if spotify_client_id and spotify_client_secret:
Spo.__init__(spotify_client_id, spotify_client_secret)
if not main_spotify_account_name:
# This is a warning now, as API keys are global.
logger.warning("main_spotify_account_name not provided for get_artist_discography context. Using global API keys.")
else:
raise ValueError("No Spotify credentials found")
# Check if account exists for context, good for consistency
try:
get_credential('spotify', main_spotify_account_name)
logger.debug(f"Spotify account context '{main_spotify_account_name}' exists for get_artist_discography.")
except FileNotFoundError:
logger.warning(f"Spotify account '{main_spotify_account_name}' provided for discography context not found.")
except Exception as e:
logger.warning(f"Error checking Spotify account '{main_spotify_account_name}' for discography context: {e}")
Spo.__init__(client_id, client_secret) # Initialize with global API keys
try:
artist_id = get_ids(url)
@@ -58,6 +65,11 @@ def get_artist_discography(url, main, album_type='album,single,compilation,appea
raise ValueError(msg)
try:
# The progress_callback is not a standard param for Spo.get_artist
# If Spo.get_artist is meant to be Spo.get_artist_discography, that would take limit/offset
# Assuming it's Spo.get_artist which takes artist_id and album_type.
# If progress_callback was for a different Spo method, this needs review.
# For now, removing progress_callback from this specific call as Spo.get_artist doesn't use it.
discography = Spo.get_artist(artist_id, album_type=album_type)
return discography
except Exception as fetch_error:


@@ -100,6 +100,10 @@ task_queues = {
'downloads': {
'exchange': 'downloads',
'routing_key': 'downloads',
},
'utility_tasks': {
'exchange': 'utility_tasks',
'routing_key': 'utility_tasks',
}
}


@@ -21,9 +21,11 @@ from .celery_tasks import (
cleanup_stale_errors,
delayed_delete_task_data
)
from .celery_config import get_config_params
from .celery_config import get_config_params, MAX_CONCURRENT_DL
# Import history manager
from .history_manager import init_history_db
# Import credentials manager for DB init
from .credentials import init_credentials_db
# Configure logging
logger = logging.getLogger(__name__)
@@ -39,384 +41,228 @@ class CeleryManager:
Manages Celery workers dynamically based on configuration changes.
"""
def __init__(self):
self.celery_process = None
self.current_worker_count = 0
self.monitoring_thread = None
self.error_cleanup_thread = None
self.running = False
self.log_queue = queue.Queue()
self.output_threads = []
def __init__(self, app_name="download_tasks"):
self.app_name = app_name
self.download_worker_process = None
self.utility_worker_process = None
self.download_log_thread_stdout = None
self.download_log_thread_stderr = None
self.utility_log_thread_stdout = None
self.utility_log_thread_stderr = None
self.stop_event = threading.Event()
self.config_monitor_thread = None
# self.concurrency now specifically refers to download worker concurrency
self.concurrency = get_config_params().get('maxConcurrentDownloads', MAX_CONCURRENT_DL)
logger.info(f"CeleryManager initialized. Download concurrency set to: {self.concurrency}")
def _cleanup_stale_tasks(self):
logger.info("Cleaning up potentially stale Celery tasks...")
def _get_worker_command(self, queues, concurrency, worker_name_suffix, log_level="INFO"):
# Use a unique worker name to avoid conflicts.
# %h is replaced by celery with the actual hostname.
hostname = f"worker_{worker_name_suffix}@%h"
command = [
"celery",
"-A", self.app_name,
"worker",
"--loglevel=" + log_level,
"-Q", queues,
"-c", str(concurrency),
"--hostname=" + hostname,
"--pool=prefork"
]
# Optionally add --without-gossip, --without-mingle, --without-heartbeat
# if experiencing issues or to reduce network load, but defaults are usually fine.
# Example: command.extend(["--without-gossip", "--without-mingle"])
logger.debug(f"Generated Celery command: {' '.join(command)}")
return command
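For clarity, the argv assembled by `_get_worker_command` for the download worker can be reproduced standalone. This is a sketch mirroring the diff's values, not the project's actual module:

```python
# Standalone restatement of the command builder; values mirror the
# diff's download worker ("download_tasks" is the __init__ default).
def build_worker_command(app_name, queues, concurrency, suffix, log_level="INFO"):
    hostname = f"worker_{suffix}@%h"  # Celery expands %h to the real hostname
    return [
        "celery", "-A", app_name, "worker",
        "--loglevel=" + log_level,
        "-Q", queues,
        "-c", str(concurrency),
        "--hostname=" + hostname,
        "--pool=prefork",
    ]

cmd = build_worker_command("download_tasks", "downloads", 3, "dlw")
print(" ".join(cmd))
# celery -A download_tasks worker --loglevel=INFO -Q downloads -c 3 --hostname=worker_dlw@%h --pool=prefork
```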
def _process_output_reader(self, stream, log_prefix, error=False):
logger.debug(f"Log reader thread started for {log_prefix}")
try:
tasks = get_all_celery_tasks_info()
if not tasks:
logger.info("No tasks found in Redis to check for staleness.")
return
active_stale_states = [
ProgressState.PROCESSING,
ProgressState.INITIALIZING,
ProgressState.DOWNLOADING,
ProgressState.PROGRESS,
ProgressState.REAL_TIME,
ProgressState.RETRYING
]
stale_tasks_count = 0
for task_summary in tasks:
task_id = task_summary.get("task_id")
if not task_id:
continue
last_status_data = get_last_task_status(task_id)
if last_status_data:
current_status_str = last_status_data.get("status")
if current_status_str in active_stale_states:
logger.warning(f"Task {task_id} ('{task_summary.get('name', 'Unknown')}') found in stale state '{current_status_str}'. Marking as error.")
task_info_details = get_task_info(task_id)
config = get_config_params()
error_payload = {
"status": ProgressState.ERROR,
"message": "Task interrupted due to application restart.",
"error": "Task interrupted due to application restart.",
"timestamp": time.time(),
"type": task_info_details.get("type", task_summary.get("type", "unknown")),
"name": task_info_details.get("name", task_summary.get("name", "Unknown")),
"artist": task_info_details.get("artist", task_summary.get("artist", "")),
"can_retry": True,
"retry_count": last_status_data.get("retry_count", 0),
"max_retries": config.get('maxRetries', 3)
}
store_task_status(task_id, error_payload)
stale_tasks_count += 1
# Schedule deletion for this interrupted task
logger.info(f"Task {task_id} was interrupted. Data scheduled for deletion in 30s.")
delayed_delete_task_data.apply_async(
args=[task_id, "Task interrupted by application restart and auto-cleaned."],
countdown=30
)
if stale_tasks_count > 0:
logger.info(f"Marked {stale_tasks_count} stale tasks as 'error'.")
for line in iter(stream.readline, ''):
if line:
log_method = logger.error if error else logger.info
log_method(f"{log_prefix}: {line.strip()}")
elif self.stop_event.is_set(): # If empty line and stop is set, likely EOF
break
# Loop may also exit if stream is closed by process termination
except ValueError:  # I/O operation on closed file
if not self.stop_event.is_set():
logger.error(f"Error reading Celery output from {log_prefix} (ValueError - stream closed unexpectedly?)", exc_info=False) # Don't print full trace for common close error
else:
logger.info("No stale tasks found that needed cleanup (active states).")
# NEW: Check for tasks that are already terminal but might have missed their cleanup
logger.info("Checking for terminal tasks (COMPLETE, CANCELLED, terminal ERROR) that might have missed cleanup...")
cleaned_during_this_pass = 0
# `tasks` variable is from `get_all_celery_tasks_info()` called at the beginning of the method
for task_summary in tasks:
task_id = task_summary.get("task_id")
if not task_id:
continue
last_status_data = get_last_task_status(task_id)
if last_status_data:
current_status_str = last_status_data.get("status")
task_info_details = get_task_info(task_id) # Get full info for download_type etc.
cleanup_reason = ""
schedule_cleanup = False
if current_status_str == ProgressState.COMPLETE:
# If a task is COMPLETE (any download_type) and still here, its original scheduled deletion was missed.
logger.warning(f"Task {task_id} ('{task_summary.get('name', 'Unknown')}', type: {task_info_details.get('download_type')}) is COMPLETE and still in Redis. Re-scheduling cleanup.")
cleanup_reason = f"Task ({task_info_details.get('download_type')}) was COMPLETE; re-scheduling auto-cleanup."
schedule_cleanup = True
elif current_status_str == ProgressState.CANCELLED:
logger.warning(f"Task {task_id} ('{task_summary.get('name', 'Unknown')}') is CANCELLED and still in Redis. Re-scheduling cleanup.")
cleanup_reason = "Task was CANCELLED; re-scheduling auto-cleanup."
schedule_cleanup = True
elif current_status_str == ProgressState.ERROR:
can_retry_flag = last_status_data.get("can_retry", False)
# is_submission_error_task and is_duplicate_error_task are flags on task_info, not typically on last_status
is_submission_error = task_info_details.get("is_submission_error_task", False)
is_duplicate_error = task_info_details.get("is_duplicate_error_task", False)
# Check if it's an error state that should have been cleaned up
if not can_retry_flag or is_submission_error or is_duplicate_error or last_status_data.get("status") == ProgressState.ERROR_RETRIED:
# ERROR_RETRIED means the original task is done and should be cleaned.
logger.warning(f"Task {task_id} ('{task_summary.get('name', 'Unknown')}') is in a terminal ERROR state ('{last_status_data.get('error')}') and still in Redis. Re-scheduling cleanup.")
cleanup_reason = f"Task was in terminal ERROR state ('{last_status_data.get('error', 'Unknown error')}'); re-scheduling auto-cleanup."
schedule_cleanup = True
elif current_status_str == ProgressState.ERROR_RETRIED:
# This state itself implies the task is terminal and its data can be cleaned.
logger.warning(f"Task {task_id} ('{task_summary.get('name', 'Unknown')}') is ERROR_RETRIED and still in Redis. Re-scheduling cleanup.")
cleanup_reason = "Task was ERROR_RETRIED; re-scheduling auto-cleanup."
schedule_cleanup = True
if schedule_cleanup:
delayed_delete_task_data.apply_async(
args=[task_id, cleanup_reason],
countdown=30 # Schedule with 30s delay
)
cleaned_during_this_pass +=1
if cleaned_during_this_pass > 0:
logger.info(f"Re-scheduled cleanup for {cleaned_during_this_pass} terminal tasks that were still in Redis.")
else:
logger.info("No additional terminal tasks found in Redis needing cleanup re-scheduling.")
logger.info(f"{log_prefix} stream reader gracefully stopped due to closed stream after stop signal.")
except Exception as e:
logger.error(f"Error during stale task cleanup: {e}", exc_info=True)
logger.error(f"Unexpected error in log reader for {log_prefix}: {e}", exc_info=True)
finally:
if hasattr(stream, 'close') and not stream.closed:
stream.close()
logger.info(f"{log_prefix} stream reader thread finished.")
def start(self):
"""Start the Celery manager and initial workers"""
if self.running:
return
self.running = True
# Initialize history database
init_history_db()
# Clean up stale tasks BEFORE starting/restarting workers
self._cleanup_stale_tasks()
# Start initial workers
self._update_workers()
# Start monitoring thread for config changes
self.monitoring_thread = threading.Thread(target=self._monitor_config, daemon=True)
self.monitoring_thread.start()
self.stop_event.clear() # Clear stop event before starting
# Start periodic error cleanup thread
self.error_cleanup_thread = threading.Thread(target=self._run_periodic_error_cleanup, daemon=True)
self.error_cleanup_thread.start()
# Start Download Worker
if self.download_worker_process and self.download_worker_process.poll() is None:
logger.info("Celery Download Worker is already running.")
else:
self.concurrency = get_config_params().get('maxConcurrentDownloads', self.concurrency)
download_cmd = self._get_worker_command(
queues="downloads",
concurrency=self.concurrency,
worker_name_suffix="dlw" # Download Worker
)
logger.info(f"Starting Celery Download Worker with command: {' '.join(download_cmd)}")
self.download_worker_process = subprocess.Popen(
download_cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True, bufsize=1, universal_newlines=True
)
self.download_log_thread_stdout = threading.Thread(target=self._process_output_reader, args=(self.download_worker_process.stdout, "Celery[DW-STDOUT]"))
self.download_log_thread_stderr = threading.Thread(target=self._process_output_reader, args=(self.download_worker_process.stderr, "Celery[DW-STDERR]", True))
self.download_log_thread_stdout.start()
self.download_log_thread_stderr.start()
logger.info(f"Celery Download Worker (PID: {self.download_worker_process.pid}) started with concurrency {self.concurrency}.")
# Start Utility Worker
if self.utility_worker_process and self.utility_worker_process.poll() is None:
logger.info("Celery Utility Worker is already running.")
else:
utility_cmd = self._get_worker_command(
queues="utility_tasks,default", # Listen to utility and default
concurrency=3,
worker_name_suffix="utw" # Utility Worker
)
logger.info(f"Starting Celery Utility Worker with command: {' '.join(utility_cmd)}")
self.utility_worker_process = subprocess.Popen(
utility_cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True, bufsize=1, universal_newlines=True
)
self.utility_log_thread_stdout = threading.Thread(target=self._process_output_reader, args=(self.utility_worker_process.stdout, "Celery[UW-STDOUT]"))
self.utility_log_thread_stderr = threading.Thread(target=self._process_output_reader, args=(self.utility_worker_process.stderr, "Celery[UW-STDERR]", True))
self.utility_log_thread_stdout.start()
self.utility_log_thread_stderr.start()
logger.info(f"Celery Utility Worker (PID: {self.utility_worker_process.pid}) started with concurrency 3.")
if self.config_monitor_thread is None or not self.config_monitor_thread.is_alive():
self.config_monitor_thread = threading.Thread(target=self._monitor_config_changes)
self.config_monitor_thread.daemon = True # Allow main program to exit even if this thread is running
self.config_monitor_thread.start()
logger.info("CeleryManager: Config monitor thread started.")
else:
logger.info("CeleryManager: Config monitor thread already running.")
def _monitor_config_changes(self):
logger.info("CeleryManager: Config monitor thread active, monitoring configuration changes...")
while not self.stop_event.is_set():
try:
time.sleep(10) # Check every 10 seconds
if self.stop_event.is_set(): break
current_config = get_config_params()
new_max_concurrent_downloads = current_config.get('maxConcurrentDownloads', self.concurrency)
if new_max_concurrent_downloads != self.concurrency:
logger.info(f"CeleryManager: Detected change in maxConcurrentDownloads from {self.concurrency} to {new_max_concurrent_downloads}. Restarting download worker only.")
# Stop only the download worker
if self.download_worker_process and self.download_worker_process.poll() is None:
logger.info(f"Stopping Celery Download Worker (PID: {self.download_worker_process.pid}) for config update...")
self.download_worker_process.terminate()
try:
self.download_worker_process.wait(timeout=10)
logger.info(f"Celery Download Worker (PID: {self.download_worker_process.pid}) terminated.")
except subprocess.TimeoutExpired:
logger.warning(f"Celery Download Worker (PID: {self.download_worker_process.pid}) did not terminate gracefully, killing.")
self.download_worker_process.kill()
self.download_worker_process = None
# Wait for log threads of download worker to finish
if self.download_log_thread_stdout and self.download_log_thread_stdout.is_alive():
self.download_log_thread_stdout.join(timeout=5)
if self.download_log_thread_stderr and self.download_log_thread_stderr.is_alive():
self.download_log_thread_stderr.join(timeout=5)
self.concurrency = new_max_concurrent_downloads
# Restart only the download worker
download_cmd = self._get_worker_command("downloads", self.concurrency, "dlw")
logger.info(f"Restarting Celery Download Worker with command: {' '.join(download_cmd)}")
self.download_worker_process = subprocess.Popen(
download_cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True, bufsize=1, universal_newlines=True
)
self.download_log_thread_stdout = threading.Thread(target=self._process_output_reader, args=(self.download_worker_process.stdout, "Celery[DW-STDOUT]"))
self.download_log_thread_stderr = threading.Thread(target=self._process_output_reader, args=(self.download_worker_process.stderr, "Celery[DW-STDERR]", True))
self.download_log_thread_stdout.start()
self.download_log_thread_stderr.start()
logger.info(f"Celery Download Worker (PID: {self.download_worker_process.pid}) restarted with new concurrency {self.concurrency}.")
except Exception as e:
logger.error(f"CeleryManager: Error in config monitor thread: {e}", exc_info=True)
# Avoid busy-looping on continuous errors
if not self.stop_event.is_set(): time.sleep(30)
logger.info("CeleryManager: Config monitor thread stopped.")
# Register shutdown handler
atexit.register(self.stop)
def _stop_worker_process(self, worker_process, worker_name):
if worker_process and worker_process.poll() is None:
logger.info(f"Terminating Celery {worker_name} Worker (PID: {worker_process.pid})...")
worker_process.terminate()
try:
worker_process.wait(timeout=10)
logger.info(f"Celery {worker_name} Worker (PID: {worker_process.pid}) terminated.")
except subprocess.TimeoutExpired:
logger.warning(f"Celery {worker_name} Worker (PID: {worker_process.pid}) did not terminate gracefully, killing.")
worker_process.kill()
return None # Set process to None after stopping
def stop(self):
"""Stop the Celery manager and all workers"""
self.running = False
logger.info("CeleryManager: Stopping Celery workers...")
self.stop_event.set() # Signal all threads to stop
# Stop download worker
self.download_worker_process = self._stop_worker_process(self.download_worker_process, "Download")
# Stop all running threads
for thread in self.output_threads:
if thread.is_alive():
# We can't really stop the threads, but they'll exit on their own
# when the process is terminated since they're daemon threads
pass
# Stop utility worker
self.utility_worker_process = self._stop_worker_process(self.utility_worker_process, "Utility")
logger.info("Joining log threads...")
thread_timeout = 5 # seconds to wait for log threads
# Join download worker log threads
if self.download_log_thread_stdout and self.download_log_thread_stdout.is_alive():
self.download_log_thread_stdout.join(timeout=thread_timeout)
if self.download_log_thread_stderr and self.download_log_thread_stderr.is_alive():
self.download_log_thread_stderr.join(timeout=thread_timeout)
# Join utility worker log threads
if self.utility_log_thread_stdout and self.utility_log_thread_stdout.is_alive():
self.utility_log_thread_stdout.join(timeout=thread_timeout)
if self.utility_log_thread_stderr and self.utility_log_thread_stderr.is_alive():
self.utility_log_thread_stderr.join(timeout=thread_timeout)
if self.config_monitor_thread and self.config_monitor_thread.is_alive():
logger.info("Joining config_monitor_thread...")
self.config_monitor_thread.join(timeout=thread_timeout)
if self.celery_process:
logger.info("Stopping Celery workers...")
try:
# Send SIGTERM to process group
os.killpg(os.getpgid(self.celery_process.pid), signal.SIGTERM)
self.celery_process.wait(timeout=5)
except (subprocess.TimeoutExpired, ProcessLookupError):
# Force kill if not terminated
try:
os.killpg(os.getpgid(self.celery_process.pid), signal.SIGKILL)
except ProcessLookupError:
pass
self.celery_process = None
self.current_worker_count = 0
def _get_worker_count(self):
"""Get the configured worker count from config file"""
try:
if not Path(CONFIG_PATH).exists():
return 3 # Default
with open(CONFIG_PATH, 'r') as f:
config = json.load(f)
return int(config.get('maxConcurrentDownloads', 3))
except Exception as e:
logger.error(f"Error reading worker count from config: {e}")
return 3 # Default on error
def _update_workers(self):
"""Update workers if needed based on configuration"""
new_worker_count = self._get_worker_count()
if new_worker_count == self.current_worker_count and self.celery_process and self.celery_process.poll() is None:
return # No change and process is running
logger.info(f"Updating Celery workers from {self.current_worker_count} to {new_worker_count}")
# Stop existing workers if running
if self.celery_process:
try:
logger.info("Stopping existing Celery workers...")
os.killpg(os.getpgid(self.celery_process.pid), signal.SIGTERM)
self.celery_process.wait(timeout=5)
except (subprocess.TimeoutExpired, ProcessLookupError):
try:
logger.warning("Forcibly killing Celery workers with SIGKILL")
os.killpg(os.getpgid(self.celery_process.pid), signal.SIGKILL)
except ProcessLookupError:
pass
# Clear output threads list
self.output_threads = []
# Wait a moment to ensure processes are terminated
time.sleep(2)
# Additional cleanup - find and kill any stray Celery processes
try:
# This runs a shell command to find and kill all celery processes
subprocess.run(
"ps aux | grep 'celery -A routes.utils.celery_tasks.celery_app worker' | grep -v grep | awk '{print $2}' | xargs -r kill -9",
shell=True,
stderr=subprocess.PIPE
)
logger.info("Killed any stray Celery processes")
# Wait a moment to ensure processes are terminated
logger.info("CeleryManager: All workers and threads signaled to stop and joined.")
def restart(self):
logger.info("CeleryManager: Restarting all Celery workers...")
self.stop()
# Short delay before restarting
logger.info("Waiting a brief moment before restarting workers...")
time.sleep(2)
self.start()
logger.info("CeleryManager: All Celery workers restarted.")
# Global instance for managing Celery workers
celery_manager = CeleryManager()
# Example of how to use the manager (typically called from your main app script)
if __name__ == '__main__':
logging.basicConfig(level=logging.INFO, format='%(asctime)s [%(levelname)s] [%(threadName)s] [%(name)s] - %(message)s')
logger.info("Starting Celery Manager example...")
celery_manager.start()
try:
while True:
time.sleep(1)
except Exception as e:
logger.error(f"Error during stray process cleanup: {e}")
# Start new workers with updated concurrency
try:
# Set environment variables to configure Celery logging
env = os.environ.copy()
env['PYTHONUNBUFFERED'] = '1' # Ensure Python output is unbuffered
# Construct command with extra logging options
cmd = [
'celery',
'-A', CELERY_APP,
'worker',
'--loglevel=info',
f'--concurrency={new_worker_count}',
'-Q', 'downloads,default',
'--logfile=-', # Output logs to stdout
'--without-heartbeat', # Reduce log noise
'--without-gossip', # Reduce log noise
'--without-mingle', # Reduce log noise
# Add unique worker name to prevent conflicts
f'--hostname=worker@%h-{uuid.uuid4()}'
]
logger.info(f"Starting new Celery workers with command: {' '.join(cmd)}")
self.celery_process = subprocess.Popen(
cmd,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
env=env,
preexec_fn=os.setsid, # New process group for clean termination
universal_newlines=True,
bufsize=1 # Line buffered
)
self.current_worker_count = new_worker_count
logger.info(f"Started Celery workers with concurrency {new_worker_count}, PID: {self.celery_process.pid}")
# Verify the process started correctly
time.sleep(2)
if self.celery_process.poll() is not None:
# Process exited prematurely
stdout, stderr = "", ""
try:
stdout, stderr = self.celery_process.communicate(timeout=1)
except subprocess.TimeoutExpired:
pass
logger.error(f"Celery workers failed to start. Exit code: {self.celery_process.poll()}")
logger.error(f"Stdout: {stdout}")
logger.error(f"Stderr: {stderr}")
self.celery_process = None
raise RuntimeError("Celery workers failed to start")
# Start non-blocking output reader threads for both stdout and stderr
stdout_thread = threading.Thread(
target=self._process_output_reader,
args=(self.celery_process.stdout, "STDOUT"),
daemon=True
)
stdout_thread.start()
self.output_threads.append(stdout_thread)
stderr_thread = threading.Thread(
target=self._process_output_reader,
args=(self.celery_process.stderr, "STDERR"),
daemon=True
)
stderr_thread.start()
self.output_threads.append(stderr_thread)
except Exception as e:
logger.error(f"Error starting Celery workers: {e}")
# In case of failure, make sure we don't leave orphaned processes
if self.celery_process and self.celery_process.poll() is None:
try:
os.killpg(os.getpgid(self.celery_process.pid), signal.SIGKILL)
except (ProcessLookupError, OSError):
pass
self.celery_process = None
def _process_output_reader(self, pipe, stream_name):
"""Read and log output from the process"""
try:
for line in iter(pipe.readline, ''):
if not line:
break
line = line.strip()
if not line:
continue
# Format the message to identify it's from Celery
if "ERROR" in line or "CRITICAL" in line:
logger.error(f"Celery[{stream_name}]: {line}")
elif "WARNING" in line:
logger.warning(f"Celery[{stream_name}]: {line}")
elif "DEBUG" in line:
logger.debug(f"Celery[{stream_name}]: {line}")
else:
logger.info(f"Celery[{stream_name}]: {line}")
except Exception as e:
logger.error(f"Error processing Celery output: {e}")
finally:
pipe.close()
def _monitor_config(self):
"""Monitor configuration file for changes"""
logger.info("Starting config monitoring thread")
last_check_time = 0
while self.running:
try:
# Check for changes
if time.time() - last_check_time >= CONFIG_CHECK_INTERVAL:
self._update_workers()
last_check_time = time.time()
time.sleep(1)
except Exception as e:
logger.error(f"Error in config monitoring thread: {e}")
time.sleep(5) # Wait before retrying
def _run_periodic_error_cleanup(self):
"""Periodically triggers the cleanup_stale_errors Celery task."""
cleanup_interval = 60 # Run cleanup task every 60 seconds
logger.info(f"Starting periodic error cleanup scheduler (runs every {cleanup_interval}s).")
while self.running:
try:
logger.info("Scheduling cleanup_stale_errors task...")
cleanup_stale_errors.delay() # Call the Celery task
except Exception as e:
logger.error(f"Error scheduling cleanup_stale_errors task: {e}", exc_info=True)
# Wait for the next interval
# Use a loop to check self.running more frequently to allow faster shutdown
for _ in range(cleanup_interval):
if not self.running:
break
time.sleep(1)
logger.info("Periodic error cleanup scheduler stopped.")
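The one-second sleep loop above exists so shutdown is not delayed by the full interval. The same "periodic job, responsive shutdown" pattern can be expressed with `threading.Event.wait(timeout=...)`, which wakes the instant the stop flag is set. A stdlib-only sketch (names are illustrative, not the project's API):

```python
import threading
import time

stop_event = threading.Event()
runs = []

def periodic_cleanup(interval=0.05):
    # Run the job, then wait; wait() returns True as soon as
    # stop_event is set, so shutdown never waits out the interval.
    while True:
        runs.append(time.monotonic())  # stand-in for cleanup_stale_errors.delay()
        if stop_event.wait(timeout=interval):
            break  # stop requested

t = threading.Thread(target=periodic_cleanup, daemon=True)
t.start()
time.sleep(0.2)
stop_event.set()
t.join(timeout=1.0)
print(len(runs) >= 2, t.is_alive())
```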
# Create single instance
celery_manager = CeleryManager()
except KeyboardInterrupt:
logger.info("Keyboard interrupt received, stopping Celery Manager...")
finally:
celery_manager.stop()
logger.info("Celery Manager example finished.")


@@ -931,11 +931,15 @@ def task_prerun_handler(task_id=None, task=None, *args, **kwargs):
def task_postrun_handler(task_id=None, task=None, retval=None, state=None, *args, **kwargs):
"""Signal handler when a task finishes"""
try:
# Define download task names
download_task_names = ["download_track", "download_album", "download_playlist"]
last_status_for_history = get_last_task_status(task_id)
if last_status_for_history and last_status_for_history.get("status") in [ProgressState.COMPLETE, ProgressState.ERROR, ProgressState.CANCELLED, "ERROR_RETRIED", "ERROR_AUTO_CLEANED"]:
if state == states.REVOKED and last_status_for_history.get("status") != ProgressState.CANCELLED:
logger.info(f"Task {task_id} was REVOKED (likely cancelled), logging to history.")
-                _log_task_to_history(task_id, 'CANCELLED', "Task was revoked/cancelled.")
+                if task and task.name in download_task_names:  # Check if it's a download task
+                    _log_task_to_history(task_id, 'CANCELLED', "Task was revoked/cancelled.")
# return # Let status update proceed if necessary
task_info = get_task_info(task_id)
@@ -952,7 +956,8 @@ def task_postrun_handler(task_id=None, task=None, retval=None, state=None, *args
"message": "Download completed successfully."
})
logger.info(f"Task {task_id} completed successfully: {task_info.get('name', 'Unknown')}")
-                _log_task_to_history(task_id, 'COMPLETED')
+                if task and task.name in download_task_names:  # Check if it's a download task
+                    _log_task_to_history(task_id, 'COMPLETED')
if task_info.get("download_type") == "track": # Applies to single track downloads and tracks from playlists/albums
delayed_delete_task_data.apply_async(
@@ -999,12 +1004,15 @@ def task_postrun_handler(task_id=None, task=None, retval=None, state=None, *args
logger.error(f"Error in task_postrun_handler: {e}", exc_info=True)
@task_failure.connect
-def task_failure_handler(task_id=None, exception=None, traceback=None, *args, **kwargs):
+def task_failure_handler(task_id=None, exception=None, traceback=None, sender=None, *args, **kwargs):
"""Signal handler when a task fails"""
try:
# Skip if Retry exception
if isinstance(exception, Retry):
return
# Define download task names
download_task_names = ["download_track", "download_album", "download_playlist"]
# Get task info and status
task_info = get_task_info(task_id)
@@ -1038,7 +1046,8 @@ def task_failure_handler(task_id=None, exception=None, traceback=None, *args, **
})
logger.error(f"Task {task_id} failed: {str(exception)}")
-        _log_task_to_history(task_id, 'ERROR', str(exception))
+        if sender and sender.name in download_task_names:  # Check if it's a download task
+            _log_task_to_history(task_id, 'ERROR', str(exception))
if can_retry:
logger.info(f"Task {task_id} can be retried ({retry_count}/{max_retries})")
@@ -1346,7 +1355,7 @@ def delete_task_data_and_log(task_id, reason="Task data deleted"):
logger.error(f"Error deleting task data for {task_id}: {e}", exc_info=True)
return False
-@celery_app.task(name="cleanup_stale_errors", queue="default")  # Put on default queue, not downloads
+@celery_app.task(name="cleanup_stale_errors", queue="utility_tasks")  # Put on utility_tasks queue
def cleanup_stale_errors():
"""
Periodically checks for tasks in ERROR state for more than 1 minute and cleans them up.
@@ -1385,7 +1394,7 @@ def cleanup_stale_errors():
logger.error(f"Error during cleanup_stale_errors: {e}", exc_info=True)
return {"status": "error", "error": str(e)}
-@celery_app.task(name="delayed_delete_task_data", queue="default")  # Use default queue for utility tasks
+@celery_app.task(name="delayed_delete_task_data", queue="utility_tasks")  # Use utility_tasks queue
def delayed_delete_task_data(task_id, reason):
"""
Celery task to delete task data after a delay.


@@ -1,447 +1,467 @@
import json
from pathlib import Path
import shutil
from deezspot.spotloader import SpoLogin
from deezspot.deezloader import DeeLogin
import sqlite3
import traceback # For logging detailed error messages
import time # For retry delays
import logging
-def _get_spotify_search_creds(creds_dir: Path):
-    """Helper to load client_id and client_secret from search.json for a Spotify account."""
-    search_file = creds_dir / 'search.json'
-    if search_file.exists():
# Assuming deezspot is in a location findable by Python's import system
# from deezspot.spotloader import SpoLogin # Used in validation
# from deezspot.deezloader import DeeLogin # Used in validation
# For now, as per original, validation calls these directly.
logger = logging.getLogger(__name__) # Assuming logger is configured elsewhere
# --- New Database and Path Definitions ---
CREDS_BASE_DIR = Path('./data/creds')
ACCOUNTS_DB_PATH = CREDS_BASE_DIR / 'accounts.db'
BLOBS_DIR = CREDS_BASE_DIR / 'blobs'
GLOBAL_SEARCH_JSON_PATH = CREDS_BASE_DIR / 'search.json' # Global Spotify API creds
EXPECTED_SPOTIFY_TABLE_COLUMNS = {
"name": "TEXT PRIMARY KEY",
# client_id and client_secret are now global
"region": "TEXT", # ISO 3166-1 alpha-2
"created_at": "REAL",
"updated_at": "REAL"
}
EXPECTED_DEEZER_TABLE_COLUMNS = {
"name": "TEXT PRIMARY KEY",
"arl": "TEXT",
"region": "TEXT", # ISO 3166-1 alpha-2
"created_at": "REAL",
"updated_at": "REAL"
}
def _get_db_connection():
ACCOUNTS_DB_PATH.parent.mkdir(parents=True, exist_ok=True)
BLOBS_DIR.mkdir(parents=True, exist_ok=True) # Ensure blobs directory also exists
conn = sqlite3.connect(ACCOUNTS_DB_PATH, timeout=10)
conn.row_factory = sqlite3.Row
return conn
def _ensure_table_schema(cursor: sqlite3.Cursor, table_name: str, expected_columns: dict):
"""Ensures the given table has all expected columns, adding them if necessary."""
try:
cursor.execute(f"PRAGMA table_info({table_name})")
existing_columns_info = cursor.fetchall()
existing_column_names = {col[1] for col in existing_columns_info}
added_columns = False
for col_name, col_type in expected_columns.items():
if col_name not in existing_column_names:
# Basic protection against altering PK after creation if table is not empty
if 'PRIMARY KEY' in col_type.upper() and existing_columns_info:
logger.warning(
f"Column '{col_name}' is part of PRIMARY KEY for table '{table_name}' "
f"and was expected to be created by CREATE TABLE. Skipping explicit ADD COLUMN."
)
continue
col_type_for_add = col_type.replace(' PRIMARY KEY', '').strip()
try:
cursor.execute(f"ALTER TABLE {table_name} ADD COLUMN {col_name} {col_type_for_add}")
logger.info(f"Added missing column '{col_name} {col_type_for_add}' to table '{table_name}'.")
added_columns = True
except sqlite3.OperationalError as alter_e:
logger.warning(
f"Could not add column '{col_name}' to table '{table_name}': {alter_e}. "
f"It might already exist with a different definition or there's another schema mismatch."
)
return added_columns
except sqlite3.Error as e:
logger.error(f"Error ensuring schema for table '{table_name}': {e}", exc_info=True)
return False
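The PRAGMA-then-ALTER approach in `_ensure_table_schema` can be exercised standalone against an in-memory database. A minimal sketch of the same pattern (illustrative, not the project's exact code):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE deezer (name TEXT PRIMARY KEY, arl TEXT)")  # old schema

expected = {
    "name": "TEXT PRIMARY KEY",
    "arl": "TEXT",
    "region": "TEXT",       # ISO 3166-1 alpha-2
    "created_at": "REAL",
}
# PRAGMA table_info reports one row per column; column name is field 1.
cur.execute("PRAGMA table_info(deezer)")
existing = {row[1] for row in cur.fetchall()}

for col, col_type in expected.items():
    if col not in existing:
        # ALTER TABLE cannot add a PRIMARY KEY column; strip the constraint.
        cur.execute(f"ALTER TABLE deezer ADD COLUMN {col} "
                    f"{col_type.replace(' PRIMARY KEY', '').strip()}")

cur.execute("PRAGMA table_info(deezer)")
cols = sorted(row[1] for row in cur.fetchall())
print(cols)  # ['arl', 'created_at', 'name', 'region']
```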
def init_credentials_db():
"""Initializes the accounts.db and its tables if they don't exist."""
try:
with _get_db_connection() as conn:
cursor = conn.cursor()
# Spotify Table
cursor.execute("""
CREATE TABLE IF NOT EXISTS spotify (
name TEXT PRIMARY KEY,
region TEXT,
created_at REAL,
updated_at REAL
)
""")
_ensure_table_schema(cursor, "spotify", EXPECTED_SPOTIFY_TABLE_COLUMNS)
# Deezer Table
cursor.execute("""
CREATE TABLE IF NOT EXISTS deezer (
name TEXT PRIMARY KEY,
arl TEXT,
region TEXT,
created_at REAL,
updated_at REAL
)
""")
_ensure_table_schema(cursor, "deezer", EXPECTED_DEEZER_TABLE_COLUMNS)
# Ensure global search.json exists, create if not
if not GLOBAL_SEARCH_JSON_PATH.exists():
logger.info(f"Global Spotify search credential file not found at {GLOBAL_SEARCH_JSON_PATH}. Creating empty file.")
with open(GLOBAL_SEARCH_JSON_PATH, 'w') as f_search:
json.dump({"client_id": "", "client_secret": ""}, f_search, indent=4)
conn.commit()
logger.info(f"Credentials database initialized/schema checked at {ACCOUNTS_DB_PATH}")
except sqlite3.Error as e:
logger.error(f"Error initializing credentials database: {e}", exc_info=True)
raise
def _get_global_spotify_api_creds():
"""Loads client_id and client_secret from the global search.json."""
if GLOBAL_SEARCH_JSON_PATH.exists():
try:
-            with open(search_file, 'r') as f:
+            with open(GLOBAL_SEARCH_JSON_PATH, 'r') as f:
                 search_data = json.load(f)
-                return search_data.get('client_id'), search_data.get('client_secret')
-        except Exception:
-            # Log error if search.json is malformed or unreadable
-            print(f"Warning: Could not read Spotify search credentials from {search_file}")
-            traceback.print_exc()
-        return None, None
client_id = search_data.get('client_id')
client_secret = search_data.get('client_secret')
if client_id and client_secret:
return client_id, client_secret
else:
logger.warning(f"Global Spotify API credentials in {GLOBAL_SEARCH_JSON_PATH} are incomplete.")
except Exception as e:
logger.error(f"Error reading global Spotify API credentials from {GLOBAL_SEARCH_JSON_PATH}: {e}", exc_info=True)
else:
logger.warning(f"Global Spotify API credential file {GLOBAL_SEARCH_JSON_PATH} not found.")
return None, None # Return None if file doesn't exist or creds are incomplete/invalid
-def _validate_with_retry(service_name, account_name, creds_dir_path, cred_file_path, data_for_validation, is_spotify):
+def save_global_spotify_api_creds(client_id: str, client_secret: str):
"""Saves client_id and client_secret to the global search.json."""
try:
GLOBAL_SEARCH_JSON_PATH.parent.mkdir(parents=True, exist_ok=True)
with open(GLOBAL_SEARCH_JSON_PATH, 'w') as f:
json.dump({"client_id": client_id, "client_secret": client_secret}, f, indent=4)
logger.info(f"Global Spotify API credentials saved to {GLOBAL_SEARCH_JSON_PATH}")
return True
except Exception as e:
logger.error(f"Error saving global Spotify API credentials to {GLOBAL_SEARCH_JSON_PATH}: {e}", exc_info=True)
return False
def _validate_with_retry(service_name, account_name, validation_data):
"""
Attempts to validate credentials with retries for connection errors.
validation_data (dict): For Spotify, expects {'blob_file_path': ...}; the API
client_id/client_secret are read from the global search.json.
For Deezer, expects {'arl': ...}.
Returns True if validated, raises ValueError if not.
"""
# Deezspot imports need to be available. Assuming they are.
from deezspot.spotloader import SpoLogin
from deezspot.deezloader import DeeLogin
max_retries = 3
last_exception = None
for attempt in range(max_retries):
try:
if service_name == 'spotify':
# For Spotify, validation uses the account's blob and GLOBAL API creds
global_client_id, global_client_secret = _get_global_spotify_api_creds()
if not global_client_id or not global_client_secret:
raise ValueError("Global Spotify API client_id or client_secret not configured for validation.")
blob_file_path = validation_data.get('blob_file_path')
if not blob_file_path or not Path(blob_file_path).exists():
raise ValueError(f"Spotify blob file missing for validation of account {account_name}")
SpoLogin(credentials_path=str(blob_file_path), spotify_client_id=global_client_id, spotify_client_secret=global_client_secret)
else: # Deezer
arl = validation_data.get('arl')
if not arl:
# This should be caught by prior checks, but as a safeguard:
raise ValueError("Missing 'arl' for Deezer validation.")
DeeLogin(arl=arl)
logger.info(f"{service_name.capitalize()} credentials for {account_name} validated successfully (attempt {attempt + 1}).")
return True
except Exception as e:
last_exception = e
error_str = str(e).lower()
# More comprehensive check for connection-related errors
is_connection_error = (
"connection refused" in error_str or "connection error" in error_str or
"timeout" in error_str or "temporary failure in name resolution" in error_str or
"dns lookup failed" in error_str or "network is unreachable" in error_str or
"ssl handshake failed" in error_str or "connection reset by peer" in error_str
)
if is_connection_error and attempt < max_retries - 1:
retry_delay = 2 + attempt
logger.warning(f"Validation for {account_name} ({service_name}) failed (attempt {attempt + 1}) due to connection issue: {e}. Retrying in {retry_delay}s...")
time.sleep(retry_delay)
continue
else:
logger.error(f"Validation for {account_name} ({service_name}) failed on attempt {attempt + 1} (non-retryable or max retries).")
break
# If loop finished without returning True, validation failed
if last_exception:
base_error_message = str(last_exception).splitlines()[-1]
detailed_error_message = f"Invalid {service_name} credentials for {account_name}. Verification failed: {base_error_message}"
if service_name == 'spotify' and "incorrect padding" in base_error_message.lower():
detailed_error_message += ". Hint: For Spotify, ensure the credentials blob content is correct."
raise ValueError(detailed_error_message)
else:
raise ValueError(f"Invalid {service_name} credentials for {account_name}. Verification failed (unknown reason after retries).")
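The retry loop above retries only connection-style failures, with an increasing delay. A minimal standalone sketch of that pattern (names like `validate_with_retry` and `CONNECTION_MARKERS` are illustrative; the injected `sleep` just makes it testable):

```python
import time

CONNECTION_MARKERS = (
    "connection refused", "connection error", "timeout",
    "temporary failure in name resolution", "dns lookup failed",
    "network is unreachable", "ssl handshake failed",
    "connection reset by peer",
)

def validate_with_retry(validate, max_retries=3, sleep=time.sleep):
    """Retry `validate` only on connection-style errors, waiting 2s, 3s, ... between tries."""
    last_exc = None
    for attempt in range(max_retries):
        try:
            validate()
            return True
        except Exception as e:
            last_exc = e
            msg = str(e).lower()
            if any(m in msg for m in CONNECTION_MARKERS) and attempt < max_retries - 1:
                sleep(2 + attempt)  # increasing delay, as in the loop above
                continue
            break  # non-retryable error, or retries exhausted
    raise ValueError(f"Validation failed: {last_exc}")

# A validator that fails twice with a transient error, then succeeds:
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("connection reset by peer")

validate_with_retry(flaky, sleep=lambda s: None)
print(calls["n"])  # 3
```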
def create_credential(service, name, data):
"""
Creates a new credential.
Args:
service (str): 'spotify' or 'deezer'
name (str): Custom name for the credential
data (dict): For Spotify: {'region', 'blob_content'} (client_id/secret come from the global search.json)
For Deezer: {'arl', 'region'}
Raises:
ValueError, FileExistsError
"""
if service not in ['spotify', 'deezer']:
raise ValueError("Service must be 'spotify' or 'deezer'")
if not name or not isinstance(name, str):
raise ValueError("Credential name must be a non-empty string.")
current_time = time.time()
with _get_db_connection() as conn:
cursor = conn.cursor()
conn.row_factory = sqlite3.Row
try:
if service == 'spotify':
required_fields = {'region', 'blob_content'} # client_id/secret are global
if not required_fields.issubset(data.keys()):
raise ValueError(f"Missing fields for Spotify. Required: {required_fields}")
blob_path = BLOBS_DIR / name / 'credentials.json'
validation_data = {'blob_file_path': str(blob_path)} # Validation uses global API creds
blob_path.parent.mkdir(parents=True, exist_ok=True)
with open(blob_path, 'w') as f_blob:
if isinstance(data['blob_content'], dict):
json.dump(data['blob_content'], f_blob, indent=4)
else: # assume string
f_blob.write(data['blob_content'])
try:
_validate_with_retry('spotify', name, validation_data)
cursor.execute(
"INSERT INTO spotify (name, region, created_at, updated_at) VALUES (?, ?, ?, ?)",
(name, data['region'], current_time, current_time)
)
except Exception as e:
if blob_path.exists(): blob_path.unlink() # Cleanup blob
if blob_path.parent.exists() and not any(blob_path.parent.iterdir()): blob_path.parent.rmdir()
raise # Re-raise validation or DB error
elif service == 'deezer':
required_fields = {'arl', 'region'}
if not required_fields.issubset(data.keys()):
raise ValueError(f"Missing fields for Deezer. Required: {required_fields}")
validation_data = {'arl': data['arl']}
_validate_with_retry('deezer', name, validation_data)
cursor.execute(
"INSERT INTO deezer (name, arl, region, created_at, updated_at) VALUES (?, ?, ?, ?, ?)",
(name, data['arl'], data['region'], current_time, current_time)
)
conn.commit()
logger.info(f"Credential '{name}' for {service} created successfully.")
return {"status": "created", "service": service, "name": name}
except sqlite3.IntegrityError:
raise FileExistsError(f"Credential '{name}' already exists for {service}.")
except Exception as e:
logger.error(f"Error creating credential {name} for {service}: {e}", exc_info=True)
raise ValueError(f"Could not create credential: {e}")
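`create_credential` maps a duplicate name onto `FileExistsError` via SQLite's `IntegrityError` on the `name` primary key. A reduced, self-contained sketch of that DB step (the table schema and `create_account` helper are illustrative stand-ins, assuming `name` is the primary key as the INSERTs above imply):

```python
import sqlite3

def create_account(conn, name, region):
    """Unique-name insert; a duplicate surfaces as FileExistsError, as in create_credential."""
    try:
        conn.execute(
            "INSERT INTO spotify (name, region) VALUES (?, ?)", (name, region)
        )
        conn.commit()
        return {"status": "created", "name": name}
    except sqlite3.IntegrityError:
        # PRIMARY KEY violation -> the account name is already taken
        raise FileExistsError(f"Credential '{name}' already exists.")

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE spotify (name TEXT PRIMARY KEY, region TEXT)")
print(create_account(conn, "alice", "US"))  # {'status': 'created', 'name': 'alice'}
try:
    create_account(conn, "alice", "US")
except FileExistsError as e:
    print(e)  # Credential 'alice' already exists.
```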
def get_credential(service, name):
"""
Retrieves a specific credential by name.
For Spotify, returns dict with name, region, and blob_content (from file).
For Deezer, returns dict with name, arl, and region.
Raises FileNotFoundError if the credential does not exist.
"""
if service not in ['spotify', 'deezer']:
raise ValueError("Service must be 'spotify' or 'deezer'")
with _get_db_connection() as conn:
cursor = conn.cursor()
conn.row_factory = sqlite3.Row # Ensure row_factory is set for this cursor
cursor.execute(f"SELECT * FROM {service} WHERE name = ?", (name,))
row = cursor.fetchone()
if not row:
raise FileNotFoundError(f"No {service} credential found with name '{name}'")
data = dict(row)
if service == 'spotify':
blob_file_path = BLOBS_DIR / name / 'credentials.json'
data['blob_file_path'] = str(blob_file_path) # Keep for internal use
try:
with open(blob_file_path, 'r') as f_blob:
blob_data = json.load(f_blob)
data['blob_content'] = blob_data
except FileNotFoundError:
logger.warning(f"Spotify blob file not found for {name} at {blob_file_path} during get_credential.")
data['blob_content'] = None
except json.JSONDecodeError:
logger.warning(f"Error decoding JSON from Spotify blob file for {name} at {blob_file_path}.")
data['blob_content'] = None
except Exception as e:
logger.error(f"Unexpected error reading Spotify blob for {name}: {e}", exc_info=True)
data['blob_content'] = None
cleaned_data = {
'name': data.get('name'),
'region': data.get('region'),
'blob_content': data.get('blob_content'),
'blob_file_path': data.get('blob_file_path') # Kept for internal use (e.g., edit_credential)
}
return cleaned_data
elif service == 'deezer':
cleaned_data = {
'name': data.get('name'),
'region': data.get('region'),
'arl': data.get('arl')
}
return cleaned_data
# Fallback, should not be reached if service is spotify or deezer
return None
def list_credentials(service):
if service not in ['spotify', 'deezer']:
raise ValueError("Service must be 'spotify' or 'deezer'")
with _get_db_connection() as conn:
cursor = conn.cursor()
conn.row_factory = sqlite3.Row
cursor.execute(f"SELECT name FROM {service}")
return [row['name'] for row in cursor.fetchall()]
def delete_credential(service, name):
if service not in ['spotify', 'deezer']:
raise ValueError("Service must be 'spotify' or 'deezer'")
with _get_db_connection() as conn:
cursor = conn.cursor()
conn.row_factory = sqlite3.Row
cursor.execute(f"DELETE FROM {service} WHERE name = ?", (name,))
if cursor.rowcount == 0:
raise FileNotFoundError(f"Credential '{name}' not found for {service}.")
if service == 'spotify':
blob_dir = BLOBS_DIR / name
if blob_dir.exists():
shutil.rmtree(blob_dir)
conn.commit()
logger.info(f"Credential '{name}' for {service} deleted.")
return {"status": "deleted", "service": service, "name": name}
def edit_credential(service, name, new_data):
"""
Edits an existing credential.
new_data for Spotify can include: client_id, client_secret, region, blob_content.
new_data for Deezer can include: arl, region.
Fields not in new_data remain unchanged.
"""
if service not in ['spotify', 'deezer']:
raise ValueError("Service must be 'spotify' or 'deezer'")
current_time = time.time()
# Fetch existing data first to preserve unchanged fields and for validation backup
try:
existing_cred = get_credential(service, name) # This will raise FileNotFoundError if not found
except FileNotFoundError:
raise
except Exception as e: # Catch other errors from get_credential
raise ValueError(f"Could not retrieve existing credential {name} for edit: {e}")
updated_fields = new_data.copy()
with _get_db_connection() as conn:
cursor = conn.cursor()
conn.row_factory = sqlite3.Row
if service == 'spotify':
# Prepare data for DB update
db_update_data = {
'region': updated_fields.get('region', existing_cred['region']),
'updated_at': current_time,
'name': name # for WHERE clause
}
blob_path = Path(existing_cred['blob_file_path']) # Use path from existing
original_blob_content = None
if blob_path.exists():
with open(blob_path, 'r') as f_orig_blob:
original_blob_content = f_orig_blob.read()
# If blob_content is being updated, write it temporarily for validation
if 'blob_content' in updated_fields:
blob_path.parent.mkdir(parents=True, exist_ok=True)
with open(blob_path, 'w') as f_new_blob:
if isinstance(updated_fields['blob_content'], dict):
json.dump(updated_fields['blob_content'], f_new_blob, indent=4)
else:
f_new_blob.write(updated_fields['blob_content'])
validation_data = {'blob_file_path': str(blob_path)}
try:
_validate_with_retry('spotify', name, validation_data)
set_clause = ", ".join([f"{key} = ?" for key in db_update_data if key != 'name'])
values = [db_update_data[key] for key in db_update_data if key != 'name'] + [name]
cursor.execute(f"UPDATE spotify SET {set_clause} WHERE name = ?", tuple(values))
# If validation passed and blob was in new_data, it's already written.
# If blob_content was NOT in new_data, the existing blob (if any) remains.
except Exception as e:
# Revert blob if it was changed and validation failed
if 'blob_content' in updated_fields and original_blob_content is not None:
with open(blob_path, 'w') as f_revert_blob:
f_revert_blob.write(original_blob_content)
elif 'blob_content' in updated_fields and original_blob_content is None and blob_path.exists():
# If new blob was written but there was no original to revert to, delete the new one.
blob_path.unlink()
raise # Re-raise validation or DB error
elif service == 'deezer':
db_update_data = {
'arl': updated_fields.get('arl', existing_cred['arl']),
'region': updated_fields.get('region', existing_cred['region']),
'updated_at': current_time,
'name': name # for WHERE clause
}
validation_data = {'arl': db_update_data['arl']}
_validate_with_retry('deezer', name, validation_data) # Validation happens before DB write for Deezer
set_clause = ", ".join([f"{key} = ?" for key in db_update_data if key != 'name'])
values = [db_update_data[key] for key in db_update_data if key != 'name'] + [name]
cursor.execute(f"UPDATE deezer SET {set_clause} WHERE name = ?", tuple(values))
if cursor.rowcount == 0: # Should not happen if get_credential succeeded
raise FileNotFoundError(f"Credential '{name}' for {service} disappeared during edit.")
conn.commit()
logger.info(f"Credential '{name}' for {service} updated successfully.")
return {"status": "updated", "service": service, "name": name}
# --- Helper for credential file path (mainly for Spotify blob) ---
def get_spotify_blob_path(account_name: str) -> Path:
return BLOBS_DIR / account_name / 'credentials.json'
# It's good practice to call init_credentials_db() when the app starts.
# This can be done in the main application setup. For now, defining it here.
# If this script is run directly for setup, you could add:
# if __name__ == '__main__':
# init_credentials_db()
# print("Credentials database initialized.")
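`edit_credential` builds its UPDATE statement dynamically so only the supplied fields change. A self-contained sketch of that partial-update pattern (`update_row` is an illustrative stand-in; note the table and column names must come from trusted code, since only the values are parameterized):

```python
import sqlite3

def update_row(conn, table, name, fields):
    """Dynamic SET clause, as in edit_credential: untouched columns keep their values."""
    set_clause = ", ".join(f"{key} = ?" for key in fields)
    values = list(fields.values()) + [name]
    cur = conn.execute(f"UPDATE {table} SET {set_clause} WHERE name = ?", values)
    if cur.rowcount == 0:
        raise FileNotFoundError(f"'{name}' not found in {table}")
    conn.commit()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE deezer (name TEXT PRIMARY KEY, arl TEXT, region TEXT)")
conn.execute("INSERT INTO deezer VALUES ('bob', 'old-arl', 'FR')")
update_row(conn, "deezer", "bob", {"arl": "new-arl"})
print(conn.execute("SELECT arl, region FROM deezer WHERE name='bob'").fetchone())
# ('new-arl', 'FR')  -- region untouched
```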


@@ -4,48 +4,63 @@ from deezspot.easy_spoty import Spo
import json
from pathlib import Path
from routes.utils.celery_queue_manager import get_config_params
from routes.utils.credentials import get_credential, _get_global_spotify_api_creds
# Import Deezer API and logging
from deezspot.deezloader.dee_api import API as DeezerAPI
import logging
# Initialize logger
logger = logging.getLogger(__name__)
# We'll rely on get_config_params() instead of directly loading the config file
def get_spotify_info(spotify_id, spotify_type, limit=None, offset=None):
"""
Get info from Spotify API. Uses global client_id/secret from search.json.
The default Spotify account from main.json might still be relevant for other Spo settings or if Spo uses it.
Args:
spotify_id: The Spotify ID of the entity
spotify_type: The type of entity (track, album, playlist, artist, artist_discography, episode)
limit (int, optional): The maximum number of items to return. Only used if spotify_type is "artist_discography".
offset (int, optional): The index of the first item to return. Only used if spotify_type is "artist_discography".
Returns:
Dictionary with the entity information
"""
client_id, client_secret = _get_global_spotify_api_creds()
if not client_id or not client_secret:
raise ValueError("Global Spotify API client_id or client_secret not configured in ./data/creds/search.json.")
# Get config parameters including default Spotify account name
# This might still be useful if Spo uses the account name for other things (e.g. market/region if not passed explicitly)
# For now, we are just ensuring the API keys are set.
config_params = get_config_params()
main_spotify_account_name = config_params.get('spotify', '') # Still good to know which account is 'default' contextually
if not main_spotify_account_name:
# This is less critical now that API keys are global, but could indicate a misconfiguration
# if other parts of Spo expect an account context.
logger.warning("No default Spotify account name configured in settings (main.json). API calls will use global keys.")
# Optionally, one could load the specific account's region here if Spo.init or methods need it,
# but easy_spoty's Spo doesn't seem to take region directly in __init__.
# It might use it internally based on account details if credentials.json (blob) contains it.
try:
# We call get_credential just to check if the account exists,
# not for client_id/secret anymore for Spo.__init__
get_credential('spotify', main_spotify_account_name)
except FileNotFoundError:
# This is a more serious warning if an account is expected to exist.
logger.warning(f"Default Spotify account '{main_spotify_account_name}' configured in main.json was not found in credentials database.")
except Exception as e:
logger.warning(f"Error accessing default Spotify account '{main_spotify_account_name}': {e}")
# Initialize the Spotify client with GLOBAL credentials
Spo.__init__(client_id, client_secret)
if spotify_type == "track":
return Spo.get_track(spotify_id)
elif spotify_type == "album":
@@ -67,3 +82,58 @@ def get_spotify_info(spotify_id, spotify_type, limit=None, offset=None):
return Spo.get_episode(spotify_id)
else:
raise ValueError(f"Unsupported Spotify type: {spotify_type}")
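`get_spotify_info` routes each `spotify_type` through an if/elif chain to the matching `Spo` method. The same routing can be sketched with a dispatch table (all names below are illustrative; the real code calls `Spo.get_track`, `Spo.get_album`, etc.):

```python
def get_info(entity_id, entity_type, handlers):
    """Dispatch-table version of get_spotify_info's if/elif routing."""
    try:
        handler = handlers[entity_type]
    except KeyError:
        # Same failure mode as the final else branch above
        raise ValueError(f"Unsupported type: {entity_type}")
    return handler(entity_id)

# Illustrative handlers; the real code maps these to Spo.get_track, Spo.get_album, ...
handlers = {
    "track": lambda i: {"type": "track", "id": i},
    "album": lambda i: {"type": "album", "id": i},
}
print(get_info("42", "track", handlers))  # {'type': 'track', 'id': '42'}
```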
def get_deezer_info(deezer_id, deezer_type, limit=None):
"""
Get info from Deezer API.
Args:
deezer_id: The Deezer ID of the entity.
deezer_type: The type of entity (track, album, playlist, artist, episode,
artist_top_tracks, artist_albums, artist_related,
artist_radio, artist_playlists).
limit (int, optional): The maximum number of items to return. Used for
artist_top_tracks, artist_albums, artist_playlists.
Deezer API methods usually have their own defaults (e.g., 25)
if limit is not provided or None is passed to them.
Returns:
Dictionary with the entity information.
Raises:
ValueError: If deezer_type is unsupported.
Various exceptions from DeezerAPI (NoDataApi, QuotaExceeded, requests.exceptions.RequestException, etc.)
"""
logger.debug(f"Fetching Deezer info for ID {deezer_id}, type {deezer_type}, limit {limit}")
# DeezerAPI uses class methods; its @classmethod __init__ handles setup.
# No specific ARL or account handling here as DeezerAPI seems to use general endpoints.
if deezer_type == "track":
return DeezerAPI.get_track(deezer_id)
elif deezer_type == "album":
return DeezerAPI.get_album(deezer_id)
elif deezer_type == "playlist":
return DeezerAPI.get_playlist(deezer_id)
elif deezer_type == "artist":
return DeezerAPI.get_artist(deezer_id)
elif deezer_type == "episode":
return DeezerAPI.get_episode(deezer_id)
elif deezer_type == "artist_top_tracks":
if limit is not None:
return DeezerAPI.get_artist_top_tracks(deezer_id, limit=limit)
return DeezerAPI.get_artist_top_tracks(deezer_id) # Use API default limit
elif deezer_type == "artist_albums": # Maps to get_artist_top_albums
if limit is not None:
return DeezerAPI.get_artist_top_albums(deezer_id, limit=limit)
return DeezerAPI.get_artist_top_albums(deezer_id) # Use API default limit
elif deezer_type == "artist_related":
return DeezerAPI.get_artist_related(deezer_id)
elif deezer_type == "artist_radio":
return DeezerAPI.get_artist_radio(deezer_id)
elif deezer_type == "artist_playlists":
if limit is not None:
return DeezerAPI.get_artist_top_playlists(deezer_id, limit=limit)
return DeezerAPI.get_artist_top_playlists(deezer_id) # Use API default limit
else:
logger.error(f"Unsupported Deezer type: {deezer_type}")
raise ValueError(f"Unsupported Deezer type: {deezer_type}")
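The elif ladder above is a plain dispatch; the same routing can be expressed as a table of handlers. A minimal sketch with stand-in lambdas in place of the real DeezerAPI class methods (the handler names and return tuples are illustrative only, not the library's API):

```python
# Table-driven version of the elif dispatch above, with stand-in lambdas in
# place of the real DeezerAPI class methods (names/returns are illustrative).
HANDLERS = {
    'track': lambda deezer_id, limit=None: ('get_track', deezer_id),
    'artist_top_tracks': lambda deezer_id, limit=None: ('get_artist_top_tracks', deezer_id, limit),
}

def get_deezer_info_sketch(deezer_id, deezer_type, limit=None):
    handler = HANDLERS.get(deezer_type)
    if handler is None:
        raise ValueError(f"Unsupported Deezer type: {deezer_type}")
    if limit is not None:
        return handler(deezer_id, limit=limit)
    return handler(deezer_id)  # let the API default limit apply

print(get_deezer_info_sketch(27, 'artist_top_tracks', limit=5))
# ('get_artist_top_tracks', 27, 5)
```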

View File

@@ -9,13 +9,40 @@ logger = logging.getLogger(__name__)
HISTORY_DIR = Path('./data/history')
HISTORY_DB_FILE = HISTORY_DIR / 'download_history.db'
EXPECTED_COLUMNS = {
'task_id': 'TEXT PRIMARY KEY',
'download_type': 'TEXT',
'item_name': 'TEXT',
'item_artist': 'TEXT',
'item_album': 'TEXT',
'item_url': 'TEXT',
'spotify_id': 'TEXT',
'status_final': 'TEXT', # 'COMPLETED', 'ERROR', 'CANCELLED'
'error_message': 'TEXT',
'timestamp_added': 'REAL',
'timestamp_completed': 'REAL',
'original_request_json': 'TEXT',
'last_status_obj_json': 'TEXT',
'service_used': 'TEXT',
'quality_profile': 'TEXT',
'convert_to': 'TEXT',
'bitrate': 'TEXT'
}
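EXPECTED_COLUMNS feeds the column-migration logic in init_history_db below. The pattern (read PRAGMA table_info, then ALTER TABLE ... ADD COLUMN for anything missing) can be sketched in isolation against an in-memory database; the column set here is abbreviated for the example:

```python
import sqlite3

# Standalone sketch of the migration pattern used by init_history_db:
# compare PRAGMA table_info against an expected schema and ALTER TABLE
# for anything missing. Column set is abbreviated for the example.
EXPECTED = {
    'task_id': 'TEXT PRIMARY KEY',
    'status_final': 'TEXT',
    'service_used': 'TEXT',
}

def ensure_columns(conn, table, expected):
    cur = conn.cursor()
    cur.execute(f"PRAGMA table_info({table})")
    existing = {row[1] for row in cur.fetchall()}  # row[1] is the column name
    added = []
    for name, col_type in expected.items():
        if name in existing:
            continue
        if 'PRIMARY KEY' in col_type.upper():
            continue  # SQLite cannot ADD a PRIMARY KEY column; CREATE TABLE owns it
        cur.execute(f"ALTER TABLE {table} ADD COLUMN {name} {col_type}")
        added.append(name)
    conn.commit()
    return added

conn = sqlite3.connect(':memory:')
conn.execute("CREATE TABLE download_history (task_id TEXT PRIMARY KEY, status_final TEXT)")
print(ensure_columns(conn, 'download_history', EXPECTED))  # ['service_used']
```

Running it a second time is a no-op, which is what makes the startup-time check safe.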
def init_history_db():
"""Initializes the download history database, creates the table if it doesn't exist,
and adds any missing columns to an existing table."""
conn = None
try:
HISTORY_DIR.mkdir(parents=True, exist_ok=True)
conn = sqlite3.connect(HISTORY_DB_FILE)
cursor = conn.cursor()
# Create table if it doesn't exist (idempotent)
# The primary key constraint is handled by the initial CREATE TABLE.
# If 'task_id' is missing, it cannot be added as PRIMARY KEY to an existing table
# without complex migrations. We assume 'task_id' will exist if the table exists.
create_table_sql = f"""
CREATE TABLE IF NOT EXISTS download_history (
task_id TEXT PRIMARY KEY,
download_type TEXT,
@@ -24,7 +51,7 @@ def init_history_db():
item_album TEXT,
item_url TEXT,
spotify_id TEXT,
status_final TEXT,
error_message TEXT,
timestamp_added REAL,
timestamp_completed REAL,
@@ -35,9 +62,48 @@ def init_history_db():
convert_to TEXT,
bitrate TEXT
)
"""
cursor.execute(create_table_sql)
conn.commit()
logger.info(f"Download history database initialized at {HISTORY_DB_FILE}")
# Check for missing columns and add them
cursor.execute("PRAGMA table_info(download_history)")
existing_columns_info = cursor.fetchall()
existing_column_names = {col[1] for col in existing_columns_info}
added_columns = False
for col_name, col_type in EXPECTED_COLUMNS.items():
if col_name not in existing_column_names:
if 'PRIMARY KEY' in col_type.upper() and col_name == 'task_id':
# CREATE TABLE should have created this column. SQLite cannot add a
# PRIMARY KEY column to an existing table via ALTER TABLE, and a full
# table-rebuild migration is out of scope here, so skip it and warn.
logger.warning(f"Column '{col_name}' is part of PRIMARY KEY and was expected to be created by CREATE TABLE. Skipping explicit ADD COLUMN.")
continue
# For other columns, just add them.
# Remove PRIMARY KEY from type definition if present, as it's only for table creation.
col_type_for_add = col_type.replace(' PRIMARY KEY', '').strip()
try:
cursor.execute(f"ALTER TABLE download_history ADD COLUMN {col_name} {col_type_for_add}")
logger.info(f"Added missing column '{col_name} {col_type_for_add}' to download_history table.")
added_columns = True
except sqlite3.OperationalError as alter_e:
# This can happen if the column already exists (e.g. task_id, which
# CREATE TABLE defined as the primary key) or on other schema mismatches.
logger.warning(f"Could not add column '{col_name}': {alter_e}. It might already exist or there's a schema mismatch.")
if added_columns:
conn.commit()
logger.info(f"Download history table schema updated at {HISTORY_DB_FILE}")
else:
logger.info(f"Download history database schema is up-to-date at {HISTORY_DB_FILE}")
except sqlite3.Error as e:
logger.error(f"Error initializing download history database: {e}", exc_info=True)
finally:

View File

@@ -4,6 +4,8 @@ import traceback
from deezspot.spotloader import SpoLogin
from deezspot.deezloader import DeeLogin
from pathlib import Path
from routes.utils.credentials import get_credential, _get_global_spotify_api_creds
from routes.utils.celery_config import get_config_params
def download_playlist(
url,
@@ -28,83 +30,49 @@ def download_playlist(
is_spotify_url = 'open.spotify.com' in url.lower()
is_deezer_url = 'deezer.com' in url.lower()
# Determine service exclusively from URL
service = ''
if is_spotify_url:
service = 'spotify'
elif is_deezer_url:
service = 'deezer'
else:
# If URL can't be detected, raise an error
error_msg = "Invalid URL: Must be from open.spotify.com or deezer.com"
print(f"ERROR: {error_msg}")
raise ValueError(error_msg)
print(f"DEBUG: playlist.py - URL detection: is_spotify_url={is_spotify_url}, is_deezer_url={is_deezer_url}")
print(f"DEBUG: playlist.py - Service determined from URL: {service}")
print(f"DEBUG: playlist.py - Credentials provided: main_account_name='{main}', fallback_account_name='{fallback}'")
# Get global Spotify API credentials
global_spotify_client_id, global_spotify_client_secret = _get_global_spotify_api_creds()
if not global_spotify_client_id or not global_spotify_client_secret:
warning_msg = "WARN: playlist.py - Global Spotify client_id/secret not found in search.json. Spotify operations will likely fail."
print(warning_msg)
if service == 'spotify':
if fallback: # Fallback is a Deezer account name for a Spotify URL
if quality is None: quality = 'FLAC' # Deezer quality for first attempt
if fall_quality is None: fall_quality = 'HIGH' # Spotify quality for fallback (if Deezer fails)
# Attempt 1: Deezer via download_playlistspo (using 'fallback' as Deezer account name)
deezer_error = None
try:
print(f"DEBUG: playlist.py - Spotify URL. Attempt 1: Deezer (account: {fallback})")
deezer_fallback_creds = get_credential('deezer', fallback)
arl = deezer_fallback_creds.get('arl')
if not arl:
raise ValueError(f"ARL not found for Deezer account '{fallback}'.")
dl = DeeLogin(
arl=arl,
spotify_client_id=global_spotify_client_id,
spotify_client_secret=global_spotify_client_secret,
progress_callback=progress_callback
)
# Download using download_playlistspo; pass the custom formatting parameters.
dl.download_playlistspo(
link_playlist=url, # Spotify URL
output_dir="./downloads",
quality_download=quality, # Deezer quality
recursive_quality=True,
recursive_download=False,
not_interface=False,
@@ -119,35 +87,33 @@ def download_playlist(
convert_to=convert_to,
bitrate=bitrate
)
print(f"DEBUG: playlist.py - Playlist download via Deezer (account: {fallback}) successful for Spotify URL.")
except Exception as e:
deezer_error = e
# Immediately report the Deezer error
print(f"ERROR: playlist.py - Deezer attempt (account: {fallback}) for Spotify URL failed: {e}")
traceback.print_exc()
print(f"DEBUG: playlist.py - Attempting Spotify direct download (account: {main} for blob)...")
# Attempt 2: Spotify direct via download_playlist (using 'main' as Spotify account for blob)
try:
if not global_spotify_client_id or not global_spotify_client_secret:
raise ValueError("Global Spotify API credentials (client_id/secret) not configured for Spotify download.")
spotify_main_creds = get_credential('spotify', main) # For blob path
blob_file_path = spotify_main_creds.get('blob_file_path')
if not Path(blob_file_path).exists():
raise FileNotFoundError(f"Spotify credentials blob file not found at {blob_file_path} for account '{main}'")
spo = SpoLogin(
credentials_path=blob_file_path,
spotify_client_id=global_spotify_client_id,
spotify_client_secret=global_spotify_client_secret,
progress_callback=progress_callback
)
spo.download_playlist(
link_playlist=url, # Spotify URL
output_dir="./downloads",
quality_download=fall_quality, # Spotify quality
recursive_quality=True,
recursive_download=False,
not_interface=False,
@@ -163,34 +129,36 @@ def download_playlist(
convert_to=convert_to,
bitrate=bitrate
)
print(f"DEBUG: playlist.py - Spotify direct download (account: {main} for blob) successful.")
except Exception as e2:
# If fallback also fails, raise an error indicating both attempts failed
print(f"ERROR: playlist.py - Spotify direct download (account: {main} for blob) also failed: {e2}")
raise RuntimeError(
f"Both Deezer attempt (account: {fallback}) and Spotify direct (account: {main} for blob) failed. "
f"Deezer error: {deezer_error}, Spotify error: {e2}"
) from e2
else:
# Spotify URL, no fallback. Direct Spotify download using 'main' (Spotify account for blob)
if quality is None: quality = 'HIGH' # Default Spotify quality
print(f"DEBUG: playlist.py - Spotify URL, no fallback. Direct download with Spotify account (for blob): {main}")
if not global_spotify_client_id or not global_spotify_client_secret:
raise ValueError("Global Spotify API credentials (client_id/secret) not configured for Spotify download.")
spotify_main_creds = get_credential('spotify', main) # For blob path
blob_file_path = spotify_main_creds.get('blob_file_path')
if not Path(blob_file_path).exists():
raise FileNotFoundError(f"Spotify credentials blob file not found at {blob_file_path} for account '{main}'")
spo = SpoLogin(
credentials_path=blob_file_path,
spotify_client_id=global_spotify_client_id,
spotify_client_secret=global_spotify_client_secret,
progress_callback=progress_callback
)
spo.download_playlist(
link_playlist=url,
output_dir="./downloads",
quality_download=quality,
recursive_quality=True,
recursive_download=False,
not_interface=False,
@@ -206,31 +174,28 @@ def download_playlist(
convert_to=convert_to,
bitrate=bitrate
)
print(f"DEBUG: playlist.py - Direct Spotify download (account: {main} for blob) successful.")
elif service == 'deezer':
# Deezer URL. Direct Deezer download using 'main' (Deezer account name for ARL)
if quality is None: quality = 'FLAC' # Default Deezer quality
print(f"DEBUG: playlist.py - Deezer URL. Direct download with Deezer account: {main}")
deezer_main_creds = get_credential('deezer', main) # For ARL
arl = deezer_main_creds.get('arl')
if not arl:
raise ValueError(f"ARL not found for Deezer account '{main}'.")
dl = DeeLogin(
arl=arl, # Account specific ARL
spotify_client_id=global_spotify_client_id, # Global Spotify keys
spotify_client_secret=global_spotify_client_secret, # Global Spotify keys
progress_callback=progress_callback
)
dl.download_playlistdee( # Deezer URL, download via Deezer
link_playlist=url,
output_dir="./downloads",
quality_download=quality,
recursive_quality=False, # Usually False for playlists to get individual track qualities
recursive_download=False,
make_zip=False,
custom_dir_format=custom_dir_format,
@@ -243,9 +208,10 @@ def download_playlist(
convert_to=convert_to,
bitrate=bitrate
)
print(f"DEBUG: playlist.py - Direct Deezer download (account: {main}) successful.")
else:
# Should be caught by initial service check, but as a safeguard
raise ValueError(f"Unsupported service determined: {service}")
except Exception as e:
print(f"ERROR: Playlist download failed with exception: {e}")
traceback.print_exc()
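The branch structure above reduces to a try-primary-then-fallback pattern. A minimal standalone sketch with placeholder callables (flaky/works are stand-ins, not real deezspot downloaders):

```python
# The branching above reduces to "try the preferred service, then the
# alternative, and surface both errors if everything fails". flaky/works
# are placeholder callables, not real deezspot downloaders.
def download_with_fallback(url, primary, secondary):
    try:
        return primary(url)
    except Exception as first_err:
        try:
            return secondary(url)
        except Exception as second_err:
            raise RuntimeError(
                f"Both attempts failed. First error: {first_err}, second error: {second_err}"
            ) from second_err

def flaky(url):
    raise ValueError("quality not available")

def works(url):
    return f"downloaded {url}"

print(download_with_fallback("https://open.spotify.com/playlist/x", flaky, works))
# downloaded https://open.spotify.com/playlist/x
```

Chaining with `from second_err` keeps the original traceback visible, which matches how the combined RuntimeError is raised above.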

View File

@@ -2,6 +2,7 @@ from deezspot.easy_spoty import Spo
import json
from pathlib import Path
import logging
from routes.utils.credentials import get_credential, _get_global_spotify_api_creds
# Configure logger
logger = logging.getLogger(__name__)
@@ -12,48 +13,38 @@ def search(
limit: int = 3,
main: str = None
) -> dict:
logger.info(f"Search requested: query='{query}', type={search_type}, limit={limit}, main_account_name={main}")
client_id, client_secret = _get_global_spotify_api_creds()
if not client_id or not client_secret:
logger.error("Global Spotify API client_id or client_secret not configured in ./data/creds/search.json.")
raise ValueError("Spotify API credentials are not configured globally for search.")
if main:
logger.debug(f"Spotify account context '{main}' was provided for search. API keys are global, but this account might be used for other context by Spo if relevant.")
try:
get_credential('spotify', main)
logger.debug(f"Spotify account '{main}' exists.")
except FileNotFoundError:
logger.warning(f"Spotify account '{main}' provided for search context not found in credentials. Search will proceed with global API keys.")
except Exception as e:
logger.warning(f"Error checking existence of Spotify account '{main}': {e}. Search will proceed with global API keys.")
else:
logger.debug("No specific 'main' account context provided for search. Using global API keys.")
logger.debug(f"Initializing Spotify client with global API credentials for search.")
Spo.__init__(client_id, client_secret)
logger.debug(f"Executing Spotify search with query='{query}', type={search_type}, limit={limit}")
try:
spotify_response = Spo.search(
query=query,
search_type=search_type,
limit=limit
)
logger.info(f"Search completed successfully for query: '{query}'")
return spotify_response
except Exception as e:
logger.error(f"Error during Spotify search for query '{query}': {e}", exc_info=True)
raise
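`_get_global_spotify_api_creds` itself is not shown in this diff; judging from the surrounding log messages, a plausible sketch would read the global client_id/client_secret from `./data/creds/search.json`. The function name, path, JSON keys, and error handling below are all assumptions, not the project's actual implementation:

```python
import json
from pathlib import Path

# Hypothetical stand-in for _get_global_spotify_api_creds (not shown in this
# PR). Assumption: global API keys live in ./data/creds/search.json as
# {"client_id": "...", "client_secret": "..."}; path and keys are guesses.
def load_global_spotify_api_creds(creds_file=Path('./data/creds/search.json')):
    try:
        data = json.loads(Path(creds_file).read_text())
    except (FileNotFoundError, json.JSONDecodeError):
        return None, None
    return data.get('client_id'), data.get('client_secret')
```

Returning `(None, None)` rather than raising lets callers decide whether missing keys are fatal, which matches the warn-and-continue style seen in track.py and playlist.py.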

View File

@@ -4,6 +4,8 @@ import traceback
from deezspot.spotloader import SpoLogin
from deezspot.deezloader import DeeLogin
from pathlib import Path
from routes.utils.credentials import get_credential, _get_global_spotify_api_creds, get_spotify_blob_path
from routes.utils.celery_config import get_config_params
def download_track(
url,
@@ -28,84 +30,53 @@ def download_track(
is_spotify_url = 'open.spotify.com' in url.lower()
is_deezer_url = 'deezer.com' in url.lower()
# Determine service exclusively from URL
service = ''
if is_spotify_url:
service = 'spotify'
elif is_deezer_url:
service = 'deezer'
else:
# If URL can't be detected, raise an error
error_msg = "Invalid URL: Must be from open.spotify.com or deezer.com"
print(f"ERROR: {error_msg}")
raise ValueError(error_msg)
print(f"DEBUG: track.py - URL detection: is_spotify_url={is_spotify_url}, is_deezer_url={is_deezer_url}")
print(f"DEBUG: track.py - Service determined from URL: {service}")
print(f"DEBUG: track.py - Credentials provided: main_account_name='{main}', fallback_account_name='{fallback}'")
# Get global Spotify API credentials for SpoLogin and DeeLogin (if it uses Spotify search)
global_spotify_client_id, global_spotify_client_secret = _get_global_spotify_api_creds()
if not global_spotify_client_id or not global_spotify_client_secret:
# This is a critical failure if Spotify operations are involved
warning_msg = "WARN: track.py - Global Spotify client_id/secret not found in search.json. Spotify operations will likely fail."
print(warning_msg)
# Depending on flow, might want to raise error here if service is 'spotify'
# For now, let it proceed and fail at SpoLogin/DeeLogin init if keys are truly needed and missing.
if service == 'spotify':
if fallback: # Fallback is a Deezer account name for a Spotify URL
if quality is None: quality = 'FLAC' # Deezer quality for first attempt
if fall_quality is None: fall_quality = 'HIGH' # Spotify quality for fallback (if Deezer fails)
# Attempt 1: Deezer via download_trackspo (using 'fallback' as Deezer account name)
deezer_error = None
try:
print(f"DEBUG: track.py - Spotify URL. Attempt 1: Deezer (account: {fallback})")
deezer_fallback_creds = get_credential('deezer', fallback)
arl = deezer_fallback_creds.get('arl')
if not arl:
raise ValueError(f"ARL not found for Deezer account '{fallback}'.")
dl = DeeLogin(
arl=arl,
spotify_client_id=global_spotify_client_id, # Global creds
spotify_client_secret=global_spotify_client_secret, # Global creds
progress_callback=progress_callback
)
# download_trackspo means: Spotify URL, download via Deezer
dl.download_trackspo(
link_track=url, # Spotify URL
output_dir="./downloads",
quality_download=quality, # Deezer quality
recursive_quality=False,
recursive_download=False,
not_interface=False,
@@ -118,30 +89,33 @@ def download_track(
convert_to=convert_to,
bitrate=bitrate
)
print(f"DEBUG: track.py - Track download via Deezer (account: {fallback}) successful for Spotify URL.")
except Exception as e:
deezer_error = e
# Immediately report the Deezer error
print(f"ERROR: track.py - Deezer attempt (account: {fallback}) for Spotify URL failed: {e}")
traceback.print_exc()
print(f"DEBUG: track.py - Attempting Spotify direct download (account: {main})...")
# Attempt 2: Spotify direct via download_track (using 'main' as Spotify account for blob)
try:
if not global_spotify_client_id or not global_spotify_client_secret:
raise ValueError("Global Spotify API credentials (client_id/secret) not configured for Spotify download.")
# Use get_spotify_blob_path directly
blob_file_path = get_spotify_blob_path(main)
if not blob_file_path.exists(): # Check existence on the Path object
raise FileNotFoundError(f"Spotify credentials blob file not found at {str(blob_file_path)} for account '{main}'")
spo = SpoLogin(
credentials_path=str(blob_file_path), # Account specific blob
spotify_client_id=global_spotify_client_id, # Global API keys
spotify_client_secret=global_spotify_client_secret, # Global API keys
progress_callback=progress_callback
)
spo.download_track(
link_track=url, # Spotify URL
output_dir="./downloads",
quality_download=fall_quality, # Spotify quality
recursive_quality=False,
recursive_download=False,
not_interface=False,
@@ -156,28 +130,36 @@ def download_track(
convert_to=convert_to,
bitrate=bitrate
)
print(f"DEBUG: track.py - Spotify direct download (account: {main} for blob) successful.")
except Exception as e2:
# If fallback also fails, raise an error indicating both attempts failed
print(f"ERROR: track.py - Spotify direct download (account: {main} for blob) also failed: {e2}")
raise RuntimeError(
f"Both Deezer attempt (account: {fallback}) and Spotify direct (account: {main} for blob) failed. "
f"Deezer error: {deezer_error}, Spotify error: {e2}"
) from e2
else:
# Spotify URL, no fallback. Direct Spotify download using 'main' (Spotify account for blob)
if quality is None: quality = 'HIGH' # Default Spotify quality
print(f"DEBUG: track.py - Spotify URL, no fallback. Direct download with Spotify account (for blob): {main}")
if not global_spotify_client_id or not global_spotify_client_secret:
raise ValueError("Global Spotify API credentials (client_id/secret) not configured for Spotify download.")
# Use get_spotify_blob_path directly
blob_file_path = get_spotify_blob_path(main)
if not blob_file_path.exists(): # Check existence on the Path object
raise FileNotFoundError(f"Spotify credentials blob file not found at {str(blob_file_path)} for account '{main}'")
spo = SpoLogin(
credentials_path=str(blob_file_path), # Account specific blob
spotify_client_id=global_spotify_client_id, # Global API keys
spotify_client_secret=global_spotify_client_secret, # Global API keys
progress_callback=progress_callback
)
spo.download_track(
link_track=url,
output_dir="./downloads",
quality_download=quality,
recursive_quality=False,
recursive_download=False,
not_interface=False,
@@ -192,22 +174,24 @@ def download_track(
convert_to=convert_to,
bitrate=bitrate
)
print(f"DEBUG: track.py - Direct Spotify download (account: {main} for blob) successful.")
elif service == 'deezer':
# Deezer URL. Direct Deezer download using 'main' (Deezer account name for ARL)
if quality is None: quality = 'FLAC' # Default Deezer quality
print(f"DEBUG: track.py - Deezer URL. Direct download with Deezer account: {main}")
deezer_main_creds = get_credential('deezer', main) # For ARL
arl = deezer_main_creds.get('arl')
if not arl:
raise ValueError(f"ARL not found for Deezer account '{main}'.")
dl = DeeLogin(
arl=arl, # Account specific ARL
spotify_client_id=global_spotify_client_id, # Global Spotify keys for internal Spo use by DeeLogin
spotify_client_secret=global_spotify_client_secret, # Global Spotify keys
progress_callback=progress_callback
)
dl.download_trackdee( # Deezer URL, download via Deezer
link_track=url,
output_dir="./downloads",
quality_download=quality,
@@ -223,8 +207,10 @@ def download_track(
convert_to=convert_to,
bitrate=bitrate
)
print(f"DEBUG: track.py - Direct Deezer download (account: {main}) successful.")
else:
# Should be caught by initial service check, but as a safeguard
raise ValueError(f"Unsupported service determined: {service}")
except Exception as e:
traceback.print_exc()
raise
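The URL-based service detection at the top of download_track can be isolated into a small helper; a sketch of exactly that logic:

```python
# The URL-based service detection used at the top of download_track,
# isolated into a helper for clarity.
def detect_service(url):
    lowered = url.lower()
    if 'open.spotify.com' in lowered:
        return 'spotify'
    if 'deezer.com' in lowered:
        return 'deezer'
    raise ValueError("Invalid URL: Must be from open.spotify.com or deezer.com")

print(detect_service("https://open.spotify.com/track/123"))  # spotify
```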

View File

@@ -14,6 +14,101 @@ ARTISTS_DB_PATH = DB_DIR / 'artists.db'
# Config path for watch.json is managed in routes.utils.watch.manager now
# CONFIG_PATH = Path('./data/config/watch.json') # Removed
# Expected column definitions
EXPECTED_WATCHED_PLAYLISTS_COLUMNS = {
'spotify_id': 'TEXT PRIMARY KEY',
'name': 'TEXT',
'owner_id': 'TEXT',
'owner_name': 'TEXT',
'total_tracks': 'INTEGER',
'link': 'TEXT',
'snapshot_id': 'TEXT',
'last_checked': 'INTEGER',
'added_at': 'INTEGER',
'is_active': 'INTEGER DEFAULT 1'
}
EXPECTED_PLAYLIST_TRACKS_COLUMNS = {
'spotify_track_id': 'TEXT PRIMARY KEY',
'title': 'TEXT',
'artist_names': 'TEXT',
'album_name': 'TEXT',
'album_artist_names': 'TEXT',
'track_number': 'INTEGER',
'album_spotify_id': 'TEXT',
'duration_ms': 'INTEGER',
'added_at_playlist': 'TEXT',
'added_to_db': 'INTEGER',
'is_present_in_spotify': 'INTEGER DEFAULT 1',
'last_seen_in_spotify': 'INTEGER'
}
EXPECTED_WATCHED_ARTISTS_COLUMNS = {
'spotify_id': 'TEXT PRIMARY KEY',
'name': 'TEXT',
'link': 'TEXT',
'total_albums_on_spotify': 'INTEGER', # Number of albums found via API
'last_checked': 'INTEGER',
'added_at': 'INTEGER',
'is_active': 'INTEGER DEFAULT 1',
'genres': 'TEXT', # Comma-separated
'popularity': 'INTEGER',
'image_url': 'TEXT'
}
EXPECTED_ARTIST_ALBUMS_COLUMNS = {
'album_spotify_id': 'TEXT PRIMARY KEY',
'artist_spotify_id': 'TEXT', # Foreign key to watched_artists
'name': 'TEXT',
'album_group': 'TEXT', # album, single, compilation, appears_on
'album_type': 'TEXT', # album, single, compilation
'release_date': 'TEXT',
'release_date_precision': 'TEXT', # year, month, day
'total_tracks': 'INTEGER',
'link': 'TEXT',
'image_url': 'TEXT',
'added_to_db': 'INTEGER',
'last_seen_on_spotify': 'INTEGER', # Timestamp when last confirmed via API
'download_task_id': 'TEXT',
'download_status': 'INTEGER DEFAULT 0', # 0: Not Queued, 1: Queued/In Progress, 2: Downloaded, 3: Error
'is_fully_downloaded_managed_by_app': 'INTEGER DEFAULT 0' # 0: No, 1: Yes (app has marked all its tracks as downloaded)
}
def _ensure_table_schema(cursor: sqlite3.Cursor, table_name: str, expected_columns: dict, table_description: str):
"""
Ensures the given table has all expected columns, adding them if necessary.
"""
try:
cursor.execute(f"PRAGMA table_info({table_name})")
existing_columns_info = cursor.fetchall()
existing_column_names = {col[1] for col in existing_columns_info}
added_columns_to_this_table = False
for col_name, col_type in expected_columns.items():
if col_name not in existing_column_names:
if 'PRIMARY KEY' in col_type.upper() and existing_columns_info: # Only warn if table already exists
logger.warning(
f"Column '{col_name}' is part of PRIMARY KEY for {table_description} '{table_name}' "
f"and was expected to be created by CREATE TABLE. Skipping explicit ADD COLUMN. "
f"Manual schema review might be needed if this table was not empty."
)
continue
col_type_for_add = col_type.replace(' PRIMARY KEY', '').strip()
try:
cursor.execute(f"ALTER TABLE {table_name} ADD COLUMN {col_name} {col_type_for_add}")
logger.info(f"Added missing column '{col_name} {col_type_for_add}' to {table_description} table '{table_name}'.")
added_columns_to_this_table = True
except sqlite3.OperationalError as alter_e:
logger.warning(
f"Could not add column '{col_name}' to {table_description} table '{table_name}': {alter_e}. "
f"It might already exist with a different definition or there's another schema mismatch."
)
return added_columns_to_this_table
except sqlite3.Error as e:
logger.error(f"Error ensuring schema for {table_description} table '{table_name}': {e}", exc_info=True)
return False
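The migration pattern `_ensure_table_schema` implements (inspect `PRAGMA table_info`, then `ALTER TABLE ... ADD COLUMN` for anything missing) can be sketched standalone. This is an illustrative example only; the `watched` table and its columns are hypothetical, not taken from this PR:

```python
import sqlite3

# Minimal sketch of the additive-migration idea used by _ensure_table_schema.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE watched (spotify_id TEXT PRIMARY KEY, name TEXT)")

expected = {"spotify_id": "TEXT PRIMARY KEY", "name": "TEXT", "genres": "TEXT"}
existing = {row[1] for row in cur.execute("PRAGMA table_info(watched)")}

for col, col_type in expected.items():
    # PRIMARY KEY columns cannot be added after the fact; skip them like the helper does.
    if col not in existing and "PRIMARY KEY" not in col_type.upper():
        cur.execute(f"ALTER TABLE watched ADD COLUMN {col} {col_type}")

columns = [row[1] for row in cur.execute("PRAGMA table_info(watched)")]
print(columns)  # ['spotify_id', 'name', 'genres']
```

Note the same limitation the helper logs a warning about: SQLite cannot retrofit a PRIMARY KEY via `ADD COLUMN`, so such columns must come from the original `CREATE TABLE`.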
def _get_playlists_db_connection():
DB_DIR.mkdir(parents=True, exist_ok=True)
conn = sqlite3.connect(PLAYLISTS_DB_PATH, timeout=10)
@@ -27,7 +122,7 @@ def _get_artists_db_connection():
return conn
def init_playlists_db():
"""Initializes the playlists database and creates the main watched_playlists table if it doesn't exist."""
"""Initializes the playlists database and creates/updates the main watched_playlists table."""
try:
with _get_playlists_db_connection() as conn:
cursor = conn.cursor()
@@ -45,15 +140,17 @@ def init_playlists_db():
is_active INTEGER DEFAULT 1
)
""")
# Ensure schema
if _ensure_table_schema(cursor, 'watched_playlists', EXPECTED_WATCHED_PLAYLISTS_COLUMNS, "watched playlists"):
conn.commit()
logger.info(f"Playlists database initialized/updated successfully at {PLAYLISTS_DB_PATH}")
except sqlite3.Error as e:
logger.error(f"Error initializing watched_playlists table: {e}", exc_info=True)
raise
def _create_playlist_tracks_table(playlist_spotify_id: str):
"""Creates a table for a specific playlist to store its tracks if it doesn't exist in playlists.db."""
table_name = f"playlist_{playlist_spotify_id.replace('-', '_')}" # Sanitize table name
"""Creates or updates a table for a specific playlist to store its tracks in playlists.db."""
table_name = f"playlist_{playlist_spotify_id.replace('-', '_').replace(' ', '_')}" # Sanitize table name
try:
with _get_playlists_db_connection() as conn: # Use playlists connection
cursor = conn.cursor()
@@ -73,8 +170,10 @@ def _create_playlist_tracks_table(playlist_spotify_id: str):
last_seen_in_spotify INTEGER -- Timestamp when last confirmed in Spotify playlist
)
""")
# Ensure schema
if _ensure_table_schema(cursor, table_name, EXPECTED_PLAYLIST_TRACKS_COLUMNS, f"playlist tracks ({playlist_spotify_id})"):
conn.commit()
logger.info(f"Tracks table '{table_name}' created/updated or already exists in {PLAYLISTS_DB_PATH}.")
except sqlite3.Error as e:
logger.error(f"Error creating playlist tracks table {table_name} in {PLAYLISTS_DB_PATH}: {e}", exc_info=True)
raise
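The table-name sanitization applied to Spotify IDs above is simple enough to check in isolation. A small sketch (the function name is hypothetical, introduced here only for illustration):

```python
def playlist_tracks_table_name(playlist_spotify_id: str) -> str:
    # Hyphens and spaces are not valid in unquoted SQLite identifiers,
    # so they are replaced with underscores, mirroring _create_playlist_tracks_table.
    return f"playlist_{playlist_spotify_id.replace('-', '_').replace(' ', '_')}"

print(playlist_tracks_table_name("37i9-dQZF 1DX"))  # playlist_37i9_dQZF_1DX
```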
@@ -388,50 +487,66 @@ def add_single_track_to_playlist_db(playlist_spotify_id: str, track_item_for_db:
# --- Artist Watch Database Functions ---
def init_artists_db():
"""Initializes the artists database and creates the watched_artists table if it doesn't exist."""
"""Initializes the artists database and creates/updates the main watched_artists table."""
try:
with _get_artists_db_connection() as conn:
cursor = conn.cursor()
# Note: total_albums_on_spotify, genres, popularity, image_url added to EXPECTED_WATCHED_ARTISTS_COLUMNS
# and will be added by _ensure_table_schema if missing.
cursor.execute("""
CREATE TABLE IF NOT EXISTS watched_artists (
spotify_id TEXT PRIMARY KEY,
name TEXT,
link TEXT,
total_albums_on_spotify INTEGER, -- Number of albums found via API on last full check
last_checked INTEGER,
added_at INTEGER,
is_active INTEGER DEFAULT 1,
genres TEXT, -- Comma-separated list of genres
popularity INTEGER, -- Artist popularity (0-100)
image_url TEXT -- URL of the artist's image
)
""")
# Ensure schema
if _ensure_table_schema(cursor, 'watched_artists', EXPECTED_WATCHED_ARTISTS_COLUMNS, "watched artists"):
conn.commit()
logger.info(f"Artists database initialized/updated successfully at {ARTISTS_DB_PATH}")
except sqlite3.Error as e:
logger.error(f"Error initializing watched_artists table in {ARTISTS_DB_PATH}: {e}", exc_info=True)
raise
def _create_artist_albums_table(artist_spotify_id: str):
"""Creates a table for a specific artist to store its albums if it doesn't exist in artists.db."""
table_name = f"artist_{artist_spotify_id.replace('-', '_')}_albums"
"""Creates or updates a table for a specific artist to store their albums in artists.db."""
table_name = f"artist_{artist_spotify_id.replace('-', '_').replace(' ', '_')}" # Sanitize table name
try:
with _get_artists_db_connection() as conn: # Use artists connection
cursor = conn.cursor()
# Note: Several columns including artist_spotify_id, release_date_precision, image_url,
# last_seen_on_spotify, download_task_id, download_status, is_fully_downloaded_managed_by_app
# are part of EXPECTED_ARTIST_ALBUMS_COLUMNS and will be added by _ensure_table_schema.
cursor.execute(f"""
CREATE TABLE IF NOT EXISTS {table_name} (
album_spotify_id TEXT PRIMARY KEY,
artist_spotify_id TEXT,
name TEXT,
album_group TEXT,
album_type TEXT,
release_date TEXT,
release_date_precision TEXT,
total_tracks INTEGER,
link TEXT,
image_url TEXT,
added_to_db INTEGER,
last_seen_on_spotify INTEGER,
download_task_id TEXT,
download_status INTEGER DEFAULT 0,
is_fully_downloaded_managed_by_app INTEGER DEFAULT 0
)
""")
# Ensure schema for the specific artist's album table
if _ensure_table_schema(cursor, table_name, EXPECTED_ARTIST_ALBUMS_COLUMNS, f"artist albums ({artist_spotify_id})"):
conn.commit()
logger.info(f"Albums table '{table_name}' created/updated or already exists in {ARTISTS_DB_PATH}.")
except sqlite3.Error as e:
logger.error(f"Error creating artist albums table {table_name} in {ARTISTS_DB_PATH}: {e}", exc_info=True)
raise


@@ -1,45 +1,41 @@
import { downloadQueue } from './queue.js';
// Updated Interfaces for validator data
interface SpotifyFormData {
accountName: string; // Formerly username, maps to 'name' in backend
authBlob: string; // Formerly credentials, maps to 'blob_content' in backend
accountRegion?: string; // Maps to 'region' in backend
}
interface SpotifySearchValidatorData {
client_id: string;
client_secret: string;
}
interface DeezerFormData {
accountName: string; // Maps to 'name' in backend
arl: string;
accountRegion?: string; // Maps to 'region' in backend
}
// Global service configuration object
const serviceConfig: Record<string, any> = {
spotify: {
fields: [
{ id: 'accountName', label: 'Account Name', type: 'text' },
{ id: 'accountRegion', label: 'Region (ISO 3166-1 alpha-2)', type: 'text', placeholder: 'E.g., US, DE, GB (Optional)'},
{ id: 'authBlob', label: 'Auth Blob (JSON content)', type: 'textarea', rows: 5 }
],
validator: (data: SpotifyFormData) => ({
name: data.accountName,
region: data.accountRegion || null, // Send null if empty, backend might have default
blob_content: data.authBlob
}),
},
deezer: {
fields: [
{ id: 'accountName', label: 'Account Name', type: 'text' },
{ id: 'accountRegion', label: 'Region (ISO 3166-1 alpha-2)', type: 'text', placeholder: 'E.g., US, DE, FR (Optional)'},
{ id: 'arl', label: 'ARL Token', type: 'text' }
],
validator: (data: DeezerFormData) => ({
name: data.accountName,
region: data.accountRegion || null, // Send null if empty
arl: data.arl
})
}
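Each validator maps the form-field ids onto the payload keys the backend expects (`name`, `region`, plus the service-specific secret). The same mapping expressed as a Python sketch, with the payload shape assumed from the comments in the config above:

```python
def deezer_validator(form: dict) -> dict:
    # Mirrors serviceConfig.deezer.validator: form-field ids -> backend keys.
    return {
        "name": form["accountName"],
        "region": form.get("accountRegion") or None,  # send null when empty
        "arl": form["arl"],
    }

payload = deezer_validator({"accountName": "myDeezerAccount1", "arl": "abc123"})
print(payload)  # {'name': 'myDeezerAccount1', 'region': None, 'arl': 'abc123'}
```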
@@ -69,6 +65,13 @@ let credentialsFormCard: HTMLElement | null = null;
let showAddAccountFormBtn: HTMLElement | null = null;
let cancelAddAccountBtn: HTMLElement | null = null;
// Hint element references
let spotifyRegionHint: HTMLElement | null = null;
let deezerRegionHint: HTMLElement | null = null;
// Ensure this is defined, typically at the top with other DOM element getters if used frequently
let spotifyApiConfigStatusDiv: HTMLElement | null = null;
// Helper function to manage visibility of form and add button
function setFormVisibility(showForm: boolean) {
if (credentialsFormCard && showAddAccountFormBtn) {
@@ -173,6 +176,10 @@ document.addEventListener('DOMContentLoaded', async () => {
showAddAccountFormBtn = document.getElementById('showAddAccountFormBtn');
cancelAddAccountBtn = document.getElementById('cancelAddAccountBtn');
// Get hint elements
spotifyRegionHint = document.getElementById('spotifyRegionHint');
deezerRegionHint = document.getElementById('deezerRegionHint');
if (credentialsFormCard && showAddAccountFormBtn) {
// Initially hide form, show add button (default state handled by setFormVisibility if called)
credentialsFormCard.style.display = 'none';
@@ -245,6 +252,7 @@ async function initConfig() {
await updateAccountSelectors();
loadCredentials(currentService);
updateFormFields();
await loadSpotifyApiConfig();
}
function setupServiceTabs() {
@@ -262,6 +270,7 @@ function setupServiceTabs() {
function setupEventListeners() {
(document.getElementById('credentialForm') as HTMLFormElement | null)?.addEventListener('submit', handleCredentialSubmit);
(document.getElementById('saveSpotifyApiConfigBtn') as HTMLButtonElement | null)?.addEventListener('click', saveSpotifyApiConfig);
// Config change listeners
(document.getElementById('defaultServiceSelect') as HTMLSelectElement | null)?.addEventListener('change', function() {
@@ -471,22 +480,12 @@ function renderCredentialsList(service: string, credentials: any[]) {
const credItem = document.createElement('div');
credItem.className = 'credential-item';
credItem.innerHTML = `
<div class="credential-info">
<span class="credential-name">${credData.name}</span>
</div>
<div class="credential-actions">
<button class="edit-btn" data-name="${credData.name}" data-service="${service}">Edit Account</button>
<button class="delete-btn" data-name="${credData.name}" data-service="${service}">Delete</button>
</div>
`;
@@ -505,12 +504,6 @@ function renderCredentialsList(service: string, credentials: any[]) {
handleEditCredential(e as MouseEvent);
});
});
}
async function handleDeleteCredential(e: Event) {
@@ -557,33 +550,49 @@ async function handleDeleteCredential(e: Event) {
async function handleEditCredential(e: MouseEvent) {
const target = e.target as HTMLElement;
const service = target.dataset.service;
const name = target.dataset.name; // This is the name of the credential being edited
try {
(document.querySelector(`[data-service="${service}"]`) as HTMLElement | null)?.click();
await new Promise(resolve => setTimeout(resolve, 50));
setFormVisibility(true);
const response = await fetch(`/api/credentials/${service}/${name}`);
if (!response.ok) {
throw new Error(`Failed to load credential: ${response.statusText}`);
}
const data = await response.json(); // data = {name, region, blob_content/arl}
currentCredential = name ? name : null; // Set the global currentCredential to the one being edited
// Populate the dynamic fields created by updateFormFields
// including 'accountName', 'accountRegion', and 'authBlob' or 'arl'.
if (serviceConfig[service!] && serviceConfig[service!].fields) {
serviceConfig[service!].fields.forEach((fieldConf: { id: string; }) => {
const element = document.getElementById(fieldConf.id) as HTMLInputElement | HTMLTextAreaElement | null;
if (element) {
if (fieldConf.id === 'accountName') {
element.value = data.name || name || ''; // Use data.name from fetched, fallback to clicked name
(element as HTMLInputElement).disabled = true; // Disable editing of account name
} else if (fieldConf.id === 'accountRegion') {
element.value = data.region || '';
} else if (fieldConf.id === 'authBlob' && service === 'spotify') {
// data.blob_content might be an object or string. Ensure textarea gets string.
element.value = typeof data.blob_content === 'object' ? JSON.stringify(data.blob_content, null, 2) : (data.blob_content || '');
} else if (fieldConf.id === 'arl' && service === 'deezer') {
element.value = data.arl || '';
}
// Add more specific population if other fields are introduced
}
});
}
(document.getElementById('formTitle') as HTMLElement | null)!.textContent = `Edit ${service!.charAt(0).toUpperCase() + service!.slice(1)} Account`;
(document.getElementById('submitCredentialBtn') as HTMLElement | null)!.textContent = 'Update Account';
toggleSearchFieldsVisibility(false); // Ensure old per-account search fields are hidden
} catch (error: any) {
showConfigError(error.message);
}
@@ -592,150 +601,109 @@ async function handleEditCredential(e: MouseEvent) {
async function handleEditSearchCredential(e: Event) {
const target = e.target as HTMLElement;
const service = target.dataset.service;
const name = target.dataset.name;
// const name = target.dataset.name; // Account name, not used here anymore
if (service === 'spotify') {
showConfigError("Spotify API credentials are now managed globally in the 'Global Spotify API Credentials' section.");
// Optionally, scroll to or highlight the global section
const globalSection = document.querySelector('.global-spotify-api-config') as HTMLElement | null;
if (globalSection) globalSection.scrollIntoView({ behavior: 'smooth' });
} else {
// If this function were ever used for other services, that logic would go here.
console.warn(`handleEditSearchCredential called for unhandled service: ${service} or function is obsolete.`);
}
setFormVisibility(false); // Ensure the main account form is hidden if it was opened.
}
function toggleSearchFieldsVisibility(showSearchFields: boolean) {
const serviceFieldsDiv = document.getElementById('serviceFields') as HTMLElement | null;
const searchFieldsDiv = document.getElementById('searchFields') as HTMLElement | null; // This div might be removed from HTML if not used by other services
// Simplified: Always show serviceFields, always hide (old) searchFields in this form context.
// The new global Spotify API fields are in a separate card and handled by different functions.
if(serviceFieldsDiv) serviceFieldsDiv.style.display = 'block';
if(searchFieldsDiv) searchFieldsDiv.style.display = 'none';
// Ensure required attributes are set correctly for visible service fields
if (serviceConfig[currentService] && serviceConfig[currentService].fields) {
serviceConfig[currentService].fields.forEach((field: { id: string }) => {
const input = document.getElementById(field.id) as HTMLInputElement | null;
if (input) input.setAttribute('required', '');
});
}
// Ensure required attributes are removed from (old) search fields as they are hidden
// This is mainly for cleanup if the searchFieldsDiv still exists for some reason.
if (currentService === 'spotify' && serviceConfig[currentService] && serviceConfig[currentService].searchFields) { // This condition will no longer be true for spotify
serviceConfig[currentService].searchFields.forEach((field: { id: string }) => {
const input = document.getElementById(field.id) as HTMLInputElement | null;
if (input) input.removeAttribute('required');
});
}
}
function updateFormFields() {
const serviceFieldsDiv = document.getElementById('serviceFields') as HTMLElement | null;
const searchFieldsDiv = document.getElementById('searchFields') as HTMLElement | null;
// Clear any existing fields
if(serviceFieldsDiv) serviceFieldsDiv.innerHTML = '';
if(searchFieldsDiv) searchFieldsDiv.innerHTML = '';
if (serviceConfig[currentService] && serviceConfig[currentService].fields) {
serviceConfig[currentService].fields.forEach((field: { id: string; label: string; type: string; placeholder?: string; rows?: number; }) => {
const fieldDiv = document.createElement('div');
fieldDiv.className = 'form-group';
let inputElementHTML = '';
if (field.type === 'textarea') {
inputElementHTML = `<textarea
id="${field.id}"
name="${field.id}"
rows="${field.rows || 3}"
class="form-input"
placeholder="${field.placeholder || ''}"
required></textarea>`;
} else {
inputElementHTML = `<input
type="${field.type}"
id="${field.id}"
name="${field.id}"
class="form-input"
placeholder="${field.placeholder || ''}"
${field.type === 'password' ? 'autocomplete="new-password"' : ''}
required>`;
}
// Region field is optional, so remove 'required' if id is 'accountRegion'
if (field.id === 'accountRegion') {
inputElementHTML = inputElementHTML.replace(' required', '');
}
fieldDiv.innerHTML = `
<label for="${field.id}">${field.label}:</label>
${inputElementHTML}
`;
serviceFieldsDiv?.appendChild(fieldDiv);
});
}
// Reset form title and button text
(document.getElementById('formTitle') as HTMLElement | null)!.textContent = `Add New ${currentService.charAt(0).toUpperCase() + currentService.slice(1)} Account`;
(document.getElementById('submitCredentialBtn') as HTMLElement | null)!.textContent = 'Save Account';
toggleSearchFieldsVisibility(false);
isEditingSearch = false;
// Show/hide region hints based on current service
if (spotifyRegionHint && deezerRegionHint) {
if (currentService === 'spotify') {
spotifyRegionHint.style.display = 'block';
deezerRegionHint.style.display = 'none';
} else if (currentService === 'deezer') {
spotifyRegionHint.style.display = 'none';
deezerRegionHint.style.display = 'block';
} else {
// Fallback: hide both if service is unrecognized
spotifyRegionHint.style.display = 'none';
deezerRegionHint.style.display = 'none';
}
}
}
function populateFormFields(service: string, data: Record<string, string>) {
@@ -748,81 +716,82 @@ function populateFormFields(service: string, data: Record<string, string>) {
async function handleCredentialSubmit(e: Event) {
e.preventDefault();
const service = (document.querySelector('.tab-button.active') as HTMLElement | null)?.dataset.service;
// Get the account name from the 'accountName' field within the dynamically generated serviceFields
const accountNameInput = document.getElementById('accountName') as HTMLInputElement | null;
const accountNameValue = accountNameInput?.value.trim();
try {
// If we are editing (currentCredential is set), the name comes from currentCredential.
// If we are creating a new one, the name comes from the form's 'accountName' field.
if (!currentCredential && !accountNameValue) {
// Ensure accountNameInput is focused if it's empty during new credential creation
if(accountNameInput && !accountNameValue) accountNameInput.focus();
throw new Error('Account Name is required');
}
if (!service) {
throw new Error('Service not selected');
}
// For POST (new), endpointName is from form. For PUT (edit), it's from currentCredential.
const endpointName = currentCredential || accountNameValue;
if (!endpointName) {
// This should ideally not be reached if the above check for accountNameValue is done correctly.
throw new Error("Account name could not be determined.");
}
let method: string, data: any, endpoint: string;
const formData: Record<string, string> = {};
let isValid = true;
let firstInvalidField: HTMLInputElement | HTMLTextAreaElement | null = null;
const currentServiceFields = serviceConfig[service!]?.fields as Array<{id: string, label: string, type: string}> | undefined;
if (currentServiceFields) {
currentServiceFields.forEach((field: { id: string; }) => {
const input = document.getElementById(field.id) as HTMLInputElement | HTMLTextAreaElement | null;
const value = input ? input.value.trim() : '';
formData[field.id] = value;
const isRequired = input?.hasAttribute('required');
if (isRequired && !value) {
isValid = false;
if (!firstInvalidField && input) firstInvalidField = input;
}
});
} else {
throw new Error(`No fields configured for service: ${service}`);
}
if (!isValid) {
if (firstInvalidField) {
const nonNullInvalidField = firstInvalidField as HTMLInputElement | HTMLTextAreaElement;
nonNullInvalidField.focus();
const fieldName = (nonNullInvalidField as HTMLInputElement).labels?.[0]?.textContent || nonNullInvalidField.id || 'Unknown field';
throw new Error(`Field '${fieldName}' is required.`);
} else {
throw new Error('All required fields must be filled, but a specific invalid field was not identified.');
}
}
// The validator in serviceConfig now expects fields like 'accountName', 'accountRegion', etc.
data = serviceConfig[service!].validator(formData);
// If it's a new credential and the validator didn't explicitly set 'name' from 'accountName',
// (though it should: serviceConfig.spotify.validator expects data.accountName and sets 'name')
// we ensure the 'name' in the payload matches accountNameValue if it's a new POST.
// For PUT, the name is part of the URL and shouldn't be in the body unless changing it is allowed.
// The current validators *do* map e.g. data.accountName to data.name in the output object.
// So, `data` should already have the correct `name` field from `accountName` form field.
endpoint = `/api/credentials/${service}/${endpointName}`;
method = currentCredential ? 'PUT' : 'POST';
const response = await fetch(endpoint, {
method,
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify(data) // Data should contain {name, region, blob_content/arl}
});
if (!response.ok) {
@@ -831,16 +800,13 @@ async function handleCredentialSubmit(e: Event) {
}
await updateAccountSelectors();
await saveConfig();
loadCredentials(service!);
// Show success message
showConfigSuccess(isEditingSearch ? 'API credentials saved successfully' : 'Account saved successfully');
// Add a delay before hiding the form
setTimeout(() => {
setFormVisibility(false); // Hide form and show add button on successful submission
}, 2000); // 2 second delay
} catch (error: any) {
showConfigError(error.message);
}
@@ -849,26 +815,25 @@ async function handleCredentialSubmit(e: Event) {
function resetForm() {
currentCredential = null;
isEditingSearch = false;
// The static 'credentialName' input is gone. Resetting the form should clear dynamic fields.
(document.getElementById('credentialForm') as HTMLFormElement | null)?.reset();
// Enable the accountName field again if it was disabled during an edit operation
const accountNameInput = document.getElementById('accountName') as HTMLInputElement | null;
if (accountNameInput) {
accountNameInput.disabled = false;
}
// Reset conversion dropdowns to ensure bitrate is updated correctly
const convertToSelect = document.getElementById('convertToSelect') as HTMLSelectElement | null;
if (convertToSelect) {
convertToSelect.value = ''; // Reset to 'No Conversion'
updateBitrateOptions(''); // Update bitrate for 'No Conversion'
}
// Reset form title and button text
const serviceName = currentService.charAt(0).toUpperCase() + currentService.slice(1);
(document.getElementById('formTitle') as HTMLElement | null)!.textContent = `Add New ${serviceName} Account`;
(document.getElementById('submitCredentialBtn') as HTMLElement | null)!.textContent = 'Save Account';
// Show regular account fields, hide search fields
toggleSearchFieldsVisibility(false);
}
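The field-to-payload mapping that the comments in `handleCredentialSubmit` describe (form field `accountName` becoming the payload's `name`) can be sketched as follows. This is a hypothetical illustration only: the interface and function names below are not the app's actual `serviceConfig` validators.

```typescript
// Hypothetical sketch only: names are illustrative, not the real serviceConfig validator.
interface SpotifyAccountForm {
  accountName: string;   // form input id (assumption)
  accountRegion: string; // form input id (assumption)
}

interface AccountPayload {
  name: string;   // backend expects 'name', mapped from 'accountName'
  region: string;
}

function sketchAccountValidator(form: SpotifyAccountForm): AccountPayload {
  // Mirror the documented mapping: accountName -> name, accountRegion -> region.
  return {
    name: form.accountName.trim(),
    region: form.accountRegion.trim().toUpperCase(),
  };
}
```

For a PUT the `name` also appears in the URL, so whether a differing `name` in the body performs a rename is up to the backend.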
@@ -1181,3 +1146,95 @@ function updateBitrateOptions(selectedFormat: string) {
bitrateSelect.value = '';
}
}
// Function to load global Spotify API credentials
async function loadSpotifyApiConfig() {
const clientIdInput = document.getElementById('globalSpotifyClientId') as HTMLInputElement | null;
const clientSecretInput = document.getElementById('globalSpotifyClientSecret') as HTMLInputElement | null;
spotifyApiConfigStatusDiv = document.getElementById('spotifyApiConfigStatus') as HTMLElement | null; // Assign here or ensure it's globally available
if (!clientIdInput || !clientSecretInput || !spotifyApiConfigStatusDiv) {
console.error("Global Spotify API config form elements not found.");
if(spotifyApiConfigStatusDiv) spotifyApiConfigStatusDiv.textContent = 'Error: Form elements missing.';
return;
}
try {
const response = await fetch('/api/credentials/spotify_api_config');
if (!response.ok) {
const errorData = await response.json().catch(() => ({ error: 'Failed to load Spotify API config, server error.' }));
throw new Error(errorData.error || `HTTP error ${response.status}`);
}
const data = await response.json();
clientIdInput.value = data.client_id || '';
clientSecretInput.value = data.client_secret || '';
if (data.warning) {
spotifyApiConfigStatusDiv.textContent = data.warning;
spotifyApiConfigStatusDiv.className = 'status-message warning';
} else if (data.client_id && data.client_secret) {
spotifyApiConfigStatusDiv.textContent = 'Current API credentials loaded.';
spotifyApiConfigStatusDiv.className = 'status-message success';
} else {
spotifyApiConfigStatusDiv.textContent = 'Global Spotify API credentials are not set.';
spotifyApiConfigStatusDiv.className = 'status-message neutral';
}
} catch (error: any) {
console.error('Error loading Spotify API config:', error);
if(spotifyApiConfigStatusDiv) {
spotifyApiConfigStatusDiv.textContent = `Error loading config: ${error.message}`;
spotifyApiConfigStatusDiv.className = 'status-message error';
}
}
}
// Function to save global Spotify API credentials
async function saveSpotifyApiConfig() {
const clientIdInput = document.getElementById('globalSpotifyClientId') as HTMLInputElement | null;
const clientSecretInput = document.getElementById('globalSpotifyClientSecret') as HTMLInputElement | null;
// spotifyApiConfigStatusDiv should be already assigned by loadSpotifyApiConfig or be a global var
if (!spotifyApiConfigStatusDiv) { // Re-fetch if null, though it should not be if load ran.
spotifyApiConfigStatusDiv = document.getElementById('spotifyApiConfigStatus') as HTMLElement | null;
}
if (!clientIdInput || !clientSecretInput || !spotifyApiConfigStatusDiv) {
console.error("Global Spotify API config form elements not found for saving.");
if(spotifyApiConfigStatusDiv) spotifyApiConfigStatusDiv.textContent = 'Error: Form elements missing.';
return;
}
const client_id = clientIdInput.value.trim();
const client_secret = clientSecretInput.value.trim();
if (!client_id || !client_secret) {
spotifyApiConfigStatusDiv.textContent = 'Client ID and Client Secret cannot be empty.';
spotifyApiConfigStatusDiv.className = 'status-message error';
if(!client_id) clientIdInput.focus(); else clientSecretInput.focus();
return;
}
try {
spotifyApiConfigStatusDiv.textContent = 'Saving...';
spotifyApiConfigStatusDiv.className = 'status-message neutral';
const response = await fetch('/api/credentials/spotify_api_config', {
method: 'PUT',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({ client_id, client_secret })
});
const responseData = await response.json(); // Try to parse JSON regardless of ok status for error messages
if (!response.ok) {
throw new Error(responseData.error || `Failed to save Spotify API config. Status: ${response.status}`);
}
spotifyApiConfigStatusDiv.textContent = responseData.message || 'Spotify API credentials saved successfully!';
spotifyApiConfigStatusDiv.className = 'status-message success';
} catch (error: any) {
console.error('Error saving Spotify API config:', error);
if(spotifyApiConfigStatusDiv) {
spotifyApiConfigStatusDiv.textContent = `Error saving: ${error.message}`;
spotifyApiConfigStatusDiv.className = 'status-message error';
}
}
}
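The client-side checks in `saveSpotifyApiConfig` can be factored into a small pure helper, sketched here under the assumption that the endpoint accepts exactly `{ client_id, client_secret }`. The helper name is hypothetical.

```typescript
// Hypothetical helper capturing the validation and payload shape that
// saveSpotifyApiConfig assumes for PUT /api/credentials/spotify_api_config.
interface SpotifyApiConfigPayload {
  client_id: string;
  client_secret: string;
}

function buildSpotifyApiConfigPayload(clientId: string, clientSecret: string): SpotifyApiConfigPayload {
  // Same rule as the form handler: both values are trimmed and must be non-empty.
  const client_id = clientId.trim();
  const client_secret = clientSecret.trim();
  if (!client_id || !client_secret) {
    throw new Error('Client ID and Client Secret cannot be empty.');
  }
  return { client_id, client_secret };
}
```

Keeping the check in a pure function makes it testable without touching the DOM or the network.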


@@ -263,6 +263,29 @@ body {
transform: translateY(-2px);
}
/* Master Accounts Configuration Section (Global API Keys + Per-Account Lists) */
.master-accounts-config-section {
background: #181818; /* Consistent with other sections */
padding: 1.5rem;
border-radius: 12px;
margin-bottom: 2rem;
box-shadow: 0 4px 8px rgba(0,0,0,0.15);
transition: transform 0.3s ease; /* Optional, for consistency */
}
.master-accounts-config-section:hover {
transform: translateY(-2px); /* Optional, for consistency */
}
/* Section for Global Spotify API Key Configuration */
.global-api-keys-config {
background: #2a2a2a; /* Slightly different background to stand out or match input groups */
padding: 1.5rem;
border-radius: 8px;
margin-bottom: 1.5rem; /* Space before the per-account sections start */
border: 1px solid #404040; /* Subtle border */
}
/* Section Titles */
.section-title {
font-size: 1.5rem;
@@ -841,11 +864,6 @@ input:checked + .slider:before {
margin-bottom: 2rem;
}
/* Where individual credential items will be rendered */
.credentials-list-items {
/* No specific styles needed here unless items need separation from the add button */
}
/* Styling for the Add New Account button to make it look like a list item */
.add-account-item {
margin-top: 0.75rem; /* Space above the add button if there are items */


@@ -318,43 +318,69 @@
</div>
</div>
<div class="master-accounts-config-section">
<h2 class="section-title">Accounts configuration</h2>
<!-- Global Spotify API Credentials Card: MOVED HERE -->
<div class="global-api-keys-config card">
<h2 class="section-title">Global Spotify API Credentials</h2>
<div class="config-item">
<label for="globalSpotifyClientId">Client ID:</label>
<input type="text" id="globalSpotifyClientId" class="form-input" placeholder="Enter your Spotify Client ID">
</div>
<div class="config-item">
<label for="globalSpotifyClientSecret">Client Secret:</label>
<input type="password" id="globalSpotifyClientSecret" class="form-input" placeholder="Enter your Spotify Client Secret">
</div>
<div class="config-item">
<button id="saveSpotifyApiConfigBtn" class="btn btn-primary">Save</button>
</div>
<div id="spotifyApiConfigStatus" class="status-message" style="margin-top: 10px;"></div>
</div>
<!-- End Global Spotify API Credentials Card -->
<div class="accounts-section">
<div class="service-tabs">
<button class="tab-button active" data-service="spotify">Spotify</button>
<button class="tab-button" data-service="deezer">Deezer</button>
</div>
<!-- Wrapper for the list and the add button -->
<div class="credentials-list-wrapper card">
<div class="credentials-list-items">
<!-- Dynamic credential items will be rendered here by JavaScript -->
<!-- "No credentials" message will also be rendered here -->
</div>
<div id="serviceFields"></div>
<div id="searchFields" style="display: none;"></div>
<button type="submit" id="submitCredentialBtn" class="btn btn-primary save-btn">Save Account</button>
<button type="button" id="cancelAddAccountBtn" class="btn btn-secondary cancel-btn btn-cancel-icon" style="margin-left: 10px;" title="Cancel">
<img src="{{ url_for('static', filename='images/cross.svg') }}" alt="Cancel" />
</button>
</form>
<div id="configSuccess" class="success"></div>
<div id="configError" class="error"></div>
<div class="add-account-item">
<button id="showAddAccountFormBtn" class="btn-add-account-styled" type="button">
<img src="{{ url_for('static', filename='images/plus-circle.svg') }}" alt="Add" /> Add New Account
</button>
</div>
</div>
<div class="credentials-form card">
<h2 id="formTitle" class="section-title">Add New Spotify Account</h2>
<form id="credentialForm">
<div id="serviceFields"></div>
<!-- Region Hints START -->
<div id="spotifyRegionHint" class="setting-description" style="display:none; margin-left: 10px; margin-top: -5px; margin-bottom:15px; font-size: 0.9em;">
<small>Region not matching your account may lead to issues. Check it <a href="https://www.spotify.com/mx/account/profile/" target="_blank" rel="noopener noreferrer">here</a>.</small>
</div>
<div id="deezerRegionHint" class="setting-description" style="display:none; margin-left: 10px; margin-top: -5px; margin-bottom:15px; font-size: 0.9em;">
<small>Region not matching your account may lead to issues. Check it <a href="https://www.deezer.com/account/country_selector" target="_blank" rel="noopener noreferrer">here</a>.</small>
</div>
<!-- Region Hints END -->
<div id="searchFields" style="display: none;"></div>
<button type="submit" id="submitCredentialBtn" class="btn btn-primary save-btn">Save Account</button>
<button type="button" id="cancelAddAccountBtn" class="btn btn-secondary cancel-btn btn-cancel-icon" style="margin-left: 10px;" title="Cancel">
<img src="{{ url_for('static', filename='images/cross.svg') }}" alt="Cancel" />
</button>
</form>
<div id="configSuccess" class="success"></div>
<div id="configError" class="error"></div>
</div>
</div> <!-- End of accounts-section -->
</div> <!-- End of master-accounts-config-section -->
</div>
</div>
</div>