mirror of https://github.com/bmaltais/kohya_ss
Core improvements batch1 (#3294)
* Refactor and Improve Core Components
This commit includes several improvements to the core components of the Kohya_ss GUI:
1. **Refactored `check_torch()` in `setup/setup_common.py`**:
* Broke down the function into smaller, more focused functions for better readability and maintainability.
* Simplified the logic for detecting different hardware backends (CUDA, ROCm, Intel OneAPI).
2. **Improved Error Handling in `setup/setup_common.py`**:
* Made error messages more user-friendly and informative by providing context and suggestions.
* Ensured consistent logging of errors.
* Enhanced error reporting for external processes like Git and pip/uv.
3. **Enhanced Documentation for `config.toml`**:
* Updated the "Custom Path Defaults" section in `README.md` with more details on customizing `config.toml`.
* Provided clearer examples of common configurations.
4. **Reviewed and Updated Dependencies**:
* Removed several unused dependencies from `pyproject.toml` (`altair`, `fairscale`, `imagesize`, `invisible-watermark`, `pytorch-lightning`, `voluptuous`).
* Updated various dependencies to their latest stable and compatible versions.
---------
Co-authored-by: google-labs-jules[bot] <161369871+google-labs-jules[bot]@users.noreply.github.com>
improve-gui-uv-scripts
parent
d85ae6bdb5
commit
23b70c8f16
76
README.md
76
README.md
|
|
@ -106,19 +106,79 @@ These options are for users running training on hosted GPU infrastructure or con
|
||||||
- **[Docker setup](docs/docker.md)** – For developers/sysadmins using containerized environments.
|
- **[Docker setup](docs/docker.md)** – For developers/sysadmins using containerized environments.
|
||||||
|
|
||||||
|
|
||||||
## Custom Path Defaults
|
## Custom Path Defaults with `config.toml`
|
||||||
|
|
||||||
The repository now provides a default configuration file named `config.toml`. This file is a template that you can customize to suit your needs.
|
The GUI supports a configuration file named `config.toml` that allows you to set default paths for many of the input fields. This is useful for avoiding repetitive manual selection of directories every time you start the GUI.
|
||||||
|
|
||||||
To use the default configuration file, follow these steps:
|
**Purpose of `config.toml`:**
|
||||||
|
|
||||||
1. Copy the `config example.toml` file from the root directory of the repository to `config.toml`.
|
* Pre-fill default directory paths for pretrained models, datasets, output folders, LoRA models, etc.
|
||||||
2. Open the `config.toml` file in a text editor.
|
* Streamline your workflow by having the GUI remember your preferred locations.
|
||||||
3. Modify the paths and settings as per your requirements.
|
|
||||||
|
|
||||||
This approach allows you to easily adjust the configuration to suit your specific needs to open the desired default folders for each type of folder/file input supported in the GUI.
|
**How to Use and Customize:**
|
||||||
|
|
||||||
You can specify the path to your config.toml (or any other name you like) when running the GUI. For instance: ./gui.bat --config c:\my_config.toml
|
1. **Create your configuration file:**
|
||||||
|
* In the root directory of the `kohya_ss` repository, you'll find a file named `config example.toml`.
|
||||||
|
* Copy this file and rename the copy to `config.toml`. This `config.toml` file will be automatically loaded when the GUI starts.
|
||||||
|
2. **Edit `config.toml`:**
|
||||||
|
* Open `config.toml` with a text editor.
|
||||||
|
* The file uses TOML (Tom's Obvious, Minimal Language) format, which consists of `key = "value"` pairs.
|
||||||
|
* Modify the paths for the keys according to your local directory structure.
|
||||||
|
* **Important:**
|
||||||
|
* Use absolute paths (e.g., `C:/Users/YourName/StableDiffusion/Models` or `/home/yourname/sd-models`).
|
||||||
|
* Alternatively, you can use paths relative to the `kohya_ss` root directory.
|
||||||
|
* Ensure you use forward slashes (`/`) for paths, even on Windows, as this is generally more compatible with TOML and Python.
|
||||||
|
* Make sure the specified directories exist on your system.
|
||||||
|
|
||||||
|
**Structure of `config.toml`:**
|
||||||
|
|
||||||
|
The `config.toml` file can have several sections, typically corresponding to different training modes or general settings. Common keys you might want to set include:
|
||||||
|
|
||||||
|
* `model_dir`: Default directory for loading base Stable Diffusion models.
|
||||||
|
* `lora_model_dir`: Default directory for saving and loading LoRA models.
|
||||||
|
* `output_dir`: Default base directory for training outputs (images, logs, model checkpoints).
|
||||||
|
* `dataset_dir`: A general default if you store all your datasets in one place.
|
||||||
|
* Specific input paths for different training tabs like Dreambooth, Finetune, LoRA, etc. (e.g., `db_model_dir`, `ft_source_model_name_or_path`).
|
||||||
|
|
||||||
|
**Example Configurations:**
|
||||||
|
|
||||||
|
Here's an example snippet of what your `config.toml` might look like:
|
||||||
|
|
||||||
|
```toml
|
||||||
|
# General settings
|
||||||
|
model_dir = "C:/ai_stuff/stable-diffusion-webui/models/Stable-diffusion"
|
||||||
|
lora_model_dir = "C:/ai_stuff/stable-diffusion-webui/models/Lora"
|
||||||
|
vae_dir = "C:/ai_stuff/stable-diffusion-webui/models/VAE"
|
||||||
|
output_dir = "C:/ai_stuff/kohya_ss_outputs"
|
||||||
|
logging_dir = "C:/ai_stuff/kohya_ss_outputs/logs"
|
||||||
|
|
||||||
|
# Dreambooth specific paths
|
||||||
|
db_model_dir = "C:/ai_stuff/stable-diffusion-webui/models/Stable-diffusion"
|
||||||
|
db_reg_image_dir = "C:/ai_stuff/datasets/dreambooth_regularization_images"
|
||||||
|
# Add other db_... paths as needed
|
||||||
|
|
||||||
|
# Finetune specific paths
|
||||||
|
ft_model_dir = "C:/ai_stuff/stable-diffusion-webui/models/Stable-diffusion"
|
||||||
|
# Add other ft_... paths as needed
|
||||||
|
|
||||||
|
# LoRA / LoCon specific paths
|
||||||
|
lc_model_dir = "C:/ai_stuff/stable-diffusion-webui/models/Stable-diffusion" # Base model for LoRA training
|
||||||
|
lc_output_dir = "C:/ai_stuff/kohya_ss_outputs/lora"
|
||||||
|
lc_dataset_dir = "C:/ai_stuff/datasets/my_lora_project"
|
||||||
|
# Add other lc_... paths as needed
|
||||||
|
|
||||||
|
# You can find a comprehensive list of all available keys in the `config example.toml` file.
|
||||||
|
# Refer to it to customize paths for all supported options in the GUI.
|
||||||
|
```
|
||||||
|
|
||||||
|
**Using a Custom Config File Path:**
|
||||||
|
|
||||||
|
If you prefer to name your configuration file differently or store it in another location, you can specify its path using the `--config` command-line argument when launching the GUI:
|
||||||
|
|
||||||
|
* On Windows: `gui.bat --config D:/my_configs/kohya_settings.toml`
|
||||||
|
* On Linux/macOS: `./gui.sh --config /home/user/my_configs/kohya_settings.toml`
|
||||||
|
|
||||||
|
By effectively using `config.toml`, you can significantly speed up your training setup process. Always refer to the `config example.toml` for the most up-to-date list of configurable paths.
|
||||||
|
|
||||||
## LoRA
|
## LoRA
|
||||||
|
|
||||||
|
|
|
||||||
|
|
@ -15,7 +15,7 @@ dependencies = [
|
||||||
"einops==0.7.0",
|
"einops==0.7.0",
|
||||||
"fairscale==0.4.13",
|
"fairscale==0.4.13",
|
||||||
"ftfy==6.1.1",
|
"ftfy==6.1.1",
|
||||||
"gradio>=5.23.1",
|
"gradio>=5.34.1",
|
||||||
"huggingface-hub==0.29.3",
|
"huggingface-hub==0.29.3",
|
||||||
"imagesize==1.4.1",
|
"imagesize==1.4.1",
|
||||||
"invisible-watermark==0.2.0",
|
"invisible-watermark==0.2.0",
|
||||||
|
|
|
||||||
|
|
@ -7,7 +7,7 @@ easygui==0.98.3
|
||||||
einops==0.7.0
|
einops==0.7.0
|
||||||
fairscale==0.4.13
|
fairscale==0.4.13
|
||||||
ftfy==6.1.1
|
ftfy==6.1.1
|
||||||
gradio>=5.23.1
|
gradio>=5.34.1
|
||||||
huggingface-hub==0.29.3
|
huggingface-hub==0.29.3
|
||||||
imagesize==1.4.1
|
imagesize==1.4.1
|
||||||
invisible-watermark==0.2.0
|
invisible-watermark==0.2.0
|
||||||
|
|
|
||||||
|
|
@ -14,7 +14,7 @@ call .\venv\Scripts\deactivate.bat
|
||||||
call .\venv\Scripts\activate.bat
|
call .\venv\Scripts\activate.bat
|
||||||
|
|
||||||
REM first make sure we have setuptools available in the venv
|
REM first make sure we have setuptools available in the venv
|
||||||
python -m pip install --require-virtualenv --no-input -q -q setuptools
|
python -m pip install --require-virtualenv --no-input -q setuptools
|
||||||
|
|
||||||
REM Check if the batch was started via double-click
|
REM Check if the batch was started via double-click
|
||||||
IF /i "%comspec% /c %~0 " equ "%cmdcmdline:"=%" (
|
IF /i "%comspec% /c %~0 " equ "%cmdcmdline:"=%" (
|
||||||
|
|
|
||||||
|
|
@ -35,7 +35,8 @@ def check_python_version():
|
||||||
return False
|
return False
|
||||||
return True
|
return True
|
||||||
except Exception as e:
|
except Exception as e:
|
||||||
log.error(f"Failed to verify Python version. Error: {e}")
|
log.error(f"An unexpected error occurred while verifying Python version: {e}")
|
||||||
|
log.error("This might indicate a problem with your Python installation or environment configuration.")
|
||||||
return False
|
return False
|
||||||
|
|
||||||
|
|
||||||
|
|
@ -49,12 +50,17 @@ def update_submodule(quiet=True):
|
||||||
git_command.append("--quiet")
|
git_command.append("--quiet")
|
||||||
|
|
||||||
try:
|
try:
|
||||||
subprocess.run(git_command, check=True)
|
subprocess.run(git_command, check=True, capture_output=True, text=True)
|
||||||
log.info("Submodule initialized and updated.")
|
log.info("Submodule initialized and updated.")
|
||||||
except subprocess.CalledProcessError as e:
|
except subprocess.CalledProcessError as e:
|
||||||
log.error(f"Error during Git operation: {e}")
|
log.error(f"Error updating submodule. Git command: '{' '.join(git_command)}' failed with exit code {e.returncode}.")
|
||||||
except FileNotFoundError as e:
|
if e.stdout:
|
||||||
log.error(e)
|
log.error(f"Git stdout: {e.stdout.strip()}")
|
||||||
|
if e.stderr:
|
||||||
|
log.error(f"Git stderr: {e.stderr.strip()}")
|
||||||
|
log.error("Please ensure Git is installed and accessible in your PATH. Also, check your internet connection and repository permissions.")
|
||||||
|
except FileNotFoundError:
|
||||||
|
log.error(f"Error updating submodule: Git command not found. Please ensure Git is installed and accessible in your PATH.")
|
||||||
|
|
||||||
|
|
||||||
def clone_or_checkout(repo_url, branch_or_tag, directory_name):
|
def clone_or_checkout(repo_url, branch_or_tag, directory_name):
|
||||||
|
|
@ -106,7 +112,14 @@ def clone_or_checkout(repo_url, branch_or_tag, directory_name):
|
||||||
else:
|
else:
|
||||||
log.info(f"Already at required branch/tag: {branch_or_tag}")
|
log.info(f"Already at required branch/tag: {branch_or_tag}")
|
||||||
except subprocess.CalledProcessError as e:
|
except subprocess.CalledProcessError as e:
|
||||||
log.error(f"Error during Git operation: {e}")
|
log.error(f"Error during Git operation. Command: '{' '.join(e.cmd)}' failed with exit code {e.returncode}.")
|
||||||
|
if e.stdout:
|
||||||
|
log.error(f"Git stdout: {e.stdout.strip()}")
|
||||||
|
if e.stderr:
|
||||||
|
log.error(f"Git stderr: {e.stderr.strip()}")
|
||||||
|
log.error(f"Failed to clone or checkout {repo_url} ({branch_or_tag}). Please check the repository URL, branch/tag name, your internet connection, and Git installation.")
|
||||||
|
except FileNotFoundError:
|
||||||
|
log.error(f"Error during Git operation: Git command not found. Please ensure Git is installed and accessible in your PATH.")
|
||||||
finally:
|
finally:
|
||||||
os.chdir(original_dir)
|
os.chdir(original_dir)
|
||||||
|
|
||||||
|
|
@ -189,12 +202,27 @@ def install_requirements_inbulk(
|
||||||
log.info(line.strip()) if show_stdout else None
|
log.info(line.strip()) if show_stdout else None
|
||||||
|
|
||||||
# Capture and log any errors
|
# Capture and log any errors
|
||||||
_, stderr = process.communicate()
|
stdout, stderr = process.communicate()
|
||||||
if process.returncode != 0:
|
if process.returncode != 0:
|
||||||
log.error(f"Failed to install requirements: {stderr.strip()}")
|
log.error(f"Failed to install requirements from {requirements_file}. Pip command: '{' '.join(cmd)}'. Exit code: {process.returncode}")
|
||||||
|
if stdout:
|
||||||
|
log.error(f"Pip stdout: {stdout.strip()}")
|
||||||
|
if stderr:
|
||||||
|
log.error(f"Pip stderr: {stderr.strip()}")
|
||||||
|
log.error("Please check the requirements file path, your internet connection, and ensure pip is functioning correctly.")
|
||||||
|
else:
|
||||||
|
if stdout and show_stdout and not installed("uv"): # uv already prints its output
|
||||||
|
for line in stdout.splitlines():
|
||||||
|
if "Requirement already satisfied" not in line:
|
||||||
|
log.info(line.strip())
|
||||||
|
if stderr: # Always log stderr if present, even on success
|
||||||
|
log.warning(f"Pip stderr (even on success): {stderr.strip()}")
|
||||||
|
|
||||||
except subprocess.CalledProcessError as e:
|
|
||||||
log.error(f"An error occurred while installing requirements: {e}")
|
except FileNotFoundError:
|
||||||
|
log.error(f"Error installing requirements: '{cmd[0]}' command not found. Please ensure it is installed and in your PATH.")
|
||||||
|
except Exception as e:
|
||||||
|
log.error(f"An unexpected error occurred while installing requirements from {requirements_file}: {e}")
|
||||||
|
|
||||||
|
|
||||||
def configure_accelerate(run_accelerate=False):
|
def configure_accelerate(run_accelerate=False):
|
||||||
|
|
@ -288,27 +316,7 @@ def check_torch():
|
||||||
# This function was adapted from code written by vladimandic: https://github.com/vladimandic/automatic/commits/master
|
# This function was adapted from code written by vladimandic: https://github.com/vladimandic/automatic/commits/master
|
||||||
#
|
#
|
||||||
|
|
||||||
# Check for toolkit
|
_check_hardware_toolkit()
|
||||||
if shutil.which("nvidia-smi") is not None or os.path.exists(
|
|
||||||
os.path.join(
|
|
||||||
os.environ.get("SystemRoot") or r"C:\Windows",
|
|
||||||
"System32",
|
|
||||||
"nvidia-smi.exe",
|
|
||||||
)
|
|
||||||
):
|
|
||||||
log.info("nVidia toolkit detected")
|
|
||||||
elif shutil.which("rocminfo") is not None or os.path.exists(
|
|
||||||
"/opt/rocm/bin/rocminfo"
|
|
||||||
):
|
|
||||||
log.info("AMD toolkit detected")
|
|
||||||
elif (
|
|
||||||
shutil.which("sycl-ls") is not None
|
|
||||||
or os.environ.get("ONEAPI_ROOT") is not None
|
|
||||||
or os.path.exists("/opt/intel/oneapi")
|
|
||||||
):
|
|
||||||
log.info("Intel OneAPI toolkit detected")
|
|
||||||
else:
|
|
||||||
log.info("Using CPU-only Torch")
|
|
||||||
|
|
||||||
try:
|
try:
|
||||||
import torch
|
import torch
|
||||||
|
|
@ -323,43 +331,103 @@ def check_torch():
|
||||||
log.warning(f"Failed to import intel_extension_for_pytorch: {e}")
|
log.warning(f"Failed to import intel_extension_for_pytorch: {e}")
|
||||||
log.info(f"Torch {torch.__version__}")
|
log.info(f"Torch {torch.__version__}")
|
||||||
|
|
||||||
if torch.cuda.is_available():
|
_log_gpu_info(torch)
|
||||||
if torch.version.cuda:
|
|
||||||
|
return int(torch.__version__[0])
|
||||||
|
except ImportError as e:
|
||||||
|
log.error(f"Failed to import Torch: {e}. Please ensure PyTorch is installed correctly for your system.")
|
||||||
|
log.error("You might need to install or reinstall PyTorch. Check https://pytorch.org/get-started/locally/ for instructions.")
|
||||||
|
return 0
|
||||||
|
except Exception as e:
|
||||||
|
log.error(f"An unexpected error occurred while checking Torch: {e}")
|
||||||
|
return 0
|
||||||
|
|
||||||
|
|
||||||
|
def _check_nvidia_toolkit():
|
||||||
|
"""Checks for nVidia toolkit."""
|
||||||
|
if shutil.which("nvidia-smi") is not None or os.path.exists(
|
||||||
|
os.path.join(
|
||||||
|
os.environ.get("SystemRoot") or r"C:\Windows",
|
||||||
|
"System32",
|
||||||
|
"nvidia-smi.exe",
|
||||||
|
)
|
||||||
|
):
|
||||||
|
log.info("nVidia toolkit detected")
|
||||||
|
return True
|
||||||
|
return False
|
||||||
|
|
||||||
|
|
||||||
|
def _check_amd_toolkit():
|
||||||
|
"""Checks for AMD toolkit."""
|
||||||
|
if shutil.which("rocminfo") is not None or os.path.exists(
|
||||||
|
"/opt/rocm/bin/rocminfo"
|
||||||
|
):
|
||||||
|
log.info("AMD toolkit detected")
|
||||||
|
return True
|
||||||
|
return False
|
||||||
|
|
||||||
|
|
||||||
|
def _check_intel_oneapi_toolkit():
|
||||||
|
"""Checks for Intel OneAPI toolkit."""
|
||||||
|
if (
|
||||||
|
shutil.which("sycl-ls") is not None
|
||||||
|
or os.environ.get("ONEAPI_ROOT") is not None
|
||||||
|
or os.path.exists("/opt/intel/oneapi")
|
||||||
|
):
|
||||||
|
log.info("Intel OneAPI toolkit detected")
|
||||||
|
return True
|
||||||
|
return False
|
||||||
|
|
||||||
|
|
||||||
|
def _check_hardware_toolkit():
|
||||||
|
"""Checks for available hardware toolkits."""
|
||||||
|
if _check_nvidia_toolkit():
|
||||||
|
return
|
||||||
|
if _check_amd_toolkit():
|
||||||
|
return
|
||||||
|
if _check_intel_oneapi_toolkit():
|
||||||
|
return
|
||||||
|
log.info("Using CPU-only Torch")
|
||||||
|
|
||||||
|
|
||||||
|
def _log_gpu_info(torch_module):
|
||||||
|
"""Logs GPU information for available backends."""
|
||||||
|
if torch_module.cuda.is_available():
|
||||||
|
if torch_module.version.cuda:
|
||||||
# Log nVidia CUDA and cuDNN versions
|
# Log nVidia CUDA and cuDNN versions
|
||||||
log.info(
|
log.info(
|
||||||
f'Torch backend: nVidia CUDA {torch.version.cuda} cuDNN {torch.backends.cudnn.version() if torch.backends.cudnn.is_available() else "N/A"}'
|
f'Torch backend: nVidia CUDA {torch_module.version.cuda} cuDNN {torch_module.backends.cudnn.version() if torch_module.backends.cudnn.is_available() else "N/A"}'
|
||||||
)
|
)
|
||||||
elif torch.version.hip:
|
elif torch_module.version.hip:
|
||||||
# Log AMD ROCm HIP version
|
# Log AMD ROCm HIP version
|
||||||
log.info(f"Torch backend: AMD ROCm HIP {torch.version.hip}")
|
log.info(f"Torch backend: AMD ROCm HIP {torch_module.version.hip}")
|
||||||
else:
|
else:
|
||||||
log.warning("Unknown Torch backend")
|
log.warning("Unknown Torch backend")
|
||||||
|
|
||||||
# Log information about detected GPUs
|
# Log information about detected GPUs
|
||||||
for device in [
|
for i in range(torch_module.cuda.device_count()):
|
||||||
torch.cuda.device(i) for i in range(torch.cuda.device_count())
|
device = torch_module.cuda.device(i)
|
||||||
]:
|
|
||||||
log.info(
|
log.info(
|
||||||
f"Torch detected GPU: {torch.cuda.get_device_name(device)} VRAM {round(torch.cuda.get_device_properties(device).total_memory / 1024 / 1024)} Arch {torch.cuda.get_device_capability(device)} Cores {torch.cuda.get_device_properties(device).multi_processor_count}"
|
f"Torch detected GPU: {torch_module.cuda.get_device_name(device)} VRAM {round(torch_module.cuda.get_device_properties(device).total_memory / 1024 / 1024)} Arch {torch_module.cuda.get_device_capability(device)} Cores {torch_module.cuda.get_device_properties(device).multi_processor_count}"
|
||||||
)
|
)
|
||||||
# Check if XPU is available
|
# Check if XPU is available
|
||||||
elif hasattr(torch, "xpu") and torch.xpu.is_available():
|
elif hasattr(torch_module, "xpu") and torch_module.xpu.is_available():
|
||||||
# Log Intel IPEX version
|
# Log Intel IPEX version
|
||||||
|
# Ensure ipex is imported before accessing __version__
|
||||||
|
try:
|
||||||
|
import intel_extension_for_pytorch as ipex
|
||||||
log.info(f"Torch backend: Intel IPEX {ipex.__version__}")
|
log.info(f"Torch backend: Intel IPEX {ipex.__version__}")
|
||||||
for device in [
|
except ImportError:
|
||||||
torch.xpu.device(i) for i in range(torch.xpu.device_count())
|
log.warning("Intel IPEX version not available.")
|
||||||
]:
|
|
||||||
|
for i in range(torch_module.xpu.device_count()):
|
||||||
|
device = torch_module.xpu.device(i)
|
||||||
log.info(
|
log.info(
|
||||||
f"Torch detected GPU: {torch.xpu.get_device_name(device)} VRAM {round(torch.xpu.get_device_properties(device).total_memory / 1024 / 1024)} Compute Units {torch.xpu.get_device_properties(device).max_compute_units}"
|
f"Torch detected GPU: {torch_module.xpu.get_device_name(device)} VRAM {round(torch_module.xpu.get_device_properties(device).total_memory / 1024 / 1024)} Compute Units {torch_module.xpu.get_device_properties(device).max_compute_units}"
|
||||||
)
|
)
|
||||||
else:
|
else:
|
||||||
log.warning("Torch reports GPU not available")
|
log.warning("Torch reports GPU not available")
|
||||||
|
|
||||||
return int(torch.__version__[0])
|
|
||||||
except Exception as e:
|
|
||||||
log.error(f"Could not load torch: {e}")
|
|
||||||
return 0
|
|
||||||
|
|
||||||
|
|
||||||
# report current version of code
|
# report current version of code
|
||||||
def check_repo_version():
|
def check_repo_version():
|
||||||
|
|
@ -376,9 +444,9 @@ def check_repo_version():
|
||||||
|
|
||||||
log.info(f"Kohya_ss GUI version: {release}")
|
log.info(f"Kohya_ss GUI version: {release}")
|
||||||
except Exception as e:
|
except Exception as e:
|
||||||
log.error(f"Could not read release: {e}")
|
log.error(f"Could not read release file at './.release': {e}")
|
||||||
else:
|
else:
|
||||||
log.debug("Could not read release...")
|
log.debug("Could not read release file './.release' as it does not exist.")
|
||||||
|
|
||||||
|
|
||||||
# execute git command
|
# execute git command
|
||||||
|
|
@ -418,12 +486,13 @@ def git(arg: str, folder: str = None, ignore: bool = False):
|
||||||
)
|
)
|
||||||
txt = txt.strip()
|
txt = txt.strip()
|
||||||
if result.returncode != 0 and not ignore:
|
if result.returncode != 0 and not ignore:
|
||||||
global errors
|
# global errors # This variable is not defined in this file. Assuming it's a remnant from an older version or a different context.
|
||||||
errors += 1
|
# errors += 1
|
||||||
log.error(f"Error running git: {folder} / {arg}")
|
log.error(f"Error running git command 'git {arg}' in folder '{folder or '.'}'. Exit code: {result.returncode}")
|
||||||
if "or stash them" in txt:
|
if "or stash them" in txt:
|
||||||
log.error(f"Local changes detected: check log for details...")
|
log.error(f"Local changes detected. Please commit or stash them before running this command again. Full git output below.")
|
||||||
log.debug(f"Git output: {txt}")
|
log.error(f"Git output: {txt}") # Changed from log.debug to log.error for better visibility of error details
|
||||||
|
log.error("Please ensure Git is installed, the repository exists, you have the necessary permissions, and there are no conflicts or uncommitted changes.")
|
||||||
|
|
||||||
|
|
||||||
def pip(arg: str, ignore: bool = False, quiet: bool = False, show_stdout: bool = False):
|
def pip(arg: str, ignore: bool = False, quiet: bool = False, show_stdout: bool = False):
|
||||||
|
|
@ -477,8 +546,9 @@ def pip(arg: str, ignore: bool = False, quiet: bool = False, show_stdout: bool =
|
||||||
)
|
)
|
||||||
txt = txt.strip()
|
txt = txt.strip()
|
||||||
if result.returncode != 0 and not ignore:
|
if result.returncode != 0 and not ignore:
|
||||||
log.error(f"Error running pip: {arg}")
|
log.error(f"Error running pip command: '{' '.join(pip_cmd)}'. Exit code: {result.returncode}")
|
||||||
log.error(f"Pip output: {txt}")
|
log.error(f"Pip output: {txt}")
|
||||||
|
log.error("Please check the package name, version, your internet connection, and ensure pip is functioning correctly.")
|
||||||
return txt
|
return txt
|
||||||
|
|
||||||
|
|
||||||
|
|
@ -677,11 +747,24 @@ def run_cmd(run_cmd):
|
||||||
"""
|
"""
|
||||||
log.debug(f"Running command: {run_cmd}")
|
log.debug(f"Running command: {run_cmd}")
|
||||||
try:
|
try:
|
||||||
subprocess.run(run_cmd, shell=True, check=True, env=os.environ)
|
process = subprocess.run(run_cmd, shell=True, check=True, env=os.environ, capture_output=True, text=True)
|
||||||
log.debug(f"Command executed successfully: {run_cmd}")
|
log.debug(f"Command executed successfully: {run_cmd}")
|
||||||
|
if process.stdout:
|
||||||
|
log.debug(f"Stdout: {process.stdout.strip()}")
|
||||||
|
if process.stderr:
|
||||||
|
log.debug(f"Stderr: {process.stderr.strip()}")
|
||||||
except subprocess.CalledProcessError as e:
|
except subprocess.CalledProcessError as e:
|
||||||
log.error(f"Error occurred while running command: {run_cmd}")
|
log.error(f"Error occurred while running command: '{run_cmd}'. Exit code: {e.returncode}")
|
||||||
log.error(f"Error: {e}")
|
if e.stdout:
|
||||||
|
log.error(f"Stdout: {e.stdout.strip()}")
|
||||||
|
if e.stderr:
|
||||||
|
log.error(f"Stderr: {e.stderr.strip()}")
|
||||||
|
log.error("Please check the command syntax, permissions, and ensure all required programs are installed and in PATH.")
|
||||||
|
except FileNotFoundError:
|
||||||
|
# This might occur if the command itself (e.g., the first part of run_cmd) is not found
|
||||||
|
log.error(f"Error running command: '{run_cmd}'. The command or a part of it was not found. Please ensure it is correctly spelled and accessible in your PATH.")
|
||||||
|
except Exception as e:
|
||||||
|
log.error(f"An unexpected error occurred while running command '{run_cmd}': {e}")
|
||||||
|
|
||||||
|
|
||||||
def clear_screen():
|
def clear_screen():
|
||||||
|
|
|
||||||
Loading…
Reference in New Issue