* Update localizations for zh-TW
* Fixed merge_lycoris_gui issue
* Move additional_parameters to the very end of run_cmd so it can override the preceding arguments.
pull/2219/head^2
Hina Chen 2024-03-31 04:34:56 +08:00, committed by GitHub
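The third change above relies on standard CLI precedence: argparse-style parsers keep the last value seen for a repeated flag, so appending the user's additional_parameters at the very end of run_cmd lets them override any flag the GUI generated earlier. A minimal sketch of that precedence (illustrative only, with made-up flag values; not the actual kohya_ss code):

```python
import argparse
import shlex

# argparse keeps the last occurrence of a repeated option, so whatever
# is appended last wins.
parser = argparse.ArgumentParser()
parser.add_argument("--learning_rate", type=float)

run_cmd = ["--learning_rate", "1e-4"]           # flag generated by the GUI
additional_parameters = "--learning_rate 5e-5"  # user-supplied override

# Appending the user string at the very end of run_cmd lets it override
# the GUI-generated value.
args = parser.parse_args(run_cmd + shlex.split(additional_parameters))
print(args.learning_rate)  # the later value, 5e-5, wins
```

Had additional_parameters been placed before the GUI-generated flags instead, the GUI's value would silently win.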
"(Experimental, Optional) Since the latent is close to a normal distribution, it may be a good idea to specify a value around 1/10 the noise offset.":" (選填,實驗性功能) 由於潛空間接近常態分布,或許指定一個噪聲偏移約 1/10 的數值是個不錯的作法。",
"(Name of the model to output)":"(要輸出的模型名稱)",
"(Optional) Add training comment to be included in metadata":"(選填) 在訓練的後設資料 (metadata) 加入註解。",
"(Optional) Enforce number of epoch":" (選填) 強制指定一個週期 (Epoch) 數量",
"(Optional) For Cosine with restart and polynomial only":" (選填) 只適用於餘弦函數並使用重啟 (cosine_with_restart) 和多項式 (polynomial)",
"(Optional) Override number of epoch. Default: 8":" (選填) 覆蓋週期 (Epoch) 數量。預設:8",
"(Optional) Save only the specified number of models (old models will be deleted)":" (選填) 僅儲存指定數量的模型 (舊有模型將被刪除) ",
@@ -47,10 +47,12 @@
"Blocks":"區塊",
"Bucket resolution steps need to be greater than 0":"資料儲存桶解析度步數需要大於 0",
"Bucket resolution steps":"分桶解析度間隔",
"Bypass mode":"旁路模式 (Bypass mode)",
"Cache latents to disk":"暫存潛空間資料到硬碟",
"Cache latents":"暫存潛空間資料",
"Cache text encoder outputs":"暫存文本編碼器輸出",
"Cache the outputs of the text encoders. This option is useful to reduce the GPU memory usage. This option cannot be used with options for shuffling or dropping the captions.":"暫存文本編碼器的輸出。此選項有助於減少 GPU 記憶體的使用。此選項不能與打亂或丟棄提示詞 (Shuffle/Dropout caption) 的選項一起使用。",
"Could not modify caption files with requested change because the \"Overwrite existing captions in folder\" option is not selected.":"無法修改標記文字檔案以進行所需的更改,因為未選擇「覆蓋資料夾中現有的提示詞」選項。",
"Enable multires noise (recommended values are 6-10)":"啟用多解析度噪聲 (建議使用 6-10)",
"Enable the DoRA method for these algorithms":"為這些演算法啟用 DoRA 方法",
"Enter one sample prompt per line to generate multiple samples per cycle. Optional specifiers include: --w (width), --h (height), --d (seed), --l (cfg scale), --s (sampler steps) and --n (negative prompt). To modify sample prompts during training, edit the prompt.txt file in the samples directory.":"每行輸入一個提示詞來生成每個訓練週期的輸出範本。可以選擇指定的參數,包括:--w (寬度) ,--h (高度) ,--d (種子) ,--l (CFG 比例) ,--s (採樣器步驟) 和 --n (負面提示詞) 。如果要在訓練週期中修改提示詞,請修改範本目錄中的 prompt.txt 檔案。",
"Epoch":"週期 (Epoch)",
"Error":"錯誤",
@@ -165,7 +172,7 @@
"Is a normal probability dropout at the neuron level. In the case of LoRA, it is applied to the output of down. Recommended range 0.1 to 0.5":"是神經元級的正常概率捨棄。在 LoRA 的情況下,它被應用於 Down Sampler 的輸出。建議範圍 0.1 到 0.5",
"Keep n tokens":"保留 N 個提示詞",
"LR Scheduler":"學習率調度器 (LR Scheduler)",
"LR number of cycles":"學習率重啟週期數 (LR number of cycles)",
"LR # cycles":"學習率重啟週期數 (LR number of cycles)",
"LR power":"學習率乘冪 (LR power)",
"LR scheduler extra arguments":"學習率調度器額外參數",
"LR warmup (% of total steps)":"學習率預熱 (LR warmup, 總步數的 %)",
@@ -188,12 +195,15 @@
"LoRA model (path to the LoRA model to verify)":"LoRA 模型 (要驗證的 LoRA 模型的檔案路徑)",
"LoRA model types":"LoRA 模型類型",
"LoRA network weights":"LoRA 網路權重",
"LoRA type changed...":"LoRA 類型已更改...",
"Load":"載入",
"Load Stable Diffusion base model to":"載入穩定擴散基礎模型到",
"Load finetuned model to":"載入微調模型到",
"Load precision":"讀取精度",
"Load/Save Config file":"讀取/儲存設定檔案",
"Logging folder (Optional. to enable logging and output Tensorboard log)":"紀錄資料夾(選填,啟用紀錄和輸出 Tensorboard 紀錄)",
"Logging directory (Optional. to enable logging and output Tensorboard log)":"紀錄資料夾(選填,啟用紀錄和輸出 Tensorboard 紀錄)",
"Log tracker name":"紀錄追蹤器名稱",
"Log tracker config":"紀錄追蹤器設定",
"LyCORIS model (path to the LyCORIS model)":"LyCORIS 模型 (LyCORIS 模型的檔案路徑)",
"Manual Captioning":"手動標記文字",
"Max Norm Regularization is a technique to stabilize network training by limiting the norm of network weights. It may be effective in suppressing overfitting of LoRA and improving stability when used with other LoRAs. See PR #545 on kohya_ss/sd_scripts repo for details. Recommended setting: 1. Higher is weaker, lower is stronger.":"最大規範正規化是一種穩定網路訓練的技術,通過限制網路權重的規範來實現。當與其他 LoRA 一起使用時,它可能會有效地抑制 LoRA 的過度擬合並提高穩定性。詳細資料請見 kohya_ss/sd_scripts Github 上的 PR#545。建議設置:1.0 越高越弱,越低越強。",
@@ -239,6 +249,7 @@
"Multi GPU":"多個 GPU",
"Multires noise iterations":"多解析度噪聲迭代",
"Name of the new LCM model":"新 LCM 模型的名稱",
"Name of tracker to use for logging, default is script-specific default name":"用於記錄的追蹤器名稱,預設為特定於腳本的預設名稱",
"Network Alpha":"網路 Alpha",
"Network Dimension (Rank)":"網路維度 (Rank)",
"Network Rank (Dimension)":"網路維度 (Rank)",
@@ -268,11 +279,12 @@
"Output \"stop text encoder training\" is not yet supported. Ignoring":"輸出「停止文本編碼器訓練」尚未支援。忽略",
"Output":"輸出",
"Output folder (where the grouped images will be stored)":"輸出資料夾 (存放分組的圖片)",
"Output folder to output trained model":"輸出資料夾以輸出訓練模型",
"Output directory for trained model":"輸出資料夾以輸出訓練模型",
"Overwrite existing captions in folder":"覆蓋資料夾中現有的提示詞",
"Page Number":"頁碼",
"Parameters":"參數",
"Path to an existing LoRA network weights to resume training from":"現有 LoRA 檔案路徑,從現有 LoRA 中繼續訓練",
"Path to tracker config file to use for logging":"用於記錄的追蹤器設定檔案的路徑",
"Persistent data loader":"持續資料載入器 (Persistent data loader)",
"Please input learning rate values.":"請輸入學習率數值。",
"Please input valid Text Encoder learning rate (between 0 and 1)":"請輸入有效的文本編碼器學習率 (在 0 到 1 之間)",
@@ -304,7 +316,7 @@
"Recommended values are 0.05 - 0.15":"若使用時,建議使用 0.05 - 0.15",
"Recommended values are 0.8. For LoRAs with small datasets, 0.1-0.3":"建議使用 0.8。對於小數據集的 LoRA,建議使用 0.1-0.3",
"Regularisation images (Optional. directory containing the regularisation images)":"正規化圖片 (選填,含有正規化圖片的資料夾)",
"Regularisation images are used... Will double the number of steps required...":"使用了正規化圖片... 將使所需的步數加倍...",
"Repeats":"重複次數",
@@ -343,6 +355,7 @@
"Save training state":"儲存訓練狀態",
"Scale v prediction loss":"縮放 v 預測損失 (v prediction loss)",
"Scale weight norms":"縮放權重標準",
"SDXL Specific Parameters":"SDXL 特定參數",
"Seed":"種子 (Seed)",
"Selects trainable layers in a network, but trains normalization layers identically across methods as they lack matrix decomposition.":"選擇網路中的可訓練層,但由於缺乏矩陣分解,因此在各種方法中都以相同方式訓練規範化層。",
"Set if we change the information going into the system (True) or the information coming out of it (False).":"設定為 True,若我們改變進入系統的資訊,否則由系統輸出則設定為 False。",
@@ -394,6 +407,7 @@
"The provided model C is not a file":"提供的模型 C 不是檔案",
"The provided model D is not a file":"提供的模型 D 不是檔案",
"The provided model is not a file":"提供的模型不是檔案",
"The name of the specific wandb session":"指定 WANDB session 的名稱",
"This option appends the tags to the existing tags, instead of replacing them.":"此選項將標籤附加到現有標籤,而不是替換它們。",
"This section provide Various Finetuning guides and information...":"此部分提供各種微調指南和資訊...",
"This section provide various LoRA tools...":"此部分提供各種 LoRA 工具...",
@@ -424,7 +438,7 @@
"Train a custom model using kohya finetune python code...":"使用 kohya 微調 Python 程式訓練自定義模型",
"Train a TI using kohya textual inversion python code...":"使用 kohya 文本反轉 Python 程式訓練 TI",
"Train an additional scalar in front of the weight difference, use a different weight initialization strategy.":"在權重差異前訓練一個額外的標量,使用不同的權重初始化策略。",
"Train config folder (Optional. where config files will be saved)":"訓練設定資料夾(選填,設定檔案將會被儲存的資料夾)",
"Train config directory (Optional. where config files will be saved)":"訓練設定資料夾(選填,設定檔案將會被儲存的資料夾)",
"Train text encoder":"訓練文本編碼器",
"Trained Model output name":"訓練模型輸出名稱",
"Training comment":"訓練註解",
@@ -447,7 +461,7 @@
"Useful to force model re download when switching to onnx":"在切換到 ONNX 時強制重新下載模型",
"Users can obtain and/or generate an api key in the their user settings on the website: https://wandb.ai/login":"使用者可以在以下網站的用戶設定中取得,或產生 API 金鑰:https://wandb.ai/login",
"V Pred like loss":"V 預測損失 (V Pred like loss)",
"VAE (Optional. path to checkpoint of vae to replace for training)":"VAE (選填,選擇要替換訓練的 VAE checkpoint 的檔案路徑)",
"VAE (Optional: Path to checkpoint of vae for training)":"VAE (選填:選擇要替換訓練的 VAE checkpoint 的檔案路徑)",
"VAE batch size":"VAE 批次大小",
"Value for the dynamic method selected.":"選擇的動態方法的數值。",
"Values greater than 0 will make the model more img2img focussed. 0 = image only":"大於 0 的數值會使模型更加聚焦在 img2img 上。0 表示僅關注於圖像生成",