build: static localizations

pull/3/head
github-actions[bot] 2023-11-29 18:02:09 +00:00
parent 5b630fec1c
commit 47ba0d4663
2 changed files with 22 additions and 0 deletions

@@ -1760,6 +1760,7 @@
"Enable Tiled VAE": "启用 Tiled VAE",
"Enable Token Merging (faster, less VRAM, less accurate)": "启用 Token 合并 (更快,使用更少显存,精确度降低)",
"Enable tooltip on the canvas": "启用画布上的工具列提示框",
"Enable to Store file in object storage that supports the s3 protocol": "将档案储存到支援 S3 协议的物件存储",
"enable to trigger webui's face restoration on each frame during the generation": "启用以在每一帧生成过程中触发 webui 的脸部修复功能",
"Enable uploading manually created mask to SAM.": "启用手动上传遮罩到 SAM。",
"Enable upscale with extras": "启用后期处理放大",
@@ -3042,6 +3043,7 @@
"Localization: refresh": "本地化: 刷新",
"Localization (requires restart)": "本地化翻译(需要重启)",
"Location": "FFmpeg 所在位置",
"Lock current settings": "锁定当前设定",
"Log": "日志",
"Log directory": "日志目录",
"Logo": "图标",
@@ -3396,6 +3398,7 @@
"Model compile mode (experimental)": "模型编译模式(实验性)",
"Model configuration": "模型组态",
"Model Converter": "模型转换器",
"Model D": "模型 D",
"Model description/readme/notes/instructions": "模型的描述资讯",
"Model Epoch:": "模型训练周期:",
"Model filename:": "模型档名: ",
@@ -4020,6 +4023,7 @@
"Play notification sound after image generation": "完成图像生成后播放通知声音",
"Please add text prompts to generate masks": "请新增文本提示词以生成遮罩",
"Please always keep values in math functions above 0.": "请始终保证数学函式中的值大于 0",
"Please click": "请点击",
"Please enable the following settings to use controlnet from this script.": "请启用以下设定以保证可以在此脚本中呼叫ControlNet。",
"Please press 'Refresh' to load selected content!": "请点击 \"刷新\" 按钮载入所选内容!",
"Please select a valid lightdiffusionflow or image file!": "请选择一个有效的 LightDiffusionFlow 或图像档案!",
@@ -4646,6 +4650,7 @@
"Save depth maps": "储存深度图",
"Save EMA Weights to Generated Models": "将 EMA 权重储存到产生的模型中",
"Save every n epochs": "每 n 个 epoch 储存一次",
"Save format": "储存格式",
"Save generated images within tensorboard.": "在 TensorBoard 中储存产生的图像。",
"Save generation params text": "储存生成参数文本",
"Save grids to a subdirectory": "将宫格图储存到子目录",
@@ -4716,6 +4721,7 @@
"Save to Presets": "储存到预设",
"Save to tags files": "储存标签到档案",
"Save vector to text file": "将向量储存到文本档案",
"Save WebP in lossless format (highest quality, largest file size)": "储存为无损 WebP 格式 (最高品质,最大档案体积)",
"Save wildcards": "储存万用字元",
"Save with JSON": "储存为 JSON 格式",
"Save zip archive with images to a dedicated directory (log/images)": "储存包含图像的 .zip 档案到指定目录 (预设: log/images)",
@@ -5466,6 +5472,7 @@
"(this is optional. Perform color correction on the img2img results and expect flickering to decrease. Or, you can simply change the color tone from the generated result.)": "(此步骤可选。对图生图的结果进行颜色校正,可能会减少影片闪烁。或者,你也可以简单地在生成结果中调色。)",
"This is the total number of training steps that will be performed on each instance image.": "在每张图像上执行的训练步数总数。",
"This is your models list.": "这是您的模型列表。",
"this link": "这个网址",
"This mode works ONLY with 2D/3D animation modes. Interpolation and Video Input modes aren't supported.": "此模式仅适用于2D/3D动画模式。不支持帧插值和影片输入模式。",
"this option renders N times before the final render. it is suggested to lower your steps if you up your redo. seed is randomized during redo generations and restored afterwards": "在最终渲染前进行 N 次渲染。如果增加重做次数,建议降低迭代步数。随机数种子在重做过程中被随机化,并在重做后被恢复。",
"this option takes twice as long because it generates twice in order to capture the optical flow from the previous image to the first generation, then warps the previous image and redoes the generation": "该选项需要两倍生成时间,因为它会生成两次图像,以捕获从上一幅图像到第一次生成的光流,接着变形上一幅图像并重新生成",
@@ -5573,6 +5580,7 @@
"Top/Bottom Balance": "上/下平衡",
"top_centered": "顶部居中",
"torch GPU test": "Torch GPU 测试",
"to read the documentation of each parameter.": "阅读关于各参数的说明。",
"to share your creations and suggestions.": "来分享您的创作和建议。",
"total": "总计",
"(Total)": "(合计)",
@@ -5798,6 +5806,7 @@
"Use common prompt": "使用常见提示词 (Common Prompt)",
"Use Concepts List": "使用概念列表",
"Use Control Net inpaint model": "使用 ControlNet 的 inpaint 模型",
"Use CPU": "使用 CPU",
"Use CPU for SAM": "使用 CPU 进行 SAM 处理",
"Use CPU Only (SLOW)": "只用 CPU (很慢)",
"Use cross attention optimizations while training": "训练时开启 cross attention 最佳化",
@@ -5824,6 +5833,7 @@
"Use FP16 or BF16 (if available) will help improve memory performance. Required when using 'xformers'.": "使用 FP16 或 BF16 (如可用) 有助于提高记忆体性能。启用 \"xformers\" 时需要。",
"(Use FP8 to store Linear/Conv layers' weight. Require pytorch>=2.1.0.)": "(使用 FP8 来储存 Linear/Conv 层的权重。需要 Pytorch 版本 >=2.1.0)",
"(Useful for users who use PortMaster or other software that controls the DNS)": "(对于使用 PortMaster 或其他控制 DNS 的软体的使用者有用)",
"Use GPU": "使用 GPU",
"Use GPU, only for CUDA on Windows/Linux - experimental and risky, can messed up dependencies (requires restart)": "使用 GPU, 仅限 Windows/Linux 上使用 CUDA - 实验中且高风险, 可能损坏相依 (需要重启)",
"Use half floats": "启用半精度浮点",
"Use Hash from Metadata (May have false-positives but can be useful if you've pruned models)": "使用元数据中的杂凑 (可能会有误报,但对已修剪的模型很有用)",
@@ -5849,6 +5859,7 @@
"Use mask video": "使用影片遮罩",
"Use \"Maximum dimension\" for aspect ratio buttons (by default we use the max width or height)": "使用 “最大尺寸” 作为纵横比按钮 (预设使用最大宽度或最大高度)",
"use MBW": "使用分块合并",
"use Merge Block Weights": "使用分块合并",
"Use mid-control on highres pass (second pass)": "进行高解析度修复时使用中间层控制 (mid-control)",
"Use minimal area (for close faces)": "使用最小区域(适用近处的脸)",
"Use minimal area for face selection": "用最小面积进行脸部选择",

@@ -1760,6 +1760,7 @@
"Enable Tiled VAE": "啟用 Tiled VAE",
"Enable Token Merging (faster, less VRAM, less accurate)": "啟用 Token 合併 (更快,使用更少顯存,精確度降低)",
"Enable tooltip on the canvas": "啟用畫布上的工具列提示框",
"Enable to Store file in object storage that supports the s3 protocol": "將檔案儲存到支援 S3 協議的物件存儲",
"enable to trigger webui's face restoration on each frame during the generation": "啟用以在每一幀生成過程中觸發 webui 的臉部修復功能",
"Enable uploading manually created mask to SAM.": "啟用手動上傳遮罩到 SAM。",
"Enable upscale with extras": "啟用後期處理放大",
@@ -3042,6 +3043,7 @@
"Localization: refresh": "本地化: 刷新",
"Localization (requires restart)": "本地化翻譯(需要重啟)",
"Location": "FFmpeg 所在位置",
"Lock current settings": "鎖定當前設定",
"Log": "日誌",
"Log directory": "日誌目錄",
"Logo": "圖標",
@@ -3396,6 +3398,7 @@
"Model compile mode (experimental)": "模型編譯模式(實驗性)",
"Model configuration": "模型組態",
"Model Converter": "模型轉換器",
"Model D": "模型 D",
"Model description/readme/notes/instructions": "模型的描述資訊",
"Model Epoch:": "模型訓練週期:",
"Model filename:": "模型檔名: ",
@@ -4020,6 +4023,7 @@
"Play notification sound after image generation": "完成圖像生成後播放通知聲音",
"Please add text prompts to generate masks": "請新增文本提示詞以生成遮罩",
"Please always keep values in math functions above 0.": "請始終保證數學函式中的值大於 0",
"Please click": "請點擊",
"Please enable the following settings to use controlnet from this script.": "請啟用以下設定以保證可以在此腳本中呼叫ControlNet。",
"Please press 'Refresh' to load selected content!": "請點擊 \"刷新\" 按鈕載入所選內容!",
"Please select a valid lightdiffusionflow or image file!": "請選擇一個有效的 LightDiffusionFlow 或圖像檔案!",
@@ -4646,6 +4650,7 @@
"Save depth maps": "儲存深度圖",
"Save EMA Weights to Generated Models": "將 EMA 權重儲存到產生的模型中",
"Save every n epochs": "每 n 個 epoch 儲存一次",
"Save format": "儲存格式",
"Save generated images within tensorboard.": "在 TensorBoard 中儲存產生的圖像。",
"Save generation params text": "儲存生成參數文本",
"Save grids to a subdirectory": "將宮格圖儲存到子目錄",
@@ -4716,6 +4721,7 @@
"Save to Presets": "儲存到預設",
"Save to tags files": "儲存標籤到檔案",
"Save vector to text file": "將向量儲存到文本檔案",
"Save WebP in lossless format (highest quality, largest file size)": "儲存為無損 WebP 格式 (最高品質,最大檔案體積)",
"Save wildcards": "儲存萬用字元",
"Save with JSON": "儲存為 JSON 格式",
"Save zip archive with images to a dedicated directory (log/images)": "儲存包含圖像的 .zip 檔案到指定目錄 (預設: log/images)",
@@ -5466,6 +5472,7 @@
"(this is optional. Perform color correction on the img2img results and expect flickering to decrease. Or, you can simply change the color tone from the generated result.)": "(此步驟可選。對圖生圖的結果進行顏色校正,可能會減少影片閃爍。或者,你也可以簡單地在生成結果中調色。)",
"This is the total number of training steps that will be performed on each instance image.": "在每張圖像上執行的訓練步數總數。",
"This is your models list.": "這是您的模型列表。",
"this link": "這個網址",
"This mode works ONLY with 2D/3D animation modes. Interpolation and Video Input modes aren't supported.": "此模式僅適用於2D/3D動畫模式。不支持幀插值和影片輸入模式。",
"this option renders N times before the final render. it is suggested to lower your steps if you up your redo. seed is randomized during redo generations and restored afterwards": "在最終渲染前進行 N 次渲染。如果增加重做次數,建議降低迭代步數。隨機數種子在重做過程中被隨機化,並在重做後被恢復。",
"this option takes twice as long because it generates twice in order to capture the optical flow from the previous image to the first generation, then warps the previous image and redoes the generation": "該選項需要兩倍生成時間,因為它會生成兩次圖像,以捕獲從上一幅圖像到第一次生成的光流,接著變形上一幅圖像並重新生成",
@@ -5573,6 +5580,7 @@
"Top/Bottom Balance": "上/下平衡",
"top_centered": "頂部居中",
"torch GPU test": "Torch GPU 測試",
"to read the documentation of each parameter.": "閱讀關於各參數的說明。",
"to share your creations and suggestions.": "來分享您的創作和建議。",
"total": "總計",
"(Total)": "(合計)",
@@ -5798,6 +5806,7 @@
"Use common prompt": "使用常見提示詞 (Common Prompt)",
"Use Concepts List": "使用概念列表",
"Use Control Net inpaint model": "使用 ControlNet 的 inpaint 模型",
"Use CPU": "使用 CPU",
"Use CPU for SAM": "使用 CPU 進行 SAM 處理",
"Use CPU Only (SLOW)": "只用 CPU (很慢)",
"Use cross attention optimizations while training": "訓練時開啟 cross attention 最佳化",
@@ -5824,6 +5833,7 @@
"Use FP16 or BF16 (if available) will help improve memory performance. Required when using 'xformers'.": "使用 FP16 或 BF16 (如可用) 有助於提高記憶體性能。啟用 \"xformers\" 時需要。",
"(Use FP8 to store Linear/Conv layers' weight. Require pytorch>=2.1.0.)": "(使用 FP8 來儲存 Linear/Conv 層的權重。需要 Pytorch 版本 >=2.1.0)",
"(Useful for users who use PortMaster or other software that controls the DNS)": "(對於使用 PortMaster 或其他控制 DNS 的軟體的使用者有用)",
"Use GPU": "使用 GPU",
"Use GPU, only for CUDA on Windows/Linux - experimental and risky, can messed up dependencies (requires restart)": "使用 GPU, 僅限 Windows/Linux 上使用 CUDA - 實驗中且高風險, 可能損壞相依 (需要重啟)",
"Use half floats": "啟用半精度浮點",
"Use Hash from Metadata (May have false-positives but can be useful if you've pruned models)": "使用元數據中的雜湊 (可能會有誤報,但對已修剪的模型很有用)",
@@ -5849,6 +5859,7 @@
"Use mask video": "使用影片遮罩",
"Use \"Maximum dimension\" for aspect ratio buttons (by default we use the max width or height)": "使用 “最大尺寸” 作為縱橫比按鈕 (預設使用最大寬度或最大高度)",
"use MBW": "使用分塊合併",
"use Merge Block Weights": "使用分塊合併",
"Use mid-control on highres pass (second pass)": "進行高解析度修復時使用中間層控制 (mid-control)",
"Use minimal area (for close faces)": "使用最小區域(適用近處的臉)",
"Use minimal area for face selection": "用最小面積進行臉部選擇",