Implement Composable LoRA/LyCORIS with steps
|
|
@@ -0,0 +1,4 @@
|
|||
{
|
||||
"python.envFile": "${workspaceFolder}/.env",
|
||||
"python.defaultInterpreterPath": "${workspaceFolder}/../../sd.webui/webui/venv/Scripts/"
|
||||
}
|
||||
|
|
@@ -0,0 +1,135 @@
|
|||
# Composable LoRA/LyCORIS with steps
|
||||
この拡張機能は、内部のforward LoRAプロセスを置き換え、同時にLoCon、LyCORISをサポートします。
|
||||
|
||||
この拡張機能はComposable LoRAのフォークです。
|
||||
|
||||
### 言語
|
||||
* [英語](README.md) (グーグル翻訳)
|
||||
* [台湾中国語](README.zh-tw.md)
|
||||
* [簡体字中国語](README.zh-cn.md) (ウィキペディア繁簡変換システム)
|
||||
|
||||
## インストール
|
||||
注意: このバージョンのComposable LoRAには、元のComposable LoRAのすべての機能が含まれています。1つ選んでインストールするだけです。
|
||||
|
||||
この拡張機能は、元のバージョンのComposable LoRA拡張機能と同時に使用できません。インストールする前に、`webui\extensions\`フォルダー内の`stable-diffusion-webui-composable-lora`フォルダーを削除する必要があります。
|
||||
|
||||
次に、WebUIの\[Extensions\] -> \[Install from URL\]で以下のURLを入力します。
|
||||
```
|
||||
https://github.com/a2569875/stable-diffusion-webui-composable-lora.git
|
||||
```
|
||||
インストールして再起動します。
|
||||
|
||||
## 機能
|
||||
### Composable-Diffusionと互換性がある
|
||||
LoRAの挿入箇所を`AND`構文と関連付け、LoRAの影響範囲を特定のサブプロンプト内に限定します(特定の`AND...AND`ブロック内)。
|
||||
|
||||
### ステップ単位のComposable
|
||||
形式`[A:B:N]`のプロンプトにLoRAを配置し、LoRAの影響範囲を特定のグラフィックステップに制限します。
|
||||

|
||||
|
||||
### LoRA重み制御
|
||||
`[A #xxx]`構文を追加して、LoRAの各グラフィックステップでの重みを制御できます。
|
||||
現在、サポートされているものは以下のとおりです。
|
||||
* `decrease`
|
||||
- LoRAの有効なステップ数で徐々に重みを減少させ、0になります
|
||||
* `increment`
|
||||
- LoRAの有効なステップ数で0から重みを徐々に増加させます
|
||||
* `cmd(...)`
|
||||
- カスタムの重み制御コマンドで、主にPython構文を使用します。
|
||||
* 使用可能なパラメータ
|
||||
+ `weight`
|
||||
* 現在のLoRA重み
|
||||
+ `life`
|
||||
* 0-1の数字で、現在のLoRAのライフサイクルを表します。開始ステップ数にある場合は0であり、このLoRAが最後に適用されるステップ数にある場合は1です。
|
||||
+ `step`
|
||||
* 現在のステップ数
|
||||
+ `steps`
|
||||
* 全ステップ数
|
||||
* 使用可能な関数は以下の通りです
|
||||
+ `warmup(x)`
|
||||
* xは0から1までの数値で、総ステップ数に対して、xの比率以下のステップでは関数値が0から1に徐々に上昇し、x以降は1になります。
|
||||
+ `cooldown(x)`
|
||||
* xは0から1までの数値で、総ステップ数に対して、xの比率以上のステップでは関数値が1から0に徐々に減少し、0になります。
|
||||
+ sin, cos, tan, asin, acos, atan
|
||||
* すべてのステップを周期とする三角関数です。sin、cosの値は0〜1の範囲に変換されます。
|
||||
+ sinr, cosr, tanr, asinr, acosr, atanr
|
||||
* 弧度単位の周期2*piの三角関数です。
|
||||
+ abs, ceil, floor, trunc, fmod, gcd, lcm, perm, comb, gamma, sqrt, cbrt, exp, pow, log, log2, log10
|
||||
* Pythonのmath関数ライブラリと同じ関数です。
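As a rough illustration of how the rescaled trigonometric functions might behave (an assumption based on the description above, not the extension's actual code): `sin`/`cos` complete one cycle over the LoRA's lifecycle and are remapped from [-1, 1] into [0, 1], while the `r`-suffixed variants work in plain radians with a 2π period.

```python
import math

# Sketch only: remap sin from [-1, 1] into [0, 1], one cycle per lifecycle.
# The names mirror the syntax above; the exact formulas the extension uses
# are an assumption here.
def sin01(x: float) -> float:
    return 0.5 + 0.5 * math.sin(2 * math.pi * x)

def sinr(x: float) -> float:
    # radians variant, period 2*pi, not rescaled
    return math.sin(x)
```

With `#cmd(sin(life))`, the weight would then start at 0.5, peak at 1.0 a quarter of the way through the LoRA's active steps, and return to 0.5 at the end.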
|
||||
例 :
|
||||
* `[<lora:A:1>::10]`
|
||||
- 名前がAのLoRAを使用して、10ステップで停止します。
|
||||

|
||||
* `[<lora:A:1>:<lora:B:1>:10]`
|
||||
- 名前がAのLoRAを、10ステップまで使用し、10ステップから名前がBのLoRAを使用します。
|
||||

|
||||
* `[<lora:A:1>:10]`
|
||||
- 10ステップから名前がAのLoRAを使用します。
|
||||
* `[<lora:A:1>:0.5]`
|
||||
- 50%のステップから名前がAのLoRAを使用します。
|
||||
* `[[<lora:A:1>::25]:10]`
|
||||
- 10ステップから名前がAのLoRAを使用し、25ステップで使用を停止します。
|
||||

|
||||
* `[<lora:A:1> #increment:10]`
|
||||
- 名前がAのLoRAを使用する期間中に重みを0から線形に増加させ、設定された重みに到達します。そして、10ステップからこのLoRAを使用します。
|
||||

|
||||
* `[<lora:A:1> #decrease:10]`
|
||||
- 名前がAのLoRAを使用する期間中に重みを1から線形に減少させ、0に到達します。そして、10ステップからこのLoRAを使用します。
|
||||

|
||||
* `[<lora:A:1> #cmd\(warmup\(0.5\)\):10]`
|
||||
- 名前がAのLoRAを使用する期間中、重みはウォームアップ定数であり、0からこのLoRAのライフサイクルの50%に到達するまで線形に増加します。そして、10ステップからこのLoRAを使用します。
|
||||
- 
|
||||
* `[<lora:A:1> #cmd\(sin\(life\)\):10]`
|
||||
- 名前がAのLoRAを使用する期間中、重みは正弦波であり、10ステップからこのLoRAを使用します。
|
||||

|
||||
|
||||
すべての生成された画像:
|
||||

|
||||
|
||||
### ネガティブプロンプトへの影響の排除
|
||||
内蔵のLoRAを使用する場合、ネガティブプロンプトは常にLoRAの影響を受けます。これは通常、出力に悪影響を与えます。この拡張機能は、その悪影響を排除するオプションを提供します。
|
||||
|
||||
## 使用方法
|
||||
### 有効化 (Enabled)
|
||||
このオプションをオンにすると、Composable LoRAの機能を使用できるようになります。
|
||||
|
||||
### Composable LoRA with step
|
||||
特定のステップでLoRAを有効または無効にする機能を使用するには、このオプションを選択する必要があります。
|
||||
|
||||
### Use Lora in uc text model encoder
|
||||
言語モデルエンコーダー(text model encoder)のネガティブプロンプト部分でLoRAを使用します。
|
||||
このオプションをオフにすると、より良い出力が期待できます。
|
||||
|
||||
### Use Lora in uc diffusion model
|
||||
拡散モデル(diffusion model)またはデノイザー(denoiser)のネガティブプロンプト部分でLoRAを使用します。
|
||||
このオプションをオフにすると、より良い出力が期待できます。
|
||||
|
||||
### plot the LoRA weight in all steps
|
||||
\[Composable LoRA with step\]が選択されている場合、LoRAの重みが各ステップでどのように変化するかを観察するために、このオプションを選択できます。
|
||||
|
||||
## 互換性
|
||||
`--always-batch-cond-uncond`は`--medvram`または`--lowvram`と一緒に使用する必要があります。
|
||||
|
||||
## 更新ログ
|
||||
### 2023-04-02
|
||||
* LoCon、LyCORISサポートを追加
|
||||
* 不具合を修正:IndexError: list index out of range
|
||||
### 2023-04-08
|
||||
* 複数の異なるANDブロックで同じLoRAを使用できるようにする
|
||||

|
||||
### 2023-04-13
|
||||
* 2023-04-08のバージョンでpull requestを提出
|
||||
### 2023-04-19
|
||||
* pytorch 2.0を使用する場合に拡張がロードされない問題を修正
|
||||
* 不具合を修正: RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda and cpu! (when checking argument for argument mat2 in method wrapper_CUDA_mm)
|
||||
### 2023-04-20
|
||||
* 特定のステップでLoRAを有効または無効にする機能を実装
|
||||
* LoCon、LyCORISの拡張プログラムを参考にし、異なるANDブロックおよびステップでのLoRAの有効化/無効化アルゴリズムを改善
|
||||
### 2023-04-21
|
||||
* 異なるステップ数でのLoRAの重みを制御する方法の実装 `[A #xxx]`
|
||||
* 異なるステップ数でのLoRAの重み変化を示すグラフの作成
|
||||
|
||||
## 特別な感謝
|
||||
* [Composable LoRAの原作者 opparco](https://github.com/opparco)、[Composable LoRA](https://github.com/opparco/stable-diffusion-webui-composable-lora)
|
||||
* [JackEllieのStable-Diffusionコミュニティチーム](https://discord.gg/TM5d89YNwA)、[YouTubeチャンネル](https://www.youtube.com/@JackEllie)
|
||||
* [中文ウィキペディアのコミュニティチーム](https://discord.gg/77n7vnu)
|
||||
README.md
|
|
@@ -1,9 +1,90 @@
|
|||
# Composable LoRA
|
||||
This extension replaces the built-in LoRA forward procedure.
|
||||
# Composable LoRA/LyCORIS with steps
|
||||
This extension replaces the built-in LoRA forward procedure and provides support for LoCon and LyCORIS.
|
||||
|
||||
This extension is forked from the Composable LoRA extension.
|
||||
|
||||
### Language
|
||||
* [繁體中文](README.zh-tw.md)
|
||||
* [简体中文](README.zh-cn.md) (Wikipedia zh converter)
|
||||
* [日本語](README.ja.md) (ChatGPT)
|
||||
|
||||
## Installation
|
||||
Note: This version of Composable LoRA already includes all the features of the original version of Composable LoRA. You only need to select one to install.
|
||||
|
||||
This extension cannot be used simultaneously with the original version of the Composable LoRA extension. Before installation, you must first delete the `stable-diffusion-webui-composable-lora` folder of the original version of the Composable LoRA extension in the `webui\extensions\` directory.
|
||||
|
||||
Next, go to \[Extension\] -> \[Install from URL\] in the webui and enter the following URL:
|
||||
```
|
||||
https://github.com/a2569875/stable-diffusion-webui-composable-lora.git
|
||||
```
|
||||
Install and restart to complete the process.
|
||||
|
||||
## Features
|
||||
### Compatible with Composable-Diffusion
|
||||
By associating LoRA's insertion position in the prompt with "AND" syntax, LoRA's scope of influence is limited to a specific subprompt.
|
||||
By associating LoRA's insertion position in the prompt with `AND` syntax, LoRA's scope of influence is limited to a specific subprompt.
|
||||
|
||||
### Composable with step
|
||||
By placing LoRA within a prompt in the form of `[A:B:N]`, the scope of LoRA's effect is limited to specific drawing steps.
|
||||

|
||||
|
||||
### LoRA weight controller
|
||||
Added a syntax `[A #xxx]` to control the weight of LoRA at each drawing step.
|
||||
Currently supported options are:
|
||||
* `decrease`
|
||||
- Gradually decreases the weight over the LoRA's effective steps until it reaches 0.
|
||||
* `increment`
|
||||
- Gradually increases the weight from 0 over the LoRA's effective steps.
|
||||
* `cmd(...)`
|
||||
- A customizable weight control command, mainly using Python syntax.
|
||||
* Available parameters
|
||||
+ `weight`
|
||||
* The current weight of LoRA.
|
||||
+ `life`
|
||||
* A number between 0-1, indicating the current life cycle of LoRA. It is 0 when it is at the starting step and 1 when it is at the final step of this LoRA's effect.
|
||||
+ `step`
|
||||
* The current step number.
|
||||
+ `steps`
|
||||
* The total number of steps.
|
||||
* Available functions
|
||||
+ `warmup(x)`
|
||||
* x is a number between 0-1, representing a warmup constant. Measured as a fraction of the total steps, the function value increases linearly from 0 to 1 until the fraction x is reached, and stays at 1 afterwards.
|
||||
+ `cooldown(x)`
|
||||
* x is a number between 0-1, representing a cooldown constant. Measured as a fraction of the total steps, the function value stays at 1 until the fraction x is reached, then decreases linearly from 1 to 0.
|
||||
+ sin, cos, tan, asin, acos, atan
|
||||
* Trigonometric functions whose period is the full step count. The values of sin and cos are rescaled into the range 0-1.
|
||||
+ sinr, cosr, tanr, asinr, acosr, atanr
|
||||
* Trigonometric functions in radians, with a period of 2π.
|
||||
+ abs, ceil, floor, trunc, fmod, gcd, lcm, perm, comb, gamma, sqrt, cbrt, exp, pow, log, log2, log10
|
||||
* Functions in the math library of Python.
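To make the `cmd(...)` parameters concrete, here is a hedged sketch of how such an expression could be evaluated: the names `weight`, `life`, `step`, `steps`, `warmup` and `cooldown` are bound into a small namespace and the command is evaluated as a Python expression. The formulas follow the descriptions above; the extension's real implementation may differ.

```python
import math

def eval_cmd(expr: str, weight: float, step: int, steps: int, life: float) -> float:
    # warmup(x): ramp 0 -> 1 over the first x fraction of the lifecycle, then hold 1
    def warmup(x: float) -> float:
        return min(life / x, 1.0) if x > 0 else 1.0

    # cooldown(x): hold 1 until fraction x, then ramp 1 -> 0 over the rest
    def cooldown(x: float) -> float:
        return min((1.0 - life) / (1.0 - x), 1.0) if x < 1 else 1.0

    env = {
        "weight": weight, "step": step, "steps": steps, "life": life,
        "warmup": warmup, "cooldown": cooldown,
        # sin rescaled into 0..1, one cycle per lifecycle (assumed formula)
        "sin": lambda t: 0.5 + 0.5 * math.sin(2 * math.pi * t),
        "sinr": math.sin,
    }
    return float(eval(expr, {"__builtins__": {}}, env))
```

For example, `#cmd(warmup(0.5))` at 25% of the lifecycle would yield a weight of 0.5 under these assumptions.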
|
||||
Examples:
|
||||
* `[<lora:A:1>::10]`
|
||||
- Use LoRA named A until step 10.
|
||||

|
||||
* `[<lora:A:1>:<lora:B:1>:10]`
|
||||
- Use LoRA named A until step 10, then switch to LoRA named B.
|
||||

|
||||
* `[<lora:A:1>:10]`
|
||||
- Start using LoRA named A from step 10.
|
||||
* `[<lora:A:1>:0.5]`
|
||||
- Start using LoRA named A from 50% of the steps.
|
||||
* `[[<lora:A:1>::25]:10]`
|
||||
- Start using LoRA named A from step 10 until step 25.
|
||||

|
||||
* `[<lora:A:1> #increment:10]`
|
||||
- During the usage of LoRA named A, increment the weight linearly from 0 to the specified weight, starting from step 10.
|
||||

|
||||
* `[<lora:A:1> #decrease:10]`
|
||||
- During the usage of LoRA named A, decrease the weight linearly from 1 to 0, starting from step 10.
|
||||

|
||||
* `[<lora:A:1> #cmd\(warmup\(0.5\)\):10]`
|
||||
- During the usage of LoRA named A, the weight follows the warmup curve: it increases linearly from 0 and reaches the specified weight at 50% of the LoRA's lifecycle; the LoRA takes effect from step 10.
|
||||
- 
|
||||
* `[<lora:A:1> #cmd\(sin\(life\)\):10]`
|
||||
- During the usage of LoRA named A, set the weight to a sine wave, starting from step 10.
|
||||

|
||||
|
||||
All generated images:
|
||||

|
||||
|
||||
### Eliminate the impact on negative prompts
|
||||
With the built-in LoRA, negative prompts are always affected by LoRA. This often has a negative impact on the output.
|
||||
|
|
@@ -13,6 +94,9 @@ So this extension offers options to eliminate the negative effects.
|
|||
### Enabled
|
||||
When checked, Composable LoRA is enabled.
|
||||
|
||||
### Composable LoRA with step
|
||||
Check this option to enable the feature of turning on or off LoRAs at specific steps.
|
||||
|
||||
### Use Lora in uc text model encoder
|
||||
Enable LoRA in the unconditional (negative prompt) pass of the text model encoder.
|
||||
With this disabled, you can expect better output.
|
||||
|
|
@@ -21,5 +105,32 @@ With this disabled, you can expect better output.
|
|||
Enable LoRA in the unconditional (negative prompt) pass of the diffusion model (denoiser).
|
||||
With this disabled, you can expect better output.
|
||||
|
||||
## compatibilities
|
||||
--always-batch-cond-uncond must be enabled with --medvram or --lowvram
|
||||
### plot the LoRA weight in all steps
|
||||
If "Composable LoRA with step" is enabled, you can select this option to generate a chart that shows the relationship between LoRA weight and the number of steps after the drawing is completed. This allows you to observe the variation of LoRA weight at each step.
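Internally (see `plot_lora` in the source below), the chart is built from a per-step log of multipliers: one row per step, one column per LoRA, with shorter rows padded with zeros before plotting. A self-contained sketch of that transposition:

```python
from typing import Dict, List

def collect_series(drawing_data: List[List[float]],
                   names: List[str]) -> Dict[str, List[float]]:
    # drawing_data: one row per step; row[i] is the multiplier of names[i].
    # Rows logged before a LoRA first appeared are shorter, so pad with 0.0.
    width = max(len(row) for row in drawing_data)
    padded = [row + [0.0] * (width - len(row)) for row in drawing_data]
    return {name: [row[i] for row in padded] for i, name in enumerate(names)}
```

Each resulting list is one weight-vs-step curve in the plotted chart.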
|
||||
|
||||
## Compatibilities
|
||||
`--always-batch-cond-uncond` must be enabled with `--medvram` or `--lowvram`
|
||||
|
||||
## Changelog
|
||||
### 2023-04-02
|
||||
* Added support for LoCon and LyCORIS
|
||||
* Fixed error: IndexError: list index out of range
|
||||
### 2023-04-08
|
||||
* Allow using the same LoRA in multiple AND blocks
|
||||

|
||||
### 2023-04-13
|
||||
* Submitted pull request for the 2023-04-08 version
|
||||
### 2023-04-19
|
||||
* Fixed loading extension failure issue when using pytorch 2.0
|
||||
* Fixed error: RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda and cpu! (when checking argument for argument mat2 in method wrapper_CUDA_mm)
|
||||
### 2023-04-20
|
||||
* Implemented the function of enabling or disabling LoRA at specific steps
|
||||
* Improved the algorithm for enabling or disabling LoRA in different AND blocks and steps, by referring to the code of LoCon and LyCORIS extensions
|
||||
### 2023-04-21
|
||||
* Implemented the method to control different weights of LoRA at different steps (`[A #xxx]`)
|
||||
* Plotted a chart of LoRA weight changes at different steps
|
||||
|
||||
## Acknowledgements
|
||||
* [opparco, the original author of Composable LoRA](https://github.com/opparco), [Composable LoRA](https://github.com/opparco/stable-diffusion-webui-composable-lora)
|
||||
* [JackEllie's Stable-Diffusion community team](https://discord.gg/TM5d89YNwA), [YouTube channel](https://www.youtube.com/@JackEllie)
|
||||
* [Chinese Wikipedia community team](https://discord.gg/77n7vnu)
|
||||
|
|
@@ -0,0 +1,136 @@
|
|||
# Composable LoRA/LyCORIS with steps
|
||||
这个扩展取代了内置的 forward LoRA 过程,同时提供对LoCon、LyCORIS的支持
|
||||
|
||||
本扩展Fork自Composable LoRA扩展
|
||||
|
||||
### 语言
|
||||
* [繁体中文](README.zh-tw.md)
|
||||
* [英语](README.md) (google translate)
|
||||
* [日语](README.ja.md) (ChatGPT)
|
||||
|
||||
## 安装
|
||||
注意 : 这个版本的Composable LoRA已经包含了原版Composable LoRA的所有功能,只要选一个安装就好。
|
||||
|
||||
此扩展不能与原始版本的Composable LoRA扩展同时使用,安装前必须先删除原始版本的Composable LoRA扩展。请先到`webui\extensions\`文件夹下删除`stable-diffusion-webui-composable-lora`文件夹
|
||||
|
||||
接下来到webui的\[扩展\] -> \[从网址安装\]输入以下网址:
|
||||
```
|
||||
https://github.com/a2569875/stable-diffusion-webui-composable-lora.git
|
||||
```
|
||||
安装并重启即可
|
||||
|
||||
## 功能
|
||||
### 与 Composable-Diffusion 兼容
|
||||
将 LoRA 在提示词中的插入位置与`AND`语法相关系,让 LoRA 的影响范围限制在特定的子提示词中 (特定 AND...AND区块中)。
|
||||
|
||||
### 在步骤数上的 Composable
|
||||
使 LoRA 支持放置在形如`[A:B:N]`的提示词语法中,让 LoRA 的影响范围限制在特定的绘图步骤上。
|
||||

|
||||
|
||||
### LoRA 权重控制
|
||||
添加了一个语法`[A #xxx]`可以用来控制LoRA在每个绘图步骤的权重
|
||||
目前支持的有:
|
||||
* `decrease`
|
||||
- 在LoRA的有效步骤数内逐渐递减权重直到0
|
||||
* `increment`
|
||||
- 在LoRA的有效步骤数内从0开始逐渐递增权重
|
||||
* `cmd(...)`
|
||||
- 自定义的权重控制指令,主要以python语法为主
|
||||
* 可用参数
|
||||
+ `weight`
|
||||
* 当前的LoRA权重
|
||||
+ `life`
|
||||
* 0-1之间的数字,表示目前LoRA的生命周期。位于起始步骤数时为0,位于此LoRA最终作用的步骤数时为1
|
||||
+ `step`
|
||||
* 目前的步骤数
|
||||
+ `steps`
|
||||
* 总共的步骤数
|
||||
* 可用函数
|
||||
+ `warmup(x)`
|
||||
* x为0-1之间的数字,表示一个预热的常数,以总步数计算,在低于x比例的步数时,函数值从0逐渐递增,直到x之后为1
|
||||
+ `cooldown(x)`
|
||||
* x为0-1之间的数字,表示一个冷却的常数,以总步数计算,在高于x比例的步数时,函数值从1逐渐递减,直到0
|
||||
+ sin, cos, tan, asin, acos, atan
|
||||
* 以所有步数为周期的三角函数。其中sin, cos的值被映射到0到1之间
|
||||
+ sinr, cosr, tanr, asinr, acosr, atanr
|
||||
* 以弧度为单位的三角函数,周期 2*pi。
|
||||
+ abs, ceil, floor, trunc, fmod, gcd, lcm, perm, comb, gamma, sqrt, cbrt, exp, pow, log, log2, log10
|
||||
* 同python的math函数库中的函数
|
||||
示例 :
|
||||
* `[<lora:A:1>::10]`
|
||||
- 使用名为A的LoRA到第10步停止
|
||||

|
||||
* `[<lora:A:1>:<lora:B:1>:10]`
|
||||
- 使用名为A的LoRA到第10步为止,从第10步开始换用名为B的LoRA
|
||||

|
||||
* `[<lora:A:1>:10]`
|
||||
- 从第10步才开始使用名为A的LoRA
|
||||
* `[<lora:A:1>:0.5]`
|
||||
- 从50%的步数才开始使用名为A的LoRA
|
||||
* `[[<lora:A:1>::25]:10]`
|
||||
- 从第10步才开始使用名为A的LoRA,并且到第25步停止使用
|
||||

|
||||
* `[<lora:A:1> #increment:10]`
|
||||
- 在名为A的LoRA使用期间,权重从0开始线性递增直到设置的权重,且从第10步才开始使用此LoRA
|
||||

|
||||
* `[<lora:A:1> #decrease:10]`
|
||||
- 在名为A的LoRA使用期间,权重从1开始线性递减直到0,且从第10步才开始使用此LoRA
|
||||

|
||||
* `[<lora:A:1> #cmd\(warmup\(0.5\)\):10]`
|
||||
- 在名为A的LoRA使用期间,权重为预热的常数,从0开始递增直到50%的此LoRA生命周期达到设置的权重,且从第10步才开始使用此LoRA
|
||||
- 
|
||||
* `[<lora:A:1> #cmd\(sin\(life\)\):10]`
|
||||
- 在名为A的LoRA使用期间,权重为正弦波,且从第10步才开始使用此LoRA
|
||||

|
||||
|
||||
所有生成的图像 :
|
||||

|
||||
|
||||
### 消除对反向提示词的影响
|
||||
使用内置的 LoRA 时,反向提示词总是受到 LoRA 的影响。 这通常会对输出产生负面影响。
|
||||
而此扩展程序提供了消除负面影响的选项。
|
||||
|
||||
## 使用方法
|
||||
### 激活 (Enabled)
|
||||
勾选此选项之后才能使用Composable LoRA的功能。
|
||||
|
||||
### Composable LoRA with step
|
||||
勾选此选项之后才能使用在特定步数上激活或不激活LoRA的功能。
|
||||
|
||||
### 在反向提示词的语言模型编码器上使用LoRA (Use Lora in uc text model encoder)
|
||||
在语言模型编码器(text model encoder)的反向提示词部分使用LoRA。
|
||||
关闭此选项后,您可以期待更好的输出。
|
||||
|
||||
### 在反向提示词的扩散模型上使用LoRA (Use Lora in uc diffusion model)
|
||||
在扩散模型(diffusion model)或称降噪器(denoiser)的反向提示词部分使用LoRA。
|
||||
关闭此选项后,您可以期待更好的输出。
|
||||
|
||||
### 绘制LoRA权重与步数关系的图表 (plot the LoRA weight in all steps)
|
||||
如果有勾选\[Composable LoRA with step\],可以勾选此选项来观察LoRA权重在每个步骤数上的变化
|
||||
|
||||
## 兼容性
|
||||
`--always-batch-cond-uncond`必须与`--medvram`或`--lowvram`一起使用
|
||||
|
||||
## 更新日志
|
||||
### 2023-04-02
|
||||
* 新增LoCon、LyCORIS支持
|
||||
* 修正: IndexError: list index out of range
|
||||
### 2023-04-08
|
||||
* 允许在多个不同AND区块使用同一个LoRA
|
||||

|
||||
### 2023-04-13
|
||||
* 2023-04-08的版本提交pull request
|
||||
### 2023-04-19
|
||||
* 修正使用 pytorch 2.0 时,扩展加载失败的问题
|
||||
* 修正: RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda and cpu! (when checking argument for argument mat2 in method wrapper_CUDA_mm)
|
||||
### 2023-04-20
|
||||
* 实现控制LoRA在指定步数激活与不激活的功能
|
||||
* 参考LoCon、LyCORIS扩展的代码,改善LoRA在不同AND区块与步数激活与不激活的算法
|
||||
### 2023-04-21
|
||||
* 实现控制LoRA在不同步骤数能有不同权重的方法`[A #xxx]`
|
||||
* 绘制LoRA权重在不同步骤数之变化的图表
|
||||
|
||||
## 铭谢
|
||||
* [Composable LoRA原始作者opparco](https://github.com/opparco)、[Composable LoRA](https://github.com/opparco/stable-diffusion-webui-composable-lora)
|
||||
* [JackEllie的Stable-Diffusion的社群团队](https://discord.gg/TM5d89YNwA)、[Youtube频道](https://www.youtube.com/@JackEllie)
|
||||
* [中文维基百科的社群团队](https://discord.gg/77n7vnu)
|
||||
|
|
@@ -0,0 +1,136 @@
|
|||
# Composable LoRA/LyCORIS with steps
|
||||
這個擴展取代了內置的 forward LoRA 過程,同時提供對LoCon、LyCORIS的支援
|
||||
|
||||
本擴展Fork自Composable LoRA擴展
|
||||
|
||||
### 語言
|
||||
* [英文](README.md) (google翻譯)
|
||||
* [简体中文](README.zh-cn.md) (維基百科繁簡轉換系統)
|
||||
* [日文](README.ja.md) (ChatGPT翻譯)
|
||||
|
||||
## 安裝
|
||||
注意 : 這個版本的Composable LoRA已經包含了原版Composable LoRA的所有功能,只要選一個安裝就好。
|
||||
|
||||
此擴展不能與原始版本的Composable LoRA擴展同時使用,安裝前必須先刪除原始版本的Composable LoRA擴展。請先到`webui\extensions\`資料夾下刪除`stable-diffusion-webui-composable-lora`資料夾
|
||||
|
||||
接下來到webui的\[擴充功能\] -> \[從網址安裝\]輸入以下網址:
|
||||
```
|
||||
https://github.com/a2569875/stable-diffusion-webui-composable-lora.git
|
||||
```
|
||||
安裝並重新啟動即可
|
||||
|
||||
## 功能
|
||||
### 與 Composable-Diffusion 相容
|
||||
將 LoRA 在提示詞中的插入位置與`AND`語法相關聯,讓 LoRA 的影響範圍限制在特定的子提示詞中 (特定 AND...AND區塊中)。
|
||||
|
||||
### 在步驟數上的 Composable
|
||||
使 LoRA 支援放置在形如`[A:B:N]`的提示詞語法中,讓 LoRA 的影響範圍限制在特定的繪圖步驟上。
|
||||

|
||||
|
||||
### LoRA 權重控制
|
||||
添加了一個語法`[A #xxx]`可以用來控制LoRA在每個繪圖步驟的權重
|
||||
目前支援的有:
|
||||
* `decrease`
|
||||
- 在LoRA的有效步驟數內逐漸遞減權重直到0
|
||||
* `increment`
|
||||
- 在LoRA的有效步驟數內從0開始逐漸遞增權重
|
||||
* `cmd(...)`
|
||||
- 自定義的權重控制指令,主要以python語法為主
|
||||
* 可用參數
|
||||
+ `weight`
|
||||
* 當前的LoRA權重
|
||||
+ `life`
|
||||
* 0-1之間的數字,表示目前LoRA的生命週期。位於起始步驟數時為0,位於此LoRA最終作用的步驟數時為1
|
||||
+ `step`
|
||||
* 目前的步驟數
|
||||
+ `steps`
|
||||
* 總共的步驟數
|
||||
* 可用函數
|
||||
+ `warmup(x)`
|
||||
* x為0-1之間的數字,表示一個預熱的常數,以總步數計算,在低於x比例的步數時,函數值從0逐漸遞增,直到x之後為1
|
||||
+ `cooldown(x)`
|
||||
* x為0-1之間的數字,表示一個冷卻的常數,以總步數計算,在高於x比例的步數時,函數值從1逐漸遞減,直到0
|
||||
+ sin, cos, tan, asin, acos, atan
|
||||
* 以所有步數為週期的三角函數。其中sin, cos的值被映射到0到1之間
|
||||
+ sinr, cosr, tanr, asinr, acosr, atanr
|
||||
* 以弧度為單位的三角函數,週期 2*pi。
|
||||
+ abs, ceil, floor, trunc, fmod, gcd, lcm, perm, comb, gamma, sqrt, cbrt, exp, pow, log, log2, log10
|
||||
* 同python的math函數庫中的函數
|
||||
範例 :
|
||||
* `[<lora:A:1>::10]`
|
||||
- 使用名為A的LoRA到第10步停止
|
||||

|
||||
* `[<lora:A:1>:<lora:B:1>:10]`
|
||||
- 使用名為A的LoRA到第10步為止,從第10步開始換用名為B的LoRA
|
||||

|
||||
* `[<lora:A:1>:10]`
|
||||
- 從第10步才開始使用名為A的LoRA
|
||||
* `[<lora:A:1>:0.5]`
|
||||
- 從50%的步數才開始使用名為A的LoRA
|
||||
* `[[<lora:A:1>::25]:10]`
|
||||
- 從第10步才開始使用名為A的LoRA,並且到第25步停止使用
|
||||

|
||||
* `[<lora:A:1> #increment:10]`
|
||||
- 在名為A的LoRA使用期間,權重從0開始線性遞增直到設定的權重,且從第10步才開始使用此LoRA
|
||||

|
||||
* `[<lora:A:1> #decrease:10]`
|
||||
- 在名為A的LoRA使用期間,權重從1開始線性遞減直到0,且從第10步才開始使用此LoRA
|
||||

|
||||
* `[<lora:A:1> #cmd\(warmup\(0.5\)\):10]`
|
||||
- 在名為A的LoRA使用期間,權重為預熱的常數,從0開始遞增直到50%的此LoRA生命週期達到設定的權重,且從第10步才開始使用此LoRA
|
||||
- 
|
||||
* `[<lora:A:1> #cmd\(sin\(life\)\):10]`
|
||||
- 在名為A的LoRA使用期間,權重為正弦波,且從第10步才開始使用此LoRA
|
||||

|
||||
|
||||
所有生成的圖像 :
|
||||

|
||||
|
||||
### 消除對反向提示詞的影響
|
||||
使用內建的 LoRA 時,反向提示詞總是受到 LoRA 的影響。 這通常會對輸出產生負面影響。
|
||||
而此擴展程序提供了消除負面影響的選項。
|
||||
|
||||
## 使用方法
|
||||
### 啟用 (Enabled)
|
||||
勾選此選項之後才能使用Composable LoRA的功能。
|
||||
|
||||
### Composable LoRA with step
|
||||
勾選此選項之後才能使用在特定步數上啟用或不啟用LoRA的功能。
|
||||
|
||||
### 在反向提示詞的語言模型編碼器上使用LoRA (Use Lora in uc text model encoder)
|
||||
在語言模型編碼器(text model encoder)的反向提示詞部分使用LoRA。
|
||||
關閉此選項後,您可以期待更好的輸出。
|
||||
|
||||
### 在反向提示詞的擴散模型上使用LoRA (Use Lora in uc diffusion model)
|
||||
在擴散模型(diffusion model)或稱降噪器(denoiser)的反向提示詞部分使用LoRA。
|
||||
關閉此選項後,您可以期待更好的輸出。
|
||||
|
||||
### 繪製LoRA權重與步數關聯的圖表 (plot the LoRA weight in all steps)
|
||||
如果有勾選\[Composable LoRA with step\],可以勾選此選項來觀察LoRA權重在每個步驟數上的變化
|
||||
|
||||
## 相容性
|
||||
`--always-batch-cond-uncond`必須與`--medvram`或`--lowvram`一起使用
|
||||
|
||||
## 更新日誌
|
||||
### 2023-04-02
|
||||
* 新增LoCon、LyCORIS支援
|
||||
* 修正: IndexError: list index out of range
|
||||
### 2023-04-08
|
||||
* 允許在多個不同AND區塊使用同一個LoRA
|
||||

|
||||
### 2023-04-13
|
||||
* 2023-04-08的版本提交pull request
|
||||
### 2023-04-19
|
||||
* 修正使用 pytorch 2.0 時,擴展載入失敗的問題
|
||||
* 修正: RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda and cpu! (when checking argument for argument mat2 in method wrapper_CUDA_mm)
|
||||
### 2023-04-20
|
||||
* 實作控制LoRA在指定步數啟用與不啟用的功能
|
||||
* 參考LoCon、LyCORIS擴展的程式碼,改善LoRA在不同AND區塊與步數啟用與不啟用的演算法
|
||||
### 2023-04-21
|
||||
* 實作控制LoRA在不同步驟數能有不同權重的方法`[A #xxx]`
|
||||
* 繪製LoRA權重在不同步驟數之變化的圖表
|
||||
|
||||
## 銘謝
|
||||
* [Composable LoRA原始作者opparco](https://github.com/opparco)、[Composable LoRA](https://github.com/opparco/stable-diffusion-webui-composable-lora)
|
||||
* [JackEllie的Stable-Diffusion的社群團隊](https://discord.gg/TM5d89YNwA)、[Youtube頻道](https://www.youtube.com/@JackEllie)
|
||||
* [中文維基百科的社群團隊](https://discord.gg/77n7vnu)
|
||||
|
|
@@ -1,16 +1,26 @@
|
|||
from typing import List, Dict
|
||||
import re
|
||||
import torch
|
||||
|
||||
import composable_lora_step
|
||||
import plot_helper
|
||||
from modules import extra_networks, shared
|
||||
|
||||
re_AND = re.compile(r"\bAND\b")
|
||||
|
||||
|
||||
def load_prompt_loras(prompt: str):
|
||||
global is_single_block
|
||||
global full_controllers
|
||||
global first_log_drawing
|
||||
prompt_loras.clear()
|
||||
prompt_blocks.clear()
|
||||
lora_controllers.clear()
|
||||
drawing_data.clear()
|
||||
full_controllers.clear()
|
||||
drawing_lora_names.clear()
|
||||
|
||||
subprompts = re_AND.split(prompt)
|
||||
tmp_prompt_loras = []
|
||||
tmp_prompt_blocks = []
|
||||
for i, subprompt in enumerate(subprompts):
|
||||
loras = {}
|
||||
_, extra_network_data = extra_networks.parse_prompt(subprompt)
|
||||
|
|
@@ -20,30 +30,108 @@ def load_prompt_loras(prompt: str):
|
|||
loras[name] = multiplier
|
||||
|
||||
tmp_prompt_loras.append(loras)
|
||||
tmp_prompt_blocks.append(subprompt)
|
||||
is_single_block = (len(tmp_prompt_loras) == 1)
|
||||
prompt_loras.extend(tmp_prompt_loras * num_batches)
|
||||
|
||||
tmp_lora_controllers = composable_lora_step.parse_step_rendering_syntax(prompt)
|
||||
lora_controllers.extend(tmp_lora_controllers * num_batches)
|
||||
prompt_blocks.extend(tmp_prompt_blocks * num_batches)
|
||||
for controller_it in tmp_lora_controllers:
|
||||
full_controllers += controller_it
|
||||
first_log_drawing = False
|
||||
|
||||
def reset_counters():
|
||||
global text_model_encoder_counter
|
||||
global diffusion_model_counter
|
||||
global step_counter
|
||||
global should_print
|
||||
|
||||
# reset counter to uc head
|
||||
text_model_encoder_counter = -1
|
||||
diffusion_model_counter = 0
|
||||
step_counter += 1
|
||||
should_print = True
|
||||
|
||||
def reset_step_counters():
|
||||
global step_counter
|
||||
global should_print
|
||||
|
||||
should_print = True
|
||||
step_counter = 0
|
||||
|
||||
def add_step_counters():
|
||||
global step_counter
|
||||
global should_print
|
||||
|
||||
should_print = True
|
||||
step_counter += 1
|
||||
|
||||
if step_counter > num_steps:
|
||||
step_counter = 0
|
||||
else:
|
||||
if opt_plot_lora_weight:
|
||||
log_lora()
|
||||
|
||||
def log_lora():
|
||||
import lora
|
||||
tmp_data : List[float] = []
|
||||
for m_lora in lora.loaded_loras:
|
||||
current_lora = m_lora.name
|
||||
multiplier = m_lora.multiplier
|
||||
if opt_composable_with_step:
|
||||
multiplier = composable_lora_step.check_lora_weight(full_controllers, current_lora, step_counter, num_steps)
|
||||
index = -1
|
||||
if current_lora in drawing_lora_names:
|
||||
index = drawing_lora_names.index(current_lora)
|
||||
else:
|
||||
index = len(drawing_lora_names)
|
||||
drawing_lora_names.append(current_lora)
|
||||
if index >= len(tmp_data):
|
||||
for i in range(len(tmp_data), index):
|
||||
tmp_data.append(0.0)
|
||||
tmp_data.append(multiplier)
|
||||
else:
|
||||
tmp_data[index] = multiplier
|
||||
drawing_data.append(tmp_data)
|
||||
|
||||
def plot_lora():
|
||||
max_size = -1
|
||||
drawing_data.insert(0, drawing_lora_first_index)
|
||||
for datalist in drawing_data:
|
||||
datalist_len = len(datalist)
|
||||
if datalist_len > max_size:
|
||||
max_size = datalist_len
|
||||
for i, datalist in enumerate(drawing_data):
|
||||
datalist_len = len(datalist)
|
||||
if datalist_len < max_size:
|
||||
drawing_data[i].extend([0.0]*(max_size - datalist_len))
|
||||
return plot_helper.plot_lora_weight(drawing_data, drawing_lora_names)
|
||||
|
||||
def lora_forward(compvis_module, input, res):
|
||||
global text_model_encoder_counter
|
||||
global diffusion_model_counter
|
||||
|
||||
global step_counter
|
||||
global should_print
|
||||
global first_log_drawing
|
||||
global drawing_lora_first_index
|
||||
import lora
|
||||
|
||||
if not first_log_drawing:
|
||||
first_log_drawing = True
|
||||
print("Composable LoRA loaded successfully.")
|
||||
if opt_plot_lora_weight:
|
||||
log_lora()
|
||||
drawing_lora_first_index = drawing_data[0]
|
||||
|
||||
if len(lora.loaded_loras) == 0:
|
||||
return res
|
||||
|
||||
lora_layer_name: str | None = getattr(compvis_module, 'lora_layer_name', None)
|
||||
if lora_layer_name is None:
|
||||
lora_layer_name_loading : str | None = getattr(compvis_module, 'lora_layer_name', None)
|
||||
if lora_layer_name_loading is None:
|
||||
return res
|
||||
# make sure the value is actually typed as a string
|
||||
lora_layer_name : str = str(lora_layer_name_loading)
|
||||
del lora_layer_name_loading
|
||||
|
||||
num_loras = len(lora.loaded_loras)
|
||||
if text_model_encoder_counter == -1:
|
||||
|
|
@@ -52,9 +140,8 @@ def lora_forward(compvis_module, input, res):
|
|||
tmp_check_loras = [] # stores which LoRAs have already been applied
|
||||
tmp_check_loras.clear()
|
||||
|
||||
# print(f"lora.forward lora_layer_name={lora_layer_name} in.shape={input.shape} res.shape={res.shape} num_batches={num_batches} num_prompts={num_prompts}")
|
||||
for lora in lora.loaded_loras:
|
||||
module = lora.modules.get(lora_layer_name, None)
|
||||
for m_lora in lora.loaded_loras:
|
||||
module = m_lora.modules.get(lora_layer_name, None)
|
||||
if module is None:
|
||||
# work around the LoCon issue
|
||||
if lora_layer_name.endswith("_11_mlp_fc2"): # LoCon doesn't have a _11_mlp_fc2 layer
|
||||
|
|
@@ -67,9 +154,10 @@ def lora_forward(compvis_module, input, res):
|
|||
# c1 c2 .. uc
|
||||
if diffusion_model_counter >= (len(prompt_loras) + num_batches) * num_loras:
|
||||
diffusion_model_counter = 0
|
||||
add_step_counters()
|
||||
continue
|
||||
|
||||
current_lora = lora.name
|
||||
|
||||
current_lora = m_lora.name
|
||||
lora_already_used = False
|
||||
for check_lora in tmp_check_loras:
|
||||
if current_lora == check_lora:
|
||||
|
|
@@ -81,32 +169,50 @@ def lora_forward(compvis_module, input, res):
|
|||
# if the current LoRA was already applied, skip it
|
||||
if lora_already_used:
|
||||
continue
|
||||
|
||||
if shared.opts.lora_apply_to_outputs and res.shape == input.shape:
|
||||
patch = module.up(module.down(res))
|
||||
else:
|
||||
patch = module.up(module.down(input))
|
||||
|
||||
alpha = module.alpha / module.up.weight.shape[1] if module.alpha else 1.0
|
||||
if shared.opts.lora_apply_to_outputs and res.shape == input.shape:
|
||||
if hasattr(module, 'inference'):
|
||||
patch = module.inference(res)
|
||||
elif hasattr(module, 'up'):
|
||||
patch = module.up(module.down(res))
|
||||
else:
|
||||
raise NotImplementedError(
|
||||
"Your settings, extensions or models are not compatible with each other."
|
||||
)
|
||||
else:
|
||||
if hasattr(module, 'inference'):
|
||||
patch = module.inference(input)
|
||||
elif hasattr(module, 'up'):
|
||||
patch = module.up(module.down(input))
|
||||
else:
|
||||
raise NotImplementedError(
|
||||
"Your settings, extensions or models are not compatible with each other."
|
||||
)
|
||||
|
||||
alpha : float = 1.0
|
||||
if hasattr(module, 'up'):
|
||||
alpha = (module.alpha / module.up.weight.shape[1] if module.alpha else 1.0)
|
||||
else: #handle if module.up is undefined
|
||||
alpha = (module.alpha / module.dim if module.alpha else 1.0)
|
||||
|
||||
num_prompts = len(prompt_loras)
|
||||
|
||||
# print(f"lora.name={lora.name} lora.mul={lora.multiplier} alpha={alpha} pat.shape={patch.shape}")
|
||||
# print(f"lora.name={m_lora.name} lora.mul={m_lora.multiplier} alpha={alpha} pat.shape={patch.shape}")
|
||||
if enabled:
|
||||
if lora_layer_name.startswith("transformer_"): # "transformer_text_model_encoder_"
|
||||
#
|
||||
if 0 <= text_model_encoder_counter // num_loras < len(prompt_loras):
|
||||
# c
|
||||
loras = prompt_loras[text_model_encoder_counter // num_loras]
|
||||
multiplier = loras.get(lora.name, 0.0)
|
||||
multiplier = loras.get(m_lora.name, 0.0)
|
||||
if multiplier != 0.0:
|
||||
# print(f"c #{text_model_encoder_counter // num_loras} lora.name={lora.name} mul={multiplier}")
|
||||
# print(f"c #{text_model_encoder_counter // num_loras} lora.name={m_lora.name} mul={multiplier} lora_layer_name={lora_layer_name}")
|
||||
res += multiplier * alpha * patch
|
||||
else:
|
||||
# uc
|
||||
if opt_uc_text_model_encoder and lora.multiplier != 0.0:
|
||||
# print(f"uc #{text_model_encoder_counter // num_loras} lora.name={lora.name} lora.mul={lora.multiplier}")
|
||||
res += lora.multiplier * alpha * patch
|
||||
if (opt_uc_text_model_encoder or is_single_block) and m_lora.multiplier != 0.0:
|
||||
# print(f"uc #{text_model_encoder_counter // num_loras} lora.name={m_lora.name} lora.mul={m_lora.multiplier} lora_layer_name={lora_layer_name}")
|
||||
res += m_lora.multiplier * alpha * patch
|
||||
|
||||
if lora_layer_name.endswith("_11_mlp_fc2"): # last lora_layer_name of text_model_encoder
|
||||
text_model_encoder_counter += 1
|
||||
|
|
@@ -123,16 +229,24 @@ def lora_forward(compvis_module, input, res):
|
|||
            for b in range(num_batches):
                # c
                for p, loras in enumerate(prompt_loras):
                    multiplier = loras.get(m_lora.name, 0.0)
                    if opt_composable_with_step:
                        prompt_block_id = p
                        lora_controller = lora_controllers[prompt_block_id]
                        multiplier = composable_lora_step.check_lora_weight(lora_controller, m_lora.name, step_counter, num_steps)
                    if multiplier != 0.0:
                        # print(f"tensor #{b}.{p} lora.name={m_lora.name} mul={multiplier} lora_layer_name={lora_layer_name}")
                        res[tensor_off] += multiplier * alpha * patch[tensor_off]
                    tensor_off += 1

                # uc
                if (opt_uc_diffusion_model or is_single_block) and m_lora.multiplier != 0.0:
                    # print(f"uncond lora.name={m_lora.name} lora.mul={m_lora.multiplier} lora_layer_name={lora_layer_name}")
                    multiplier = m_lora.multiplier
                    if is_single_block and opt_composable_with_step:
                        multiplier = composable_lora_step.check_lora_weight(full_controllers, m_lora.name, step_counter, num_steps)
                    res[uncond_off] += multiplier * alpha * patch[uncond_off]

                uncond_off += 1
        else:
            # tensor.shape[1] != uncond.shape[1]
@@ -144,49 +258,92 @@ def lora_forward(compvis_module, input, res):
                for off in range(cur_num_prompts):
                    if base + off < prompt_len:
                        loras = prompt_loras[base + off]
                        multiplier = loras.get(m_lora.name, 0.0)
                        if opt_composable_with_step:
                            prompt_block_id = base + off
                            lora_controller = lora_controllers[prompt_block_id]
                            multiplier = composable_lora_step.check_lora_weight(lora_controller, m_lora.name, step_counter, num_steps)
                        if multiplier != 0.0:
                            # print(f"c #{base + off} lora.name={m_lora.name} mul={multiplier} lora_layer_name={lora_layer_name}")
                            res[off] += multiplier * alpha * patch[off]
                    else:
                        # uc
                        if (opt_uc_diffusion_model or is_single_block) and m_lora.multiplier != 0.0:
                            # print(f"uc {lora_layer_name} lora.name={m_lora.name} lora.mul={m_lora.multiplier}")
                            multiplier = m_lora.multiplier
                            if is_single_block and opt_composable_with_step:
                                multiplier = composable_lora_step.check_lora_weight(full_controllers, m_lora.name, step_counter, num_steps)

                            res += multiplier * alpha * patch

                if lora_layer_name.endswith("_11_1_proj_out"):  # last lora_layer_name of the diffusion model
                    diffusion_model_counter += cur_num_prompts
                    # c1 c2 .. uc
                    if diffusion_model_counter >= (len(prompt_loras) + num_batches) * num_loras:
                        diffusion_model_counter = 0
                        add_step_counters()
            else:
                # default
                if m_lora.multiplier != 0.0:
                    # print(f"default {lora_layer_name} lora.name={m_lora.name} lora.mul={m_lora.multiplier}")
                    res += m_lora.multiplier * alpha * patch
    else:
        # default
        if m_lora.multiplier != 0.0:
            # print(f"DEFAULT {lora_layer_name} lora.name={m_lora.name} lora.mul={m_lora.multiplier}")
            res += m_lora.multiplier * alpha * patch

    return res

def lora_Linear_forward(self, input):
    if (not self.weight.is_cuda) and input.is_cuda:  # weight and input are on different devices (CPU vs. GPU)
        self_weight_cuda = self.weight.cuda()  # copy the weight to the GPU
        to_del = self.weight
        self.weight = None  # release the CPU-side parameter
        del to_del
        del self.weight  # avoid PyTorch 2.0 throwing an exception on reassignment
        self.weight = self_weight_cuda  # install the GPU copy as self.weight
    return lora_forward(self, input, torch.nn.Linear_forward_before_lora(self, input))

def lora_Conv2d_forward(self, input):
    if (not self.weight.is_cuda) and input.is_cuda:
        self_weight_cuda = self.weight.cuda()
        to_del = self.weight
        self.weight = None
        del to_del
        del self.weight  # avoid "cannot assign XXX as parameter YYY (torch.nn.Parameter or None expected)"
        self.weight = self_weight_cuda
    return lora_forward(self, input, torch.nn.Conv2d_forward_before_lora(self, input))

def should_reload():
    # PyTorch 2.0 and newer need the forward hooks to be reinstalled
    match = re.search(r"\d+\.\d+", str(torch.__version__))
    if not match:
        return True
    ver = float(match.group(0))
    return ver >= 2.0
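`should_reload()` keys off the `major.minor` prefix of `torch.__version__`, so version strings such as `2.0.1+cu118` still match. A standalone sketch of the same check, taking the version string as a parameter so it can run without torch installed:

```python
import re

def should_reload_sketch(version_string: str) -> bool:
    # mirrors should_reload() above, but parameterized on the version string
    match = re.search(r"\d+\.\d+", version_string)
    if not match:
        return True  # unparseable version: reload to be safe
    return float(match.group(0)) >= 2.0

print(should_reload_sketch("1.13.1+cu117"))  # False
print(should_reload_sketch("2.0.1+cu118"))   # True
```

Note that `float("major.minor")` is a rough comparison (it would rank a hypothetical "2.10" below "2.2"), but it is sufficient for the 1.x vs. 2.x split the module cares about.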

enabled = False
opt_composable_with_step = False
opt_uc_text_model_encoder = False
opt_uc_diffusion_model = False
opt_plot_lora_weight = False
verbose = True

drawing_lora_names: List[str] = []
drawing_data: List[List[float]] = []
drawing_lora_first_index: List[float] = []
first_log_drawing: bool = False

is_single_block: bool = False
num_batches: int = 0
num_steps: int = 20
prompt_loras: List[Dict[str, float]] = []
text_model_encoder_counter: int = -1
diffusion_model_counter: int = 0
step_counter: int = 0

should_print: bool = True
prompt_blocks: List[str] = []
lora_controllers: List[List[composable_lora_step.LoRA_Controller_Base]] = []
full_controllers: List[composable_lora_step.LoRA_Controller_Base] = []
@@ -0,0 +1,541 @@
from typing import List, Dict
import re
import json
import math
import sys
import traceback
import random

from modules import extra_networks, shared

re_AND = re.compile(r"\bAND\b")

class LoRA_data:
    def __init__(self, name: str, weight: float):
        self.name = name
        self.weight = weight
    def __repr__(self):
        return f"LoRA:{self.name}:{self.weight}"
    def __str__(self):
        return f"LoRA:{self.name}:{self.weight}"

class LoRA_Weight_CMD:
    def getWeight(self, weight: float, progress: float, step: int, all_step: int):
        return weight

class LoRA_Weight_decrement(LoRA_Weight_CMD):
    def getWeight(self, weight: float, progress: float, step: int, all_step: int):
        return weight * (1 - progress)

class LoRA_Weight_increment(LoRA_Weight_CMD):
    def getWeight(self, weight: float, progress: float, step: int, all_step: int):
        return weight * progress
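The `decrease`/`increment` controllers scale the base weight linearly over the LoRA's lifetime, with `progress` running from 0 at the first active step to 1 at the last. A minimal standalone sketch of the same ramps (function names here are illustrative):

```python
# Standalone sketch of the linear weight ramps used by the
# decrement / increment weight controllers above.
def decrement_weight(weight: float, progress: float) -> float:
    # fades the weight out: full weight at progress 0, zero at progress 1
    return weight * (1 - progress)

def increment_weight(weight: float, progress: float) -> float:
    # fades the weight in: zero at progress 0, full weight at progress 1
    return weight * progress

# Halfway through the LoRA's lifetime, a 0.8-weight LoRA contributes 0.4 either way.
print(decrement_weight(0.8, 0.5))
print(increment_weight(0.8, 0.5))
```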

def raise_(ex):
    raise ex
def not_allow(name):
    return lambda: raise_(Exception(f'function {name} is not allowed in a LoRA Controller'))

LoRA_Weight_eval_scope = {
    "abs": abs,
    "ceil": math.ceil, "floor": math.floor, "trunc": math.trunc,
    "fmod": math.fmod,
    "gcd": math.gcd, "lcm": math.lcm,
    "perm": math.perm, "comb": math.comb, "gamma": math.gamma,
    "sqrt": math.sqrt, "cbrt": lambda x: pow(x, 1.0 / 3.0),
    "exp": math.exp, "pow": math.pow,
    "log": math.log, "ln": math.log, "log2": math.log2, "log10": math.log10,
    "clamp": lambda x: 1.0 if x > 1 else (0.0 if x < 0 else x),
    # trig helpers are normalized so inputs and outputs live on [0, 1]
    "asin": lambda x: (math.acos(1.0 - x * 2.0) + 2.0 * math.pi) / (2.0 * math.pi),
    "acos": lambda x: (math.acos(x * 2.0 - 1.0) + 2.0 * math.pi) / (2.0 * math.pi),
    "atan": lambda x: (math.atan(x) + math.pi) / (2.0 * math.pi),
    "sin": lambda x: (math.sin(x * 2.0 * math.pi - (math.pi / 2.0)) + 1.0) / 2.0,
    "cos": lambda x: (math.sin(x * 2.0 * math.pi + (math.pi / 2.0)) + 1.0) / 2.0,
    "tan": lambda x: math.tan(x * 2.0 * math.pi),
    # "-r" variants take radians directly
    "sinr": math.sin, "cosr": math.cos, "tanr": math.tan,
    "asinr": math.asin, "acosr": math.acos, "atanr": math.atan,
    "sinh": math.sinh, "cosh": math.cosh, "tanh": math.tanh,
    "asinh": math.asinh, "acosh": math.acosh, "atanh": math.atanh,
    "abssin": lambda x: abs(math.sin(x * 2 * math.pi)),
    "abscos": lambda x: abs(math.cos(x * 2 * math.pi)),
    "random": random.random,
    "pi": math.pi, "nan": math.nan, "inf": math.inf,
    # disallowed builtins
    "eval": not_allow("eval"),
    "exec": not_allow("exec"),
    "compile": not_allow("compile"),
    "breakpoint": not_allow("breakpoint"),
    "__import__": not_allow("__import__")
}

class LoRA_Weight_eval(LoRA_Weight_CMD):
    def __init__(self, command: str):
        self.command = command
        self.is_error = False
    def getWeight(self, weight: float, progress: float, step: int, all_step: int):
        result = None
        # set up the variables visible to the user's command
        LoRA_Weight_eval_scope["weight"] = weight
        LoRA_Weight_eval_scope["life"] = progress
        LoRA_Weight_eval_scope["step"] = step
        LoRA_Weight_eval_scope["steps"] = all_step
        LoRA_Weight_eval_scope["warmup"] = lambda x: progress / x if progress < x else 1.0
        LoRA_Weight_eval_scope["cooldown"] = lambda x: (1 - progress) / (1 - x) if progress > x else 1.0
        try:
            result = eval(self.command, LoRA_Weight_eval_scope)
            try:
                result = float(result) * weight
            except Exception:
                raise Exception(f"LoRA Controller command result must be a number, but got {type(result)}")
            if math.isnan(result):
                raise Exception("Cannot apply a NaN weight to a LoRA.")
            if math.isinf(result):
                raise Exception("Cannot apply an infinite weight to a LoRA.")
        except:
            if not self.is_error:
                print(f"CommandError: {self.command}")
                traceback.print_exception(*sys.exc_info())
            self.is_error = True
            return weight
        return result
    def __repr__(self):
        return f"LoRA_Weight_eval:{self.command}"
    def __str__(self):
        return f"LoRA_Weight_eval:{self.command}"
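The `warmup(x)`/`cooldown(x)` helpers injected into the eval scope are simple piecewise-linear ramps over the normalized lifetime `life`. A standalone sketch mirroring the lambdas installed above:

```python
def make_ramps(progress: float):
    # mirrors the warmup/cooldown lambdas installed into LoRA_Weight_eval_scope:
    # warmup(x) rises 0 -> 1 over the first x of the lifetime, then stays at 1;
    # cooldown(x) stays at 1 until x, then falls 1 -> 0 over the remainder.
    warmup = lambda x: progress / x if progress < x else 1.0
    cooldown = lambda x: (1 - progress) / (1 - x) if progress > x else 1.0
    return warmup, cooldown

warmup, cooldown = make_ramps(0.25)   # a quarter of the way through
print(warmup(0.5))    # still ramping up: 0.25 / 0.5 = 0.5
print(cooldown(0.5))  # cooldown has not started yet: 1.0
```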

class LoRA_Controller_Base:
    def __init__(self):
        self.base_weight = 1.0
        self.Weight_Controller = LoRA_Weight_CMD()
    def getWeight(self, weight: float, progress: float, step: int, all_step: int):
        return self.Weight_Controller.getWeight(weight, progress, step, all_step)
    def test(self, test_lora: str, step: int, all_step: int):
        return self.base_weight

# normal lora, active for the whole run
class LoRA_Controller(LoRA_Controller_Base):
    def __init__(self, name: str, weight: float):
        super().__init__()
        self.name = name
        self.weight = float(weight)
    def test(self, test_lora: str, step: int, all_step: int):
        if test_lora == self.name:
            return self.getWeight(self.weight, float(step) / float(all_step), step, all_step)
        return 0.0
    def __repr__(self):
        return f"LoRA_Controller:{self.name}[weight={self.weight}]"
    def __str__(self):
        return f"LoRA_Controller:{self.name}[weight={self.weight}]"

# lora with start and end
class LoRA_StartEnd_Controller(LoRA_Controller_Base):
    def __init__(self, name: str, weight: float, start: float | int, end: float | int):
        super().__init__()
        self.name = name
        self.weight = float(weight)
        self.start = float(start)
        self.end = float(end)
    def test(self, test_lora: str, step: int, all_step: int):
        if test_lora == self.name:
            start = self.start
            end = self.end
            if start < 1:
                start = self.start * all_step
            if end < 1:
                end = self.end * all_step
            if end < 0:
                end = all_step
            if (step >= start) and (step <= end):
                return self.getWeight(self.weight, float(step - start) / float(end - start), step, all_step)
        return 0.0
    def __repr__(self):
        return f"LoRA_StartEnd_Controller:{self.name}[weight={self.weight},start at={self.start},end at={self.end}]"
    def __str__(self):
        return f"LoRA_StartEnd_Controller:{self.name}[weight={self.weight},start at={self.start},end at={self.end}]"
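`LoRA_StartEnd_Controller` accepts its bounds either as absolute step numbers or, when below 1, as fractions of the total step count, with a negative end meaning "until the last step". The normalization can be sketched on its own:

```python
def resolve_bounds(start: float, end: float, all_step: int):
    # mirrors the bound normalization in LoRA_StartEnd_Controller.test above:
    # values below 1 are fractions of all_step; a negative end means "to the end"
    if start < 1:
        start = start * all_step
    if end < 1:
        end = end * all_step
    if end < 0:
        end = all_step
    return start, end

print(resolve_bounds(0.25, 0.75, 20))  # (5.0, 15.0)
print(resolve_bounds(5, -1, 20))       # (5, 20)
```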

# switch lora: cycles between alternatives, one per step
class LoRA_Switcher_Controller(LoRA_Controller_Base):
    def __init__(self, lora_dist: List[LoRA_data], start: float | int, end: float | int):
        super().__init__()
        self.lora_dist = lora_dist
        the_list: List[str] = []
        self.lora_list = the_list
        self.start = float(start)
        self.end = float(end)
        for lora_item in self.lora_dist:
            self.lora_list.append(lora_item.name)
    def test(self, test_lora: str, step: int, all_step: int):
        lora_count = len(self.lora_dist)
        if test_lora == self.lora_list[step % lora_count]:
            start = self.start
            end = self.end
            if start < 1:
                start = self.start * all_step
            if end < 1:
                end = self.end * all_step
            if end < 0:
                end = all_step
            if (step >= start) and (step <= end):
                return self.getWeight(self.lora_dist[step % lora_count].weight, float(step - start) / float(end - start), step, all_step)
        return 0.0
    def __repr__(self):
        return f"LoRA_Switcher_Controller:{self.lora_dist}[start at={self.start},end at={self.end}]"
    def __str__(self):
        return f"LoRA_Switcher_Controller:{self.lora_dist}[start at={self.start},end at={self.end}]"


def parse_step_rendering_syntax(prompt: str):
    lora_controllers: List[List[LoRA_Controller_Base]] = []
    subprompts = re_AND.split(prompt)
    for i, subprompt in enumerate(subprompts):
        tmp_lora_controllers: List[LoRA_Controller_Base] = []
        step_rendering_list, pure_loratext = get_all_step_rendering_in_prompt(subprompt)
        for item in step_rendering_list:
            tmp_lora_controllers += get_LoRA_Controllers(item)
        lora_list = get_lora_list(pure_loratext)
        for lora_item in lora_list:
            tmp_lora_controllers.append(LoRA_Controller(lora_item.name, lora_item.weight))
        lora_controllers.append(tmp_lora_controllers)
    return lora_controllers

def check_lora_weight(controllers: List[LoRA_Controller_Base], test_lora: str, step: int, all_step: int):
    # the effective weight of a LoRA at this step is the maximum
    # over all controllers that match it
    result_weight = 0.0
    for controller in controllers:
        calc_weight = controller.test(test_lora, step, all_step)
        if calc_weight > result_weight:
            result_weight = calc_weight
    return result_weight
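`check_lora_weight` resolves a LoRA's weight at a given step by taking the maximum over all controllers that claim it. A standalone sketch with minimal stand-in controllers (the stub class is illustrative, not part of the module):

```python
class StubController:
    # minimal stand-in for LoRA_Controller_Base: active on [start, end),
    # returns a fixed weight for the named LoRA and 0.0 otherwise
    def __init__(self, name, weight, start, end):
        self.name, self.weight, self.start, self.end = name, weight, start, end
    def test(self, test_lora, step, all_step):
        if test_lora == self.name and self.start <= step < self.end:
            return self.weight
        return 0.0

def check_lora_weight(controllers, test_lora, step, all_step):
    # same max-combination rule as the module function above
    result_weight = 0.0
    for controller in controllers:
        calc_weight = controller.test(test_lora, step, all_step)
        if calc_weight > result_weight:
            result_weight = calc_weight
    return result_weight

controllers = [StubController("styleA", 0.8, 0, 10),
               StubController("styleA", 0.3, 5, 20)]
print(check_lora_weight(controllers, "styleA", 7, 20))   # 0.8 (overlap: max wins)
print(check_lora_weight(controllers, "styleA", 15, 20))  # 0.3
print(check_lora_weight(controllers, "styleB", 7, 20))   # 0.0 (no controller matches)
```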

def get_lora_list(prompt: str):
    result: List[LoRA_data] = []
    _, extra_network_data = extra_networks.parse_prompt(prompt)
    for params in extra_network_data['lora']:
        name = params.items[0]
        multiplier = float(params.items[1]) if len(params.items) > 1 else 1.0
        result.append(LoRA_data(name, multiplier))

    if len(result) <= 0:
        result.append(LoRA_data("", 0.0))

    return result
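`get_lora_list` delegates to the webui's `extra_networks.parse_prompt`. Outside the webui, the same `<lora:name:weight>` extraction can be sketched with a regex (the pattern here is an approximation, not the webui parser):

```python
import re

def get_lora_list_sketch(prompt: str):
    # approximate <lora:name> / <lora:name:weight> extraction;
    # weight defaults to 1.0 when omitted, as in get_lora_list above
    result = []
    for name, weight in re.findall(r"<lora:([^:>]+)(?::([^:>]+))?>", prompt):
        result.append((name, float(weight) if weight else 1.0))
    if not result:
        result.append(("", 0.0))  # placeholder entry, mirroring the module
    return result

print(get_lora_list_sketch("a cat <lora:fluffy:0.7> AND a dog <lora:shiba>"))
# [('fluffy', 0.7), ('shiba', 1.0)]
```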

def get_or_list(prompt: str):
    return prompt.split("|")


re_start_end = re.compile(r"\[\s*\[\s*([^\:\]]+)\:\s*\:([^\]]+)\]\s*\:\s*([^\]]+)\]")
re_strat_at = re.compile(r"\[\s*([^\:\]]+)\:\s*([0-9\.]+)\s*\]")
re_bucket_inside = re.compile(r"\[([^\]]+)\]")
re_extra_net = re.compile(r"<([^>]+):([^>]+)>")
re_python_escape = re.compile(r"\$\$PYTHON_OBJ\$\$(\d+)\^")
re_python_escape_x = re.compile(r"\$\$PYTHON_OBJX?\$\$(\d+)\^")
re_sd_step_render = re.compile(r"\[[^\[\]]+\]")
re_super_cmd = re.compile(r"#([^:#\[\]]+)")

class MySearchResult:
    def __init__(self):
        group: List[str] = []
        self.group = group


def extra_net_split(input_str: str, pattern: str):
    result: List[str] = []
    extra_net_list: List[str] = []
    escape_obj_list: List[str] = []
    def preprossing_escape(match_pt: re.Match):
        escape_obj_list.append(str(match_pt.group(0)))
        return f"$$PYTHON_OBJX$${len(escape_obj_list)-1}^"
    def preprossing_extra_net(match_pt: re.Match):
        extra_net_list.append(str(match_pt.group(0)))
        return f"$$PYTHON_OBJ$${len(extra_net_list)-1}^"
    def unstrip_extra_net_pattern(match_pt: re.Match):
        input_str = str(match_pt.group(0))
        try:
            index = int(match_pt.group(1))
            return extra_net_list[index]
        except Exception:
            return input_str
    def unstrip_text_pattern_obj(match_pt: re.Match):
        input_str = str(match_pt.group(0))
        try:
            index = int(match_pt.group(1))
            return escape_obj_list[index]
        except Exception:
            return input_str
    txt: str = input_str
    txt = re.sub(re_python_escape_x, preprossing_escape, txt)
    txt = re.sub(re_extra_net, preprossing_extra_net, txt)
    pre_result = txt.split(pattern)
    for i in range(len(pre_result)):
        try:
            cur_pattern = str(pre_result[i])
            cur_result = re.sub(re_python_escape, unstrip_extra_net_pattern, cur_pattern)
            cur_result = re.sub(re_python_escape_x, unstrip_text_pattern_obj, cur_result)
            result.append(cur_result)
        except Exception as ex:
            break
    if len(result) <= 0:
        return [input_str]
    return result
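`extra_net_split` protects `<...>` extra-network tokens with placeholder strings before splitting, so a `:` split does not cut through a token like `<lora:name:0.8>`. The stash/split/restore round-trip can be sketched as:

```python
import re

def protected_split(text: str, sep: str):
    # replace each <...> token with a placeholder, split, then restore;
    # the same strip/split/unstrip idea as extra_net_split above
    tokens = []
    def stash(m):
        tokens.append(m.group(0))
        return f"$$PYTHON_OBJ$${len(tokens)-1}^"
    masked = re.sub(r"<[^>]+>", stash, text)
    parts = masked.split(sep)
    restore = lambda s: re.sub(r"\$\$PYTHON_OBJ\$\$(\d+)\^",
                               lambda m: tokens[int(m.group(1))], s)
    return [restore(p) for p in parts]

print(protected_split("<lora:styleA:0.8>:<lora:styleB:0.5>", ":"))
# ['<lora:styleA:0.8>', '<lora:styleB:0.5>']
```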

def extra_net_re_search(pattern: str | re.Pattern[str], input_str: str):
    result = MySearchResult()
    extra_net_list: List[str] = []
    escape_obj_list: List[str] = []
    def preprossing_escape(match_pt: re.Match):
        escape_obj_list.append(str(match_pt.group(0)))
        return f"$$PYTHON_OBJX$${len(escape_obj_list)-1}^"
    def preprossing_extra_net(match_pt: re.Match):
        extra_net_list.append(str(match_pt.group(0)))
        return f"$$PYTHON_OBJ$${len(extra_net_list)-1}^"
    def unstrip_extra_net_pattern(match_pt: re.Match):
        input_str = str(match_pt.group(0))
        try:
            index = int(match_pt.group(1))
            return extra_net_list[index]
        except Exception:
            return input_str
    def unstrip_text_pattern_obj(match_pt: re.Match):
        input_str = str(match_pt.group(0))
        try:
            index = int(match_pt.group(1))
            return escape_obj_list[index]
        except Exception:
            return input_str
    txt: str = input_str
    txt = re.sub(re_python_escape_x, preprossing_escape, txt)
    txt = re.sub(re_extra_net, preprossing_extra_net, txt)
    pre_result = re.search(pattern, txt)
    for i in range(1000):
        try:
            cur_pattern = str(pre_result.group(i))
            cur_result = re.sub(re_python_escape, unstrip_extra_net_pattern, cur_pattern)
            cur_result = re.sub(re_python_escape_x, unstrip_text_pattern_obj, cur_result)
            result.group.append(cur_result)
        except Exception as ex:
            break
    if len(result.group) <= 0:
        return None
    return result

def unescape_string(input_string: str):
    result = ''
    unicode_list = ['u', 'x']

    i = 0
    while i < len(input_string):
        current_char = input_string[i]
        if current_char == '\\':
            i += 1
            if i >= len(input_string):
                break
            string_body = input_string[i]
            if string_body.lower() in unicode_list:
                # keep \u.... and \x.. sequences untouched
                result += f"{current_char}{string_body}"
            else:
                char_added = False
                try:
                    unescaped = json.loads(f"\"{current_char}{string_body}\"")
                    if unescaped:
                        result += unescaped
                        char_added = True
                except Exception:
                    pass
                if not char_added:
                    result += string_body
        else:
            result += current_char
        i += 1
    return str(json.loads(json.dumps(result, indent=4).replace("\\\\", "\\")))
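`unescape_string` leans on `json.loads` to interpret two-character backslash escapes while passing `\u`/`\x` sequences through untouched. The core JSON trick can be sketched on its own (the helper name is illustrative):

```python
import json

def unescape_pair(escape_char: str, body: str) -> str:
    # interpret a two-character escape such as \n or \t via JSON,
    # as unescape_string above does; fall back to the raw body character
    try:
        unescaped = json.loads(f'"{escape_char}{body}"')
        if unescaped:
            return unescaped
    except Exception:
        pass
    return body

print(repr(unescape_pair("\\", "n")))  # '\n'
print(repr(unescape_pair("\\", "q")))  # 'q'  (not a valid JSON escape)
```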


def get_LoRA_Controllers(prompt: str):
    result = extra_net_re_search(re_start_end, prompt)
    super_cmd = re.search(re_super_cmd, prompt)
    Weight_Controller = LoRA_Weight_CMD()
    if super_cmd:
        super_cmd_text = unescape_string(super_cmd.group(1)).strip()
        if super_cmd_text.startswith("cmd("):
            Weight_Controller = LoRA_Weight_eval(super_cmd_text[4:-1])
        elif super_cmd_text.startswith("decrease"):
            Weight_Controller = LoRA_Weight_decrement()
        elif super_cmd_text.startswith("increment"):
            Weight_Controller = LoRA_Weight_increment()
    def set_Weight_Controller(controller_list: list[LoRA_Controller_Base], the_controller: LoRA_Weight_CMD):
        for i, the_item in enumerate(controller_list):
            controller_list[i].Weight_Controller = the_controller
        return controller_list
    result_list: List[LoRA_Controller_Base] = []
    if result:
        or_list = get_or_list(result.group[1])
        if len(or_list) == 1:  # LoRA with start and end
            lora_list = get_lora_list(or_list[0])
            for lora_item in lora_list:
                try:
                    result_list.append(LoRA_StartEnd_Controller(lora_item.name, lora_item.weight, float(result.group[3]), float(result.group[2])))
                except Exception:
                    continue
            return set_Weight_Controller(result_list, Weight_Controller)
        lora_lists: List[List[LoRA_data]] = []
        max_len = -1
        for or_block in or_list:  # "|" alternatives
            lora_list = get_lora_list(or_block)
            lora_list_len = len(lora_list)
            if lora_list_len > max_len:
                max_len = lora_list_len
            lora_lists.append(lora_list)
        if max_len > 0:
            for i in range(max_len):
                tmp_lora_list: List[LoRA_data] = []
                for it_lora_list in lora_lists:
                    tmp_lora = LoRA_data("", 0.0)
                    if i < len(it_lora_list):
                        tmp_lora = it_lora_list[i]
                    tmp_lora_list.append(tmp_lora)
                result_list.append(LoRA_Switcher_Controller(tmp_lora_list, float(result.group[3]), float(result.group[2])))
        return set_Weight_Controller(result_list, Weight_Controller)
    result = extra_net_re_search(re_strat_at, prompt)
    if result:
        or_list = get_or_list(result.group[1])
        if len(or_list) == 1:  # LoRA with a start step only
            lora_list = get_lora_list(or_list[0])
            for lora_item in lora_list:
                try:
                    result_list.append(LoRA_StartEnd_Controller(lora_item.name, lora_item.weight, float(result.group[2]), -1.0))
                except Exception:
                    continue
            return set_Weight_Controller(result_list, Weight_Controller)
        lora_lists: List[List[LoRA_data]] = []
        max_len = -1
        for or_block in or_list:  # "|" alternatives
            lora_list = get_lora_list(or_block)
            lora_list_len = len(lora_list)
            if lora_list_len > max_len:
                max_len = lora_list_len
            lora_lists.append(lora_list)
        if max_len > 0:
            for i in range(max_len):
                tmp_lora_list: List[LoRA_data] = []
                for it_lora_list in lora_lists:
                    tmp_lora = LoRA_data("", 0.0)
                    if i < len(it_lora_list):
                        tmp_lora = it_lora_list[i]
                    tmp_lora_list.append(tmp_lora)
                result_list.append(LoRA_Switcher_Controller(tmp_lora_list, float(result.group[2]), -1.0))
        return set_Weight_Controller(result_list, Weight_Controller)
    result = extra_net_re_search(re_bucket_inside, prompt)
    if result:
        bucket_inside = result.group[1]
        split_by_colon = extra_net_split(bucket_inside, ":")
        if len(split_by_colon) == 1 and (("|" in bucket_inside) or ("#" in bucket_inside)):
            split_by_colon.append('')
            split_by_colon.append('-1')
        if len(split_by_colon) > 2:
            should_pass = False
            or_list = get_or_list(split_by_colon[0])
            if len(or_list) == 1:  # LoRA active from step 0 to the switch point
                lora_list = get_lora_list(or_list[0])
                for lora_item in lora_list:
                    try:
                        result_list.append(LoRA_StartEnd_Controller(lora_item.name, lora_item.weight, 0.0, float(split_by_colon[2])))
                    except Exception:
                        continue
                should_pass = True
            if not should_pass:
                lora_lists: List[List[LoRA_data]] = []
                max_len = -1
                for or_block in or_list:  # "|" alternatives
                    lora_list = get_lora_list(or_block)
                    lora_list_len = len(lora_list)
                    if lora_list_len > max_len:
                        max_len = lora_list_len
                    lora_lists.append(lora_list)
                if max_len > 0:
                    for i in range(max_len):
                        tmp_lora_list: List[LoRA_data] = []
                        for it_lora_list in lora_lists:
                            tmp_lora = LoRA_data("", 0.0)
                            if i < len(it_lora_list):
                                tmp_lora = it_lora_list[i]
                            tmp_lora_list.append(tmp_lora)
                        result_list.append(LoRA_Switcher_Controller(tmp_lora_list, 0.0, float(split_by_colon[2])))
            should_pass = False
            or_list = get_or_list(split_by_colon[1])
            if len(or_list) == 1:  # LoRA active from the switch point to the end
                lora_list = get_lora_list(or_list[0])
                for lora_item in lora_list:
                    try:
                        result_list.append(LoRA_StartEnd_Controller(lora_item.name, lora_item.weight, float(split_by_colon[2]), -1.0))
                    except Exception:
                        continue
                should_pass = True
            if not should_pass:
                lora_lists: List[List[LoRA_data]] = []
                max_len = -1
                for or_block in or_list:  # "|" alternatives
                    lora_list = get_lora_list(or_block)
                    lora_list_len = len(lora_list)
                    if lora_list_len > max_len:
                        max_len = lora_list_len
                    lora_lists.append(lora_list)
                if max_len > 0:
                    for i in range(max_len):
                        tmp_lora_list: List[LoRA_data] = []
                        for it_lora_list in lora_lists:
                            tmp_lora = LoRA_data("", 0.0)
                            if i < len(it_lora_list):
                                tmp_lora = it_lora_list[i]
                            tmp_lora_list.append(tmp_lora)
                        result_list.append(LoRA_Switcher_Controller(tmp_lora_list, float(split_by_colon[2]), -1.0))
            return set_Weight_Controller(result_list, Weight_Controller)
    return set_Weight_Controller(result_list, Weight_Controller)

def get_all_step_rendering_in_prompt(input_prompt: str):
    read_rendering_item_list: List[str] = []
    escape_obj_list: List[str] = []
    rendering_item_list: List[str] = []
    def preprossing_step_rendering_item(match_pt: re.Match):
        read_rendering_item_list.append(str(match_pt.group(0)))
        return f"$$PYTHON_OBJ$${len(read_rendering_item_list)-1}^"
    def preprossing_step_rendering_text(match_pt: re.Match):
        escape_obj_list.append(str(match_pt.group(0)))
        return f"$$PYTHON_OBJX$${len(escape_obj_list)-1}^"
    def load_step_rendering_item(match_pt: re.Match):
        input_str = str(match_pt.group(0))
        rendering_item_list.append(input_str)
        return input_str
    def unstrip_rendering_text_pattern(match_pt: re.Match):
        input_str = str(match_pt.group(0))
        try:
            index = int(match_pt.group(1))
            return read_rendering_item_list[index]
        except Exception:
            return input_str
    def unstrip_rendering_text_pattern_obj(match_pt: re.Match):
        input_str = str(match_pt.group(0))
        try:
            index = int(match_pt.group(1))
            return escape_obj_list[index]
        except Exception:
            return input_str
    def unstrip_rendering_text(input_str: str):
        old_result: str = "None"
        result: str = input_str
        while old_result != result:
            old_result = result
            result = re.sub(re_python_escape, unstrip_rendering_text_pattern, result)
        old_result = "None"
        while old_result != result:
            old_result = result
            result = re.sub(re_python_escape_x, unstrip_rendering_text_pattern_obj, result)
        return result
    txt: str = input_prompt
    txt = re.sub(re_python_escape_x, preprossing_step_rendering_text, txt)
    old_txt: str = "None"
    while old_txt != txt:
        old_txt = txt
        txt = re.sub(re_sd_step_render, preprossing_step_rendering_item, txt)
    re.sub(re_python_escape, load_step_rendering_item, txt)
    for i, the_item in enumerate(rendering_item_list):
        rendering_item_list[i] = unstrip_rendering_text(the_item)
    return rendering_item_list, txt
@@ -0,0 +1,73 @@
from dataclasses import dataclass
from typing import Tuple, List, Union
import io
import matplotlib
matplotlib.use('Agg')
import matplotlib.pyplot  # imported explicitly: plot_lora_weight calls matplotlib.pyplot.close
import pandas as pd
from pandas.plotting._matplotlib.style import get_standard_colors
from PIL import Image

@dataclass
class YAxis:
    name: str
    columns: List[str]

@dataclass
class PlotDefinition:
    title: str
    x_axis: str
    y_axis: List[YAxis]

def plot_lora_weight(lora_weights, lora_names):
    data = pd.DataFrame(lora_weights, columns=lora_names)
    ax = data.plot()
    ax.set_xlabel("Steps")
    ax.set_ylabel("LoRA weight")
    ax.set_title("LoRA weight in all steps")
    ax.legend(loc=0)
    result_image = fig2img(ax)
    # close the figure so repeated calls do not leak pyplot figures
    # ("More than 20 figures have been opened" RuntimeWarning)
    matplotlib.pyplot.close(ax.figure)
    del ax
    return result_image

def fig2img(fig):
    buf = io.BytesIO()
    fig.figure.savefig(buf)
    buf.seek(0)
    img = Image.open(buf)
    return img


def plot_graph(
    data: pd.DataFrame,
    plot_definition: PlotDefinition,
    spacing: float = 0.1,
):
    colors = get_standard_colors(num_colors=(len(plot_definition.y_axis) + 7))
    loss_color = colors[0]
    avg_colors = colors[1:]
    for i, yi in enumerate(plot_definition.y_axis):
        if i == 0:
            ax = data.plot(
                x=plot_definition.x_axis,
                y=yi.columns,
                title=plot_definition.title,
                color=[loss_color] * len(yi.columns)
            )
            ax.set_ylabel(ylabel=yi.name)
        else:
            # additional y-axes share the same x-axis
            ax_new = ax.twinx()
            ax_new.spines["right"].set_position(("axes", 1 + spacing * (i - 1)))
            data.plot(
                ax=ax_new,
                x=plot_definition.x_axis,
                y=yi.columns,
                color=[avg_colors[yl] for yl in range(len(yi.columns))]
            )
            ax_new.set_ylabel(ylabel=yi.name)

    ax.legend(loc=0)

    return ax
(Binary files: 10 images added. Sizes: 1.0 MiB, 20 KiB, 23 KiB, 22 KiB, 24 KiB, 26 KiB, 22 KiB, 26 KiB, 3.2 MiB, 1.8 MiB)
@@ -9,12 +9,10 @@ import modules.scripts as scripts
from modules import script_callbacks
from modules.processing import StableDiffusionProcessing


def unload():
    torch.nn.Linear.forward = torch.nn.Linear_forward_before_lora
    torch.nn.Conv2d.forward = torch.nn.Conv2d_forward_before_lora


if not hasattr(torch.nn, 'Linear_forward_before_lora'):
    torch.nn.Linear_forward_before_lora = torch.nn.Linear.forward
@@ -26,7 +24,6 @@ torch.nn.Conv2d.forward = composable_lora.lora_Conv2d_forward
script_callbacks.on_script_unloaded(unload)


class ComposableLoraScript(scripts.Script):
    def title(self):
        return "Composable Lora"
@@ -38,20 +35,37 @@ class ComposableLoraScript(scripts.Script):
        with gr.Group():
            with gr.Accordion("Composable Lora", open=False):
                enabled = gr.Checkbox(value=False, label="Enabled")
                opt_composable_with_step = gr.Checkbox(value=False, label="Composable LoRA with step")
                opt_uc_text_model_encoder = gr.Checkbox(value=False, label="Use Lora in uc text model encoder")
                opt_uc_diffusion_model = gr.Checkbox(value=False, label="Use Lora in uc diffusion model")
                opt_plot_lora_weight = gr.Checkbox(value=False, label="plot the LoRA weight in all steps")

        return [enabled, opt_composable_with_step, opt_uc_text_model_encoder, opt_uc_diffusion_model, opt_plot_lora_weight]

    def process(self, p: StableDiffusionProcessing, enabled: bool, opt_composable_with_step: bool, opt_uc_text_model_encoder: bool, opt_uc_diffusion_model: bool, opt_plot_lora_weight: bool):
        composable_lora.enabled = enabled
        composable_lora.opt_uc_text_model_encoder = opt_uc_text_model_encoder
        composable_lora.opt_uc_diffusion_model = opt_uc_diffusion_model
        composable_lora.opt_composable_with_step = opt_composable_with_step
        composable_lora.opt_plot_lora_weight = opt_plot_lora_weight

        composable_lora.num_batches = p.batch_size
        composable_lora.num_steps = p.steps

        if composable_lora.should_reload() and enabled:
            torch.nn.Linear.forward = composable_lora.lora_Linear_forward
            torch.nn.Conv2d.forward = composable_lora.lora_Conv2d_forward

        composable_lora.reset_step_counters()

        prompt = p.all_prompts[0]
        composable_lora.load_prompt_loras(prompt)
        if opt_composable_with_step:
            print("Loading LoRA step controller...")

    def process_batch(self, p: StableDiffusionProcessing, *args, **kwargs):
        composable_lora.reset_counters()

    def postprocess(self, p, processed, *args):
        if composable_lora.enabled and composable_lora.opt_plot_lora_weight:
            processed.images.extend([composable_lora.plot_lora()])