Properly normalized the optical flow field before and after warping, based on width and height. Now that the values are in the range -1 to 1 (usually much smaller), the flow no longer gets corrupted by grid_sample in 3D or warpPerspective in 2D. So I was able to remove all the workarounds and just fix the abs to rel and rel to abs functions.
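As a sketch of what that normalization looks like (the function names come from the log above, but these bodies are my assumption, not the actual code): pixel-space displacements are scaled into the [-1, 1] range that grid_sample-style sampling expects, and back.

```python
import numpy as np

def abs_to_rel(flow, width, height):
    """Convert a pixel-space flow field (H, W, 2) to relative offsets.

    Normalized sampling coords span [-1, 1] across (W-1) pixels
    (align_corners=True convention), so dx pixels -> 2*dx/(W-1).
    """
    rel = flow.astype(np.float32).copy()
    rel[..., 0] *= 2.0 / max(width - 1, 1)
    rel[..., 1] *= 2.0 / max(height - 1, 1)
    return rel

def rel_to_abs(flow, width, height):
    """Inverse of abs_to_rel: relative offsets back to pixel displacements."""
    absf = flow.astype(np.float32).copy()
    absf[..., 0] *= (width - 1) / 2.0
    absf[..., 1] *= (height - 1) / 2.0
    return absf
```

The two functions are exact inverses, which is what keeps repeated warps from drifting.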
Removed excess options, removed dev code, cleaned up code; ready for production.
- removed depth tensor autocontrast options
- made equalization a part of the normalization process (not an optional arg now)
- removed my mechanism for 0-based normalization. Decided it wasn't any better. Maybe worse... mostly similar.
- cracked open a bottle of champagne on this function's hull
also added a file with the first steps of an auto-navigation module, to be used in the future inside the transform_image_3d function! It generates a rotation matrix based on the greatest or least depth in the tensor, instead of using animation keys.
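A minimal sketch of what such a depth-steered rotation could look like; every name and parameter here (fov_deg, toward, gain) is hypothetical, not taken from the actual module:

```python
import numpy as np

def depth_to_rotation(depth, fov_deg=70.0, toward="greatest", gain=0.1):
    """Turn the virtual camera toward the greatest (or least) depth.

    depth: 2-D array (H, W). Returns a 3x3 rotation matrix combining a
    small yaw and pitch step toward the chosen target pixel.
    """
    h, w = depth.shape
    idx = depth.argmax() if toward == "greatest" else depth.argmin()
    ty, tx = divmod(int(idx), w)
    # Offset of the target from the image center, normalized to [-1, 1]
    nx = (tx - (w - 1) / 2.0) / ((w - 1) / 2.0)
    ny = (ty - (h - 1) / 2.0) / ((h - 1) / 2.0)
    half_fov = np.radians(fov_deg) / 2.0
    yaw = gain * nx * half_fov    # turn left/right toward the target
    pitch = gain * ny * half_fov  # tilt up/down toward the target
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    rx = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])
    return ry @ rx  # combined 3x3 rotation matrix
```

The gain keeps each per-frame step small so the camera eases toward the target instead of snapping to it.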
makes progress advance smoothly during cadence rather than skipping ahead suddenly after cadence completes
left a commented line for state.current_image, since I'm not sure what that variable actually does. It doesn't update the preview when I try it, even with the preview update rate turned up in auto1111. But if we decide to build a preview mechanism or need that state var for cadence, it can just be uncommented. It matches the format of the one in the main non-cadence loop.
added consistency flow masks
- there is now an option to use flow consistency masks and an attached option for consistency mask blur, defaulted to 2.
- if you save extra frames, it also saves consistency masks now
- you can see the effect on the flow in the flow outputs as well
- it doesn't work as well with cadence because you see afterimages, but raising the blur can make it a little better.
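One common way to build such a mask is a forward-backward consistency check; here is a minimal numpy sketch of that idea (an assumption about the method, and the actual implementation also applies the blur option, defaulted to 2, on top of the raw mask):

```python
import numpy as np

def consistency_mask(flow_fwd, flow_bwd, tol=1.0):
    """Mark pixels where forward and backward flow disagree as occluded.

    A pixel is consistent (mask=1) when following the forward flow and
    then the backward flow returns roughly to the starting point.
    """
    h, w = flow_fwd.shape[:2]
    gy, gx = np.mgrid[0:h, 0:w]
    # Nearest-neighbor lookup of the backward flow at each pixel's
    # forward-flow destination
    dx = np.clip(np.rint(gx + flow_fwd[..., 0]).astype(int), 0, w - 1)
    dy = np.clip(np.rint(gy + flow_fwd[..., 1]).astype(int), 0, h - 1)
    bwd_at_dest = flow_bwd[dy, dx]
    # For consistent pixels, fwd + bwd(dest) is approximately zero
    err = np.linalg.norm(flow_fwd + bwd_at_dest, axis=-1)
    return (err < tol).astype(np.float32)
```

Blurring this binary mask before blending is what softens the hard occlusion edges that otherwise show up as afterimages during cadence.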
fixed Frames to Video
- made the ffmpeg routine that Frames to Video uses able to take image files other than png. If the input is png, it includes -vcodec png as before; for anything else it uses -vcodec libx264, which works for jpgs. (jpgs fail with the png vcodec, so I made it switchable.) I haven't tested it with other filetypes, but they should work too, since the png vcodec was specific to png.
- also added two more lines of instruction on how to use the file string.
- I also changed a few ransac functions for future use. They work as before, but now switch behavior if passed depth. I'm not passing depth to them yet, though.
- a few minor variable name edits in hybrid video to align the code better (mostly changed matrices to M, as is often convention)
- commented out a bunch of unused imports in render.py
- I'll leave it to someone else to delete them once it's verified that everything works fine with them commented out. I searched and didn't find them used in that file; VSCode grayed them out automatically, but I also verified.
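The codec switch described above could be sketched like this (an illustrative helper, not the actual routine, which builds a fuller ffmpeg command line):

```python
import os

def build_ffmpeg_cmd(image_pattern, fps, out_path):
    """Pick the video codec from the input frame extension.

    PNG frames keep -vcodec png as before; anything else (e.g. jpg)
    falls back to libx264, since the png codec only accepts png frames.
    """
    ext = os.path.splitext(image_pattern)[1].lower()
    vcodec = "png" if ext == ".png" else "libx264"
    return ["ffmpeg", "-y", "-framerate", str(fps),
            "-i", image_pattern, "-vcodec", vcodec, out_path]
```

Passing the command as a list (rather than one shell string) also avoids quoting problems with frame patterns like %05d.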
It wasn't working anyway, so I removed that mechanism and restored the previous behavior of just color matching after generation when using Image or Video Input.
Also cleaned up some code and added console reporting about Redo cycles.
Discovered that RAFT wasn't actually working due to an issue in the function that got the flows: a missing "elif". The RAFT flow would get calculated and stored in the variable 'r', but then 'r' would always be overwritten by the default Farneback at the end. We were fooling ourselves into thinking we were seeing RAFT, when in actuality the RAFT flow was invalid and causes an error if actually used.
- Changed function call for flow methods so that this can never happen. Now, each case returns directly.
- Added it to the deprecation utils for now. We can remove the RAFT to Farneback conversion once we get RAFT working.
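An illustrative reconstruction of the bug and the fix; these are stub names, not the actual Deforum code:

```python
def raft_flow(prev, curr):
    return "raft-flow"       # stand-in for real RAFT inference

def farneback_flow(prev, curr):
    return "farneback-flow"  # stand-in for cv2.calcOpticalFlowFarneback

def get_flow_buggy(method, prev, curr):
    """Old structure: the missing elif meant 'r' was always overwritten."""
    if method == "RAFT":
        r = raft_flow(prev, curr)   # computed...
    r = farneback_flow(prev, curr)  # ...then silently clobbered
    return r

def get_flow_fixed(method, prev, curr):
    """The fix: each case returns directly, so no fallback can clobber it."""
    if method == "RAFT":
        return raft_flow(prev, curr)
    return farneback_flow(prev, curr)
```

Returning directly per case makes the fall-through impossible by construction, which is why the bug can't silently come back.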
Realigned the way frame indexes are handled in the motion routines so everything lines up more clearly
Major improvement to motion using prev_img during cadence!
- added a prev_img during cadence so there is a previous image for the flow to refer to
Fixed color matching issue with first frame on Image and Video Init modes
- the first frame's color match can't be done beforehand, so it's done afterwards. But that normally makes for a very bad first frame, so I added a redo for it to clean up the color matched image on the first frame.
Major improvement to RANSAC
- switched to use SIFT for feature matching instead of Lucas-Kanade
- changed all border_mode uses to REFLECT_101, matching how optical flow handles it, and removed all the excess silly border_mode translations. This works much better.