- A couple of users have reported that switching back and forth
between ckpt models is causing a "GPU out of memory" crash.
The traceback suggests the problem is actually a CPU RAM issue.
- This speculative test simply performs a round of garbage collection
before the point where the crash occurs (see the sketch below).
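As a rough illustration of the kind of change involved (not the actual patch), a minimal sketch assuming a hypothetical `switch_model()` entry point:

```python
# Minimal sketch only -- not the actual fix. switch_model() and the loader
# callback are hypothetical; the real change inserts a collection pass just
# before the point where the crash occurs.
import gc

import torch


def free_memory_before_switch():
    """Release Python-level objects and cached CUDA allocations."""
    gc.collect()                      # reclaim CPU RAM held by dropped objects
    if torch.cuda.is_available():
        torch.cuda.empty_cache()      # release cached GPU allocations


def switch_model(load_next_checkpoint):
    free_memory_before_switch()
    return load_next_checkpoint()     # hypothetical checkpoint loader callback
```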
* Initial Localization Implementation
* Fix Initial Spinner
* Language Picker Dropdown
* RU Localization Update
Co-Authored-By: Artur <83028930+netsvetaev@users.noreply.github.com>
* Fixed localization breaking themes
* useUpdateTranslation Hook
To force-trigger translations for data objects
* Localize Tab Data
* Localize Prompt Input & Current Image Buttons
* Localize Gallery & Bug Fixes
Fixes a bug where deleting an image from the context menu wasn't working. Removed broken tooltips, as they don't work in context menus.
* Fix localization breaking in production
* Add Toast Localization Support
* Localize Unified Canvas
* Localize WIP Tabs
* Localize Hotkeys
* Localize Settings
* RU Localization Update
Co-Authored-By: Artur <83028930+netsvetaev@users.noreply.github.com>
* Add Support for Italian and Portuguese
* Localize Toasts
* Fix width of language picker items
* Localize Backend Messages
* Disable Debug Messages
* Add Support for French
* Fix missing localization for a string in the SettingsModal
* Disable French
* Styling updates to normalize text and accommodate other langs
* Add Portuguese Brazilian
* Fix Hotkey headers not being localized.
* Fix styling issue on models tag in Settings
* Fix Slider Styling to accommodate different languages
* Fix slider styling in light mode.
* Add German
* Add Italian
* Add Polish
* Update Italian
* Localized Frontend Build
* Updated RU Translations
* Fresh Build with updated RU changes
* Bug Fixes and Loc Updates
* Updated Frontend Build
* Fresh Build
Co-authored-by: Artur <83028930+netsvetaev@users.noreply.github.com>
* add example of using -from_file to read from a script (see the sketch below)
Addresses #1654, #473, #566, #1008 at least partially.
* fix bug in code example
* improve docs for !fetch and !replay
* enable rendering of images in GH WebUI
also fix indentation in some bullet lists
Co-authored-by: mauwii <Mauwii@outlook.de>
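As a rough sketch of what reading prompts from a script file amounts to (illustrative only; `prompts.txt` and `handle_prompt()` are hypothetical names, not the CLI's internals):

```python
# Illustrative only: mimics the spirit of reading one prompt (plus switches)
# per line from a script file. handle_prompt() is a hypothetical stand-in
# for the generation call.
from pathlib import Path


def handle_prompt(command: str) -> None:
    print(f"would generate: {command}")


def run_script(path: str) -> None:
    for line in Path(path).read_text().splitlines():
        line = line.strip()
        if line and not line.startswith("#"):   # skip blank and comment lines
            handle_prompt(line)


if __name__ == "__main__":
    run_script("prompts.txt")
```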
When using the inpainting model, the following sequence of events
would cause a predictable crash:
1. Use unified canvas to outcrop a portion of the image.
2. Accept outcropped image and import into img2img
3. Try any img2img operation
This closes #1596.
The crash was:
```
operands could not be broadcast together with shapes (320,512) (512,576)
Traceback (most recent call last):
File "/data/lstein/InvokeAI/backend/invoke_ai_web_server.py", line 1125, in generate_images
self.generate.prompt2image(
File "/data/lstein/InvokeAI/ldm/generate.py", line 492, in prompt2image
results = generator.generate(
File "/data/lstein/InvokeAI/ldm/invoke/generator/base.py", line 98, in generate
image = make_image(x_T)
File "/data/lstein/InvokeAI/ldm/invoke/generator/omnibus.py", line 138, in make_image
return self.sample_to_image(samples)
File "/data/lstein/InvokeAI/ldm/invoke/generator/omnibus.py", line 173, in sample_to_image
corrected_result = super(Img2Img, self).repaste_and_color_correct(gen_result, self.pil_image, self.pil_mask, self.mask_blur_radius)
File "/data/lstein/InvokeAI/ldm/invoke/generator/base.py", line 148, in repaste_and_color_correct
mask_pixels = init_a_pixels * init_mask_pixels > 0
ValueError: operands could not be broadcast together with shapes (320,512) (512,576)
```
This error was caused by the image and its mask not being of identical
size due to the outcropping operation. The ultimate cause appears to
lie in the different code paths followed by the `inpaint` and
`omnibus` modules.
Since omnibus will be obsoleted by diffusers, I have chosen just to
work around the problem rather than track it down to its source. The
only ill effect is that color correction will not be applied to the
first image created by `img2img` after applying the outcrop and
immediately importing into the img2img canvas. Since the inpainting
model has less of a color drift problem than the standard model, this
is unlikely to be problematic.
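To illustrate the failure mode (shapes taken from the traceback above), a small sketch; the mask-resize step at the end is just one way to reconcile the shapes and is not what this change does, since the workaround here is simply to skip color correction:

```python
# Sketch of the failure mode, using the shapes from the traceback above.
# Elementwise multiplication only works when the arrays broadcast together.
import numpy as np
from PIL import Image

init_a_pixels = np.zeros((320, 512))        # alpha channel of the outcropped image
init_mask_pixels = np.zeros((512, 576))     # mask still at its pre-outcrop size

try:
    _ = init_a_pixels * init_mask_pixels    # raises: shapes don't broadcast
except ValueError as err:
    print(err)

# Resampling the mask to the image's size would restore matching shapes
# (shown only for illustration; the commit instead skips color correction).
mask = Image.fromarray((init_mask_pixels * 255).astype(np.uint8))
mask = mask.resize((512, 320))              # PIL size is (width, height)
init_mask_pixels = np.asarray(mask) / 255.0
assert init_mask_pixels.shape == init_a_pixels.shape
```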