mirror of
https://github.com/invoke-ai/InvokeAI
synced 2024-08-30 20:32:17 +00:00
Commit e68c49167a
# Aider chat conversation:

USER: `invokeai/backend/model_manager/convert_ckpt_to_diffusers.py` I'm having issues with CPU memory, not GPU memory.

ASSISTANT: To address CPU memory issues, we can make the following changes:

1. Use `del` to explicitly delete large objects when they are no longer needed.
2. Use `gc.collect()` to force garbage collection after deleting large objects.

Here are the *SEARCH/REPLACE* blocks:

invokeai/backend/model_manager/convert_ckpt_to_diffusers.py
```python
<<<<<<< SEARCH
    vae = AutoencoderKL(**vae_config)
    with torch.no_grad():
        vae.load_state_dict(converted_vae_checkpoint)
        vae.to(precision)
        torch.cuda.empty_cache()
=======
    vae = AutoencoderKL(**vae_config)
    with torch.no_grad():
        vae.load_state_dict(converted_vae_checkpoint)
        del converted_vae_checkpoint  # Free memory
        import gc
        gc.collect()
        vae.to(precision)
>>>>>>> REPLACE
```

invokeai/backend/model_manager/convert_ckpt_to_diffusers.py
```python
<<<<<<< SEARCH
    pipe = download_from_original_stable_diffusion_ckpt(Path(checkpoint_path).as_posix(), **kwargs)
    with torch.no_grad():
        pipe = pipe.to(precision)
        torch.cuda.empty_cache()
=======
    pipe = download_from_original_stable_diffusion_ckpt(Path(checkpoint_path).as_posix(), **kwargs)
    with torch.no_grad():
        del kwargs  # Free memory
        import gc
        gc.collect()
        pipe = pipe.to(precision)
>>>>>>> REPLACE
```

invokeai/backend/model_manager/convert_ckpt_to_diffusers.py
```python
<<<<<<< SEARCH
    pipe = download_controlnet_from_original_ckpt(checkpoint_path.as_posix(), **kwargs)
    with torch.no_grad():
        pipe = pipe.to(precision)
        torch.cuda.empty_cache()
=======
    pipe = download_controlnet_from_original_ckpt(checkpoint_path.as_posix(), **kwargs)
    with torch.no_grad():
        del kwargs  # Free memory
        import gc
        gc.collect()
        pipe = pipe.to(precision)
>>>>>>> REPLACE
```
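The two-step recipe in the diff above (drop references with `del`, then run `gc.collect()`) is the core idea, and it targets host RAM specifically: `torch.cuda.empty_cache()` only releases cached GPU allocations and does nothing for CPU memory. A minimal, self-contained sketch of the pattern follows; `TinyModel` and `load_and_free` are hypothetical stand-ins, not names from the InvokeAI codebase.

```python
# Minimal sketch of the del + gc.collect() pattern used in the diff above.
# TinyModel and load_and_free are hypothetical stand-ins for the VAE/pipeline
# objects handled in convert_ckpt_to_diffusers.py.
import gc

import torch


class TinyModel(torch.nn.Module):
    def __init__(self) -> None:
        super().__init__()
        self.linear = torch.nn.Linear(4, 4)


def load_and_free(state_dict: dict) -> TinyModel:
    model = TinyModel()
    with torch.no_grad():
        # load_state_dict copies values into the model's own tensors,
        # so the checkpoint tensors are dead weight once this returns.
        model.load_state_dict(state_dict)
    del state_dict  # drop this frame's reference to the checkpoint
    gc.collect()    # also sweep any lingering reference cycles
    return model


if __name__ == "__main__":
    checkpoint = TinyModel().state_dict()
    model = load_and_free(checkpoint)
    # Note: the del inside load_and_free only removed the *local* binding;
    # host RAM is reclaimed only once the last reference is gone too.
    del checkpoint
    gc.collect()
```

One caveat worth noting: CPython frees a tensor as soon as its reference count reaches zero, so the `del` statements usually do the real work; `gc.collect()` is belt-and-braces that makes the release deterministic when reference cycles keep objects alive.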
Directory listing at this commit:

- app
- assets
- backend
- configs
- frontend
- invocation_api
- version
- __init__.py