Hi there, love the project! I noticed a small typo when going over the
install process.
When copying the automated install instructions from the docs into a
terminal, the line that installs the Python packages failed because it
was missing the `-y` flag.
Seems like this is the only change needed for the existing inpaint code
to work as a node. Kyle said on Discord that inpaint shouldn't be a
node, so feel free to reject this if that code is going away soon.
# Intro
This commit adds `invokeai.backend.util.logging`, which provides support
for formatted console and logfile messages that follow the status
reporting conventions of earlier InvokeAI versions:
```
### A critical error
*** A non-fatal error
** A warning
>> Informational message
| Debugging message
```
Internally, the invokeai logging module creates a new default logger
named "invokeai" so that its logging does not interfere with other
modules' use of the vanilla logging module. So `logging.error("foo")`
will go through the regular logging path and not add InvokeAI's message
decorations, while the same call on the "invokeai" logger (`ialog` in
the sketch below) will add them.
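To make the separation concrete, here is a minimal sketch of the idea
(not the actual module internals; the `InvokeAIFormatter` class and its
level-to-decoration mapping are assumptions based on the table above):
```
import logging

# Hypothetical formatter approximating the decoration table above.
class InvokeAIFormatter(logging.Formatter):
    FORMATS = {
        logging.DEBUG: "| %(message)s",
        logging.INFO: ">> %(message)s",
        logging.WARNING: "** %(message)s",
        logging.ERROR: "*** %(message)s",
        logging.CRITICAL: "### %(message)s",
    }

    def format(self, record):
        fmt = self.FORMATS.get(record.levelno, "%(message)s")
        return logging.Formatter(fmt).format(record)

handler = logging.StreamHandler()
handler.setFormatter(InvokeAIFormatter())

ialog = logging.getLogger("invokeai")  # dedicated, named logger
ialog.addHandler(handler)
ialog.setLevel(logging.DEBUG)
ialog.propagate = False                # keep decorated output off the root logger

logging.error("foo")  # root logger: plain output, no decorations
ialog.error("foo")    # invokeai logger: "*** foo"
```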
# Usage
This is a thin wrapper around the standard Python logging module. It can
be used in several ways:
## Module-level logging style
This style logs everything through a single default logging object and
is identical to using Python's `logging` module. The commonly used
module-level logging functions are implemented as simple pass-throughs
to `logging`:
```
import logging
import invokeai.backend.util.logging as logger

logger.debug('this is a debugging message')
logger.info('this is an informational message')
logger.log(logging.CRITICAL, 'get out of dodge')  # level is a positional argument
logger.disable(level=logging.INFO)
logger.basicConfig(filename='/var/log/invokeai.log')
logger.error('this will be logged to console and to invokeai.log')
```
Internally these functions all go through a custom logging object named
"invokeai". You can access it to perform additional customization in
either of these ways:
```
logger = logger.getLogger()            # returns the default "invokeai" logger
logger = logger.getLogger('invokeai')  # equivalent: fetch it by name
```
## Object-oriented style
For more control, the logging module's object-oriented logging style is
also supported. The API is identical to vanilla logging usage; the only
thing that has changed is that the `getLogger()` method adds a custom
formatter to the log messages.
```
import logging
from invokeai.backend.util.logging import InvokeAILogger
logger = InvokeAILogger.getLogger(__name__)
fh = logging.FileHandler('/var/invokeai.log')
logger.addHandler(fh)
logger.critical('this will be logged to both the console and the log file')
```
## Within the nodes API
From within the nodes API, the logger module is stored in the `logger`
slot of InvocationServices during dependency initialization. For
example, in a router, the idiom is:
```
from ..dependencies import ApiDependencies
logger = ApiDependencies.invoker.services.logger
logger.warning('uh oh')
```
Currently, to change the logger used by the API, one must change the
logging module passed to `ApiDependencies.initialize()` in `api_app.py`.
Eventually this will be replaced with a way to select the preferred
logging module in the configuration file (dependent on the merging of
PR #3221).
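For illustration, that wiring might look roughly like this (a
hypothetical sketch: the import path, the elided arguments, and the
`logger` keyword are assumptions, not the actual `initialize()`
signature):
```
# Hypothetical sketch of the wiring in api_app.py; the elided arguments and
# the keyword name are assumptions, not initialize()'s actual signature.
import invokeai.backend.util.logging as logger
from .dependencies import ApiDependencies

ApiDependencies.initialize(..., logger=logger)  # pass a different module to swap loggers
```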
- I've sorted out the issues that made *not* persisting troublesome; those fixes will be rolled out with canvas
- Also realized that persisting gallery images very quickly fills up localStorage, so we can't really do it anyway
Vastly improves the gallery performance when many images are loaded.
- `react-virtuoso` to render the virtualized list
- `overlayscrollbars` for the scrollbar
On hyperthreaded CPUs we get two threads operating on the queue by
default on each core, which causes two threads to process queue items.
This results in PyTorch errors and sometimes generates garbage output.
Locking the queue to a single thread makes sense because we are bound
by the number of GPUs in the system, not by CPU cores; to parallelize
across GPUs we should just start multiple processors (and use async
instead of threading).
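As a generic illustration of the single-consumer pattern described here
(a sketch with invented names, not the actual InvokeAI queue code):
```
import queue
import threading

work_queue = queue.Queue()

def process(item):
    print(f"processing {item}")  # stand-in for the real invocation work

def worker():
    while True:
        item = work_queue.get()
        if item is None:          # sentinel tells the worker to shut down
            break
        process(item)
        work_queue.task_done()

# Exactly one worker thread, regardless of core/hyperthread count, because
# throughput is bounded by the GPU rather than the CPU.
threading.Thread(target=worker, daemon=True).start()
```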
Fixes #3289