Commit Graph

4084 Commits

Author SHA1 Message Date
Lincoln Stein
4627910c5d added a wrapper model_manager_service and model events 2023-05-11 00:09:19 -04:00
Lincoln Stein
fa6a580452 merge with main 2023-05-10 00:03:32 -04:00
Lincoln Stein
99c692f397 check that model name matches format 2023-05-09 23:46:59 -04:00
Lincoln Stein
3d85e769ce clean up ckpt handling
- remove legacy ckpt loading code from model_cache
- add placeholders for lora and textual inversion model loading
2023-05-09 22:44:58 -04:00
Lincoln Stein
9cb962cad7 ckpt model conversion now done in ModelCache 2023-05-08 23:39:44 -04:00
Mary Hipp
853c83d0c2 surface detail field for 403 errors 2023-05-09 12:40:19 +10:00
Lincoln Stein
a108155544 added StAlKeR7779's great model size calculating routine 2023-05-08 21:47:03 -04:00
Mary Hipp
1809990ed4 if backend returns an error, show it in toast 2023-05-09 11:09:36 +10:00
Eugene
79d49853d2 use websocket transport first for socket.io 2023-05-09 11:01:02 +10:00
Lincoln Stein
1f608d3743
add v2.3 branch to push trigger (#3363)
Update the push trigger with the branch that should deploy the docs, and bring over the workflow updates from the v2.3 branch:

- remove main and development branch from trigger
  - they would fail without the updated toml
- cache pip environment
- update install method (`pip install ".[docs]"`)
2023-05-08 16:26:06 -04:00
mauwii
df024dd982
bring changes from v2.3 branch over
- remove main and development branch from trigger
  - they would fail without the updated toml
- cache pip environment
- update install method
2023-05-08 21:50:00 +02:00
mauwii
45da85765c
add v2.3 branch to push trigger 2023-05-08 21:10:20 +02:00
Lincoln Stein
c15b49c805 implement StAlKeR7779's requested API for fetching submodels 2023-05-07 23:18:17 -04:00
Lincoln Stein
fd63e36822 optimize subfolder so that it returns submodel if parent is in RAM 2023-05-07 21:39:11 -04:00
Lincoln Stein
4649920074 adjust t2i to work with new model structure 2023-05-07 19:06:49 -04:00
Lincoln Stein
667171ed90 cap model cache size using bytes, not # models 2023-05-07 18:07:28 -04:00
blessedcoolant
8618e41b32
Deploy documentation from v2.3 branch rather than main (#3356)
This PR instructs GitHub to deploy documentation pages from the v2.3
branch.
2023-05-07 21:43:44 +12:00
blessedcoolant
4687f94141
Merge branch 'main' into actions/mkdocs-deploy 2023-05-07 21:43:18 +12:00
psychedelicious
440912dcff feat(ui): make base log level debug 2023-05-07 15:36:37 +10:00
psychedelicious
8b87a26e7e feat(ui): support collect nodes 2023-05-07 15:36:37 +10:00
Lincoln Stein
44ae93df3e Deploy documentation from v2.3 branch rather than main 2023-05-06 23:56:04 -04:00
Lincoln Stein
647ffb2a0f defined abstract baseclass for model manager service 2023-05-06 22:41:19 -04:00
Lincoln Stein
05a27bda5e generalize model loading support, include loras/embeds 2023-05-06 15:58:44 -04:00
Lincoln Stein
2b213da967
add -y to the automated install instructions (#3349)
Hi there, love the project! I noticed a small issue when going over the
install process.

When copying the automated install instructions from the docs into a
terminal, the line that installs the Python packages failed because it
was missing the `-y` flag.
2023-05-06 13:34:37 -04:00
Lincoln Stein
e91e1eb9aa
Merge branch 'main' into patch-1 2023-05-06 13:34:12 -04:00
Lincoln Stein
b24129fb3e
Fix logger namespace clash in web server (#3344)
This PR fixes a bug that appeared in the legacy web server after the
logging PR was merged.

closes #3343
2023-05-06 08:35:13 -04:00
Lincoln Stein
350b1421bb
Merge branch 'main' into lstein/bugfix/logger-namespace 2023-05-06 08:14:44 -04:00
Lincoln Stein
a8cfa3565c Merge branch 'lstein/new-model-manager' of github.com:invoke-ai/InvokeAI into lstein/new-model-manager 2023-05-06 08:14:15 -04:00
Lincoln Stein
e0214a32bc mostly ported to new manager API; needs testing 2023-05-06 00:44:12 -04:00
Steve Martinelli
f01c79a94f
add -y to the automated install instructions
When copying the automated install instructions from the docs into a terminal, the line that installs the Python packages failed because it was missing the `-y` flag.
2023-05-05 21:28:00 -04:00
blessedcoolant
463f6352ce
Add compel node and conditioning field type (#3265)
Done as stated in the title, but I still need to test (and understand) how the
CLI works, as it previously used a single prompt and now takes separate positive and negative prompts.
2023-05-06 13:05:04 +12:00
Lincoln Stein
af8c7c7d29 model manager rewritten to use model_cache; API changed! 2023-05-05 19:32:28 -04:00
StAlKeR7779
a80fe05e23 Rename compel node 2023-05-05 21:30:16 +03:00
StAlKeR7779
58d7833c5c Review changes 2023-05-05 21:09:29 +03:00
StAlKeR7779
5012f61599 Separate conditionings back to positive and negative 2023-05-05 15:47:51 +03:00
Lincoln Stein
a4e36bc02a when model is forcibly moved into RAM update loaded_models set 2023-05-04 23:28:03 -04:00
Lincoln Stein
2e9bec15e7
Merge branch 'main' into lstein/new-model-manager 2023-05-04 23:19:38 -04:00
Lincoln Stein
68bc0112fa implement lazy GPU offloading and ref counting 2023-05-04 23:15:32 -04:00
blessedcoolant
85c33823c3
Merge branch 'main' into feat/compel_node 2023-05-05 14:41:45 +12:00
blessedcoolant
c83a112669
Fix inpaint node (#3284)
Seems like this is the only change needed for the existing inpaint code
to work as a node. Kyle said on Discord that inpaint shouldn't be a
node, so feel free to just reject this if this code is going to be gone
soon.
2023-05-05 14:41:13 +12:00
psychedelicious
e04ada1319
Merge branch 'main' into patch-1 2023-05-05 10:38:45 +10:00
Lincoln Stein
d866dcb3d2 close #3343 2023-05-04 20:30:59 -04:00
StAlKeR7779
81ec476f3a Revert seed field addition 2023-05-04 21:50:40 +03:00
StAlKeR7779
1e6adf0a06 Fix default graph and test 2023-05-04 21:14:31 +03:00
StAlKeR7779
7d221e2518 Combine conditioning into one field (better fits multiple-type conditioning like perp-neg) 2023-05-04 20:14:22 +03:00
Lincoln Stein
a273bdbdc1
Merge branch 'main' into lstein/new-model-manager 2023-05-03 18:09:29 -04:00
StAlKeR7779
56d3cbead0
Merge branch 'main' into feat/compel_node 2023-05-04 00:28:33 +03:00
Lincoln Stein
5e8c97f1ba
[Enhancement] Regularize logging messages (#3176)
# Intro

This commit adds invokeai.backend.util.logging, which provides support
for formatted console and logfile messages that follow the status
reporting conventions of earlier InvokeAI versions:

```
 ### A critical error
 *** A non-fatal error
 ** A warning
  >> Informational message
        | Debugging message
```

Internally, the invokeai logging module creates a new default logger
named "invokeai" so that its logging does not interfere with other
modules' use of the vanilla logging module. So `logging.error("foo")`
will go through the regular logging path and not add InvokeAI's
informational message decorations, while `ialog.error("foo")` (where
`ialog` is the invokeai logging module imported under that alias) will add
the decorations.
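
For example, a minimal contrast sketch (the decorated output shown in the comments is an assumption based on the conventions above):

```
import logging

import invokeai.backend.util.logging as ialog

logging.error('foo')  # stdlib path: prints an undecorated message
ialog.error('foo')    # invokeai path: decorated per the conventions above, e.g. *** foo
```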
    
# Usage:

This is a thin wrapper around the standard Python logging module. It can
be used in several ways:


## Module-level logging style
 
This style logs everything through a single default logging object and
is identical to using Python's `logging` module. The commonly used
module-level logging functions are implemented as simple pass-throughs to
logging:
    
```
import logging  # needed for the level constants used below

import invokeai.backend.util.logging as logger

logger.debug('this is a debugging message')
logger.info('this is an informational message')
logger.log(logging.CRITICAL, 'get out of dodge')

logger.disable(level=logging.INFO)
logger.basicConfig(filename='/var/log/invokeai.log')
logger.error('this will be logged to console and to invokeai.log')
```

Internally these functions all go through a custom logging object named
"invokeai". You can access it to perform additional customization in
either of these ways:

```
logger = logger.getLogger()            # returns the default "invokeai" logger
logger = logger.getLogger('invokeai')  # equivalent: fetch it by name
```
    
## Object-oriented style

For more control, the logging module's object-oriented logging style is
also supported. The API is identical to the vanilla logging usage. In
fact, the only thing that has changed is that the getLogger() method
adds a custom formatter to the log messages.
    
```
import logging
from invokeai.backend.util.logging import InvokeAILogger

logger = InvokeAILogger.getLogger(__name__)
fh = logging.FileHandler('/var/invokeai.log')
logger.addHandler(fh)
logger.critical('this will be logged to both the console and the log file')
```
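
For illustration, here is a minimal sketch of what such a level-to-prefix formatter might look like; the class name, format string, and wiring are assumptions drawn from the conventions table above, not the actual InvokeAI implementation:

```
import logging

class PrefixFormatter(logging.Formatter):
    # Hypothetical mapping of levels to the InvokeAI-style decorations above.
    PREFIXES = {
        logging.CRITICAL: '### ',
        logging.ERROR:    '*** ',
        logging.WARNING:  '** ',
        logging.INFO:     '>> ',
        logging.DEBUG:    '   | ',
    }

    def format(self, record):
        # Format the record normally, then prepend the decoration for its level.
        return self.PREFIXES.get(record.levelno, '') + super().format(record)

handler = logging.StreamHandler()
handler.setFormatter(PrefixFormatter('%(message)s'))
demo = logging.getLogger('demo')
demo.addHandler(handler)
demo.setLevel(logging.DEBUG)
demo.warning('a warning')  # prints: ** a warning
```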

## Within the nodes API

From within the nodes API, the logger module is stored in the `logger`
slot of InvocationServices during dependency initialization. For
example, in a router, the idiom is:

```
from ..dependencies import ApiDependencies
logger = ApiDependencies.invoker.services.logger
logger.warning('uh oh')
```

Currently, to change the logger used by the API, one must change the
logging module passed to `ApiDependencies.initialize()` in `api_app.py`,
as sketched below. However, this will eventually be replaced with a
method to select the preferred logging module using the configuration
file (dependent on merging of PR #3221).
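
A sketch of that swap (the keyword name is an assumption; only the general mechanism is described above):

```
# api_app.py -- hypothetical sketch; the real initialize() signature may differ
import invokeai.backend.util.logging as logger
from ..dependencies import ApiDependencies

# Other initialize() arguments are elided; only the logger swap is shown.
ApiDependencies.initialize(logger=logger)
```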
2023-05-03 15:00:05 -04:00
Lincoln Stein
4687ad4ed6
Merge branch 'main' into enhance/invokeai-logs 2023-05-03 13:36:06 -04:00
Lincoln Stein
8a0ec0fa0f
Merge branch 'main' into lstein/new-model-manager 2023-05-03 13:30:50 -04:00