fix(nodes): depth anything processor (#5956) (#5961)

We were passing a PIL image when we needed to pass the np image.

Closes #5956

## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [x] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Description

We were passing a PIL image when we needed to pass the np image.

Closes #5956
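
For context, here is a minimal sketch of the corrected preprocessing path. It is adapted from the diff at the end of this page; the `transform` name comes from the existing detector code, while the standalone `preprocess` wrapper and the explicit PIL-to-numpy conversion are illustrative assumptions, not the actual method layout:

```python
import numpy as np
import torch
from PIL import Image


def preprocess(image: Image.Image, transform, device: torch.device) -> torch.Tensor:
    """Illustrative sketch: build the model input tensor from a PIL image.

    `transform` stands in for the detector's resize/normalize callable,
    which takes and returns a dict of numpy arrays, not PIL images.
    """
    np_image = np.array(image, dtype=np.uint8)  # PIL -> numpy (assumed to happen just above the diffed lines)
    np_image = np_image[:, :, ::-1] / 255.0  # RGB -> BGR, scale to [0, 1]
    np_image = transform({"image": np_image})["image"]  # the np array must go in here, not the PIL image
    return torch.from_numpy(np_image).unsqueeze(0).to(device)  # add batch dim, move to device
```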

## Related Tickets & Documents

<!--
For pull requests that relate to or close an issue, please include them
below.

For example, having the text "closes #1234" would connect the current
pull request to issue 1234. And when we merge the pull request, GitHub
will automatically close the issue.
-->

- Related Issue #
- Closes #5956

## QA Instructions, Screenshots, Recordings

Run the Depth Anything processor on an input image; it should now complete without error and produce a depth map.

<!-- 
Please provide steps on how to test changes, any hardware or 
software specifications as well as any other pertinent information. 
-->

## Merge Plan

This PR can be merged when approved

<!--
A merge plan describes how this PR should be handled after it is approved.

Example merge plans:
- "This PR can be merged when approved"
- "This must be squash-merged when approved"
- "DO NOT MERGE - I will rebase and tidy commits before merging"
- "#dev-chat on discord needs to be advised of this change when it is merged"

A merge plan is particularly important for large PRs or PRs that touch the
database in any way.
-->

Commit ed20255abf by blessedcoolant, committed via GitHub on 2024-03-14 14:57:40 +05:30.

```diff
@@ -90,8 +90,8 @@ class DepthAnythingDetector:
         np_image = np_image[:, :, ::-1] / 255.0
         image_height, image_width = np_image.shape[:2]
-        np_image = transform({"image": image})["image"]
-        tensor_image = torch.from_numpy(image).unsqueeze(0).to(choose_torch_device())
+        np_image = transform({"image": np_image})["image"]
+        tensor_image = torch.from_numpy(np_image).unsqueeze(0).to(choose_torch_device())
         with torch.no_grad():
             depth = self.model(tensor_image)
```
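
In short: `torch.from_numpy` only accepts numpy arrays, and the transform presumably expects the same, so routing the original PIL `image` through these two calls is what broke the processor; both calls now receive the converted `np_image`.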