
Reimplement support for rknn detector #11365

Merged: 11 commits into blakeblackshear:dev on May 21, 2024

Conversation

MarcA711
Contributor

This PR is not complete. I just want to get some early feedback.

One thing that bothers me is that some parts (especially the prerequisites and setup parts) of the video processing and object detection docs overlap.
Compare https://github.com/MarcA711/frigate-rockchip/blob/1e1a11ae2bffab188e788e742c9ef0f801c7d002/docs/docs/configuration/hardware_acceleration.md?plain=1#L367-L397
and https://github.com/MarcA711/frigate-rockchip/blob/1e1a11ae2bffab188e788e742c9ef0f801c7d002/docs/docs/configuration/object_detectors.md?plain=1#L317-L346

Any suggestions on how to handle this better?

Generally, it would be better for the user if the configuration for both video processing and object detection were described in one place. Right now, the user has to find two separate pages. But I think this is again Rockchip-specific and difficult to fix without breaking other things.

Moreover, has it been considered to move the post-processing of all detectors into one file? As far as I know, every detector implements its own post-processing for each supported model. However, post-processing for the same model type (yolo-nas, yolox, etc.) should be identical, so there is some duplicated code.

@NickM-27
Sponsor Collaborator

If there are commonalities, then you could put that in the installation section of the docs and have the hardware acceleration and object detector pages link to it as a prerequisite for the specific setup.

As far as post-processing goes, it could make sense to have a util.py in detectors that handles this.

@MarcA711
Contributor Author

as far as post processing goes, it could make sense to have a util.py in detectors that handles this

I agree. I will try to implement post-processing for all detectors. I will use the model_type config option (under model:) that is already implemented to handle this. But I will also include a way to return the raw detections (similar to how it's already done), so that detectors can still handle post-processing differently (e.g. using OpenCL on the GPU).
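
To make the idea concrete, a rough sketch of what a shared post-processing helper keyed on model_type could look like (the module path, function names, and detection layout here are assumptions for illustration, not the final implementation):

```python
# frigate/detectors/util.py -- hypothetical sketch, not the actual implementation
import numpy as np


def post_process_yolonas(predictions: np.ndarray, score_threshold: float = 0.4) -> np.ndarray:
    """Assumed raw layout per row: [x1, y1, x2, y2, score, class_id].
    Returns Frigate-style rows [class_id, score, y1, x1, y2, x2]."""
    detections = []
    for x1, y1, x2, y2, score, class_id in predictions:
        if score < score_threshold:
            continue
        detections.append([class_id, score, y1, x1, y2, x2])
    return np.array(detections, dtype=np.float32)


# one entry per model_type supported by the shared helper
POST_PROCESSORS = {
    "yolonas": post_process_yolonas,
    # "yolox": post_process_yolox, ...
}


def post_process(model_type: str, predictions: np.ndarray) -> np.ndarray:
    """Dispatch on the model_type config option. Unknown types are returned
    unchanged so a detector can still do its own post-processing
    (e.g. OpenCL on the GPU)."""
    processor = POST_PROCESSORS.get(model_type)
    if processor is None:
        return predictions
    return processor(predictions)
```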

@MarcA711
Contributor Author

This is now ready to merge

@blakeblackshear
Owner

I think it's best to add some information about the license on your repo where you published the models. Also, probably best to note that here as well. We wouldn't want someone using Frigate for commercial purposes to assume that these models are free for commercial use.

@blakeblackshear
Owner

Lastly, how painful would it be to update this so the models are downloaded at runtime? That would prevent Frigate from actually redistributing the models itself.

@MarcA711
Contributor Author

Good idea. I will add notes to my repo and to Frigate, and I will exclude the models from the image. Maybe I will add some other models (yolox, perhaps) to the image later.

@MarcA711
Contributor Author

I added a note to my repo, to the docs, and to the logger on each start, so that the user knows about the license situation.

I also excluded the models from the image.

@MarcA711
Contributor Author

I see that Frigate 0.14 is coming out soon. Are there any plans for a beta phase? When should I finish my bigger changes so that they make it into that version?

@NickM-27
Sponsor Collaborator

I don't know that I would call it "soon". There should be a beta within a week or two, and I am sure there will be at least a few betas to follow that.

@MarcA711
Contributor Author

This PR is ready to merge from my side.

blakeblackshear merged commit e91f3d8 into blakeblackshear:dev on May 21, 2024
10 checks passed
@tenten8401

tenten8401 commented Jun 5, 2024

@blakeblackshear would you be open to shipping a yolo-nas model in the image if it were trained using no pre-trained weights? According to many of their GitHub responses, you aren't bound by the hostile license if you train your own model using only their architecture.

I ask because I already have a super-gradients Python environment set up with an RTX 4090, and I wouldn't mind throwing a few days of compute at training a model for Frigate, because it seems like a cool project :)
I'm already training a yolo-nas pose model with no pre-trained weights on the COCO dataset for a VR full-body tracker, because of the licensing.

Apache 2.0 could apply if a court determines that the architecture is an input to the process rather than part of the tool: Deci-AI/super-gradients#1993 (comment)

...or it could be any license, if the architecture is deemed part of the tool used to create the model and not an input to the process. It's a large legal gray area, but the worst outcome here is that the license is Apache 2.0, which is pretty unrestrictive.

@blakeblackshear
Owner

I would be open to it. I have a 4090 as well and plan to look at supporting yolo-nas for Frigate+ too.

@tenten8401

tenten8401 commented Jun 5, 2024

Sweet, I'll see if I can train up a model in a few days when mine is done :p

What size do you think would be the best to use in Frigate?

Also, FYI, if you want to train models on your machine as well: when you install super-gradients, you need to use master, since both the stable PyPI release and the GitHub release have broken dependencies.
pip install git+https://github.com/Deci-AI/super-gradients.git@master, or clone the git repo and run pip install -e . inside the directory.
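
For anyone following along, a minimal sketch of training yolo-nas-s from scratch (pretrained_weights=None) on COCO with super-gradients might look roughly like this; the data paths, batch size, and training hyperparameters below are placeholders, not a tuned recipe:

```python
# Rough sketch: train yolo-nas-s with no pre-trained weights on COCO 2017.
# Paths, batch size, and hyperparameters are placeholders.
from super_gradients.training import Trainer, models
from super_gradients.common.object_names import Models
from super_gradients.training.dataloaders.dataloaders import coco2017_train, coco2017_val
from super_gradients.training.losses import PPYoloELoss
from super_gradients.training.metrics import DetectionMetrics_050
from super_gradients.training.models.detection_models.pp_yolo_e import PPYoloEPostPredictionCallback

trainer = Trainer(experiment_name="yolo_nas_s_scratch", ckpt_root_dir="./checkpoints")

# pretrained_weights=None: no Deci-released weights, so none of their license terms
model = models.get(Models.YOLO_NAS_S, num_classes=80, pretrained_weights=None)

train_loader = coco2017_train(dataset_params={"data_dir": "/data/coco"},
                              dataloader_params={"batch_size": 16})
valid_loader = coco2017_val(dataset_params={"data_dir": "/data/coco"},
                            dataloader_params={"batch_size": 16})

train_params = {
    "max_epochs": 300,
    "initial_lr": 5e-4,
    "lr_mode": "cosine",
    "optimizer": "Adam",
    "mixed_precision": True,
    "loss": PPYoloELoss(use_static_assigner=False, num_classes=80, reg_max=16),
    "valid_metrics_list": [
        DetectionMetrics_050(
            score_thres=0.1,
            top_k_predictions=300,
            num_cls=80,
            normalize_targets=True,
            post_prediction_callback=PPYoloEPostPredictionCallback(
                score_threshold=0.01, nms_top_k=1000,
                max_predictions=300, nms_threshold=0.7,
            ),
        )
    ],
    "metric_to_watch": "mAP@0.50",
}

trainer.train(model=model, training_params=train_params,
              train_loader=train_loader, valid_loader=valid_loader)
```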

@blakeblackshear
Owner

I haven't looked at what's possible yet, but I was hoping to try a quantized 320x320 version of the small model. Since Frigate crops the image ahead of time, smaller objects aren't as important. Hard to say without comparing real-world performance of a few options.
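
As a rough sketch of that direction (checkpoint path and preparation step are assumptions), exporting a trained checkpoint to ONNX with a 320x320 input would be the first step before any platform-specific quantization:

```python
# Hypothetical sketch: export a yolo-nas-s checkpoint to ONNX at 320x320,
# as a precursor to platform-specific quantization (e.g. rknn-toolkit2).
import torch
from super_gradients.training import models
from super_gradients.common.object_names import Models

model = models.get(Models.YOLO_NAS_S, num_classes=80,
                   checkpoint_path="./checkpoints/yolo_nas_s_scratch/ckpt_best.pth")
model.eval()
# super-gradients models may need conversion prep (fusing reparam blocks) before export
model.prep_model_for_conversion(input_size=(1, 3, 320, 320))

dummy = torch.randn(1, 3, 320, 320)
torch.onnx.export(model, dummy, "yolo_nas_s_320.onnx", opset_version=13)
```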

@MarcA711
Contributor Author

MarcA711 commented Jun 6, 2024

Hey @tenten8401,
this would be really great!
I don't know if you are specifically interested in yolo-nas on Rockchip devices or if you just posted your idea here. I use the pre-trained model from super-gradients, but I'm worried about the license. I would be interested in using your model for the rknn detector if you can provide one.

You might consider hosting a repo and uploading your models + weights there. I think other people would be interested in using a pre-trained yolo-nas model without these license terms as well.
