
Using self compiled libfreenect2 with CUDA support? #47

Open
oneandonlyoddo opened this issue Mar 4, 2020 · 4 comments

Comments


oneandonlyoddo commented Mar 4, 2020

Hey,
Thanks for this awesome addon. I built libfreenect2 from source on my machine with CUDA support. I was wondering how would I go for using it instead of the precompiled lib that is coming with this addon. Is it as simple as replacing the /libs/libfreenect2 folder? Or is there more that needs to be done?

I am trying to get a little bit more performance and stability out of this. I keep getting BSODs on one of my test machines when using multiple Kinects.

ofTheo (Owner) commented Mar 4, 2020 via email

ofTheo (Owner) commented Mar 4, 2020 via email

oneandonlyoddo (Author) commented

Hey Theo,
I didn't actually manage to make it work so far and it slipped down on my priority list because my current setup runs very well. But I'll pick this up when there is time.

Regarding the blue screens / BSODs, and for overall transparency, I am going to drop some information here, because it was a pain for me to get to this point. This isn't all directly ofxKinectV2-related, but I hope it is okay to leave that info here in case somebody else is struggling as well.

I built a positional tracking system with 4 Kinects. Our challenge was the overall area of about 10m x 10m, a relatively low ceiling height, and our own installation in the way. We tried traditional cameras and looked into lidars. Camera images were too distorted and extremely hard to merge. Lidar would be tricky with people casting "shadows", and it wouldn't give us enough information (like a person's radius including stretched-out arms).

Our solution:

  1. Put one Kinect in every corner and overlay/merge their point clouds.
  2. Create a digital top-view camera and render this merged scene into a low-res FBO.
  3. Render points below a certain height threshold as black and points above it as white.
  4. Blur and threshold that FBO, then do blob detection / contour finding in OpenCV.
  5. Pair that with simple centroid tracking.
  6. Voilà: distortion-free tracking.

The problems along the way:
Bandwidth.
You will need a PCIe card with a separate USB controller chip per bus. We used the StarTech PEXUSB3S44V, which works fine, BUT you will need a CPU with enough PCIe lanes. Our original entry-level i7 only had 16 lanes; with a dedicated GPU in the system, there isn't a lot of bandwidth left. Our current CPU has 40 lanes and is doing a great job. The other caveat is the design of the PCIe card: the USB ports are vertical instead of horizontal, which stupidly can make it impossible to plug in devices in some PC cases.

Extension cables.
Extending the Kinect is very tricky, and we had to deal with several headaches. Some 20m active USB3 extension cables would work when plugged directly into the motherboard but not on the StarTech PCIe card. They also dropped out completely when routed close to other cables, like speaker cables. We found some Lindy 30m fibre extension cables which work reasonably well, but they are expensive and fragile.

Bluescreens / BSOD.
Running 3 Kinects off the StarTech card worked fine most of the time, but adding a 4th made the whole system very unstable, and it would regularly crash with a MULTIPLE_IRP_COMPLETE_REQUESTS blue screen. I wasn't really able to figure this out. It would happen with my code, with the example from this repo, and with libfreenect2's Protonect.exe, on two different machines. My current workaround is a tiny app per Kinect that only grabs the point cloud data and sends it locally via ZeroMQ to the main app (for a basic example see: ofKinectPCLReceive and ofKinectPCLPublish). There might be a neater way to encapsulate the Kinect update into separate threads, which might help, but this solution is working for me right now and naturally makes better use of our CPU's multiple cores, so performance went way up.

As I said this isn't entirely related to this addon but I hope somebody who is looking for help will find this.

ofTheo (Owner) commented Mar 12, 2020

Thanks for this detailed report @oneandonlyoddo !
It's funny: we're actually running a very similar setup right now, with 3x Kinect V2s over extenders and the same card, with the annoying vertical USB ports :)

One thing we did, which we recently pushed to the addon, is make it possible to completely disable the color stream while still being able to get registered pixels (for point cloud approaches).

See: 3912839

This helped a lot for us to reduce bandwidth and to make the system more stable.
