Thanks for open-sourcing this great piece of work!
I am trying to run your code and ran into a question. I see that in main.py, when loss2 (photometric_loss) is computed, you use rgb_curr_ (the image of the current frame) and warped_ (the neighboring frame's image as warped/predicted from the current frame), and apply a mask before computing the photometric_loss (I've sketched my understanding in code after the questions below). My questions are:
1. Is my understanding of rgb_curr_ and warped_ correct? If not, I'd appreciate a correction.
2. Why is the photometric_loss computed between the current frame and the predicted (warped) neighboring frame, rather than between the current frame and a predicted current frame?
3. In your paper, I saw that RGB images are used as guidance for depth prediction. Is using the RGB guidance the only way to compute the photometric_loss? If not, do you have any advice or suggestions?
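To make my understanding concrete, here is a minimal sketch of how I currently read the masked photometric loss. The tensor names `rgb_curr_`, `warped_`, and `mask` follow main.py, but the L1 form and the reduction here are my assumptions, not necessarily what the repo actually does:

```python
import torch

def photometric_loss(rgb_curr, warped, mask, eps=1e-7):
    """My reading of loss2: masked photometric loss between the current
    frame and the neighboring frame warped into the current view.

    rgb_curr: (B, 3, H, W) current-frame image
    warped:   (B, 3, H, W) neighboring frame warped into the current view
    mask:     (B, 1, H, W) 1 where the warp is valid, 0 elsewhere
    """
    # Per-pixel absolute difference, zeroed out where the warp is invalid
    diff = (rgb_curr - warped).abs() * mask
    # Average only over valid pixels; the mask broadcasts over the 3
    # color channels, and eps guards against an all-zero mask
    return diff.sum() / (mask.sum() * rgb_curr.shape[1] + eps)
```

Is this roughly what loss2 is doing?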
I am new to this area, so I may have misunderstood something; please bear with me.
Thanks for the help!