Noisy evaluate_*.py Results #7

Open
richpaulyim opened this issue Apr 29, 2020 · 0 comments

Hello Jonas,

I am very interested in your work and have been trying to reproduce the results from your learned-primal-dual GitHub repo for your paper, "Learned Primal-Dual Reconstruction."

I understand the code is a bit old, and I have been attempting to train and evaluate it to produce results as good as yours, especially the images in Figure 3 on page 8 of the paper. Since there is no requirements.txt pinning the exact library versions used for the paper, I've taken the following steps to get the code running with up-to-date libraries:

  • I've updated the TensorFlow 1 syntax to TensorFlow 2 using TensorFlow's upgrade script from the shell (see the command sketched after this list). I've done this for unet_reference.py, learned_primal_dual.py and primal_dual.py.
  • I've reshaped the data towards the end of each evaluate_*.py script, bringing primal_values_result to the dimensions expected by the space.element() call (see the reshape sketch below).
  • I have the required adler package installed and recognized in my environment path.
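
For reference, the conversion step uses TensorFlow 2's tf_upgrade_v2 script; I ran it roughly like this (the output file name is my own choice):

```
tf_upgrade_v2 --infile learned_primal_dual.py --outfile learned_primal_dual_tf2.py
```

And the reshape I added near the end of the evaluate_*.py scripts looks roughly like the sketch below. primal_values_result and space come from the original scripts; the concrete shapes in the comments are my assumptions about what the TF2 run returns, not values taken from the repo:

```python
import numpy as np

# primal_values_result comes back from session.run(...) with singleton
# batch/channel axes (e.g. shape (1, 128, 128, 1)), while space.element()
# expects an array matching the ODL space shape (e.g. (128, 128)).
x_result = space.element(np.squeeze(primal_values_result))
```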

At first I ran the code with the original parameters, which were uniformly set to 100,000 iterations with a learning rate of 0.001, but the loss reported every k iterations blew up to NaN. So I adjusted the code to run for 500,000 iterations with a learning rate of 0.0001. This kept the validation error stable throughout training, but it produced terrible images.
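
Concretely, the change amounts to something like this; the variable names are my guesses at the training script's identifiers, not verified against the repo:

```python
# Hyper-parameter adjustment described above (names are assumptions,
# not the repo's exact identifiers).
maximum_steps = 500000          # originally 100000
starter_learning_rate = 1e-4    # originally 1e-3
```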

I've attached the related images below. I've been training the model on an NVIDIA RTX 2080 Ti, considerably better than the GTX 1080 Ti used in the paper, but it still takes time to train and evaluate the results.

Have you revisited the code in the 10 months since the latest commit? I'm terribly interested in reproducing these awesome results for myself, but I have completely failed to do so over the past few weeks. Do you have a requirements.txt file for this specific repo, so that I can run the correct library versions without any modifications to the original code?

Thank you,

Richard Yim

Attachments: ellipses.zip, x, x_eval_0, x_9

I can't run pip freeze to show exactly which versions are in the default Python environment because I don't have admin privileges on the workstation I'm using, but I am running TF2 and the latest scipy-related libraries.
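
If it helps, here is a minimal sketch of how I can report versions from inside Python instead (the module list is my assumption about what the scripts rely on):

```python
# Print versions of the main dependencies without needing pip or admin
# rights; the set of modules is an assumption, not taken from the repo.
import numpy as np
import scipy
import tensorflow as tf
import odl

for mod in (np, scipy, tf, odl):
    print(mod.__name__, mod.__version__)
```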
