
Wrong order of arguments in square_loss? #18

Open
yuriy-babenko opened this issue Aug 7, 2024 · 2 comments

Comments

@yuriy-babenko

yuriy-babenko commented Aug 7, 2024

https://github.com/t-kalinowski/deep-learning-with-R-2nd-edition-code/blob/5d666f93d52446511a8a8e4eb739eba1c0ffd199/ch03.R#L266C1-L270C5

Can it be that the order of arguments in the function is wrong?

loss <- square_loss(predictions, targets)

We defined square_loss previously as:

square_loss <- function(targets, predictions) {
  per_sample_losses <- (targets - predictions)^2
  mean(per_sample_losses)
}

In the text it says "the training loss .... stabilized around 0.025", which I only get once I change the order of the arguments: loss <- square_loss(targets, predictions)

@t-kalinowski
Owner

Good catch! The order of the arguments in the call is different from the order expected by the function signature. For some loss functions this matters, but not for MSE: because the difference is squared, the value is unchanged regardless of the order in which you supply the arguments. You can work through it on paper to confirm, or do a quick check in the R REPL:

x <- runif(100)
y <- runif(100)
# (x - y) and (y - x) differ only in sign, so their squares are identical
all((x - y)^2 == (y - x)^2)  # TRUE
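To make the "for some loss functions, this matters" point concrete, here is a minimal sketch (not from the book; mape is a name introduced just for this illustration) of an asymmetric loss, mean absolute percentage error, where swapping targets and predictions does change the result because the denominator depends on which argument is treated as the target:

```r
# Mean absolute percentage error: NOT symmetric in its arguments,
# since the error is scaled by the targets.
mape <- function(targets, predictions) {
  mean(abs((targets - predictions) / targets))
}

targets     <- c(1, 2, 4)
predictions <- c(2, 3, 5)

mape(targets, predictions)  # mean(1, 1/2, 1/4) ≈ 0.583
mape(predictions, targets)  # mean(1/2, 1/3, 1/5) ≈ 0.344
```

So the call-site swap is harmless for square_loss specifically, but it is worth fixing anyway so the code stays correct if the loss is ever changed.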

@yuriy-babenko
Author

Ha-ha. Indeed, as this is a squared function! Thanks for the explanation!
