DISCUSSION: comparison for uninitialized forecast #571
aaronspring started this conversation in General
Replies: 3 comments
- also see: https://gist.github.com/aaronspring/033fc98c2fc0edeb910dabd52f1532fe
- Still not decided how to do this for perfect model
- But closed for hindcast
- Here I am thinking out loud about uninitialized skill. Does the comparison argument make sense here?
Why am I thinking about this now? I started comparing monthly skill from other MPIESM initialized ensembles.
So far I have been using it with a comparison keyword: I first construct an uninitialized ensemble and then pipe that into the same machinery as I did for initialized skill.
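A minimal sketch of that pipeline, assuming climpred's tutorial datasets and the `PerfectModelEnsemble.generate_uninitialized()` / `verify()` API (keyword names may differ across versions):

```python
import climpred

# Tutorial data shipped with climpred: perfect-model ensemble + control run.
init = climpred.tutorial.load_dataset("MPI-PM-DP-1D")
control = climpred.tutorial.load_dataset("MPI-control-1D")

pm = (
    climpred.PerfectModelEnsemble(init)
    .add_control(control)
    .generate_uninitialized()  # resample an uninitialized ensemble from control
)

# Pipe both ensembles through the same verification machinery:
# reference="uninitialized" verifies the uninitialized ensemble with the
# same metric/comparison/dim as the initialized skill.
skill = pm.verify(
    metric="rmse",
    comparison="m2e",
    dim=["init", "member"],
    reference="uninitialized",
)
```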
For perfect models in climpred with the comparison `m2e`, I then compare the uninitialized ensemble mean to every uninitialized member. (CURRENTLY) So I am comparing an uninitialized forecast against uninitialized verification, asking how well an uninitialized member can forecast another uninitialized member. (ALTERNATIVE) Another way of doing this would be to use the same verification members as I use for the initialized skill, asking how well an uninitialized member can forecast an initialized member. The second option sounds closer to what is done in Hindcasts.
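To make the two options concrete, here is a hypothetical plain-xarray sketch (the random data and the `m2e_rmse` helper are made up for illustration, and simplify `m2e` by not leaving the verified member out of the ensemble mean):

```python
import numpy as np
import xarray as xr

rng = np.random.default_rng(0)
dims = ("member", "init", "lead")
shape = (10, 20, 5)
init = xr.DataArray(rng.standard_normal(shape), dims=dims)    # initialized ensemble
uninit = xr.DataArray(rng.standard_normal(shape), dims=dims)  # uninitialized ensemble


def m2e_rmse(forecast_members, verif_members):
    """m2e-style RMSE: the forecast ensemble mean verified against every
    verification member (strict m2e would exclude the verified member from
    the mean when both ensembles are the same; omitted here for brevity)."""
    fcst = forecast_members.mean("member")
    err = (fcst - verif_members) ** 2
    return np.sqrt(err.mean(["member", "init"]))


# CURRENTLY: uninitialized members verify each other.
skill_current = m2e_rmse(uninit, uninit)

# ALTERNATIVE: uninitialized forecast verified against the initialized
# members, i.e. the same verification data as the initialized skill.
skill_alternative = m2e_rmse(uninit, init)
```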