Talk:Numerical weather prediction
"The problem with purely automatic modelling is that predictions beyond about six hours are useless – the model is simply too complex". This seems ambiguous.
Do you mean one of these:
- The volume of calculations increases exponentially, so it's not practical to continue them for a longer forecast. Or
- The degree of error in the forecast is so great that predictions beyond six hours would be useless. [in which case, the model is not just complex, it incorporates inadequate assumptions]
- I have not done any of this math, but I think it is "accuracy" that suffers. If the probability that a certain situation occurs falls below 50%, "coin-tossing" may be a better model. Note also that we are talking about "massive" amounts of data; "common" weather sensors probably have no trouble generating 10 measurements a minute. That'd be 3,600 measurements (for each point and "type of data") over the 6-hour period. Realistically we get 2 hours, at a maximum, to calculate what the weather will be like in 6 hours' time (4 hours from when we publish the data). Since the system is dynamic, the "base data" will have changed by then. A butterfly flaps its wings, and all the nice predictions are worthless... --Eptalon (talk) 20:31, 24 March 2010 (UTC)
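- The "butterfly" point above can be made concrete with a toy model. The sketch below (my illustration, not from the article) integrates the Lorenz-63 system, a classic simplified model of atmospheric convection, twice from starting points that differ by one part in 10^8, and shows the difference growing over time. The forward-Euler integrator and the specific step size are assumptions for simplicity, not anything the article prescribes:

```python
def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz-63 equations (toy
    atmospheric-convection model with chaotic dynamics)."""
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return (x + dt * dx, y + dt * dy, z + dt * dz)

def trajectory(state, steps):
    """Integrate `steps` Euler steps and return the final state."""
    for _ in range(steps):
        state = lorenz_step(state)
    return state

# Two nearly identical initial conditions: a measurement error of 1e-8.
a0 = (1.0, 1.0, 1.0)
b0 = (1.0 + 1e-8, 1.0, 1.0)

for steps in (100, 1000, 3000):
    xa = trajectory(a0, steps)
    xb = trajectory(b0, steps)
    err = max(abs(p - q) for p, q in zip(xa, xb))
    print(f"after {steps:5d} steps, max coordinate difference = {err:.2e}")
```

The tiny initial difference grows roughly exponentially until it is as large as the system's natural variability, at which point the "forecast" carries no information — exactly the loss of "skill" with lead time discussed here.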
- see ,  for two freely available scientific articles on the subject. As I see it, the basic problem lies in the fact that these models need to be "parametrized": a "weight" needs to be attached to the different factors. This is not just about "throwing data" at a model. The other basic problem is that we seem to be talking about "confidence intervals". The longer such a system runs, the lower its "skill" at predicting, especially in environments which are changing. State of the art seems to be to "combine" several models for the prediction. --Eptalon (talk) 20:59, 24 March 2010 (UTC)
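- One simple way to "combine several models", sketched below as my own illustration (not taken from the cited articles), is to weight each model's forecast by its recent skill — here, the inverse of its mean squared error against past observations. The model names and numbers are made up for the example:

```python
def skill_weights(past_errors):
    """Inverse-MSE weights, normalised to sum to 1.
    `past_errors` maps model name -> list of past forecast errors."""
    inv = {m: 1.0 / (sum(e * e for e in errs) / len(errs))
           for m, errs in past_errors.items()}
    total = sum(inv.values())
    return {m: w / total for m, w in inv.items()}

def combine(forecasts, weights):
    """Weighted mean of the individual model forecasts."""
    return sum(weights[m] * f for m, f in forecasts.items())

# Hypothetical example: model_a has been more accurate than model_b,
# so it receives more weight in the blended forecast.
past_errors = {"model_a": [0.5, -0.4, 0.6], "model_b": [1.5, -1.2, 1.8]}
weights = skill_weights(past_errors)
blend = combine({"model_a": 14.2, "model_b": 16.0}, weights)
print(weights, blend)
```

Operational "multi-model ensemble" schemes are far more sophisticated (they also estimate the spread, not just a blended mean), but the skill-weighting idea is the same.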