Lec: Global SA

13 important questions on Lec: Global SA

Why do we have uncertainty in the input of a model?

  • Scaling problems
  • Measurement problems
  • Model structure errors

What are the 3 main characteristics of LSA (local sensitivity analysis)?

  1. Local method: results depend on base-run
  2. Results depend on assumed scale of variation (st dev) of parameters
  3. Computationally cheap, only one extra run per parameter
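These characteristics can be sketched in a few lines. The model, base-run values, and standard deviations below are hypothetical placeholders, not the lecture's example:

```python
import numpy as np

# One-at-a-time LSA sketch: local slope around a base run, one extra
# model run per parameter. Model, base values, and st devs are assumed.
def model(p):
    return p[0] ** 2 + 3 * p[1]  # toy model standing in for a simulation

base = np.array([2.0, 1.0])    # base-run parameter values
st_dev = np.array([0.1, 0.5])  # assumed scale of variation per parameter

y0 = model(base)
sensitivities = []
for i in range(len(base)):     # one extra run per parameter
    perturbed = base.copy()
    perturbed[i] += st_dev[i]
    slope = (model(perturbed) - y0) / st_dev[i]
    sensitivities.append(slope * st_dev[i])  # contribution to output spread

print(sensitivities)  # larger magnitude = locally more sensitive
```

Note how both results depend on the base run (the slope is taken there) and on the assumed st dev of each parameter.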

What is a risk with LSA?

If the uncertainty in your parameter is very wide, you extrapolate your linear approximation further from the base run, increasing the risk of under- or overestimating the st dev of your model output.

Worst case: the local slope is zero, so you conclude your model is not sensitive to the parameter at all, when in fact it is (just not near the base run).

What other model behaviour gives issues for LSA?

The relation between model output and a parameter can show blocky behaviour (this happens with if-statements in your model, e.g. a threshold: if temperature > 0 °C, snowmelt occurs). This does not work well with LSA, because you would be taking the slope of a horizontal line, which is not representative.

In the case of a snow parameter, sensitivity only shows up above the temperature threshold, meaning sensitivity can change with the situation > blocky structures.
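A minimal sketch of this failure mode, assuming a toy degree-day snowmelt model with a hypothetical melt factor and a 0 °C threshold:

```python
# "Blocky" threshold behaviour that defeats LSA (hypothetical snowmelt
# model: melt only occurs above a temperature threshold).
def snowmelt(temp_c, melt_factor=2.0, threshold=0.0):
    return melt_factor * max(temp_c - threshold, 0.0)

eps = 0.01

# Local slope at a base run of -5 °C: output is flat (zero melt), so
# LSA reports zero sensitivity to melt_factor...
slope = (snowmelt(-5.0, 2.0 + eps) - snowmelt(-5.0, 2.0)) / eps
print(slope)  # 0.0 — looks insensitive

# ...but at +5 °C the same parameter clearly matters:
slope_warm = (snowmelt(5.0, 2.0 + eps) - snowmelt(5.0, 2.0)) / eps
print(slope_warm)  # ≈ 5.0
```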

GSA: Sobol' methods

Sobol' is the most common GSA method

What is the definition of the explained variance in GSA?

Compare the total variance of the model output to the variance of the conditional expectation: the explained variance is the variance of the conditional expectation as a fraction of the total variance.

Note: total variance − variance of the conditional expectation = basically the residual (unexplained) variance
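In symbols (law of total variance; Y is the model output, X_i a parameter):

```latex
% first-order Sobol' index: fraction of output variance explained by X_i
S_i = \frac{\operatorname{Var}\!\big(\mathbb{E}[Y \mid X_i]\big)}{\operatorname{Var}(Y)},
\qquad
\operatorname{Var}(Y)
  = \underbrace{\operatorname{Var}\!\big(\mathbb{E}[Y \mid X_i]\big)}_{\text{explained}}
  + \underbrace{\mathbb{E}\!\big[\operatorname{Var}(Y \mid X_i)\big]}_{\text{residual}}
```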

How does GSA work in steps?

  1. Sample parameters based on chosen distributions
  2. Run the model many times
  3. Determine the conditional expectation for each parameter
  4. Calculate the residuals
  5. Check which part of the model variance we can explain with this particular parameter
  6. Compare to the total model variance to determine which parameter explains most of the variance, because this is the parameter our model is most sensitive to
  7. Determine the portion of model variance explained by interactions between parameters
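The steps above can be sketched with a toy model and a binned estimate of the conditional expectation; the model, distributions, and bin count are illustrative assumptions, not the lecture's:

```python
import numpy as np

# Sketch of the GSA steps: sample, run many times, bin the conditional
# expectation, compare explained variance to total variance.
rng = np.random.default_rng(0)
n = 100_000

# 1. Sample parameters from chosen distributions
x1 = rng.uniform(0, 1, n)
x2 = rng.uniform(0, 1, n)

# 2. Run the model many times (toy model in which x1 dominates)
y = 4 * x1 + x2

# 3.-6. Conditional expectation per parameter via binning, then the
# fraction of total variance it explains (a first-order Sobol' estimate)
def explained_fraction(x, y, bins=50):
    idx = np.digitize(x, np.linspace(x.min(), x.max(), bins))
    labels = np.unique(idx)
    cond_mean = np.array([y[idx == b].mean() for b in labels])
    counts = np.array([(idx == b).sum() for b in labels])
    var_explained = np.average((cond_mean - y.mean()) ** 2, weights=counts)
    return var_explained / y.var()

f1 = explained_fraction(x1, y)
f2 = explained_fraction(x2, y)
print(f1, f2)  # analytically 16/17 ≈ 0.94 and 1/17 ≈ 0.06
```

The model is most sensitive to x1, since x1 explains by far the largest share of the output variance; whatever the first-order fractions leave unexplained is due to parameter interactions (none in this additive toy model).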

When would you do local and when global SA?

  1. Start with GSA: sample full range of parameters
  2. Look at which parameters influence your model most
  3. Calibrate these parameters
  4. After calibration, do LSA: find the best-guess parameters
  5. Look in the near vicinity of the best-guess parameters to see what the uncertainty in your projections is


Model runtime: a problem with GSA
But GSA can be done on a small range of a parameter; alternatively, we can sample the full parameter space, determine which parameters influence the model results most, and select those for calibration

What are the main takeaways for variance analysis through sampling/GSA/Sobol' (all the same thing)?

  • Global method
  • Parameter space is sampled
    • For identification of calibration parameters: take wide par space
    • For sensitivity only: take smaller space
  • Also works for highly non-linear functions
  • Results depend on assumed distribution of parameters
  • Computationally expensive; many model runs required

How will we sample our parameters?

Monte Carlo is most straightforward

What is Monte Carlo sampling?

  • Pick distribution
  • Randomly sample within the distribution; more samples land in high-probability regions
  • Run model


Issue: if your MC sample is not big enough to capture your distribution well, it will give different results each time you repeat the analysis.

Thus MC is the least efficient sampling strategy
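A minimal sketch with a hypothetical one-parameter model: small MC samples give a different answer on every repeat, while a large sample settles down:

```python
import numpy as np

# Monte Carlo sampling sketch: pick a distribution, randomly sample
# from it, run the model. Model and distribution are assumed.
rng = np.random.default_rng(42)

def model(k):
    return k ** 2  # toy model

# three small samples -> three noticeably different answers
small_means = [model(rng.normal(1.0, 0.3, size=20)).mean() for _ in range(3)]
print(small_means)

# one large sample -> stable answer
big_mean = model(rng.normal(1.0, 0.3, size=200_000)).mean()
print(big_mean)  # ≈ E[k^2] = 1.0**2 + 0.3**2 = 1.09
```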

2D Monte Carlo sampling vs 2D Quantile sampling

MC: random, might not capture full distribution
Q: we do capture full distribution of parameter; but it is not efficient for many parameters!

Example: 10 samples per parameter, 6 parameters in our model, run time 1 sec per run:

10^6 runs × 1 sec = 10^6 sec ≈ 11.6 days

Give a summary of when to choose which sampling technique and give arguments why!

1D:
MC: too much variability due to random sampling, while we want to capture the distribution correctly
Quantile: more efficient and more stable in representing the distribution

2D or higher:
MC: even more expensive
Random quantile sampling: also gets too expensive (the number of combinations explodes)
>> Latin Hypercube sampling! A mix between MC and quantile sampling that saves a lot of computational power. But we do have to assume the parameters are independent/do not interact
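A hand-rolled sketch of Latin Hypercube sampling on the unit cube (a hypothetical helper, assuming independent parameters):

```python
import numpy as np

# Latin Hypercube sampling sketch: each parameter's [0, 1) range is cut
# into n equal-probability strata, one jittered sample is drawn per
# stratum, and strata are paired randomly across parameters. This
# assumes the parameters are independent / do not interact.
def latin_hypercube(n_samples, n_params, rng):
    u = np.empty((n_samples, n_params))
    for j in range(n_params):
        perm = rng.permutation(n_samples)  # random stratum order
        u[:, j] = (perm + rng.random(n_samples)) / n_samples
    return u  # map through each parameter's inverse CDF afterwards

rng = np.random.default_rng(0)
sample = latin_hypercube(10, 6, rng)  # 10 runs cover all 6 parameters
print(sample.shape)                   # (10, 6)
# each column hits every one of the 10 strata exactly once:
print(np.sort((sample[:, 0] * 10).astype(int)))  # [0 1 2 3 4 5 6 7 8 9]
```

scipy.stats.qmc.LatinHypercube provides a maintained implementation of the same idea.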
