Hypothesis Testing and Confidence Intervals
4 important questions on Hypothesis Testing and Confidence Intervals
Why hypothesis testing?
Test statistic: $\dfrac{\bar{x} - \mu_0}{\mathrm{SD}/\sqrt{n}}$ (sample mean minus hypothesized value, divided by the standard deviation over the square root of the sample size).
For hypothesis testing, it is easier to test the statement you do not expect to be true (the null hypothesis). This way you can statistically test whether the null hypothesis can be rejected and, if so, accept the alternative hypothesis, as in the sketch below.
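To make this concrete, here is a minimal sketch of a one-sample test in Python; it is not part of the original summary, and the simulated data, the hypothesized mean of 10, and the 5% significance level are illustrative assumptions.

```python
# Minimal sketch (illustrative assumptions): a one-sample t-test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=0)
sample = rng.normal(loc=10.5, scale=2.0, size=50)  # observed data (simulated here)
mu_0 = 10.0                                        # hypothesized mean under H0

# Test statistic: (sample mean - hypothesized value) / (SD / sqrt(n))
t_stat, p_value = stats.ttest_1samp(sample, popmean=mu_0)

alpha = 0.05
if p_value < alpha:
    print(f"t = {t_stat:.2f}, p = {p_value:.3f}: reject H0")
else:
    print(f"t = {t_stat:.2f}, p = {p_value:.3f}: fail to reject H0")
```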
Two-tailed or one-tailed?
Although no particular confidence level (CL) is required, a reference level is often still used, for example to say that 95% or 99% is enough confidence. Remember that in statistics there is no way you can be absolutely certain.
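For reference, a short Python sketch (not from the original text; SciPy and the 95%/99% levels are assumptions) showing how the choice between one- and two-tailed changes the critical value:

```python
# Sketch (illustrative): critical z-values for one- and two-tailed tests.
from scipy.stats import norm

for cl in (0.95, 0.99):
    alpha = 1 - cl
    one_tailed = norm.ppf(1 - alpha)      # all of alpha in one tail
    two_tailed = norm.ppf(1 - alpha / 2)  # alpha split over both tails
    print(f"CL {cl:.0%}: one-tailed z = {one_tailed:.2f}, two-tailed z = {two_tailed:.2f}")
```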
What about Chebyshev's inequality?
The normal distribution can be very anti-conservative: assuming normality when a random variable is in fact not normal can lead to severe underestimation of risk. Chebyshev's inequality, by contrast, bounds the tail probability for any distribution with finite variance, $P(|X - \mu| \ge k\sigma) \le 1/k^2$, and is therefore far more conservative.
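The sketch below (not part of the original summary; SciPy and the choice of k = 2, 3, 4 are assumptions) compares the normal tail probability with Chebyshev's distribution-free bound:

```python
# Sketch (illustrative): two-sided tail probability P(|X - mu| >= k*sigma)
# under a normal assumption versus Chebyshev's bound 1/k^2.
from scipy.stats import norm

for k in (2, 3, 4):
    normal_tail = 2 * norm.sf(k)   # two-sided tail under normality
    chebyshev_bound = 1 / k**2     # holds for any finite-variance distribution
    print(f"k = {k}: normal tail = {normal_tail:.4%}, Chebyshev bound = {chebyshev_bound:.2%}")
```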
What about Value-at-Risk?
The definition is often ambiguous, because VaR presents a loss as a positive number; this is the normal convention used by most risk managers. So a VaR of 400 means either a loss of 400 or, equivalently, a return of -400.
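A small Python sketch of this sign convention (not from the original text; the simulated profit-and-loss data and the 95% level are assumptions):

```python
# Sketch (illustrative): historical-simulation VaR, reporting a loss as a positive number.
import numpy as np

rng = np.random.default_rng(seed=1)
pnl = rng.normal(loc=0.0, scale=200.0, size=10_000)  # simulated profit-and-loss

confidence = 0.95
return_quantile = np.quantile(pnl, 1 - confidence)   # the 5th-percentile return (negative)
var = -return_quantile                               # the same number reported as a positive loss

print(f"5th-percentile return: {return_quantile:.0f}")
print(f"95% VaR (positive loss): {var:.0f}")
```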