Summary: IEBE Lecture Slides
1 Simple Linear Regression with One Regressor: Estimation
1.1.1.2 The OLS estimator
What does the OLS estimator do?
It minimizes the sum of squared differences between the actual values Y_i and the predicted values Ŷ_i.
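A minimal sketch of this idea in Python (the simulated data and the function name are illustrative, not from the slides): the closed-form OLS formulas below are exactly the values that minimize the sum of squared differences.

```python
import numpy as np

def ols_fit(x, y):
    """Closed-form OLS estimates for y_i = b0 + b1 * x_i + e_i."""
    x_bar, y_bar = x.mean(), y.mean()
    # Slope: cross-deviations of x and y over squared deviations of x
    b1 = np.sum((x - x_bar) * (y - y_bar)) / np.sum((x - x_bar) ** 2)
    b0 = y_bar - b1 * x_bar  # intercept: the fitted line passes through the means
    return b0, b1

# Illustrative data: true intercept 1, true slope 2
rng = np.random.default_rng(0)
x = rng.normal(size=100)
y = 1 + 2 * x + rng.normal(size=100)
print(ols_fit(x, y))  # estimates close to (1, 2)
```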
1.3 Maximum likelihood estimation
Explain the general idea of ML estimation
ML estimation "chooses" the coefficient values that are most likely to have "produced" the observed dataset.
Explain the differences between OLS and ML estimation
Two differences:
A. The objective function is different:
For OLS: the sum of squared errors (least squares)
For ML: the joint density function
B. Minimize vs. maximize:
For OLS: we want the smallest errors, so we aim to minimize the value of the objective function
For ML: we want to find the highest probability, so we aim to maximize the value of the joint density function (both objectives are sketched in code below)
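A rough numerical sketch of the two objective functions (the data, names, and the fixed intercept and variance are illustrative assumptions): with normal errors, the slope that minimizes the sum of squared errors is the same one that maximizes the joint log density.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
x = rng.normal(size=50)
y = 1 + 2 * x + rng.normal(size=50)  # true slope: 2

def ssr(b1, b0=1.0):
    """OLS objective: sum of squared errors (to be minimized)."""
    return np.sum((y - b0 - b1 * x) ** 2)

def loglik(b1, b0=1.0, sigma=1.0):
    """ML objective: log joint normal density (to be maximized)."""
    return np.sum(norm.logpdf(y - b0 - b1 * x, scale=sigma))

grid = np.linspace(1.5, 2.5, 1001)
b1_ols = grid[np.argmin([ssr(b) for b in grid])]
b1_ml = grid[np.argmax([loglik(b) for b in grid])]
print(b1_ols, b1_ml)  # identical: minimizing SSR = maximizing the normal log density
```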
1.3.1 The likelihood function in general
Show how to turn the density function into a likelihood function.
Switch the conditioning around between the parameter θ and the data:
Here is the joint pdf, conditional on θ:
f(x1, ..., xn | θ)
Here is the likelihood function that comes from it:
L(θ | x1, ..., xn)
The arguments are simply switched around.
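One way to see the switch in code, sketched for a single normal observation (all values arbitrary): the same pdf formula is read as a function of the data with θ fixed, or as a function of θ with the data fixed.

```python
from scipy.stats import norm

theta = 0.0  # parameter fixed: the density is a function of the data x
density = lambda x: norm.pdf(x, loc=theta)

x_obs = 1.3  # data fixed: the likelihood is a function of the parameter
likelihood = lambda th: norm.pdf(x_obs, loc=th)

print(density(1.3), likelihood(0.0))  # same number, two points of view
```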
Provide the order of progression starting from: "a random variable x, conditional on a set of parameters, θ" and ending at: the log-likelihood function
1. Define the density function f(x | θ)
2. Consider the joint density function
3. Turn the joint density function into a likelihood function
4. Take the log (the full progression is written out below)
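Written out for an i.i.d. sample (standard notation, not copied from the slides):

```latex
\begin{align*}
\text{1. Density of one observation:} \quad & f(x_i \mid \theta) \\
\text{2. Joint density (i.i.d.):} \quad & f(x_1, \ldots, x_n \mid \theta) = \prod_{i=1}^{n} f(x_i \mid \theta) \\
\text{3. Likelihood function:} \quad & L(\theta \mid x_1, \ldots, x_n) = \prod_{i=1}^{n} f(x_i \mid \theta) \\
\text{4. Log-likelihood:} \quad & \ln L(\theta \mid x_1, \ldots, x_n) = \sum_{i=1}^{n} \ln f(x_i \mid \theta)
\end{align*}
```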
The big picture of ML estimation is to maximize the joint density function (which we have transformed into a log-likelihood function). Show the steps involved to maximize the function.
1. Take the derivative with respect to the parameter, θ.
2. Set the derivative equal to 0.
3. Solve for the parameter: the solution is its ML estimate.
After the 3 steps involved in maximizing the joint density (log-likelihood) function, what is one (optional) additional step we are sometimes asked to take, and what is its purpose?
We have already calculated the first derivative; sometimes we also check that the second derivative is negative at the solution, to verify that we have found a maximum rather than a minimum.
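A numerical sketch of the three steps plus the optional check (the data and the fixed σ are illustrative assumptions; scipy minimizes, so the log-likelihood is negated):

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import norm

rng = np.random.default_rng(2)
x = rng.normal(loc=5.0, scale=1.0, size=200)

def neg_loglik(mu):
    """Negative normal log-likelihood in mu (sigma fixed at 1 for simplicity)."""
    return -np.sum(norm.logpdf(x, loc=mu, scale=1.0))

res = minimize_scalar(neg_loglik)  # steps 1-3: locate the stationary point
mu_hat = res.x
print(mu_hat, x.mean())  # the ML estimate matches the sample mean

# Optional step: numerical second derivative of the log-likelihood at mu_hat
h = 1e-4
d2 = (-neg_loglik(mu_hat + h) + 2 * neg_loglik(mu_hat) - neg_loglik(mu_hat - h)) / h**2
print(d2 < 0)  # True: a negative second derivative confirms a maximum
```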
1.3.2 ML estimation of sample mean & variance, normal pdf
For the following model, estimate the mean µ by ML: x_i = µ + e_i, with e_i ∼ N(0, σ²)
Maximizing the log-likelihood with respect to µ gives the sample mean: µ̂_ML = (1/n) Σ x_i = x̄.
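The derivation behind that answer, written out (standard normal-likelihood algebra):

```latex
\begin{align*}
\ln L(\mu \mid x_1, \ldots, x_n) &= -\frac{n}{2}\ln(2\pi\sigma^2) - \frac{1}{2\sigma^2}\sum_{i=1}^{n}(x_i - \mu)^2 \\
\frac{\partial \ln L}{\partial \mu} &= \frac{1}{\sigma^2}\sum_{i=1}^{n}(x_i - \mu) = 0
\quad\Longrightarrow\quad \hat{\mu}_{ML} = \frac{1}{n}\sum_{i=1}^{n} x_i = \bar{x}
\end{align*}
```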
2 Simple Linear Regression with One Regressor: Inference

2.1 LS assumptions for the linear model
In what ways, specifically, do the classical assumptions from week 1 differ from the more realistic assumptions we discuss in week 2?
1. Regressors are no longer assumed to be fixed
2. Variances are no longer assumed to be constant
3. Error terms are no longer assumed to be normally distributed
2.2.1 Unbiasedness
Starting with the (regression) formula for both Y_i and Ȳ, prove unbiasedness of the β₁ estimator
Write β̂₁ = Σ(X_i − X̄)(Y_i − Ȳ) / Σ(X_i − X̄)². Substituting Y_i = β₀ + β₁X_i + u_i and Ȳ = β₀ + β₁X̄ + ū gives β̂₁ = β₁ + Σ(X_i − X̄)u_i / Σ(X_i − X̄)². Taking expectations conditional on the regressors, E[u_i | X] = 0 makes the second term vanish, so E[β̂₁] = β₁: the estimator is unbiased.
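As a complement to the algebra, a small Monte Carlo sketch (all numbers illustrative) showing that β̂₁ averages to the true β₁ across repeated samples:

```python
import numpy as np

rng = np.random.default_rng(3)
beta0, beta1, n, reps = 1.0, 2.0, 50, 5000

estimates = np.empty(reps)
for r in range(reps):
    x = rng.normal(size=n)  # regressors drawn anew each sample
    u = rng.normal(size=n)  # errors with E[u | X] = 0
    y = beta0 + beta1 * x + u
    estimates[r] = (np.sum((x - x.mean()) * (y - y.mean()))
                    / np.sum((x - x.mean()) ** 2))

print(estimates.mean())  # approximately 2.0, the true slope: no bias
```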