Summary: IEBE Lecture Slides

PLEASE KNOW: only 65 flashcards and notes are available for this material, so this summary might not be complete.

Read the summary and the most important questions on IEBE Lecture slides

  • 1 Simple Linear Regression with One Regressor: Estimation

  • 1.1.1.2 The OLS estimator


  • What does the OLS estimator do?

    It minimizes the sum of squared differences between the actual values Y_i and the predicted values Ŷ_i.
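A minimal numerical sketch of this idea (simulated data; the variable names and the true coefficients 2 and 3 are assumptions chosen for illustration):

```python
import numpy as np

# Simulated data (assumption): Y_i = 2 + 3*X_i + noise
rng = np.random.default_rng(0)
x = rng.normal(size=100)
y = 2 + 3 * x + rng.normal(size=100)

# Closed-form OLS estimates, i.e. the (b0, b1) minimizing sum((y - b0 - b1*x)**2)
b1 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
b0 = y.mean() - b1 * x.mean()
```

With 100 observations the estimates land close to the assumed true values 2 and 3.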
  • 1.3 Maximum likelihood estimation


  • Explain the general idea of ML estimation

    ML estimation "chooses" the coefficients that will most likely have "produced" a given dataset.
  • Explain the differences between OLS and ML estimation

    There are two differences:
    A. The objective function differs:
    For OLS: the sum of squared residuals
    For ML: the joint density function (the likelihood)

    B. Minimize vs. maximize:
    For OLS: we want the smallest errors, so we minimize the value of the objective function.
    For ML: we want the observed data to be as probable as possible, so we maximize the value of the joint density (likelihood) function.
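A small sketch of both points for the simple location model x_i = µ + e_i with known variance (simulated data; the grid search is only there to make the two objectives visible): minimizing the sum of squares and maximizing the log-likelihood pick out the same µ.

```python
import numpy as np

# Simulated sample (assumption): true mu = 5, sigma = 2
rng = np.random.default_rng(1)
x = rng.normal(loc=5.0, scale=2.0, size=200)

mus = np.linspace(3, 7, 401)
# OLS objective: sum of squared errors -> minimize
sse = np.array([np.sum((x - m) ** 2) for m in mus])
# Normal log-likelihood in mu (constants dropped) -> maximize
loglik = -0.5 * sse

mu_ols = mus[np.argmin(sse)]
mu_ml = mus[np.argmax(loglik)]
```

Because the log-likelihood here is just a monotone transform of the sum of squares, the two criteria agree exactly, and both land on the sample mean.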
  • 1.3.1 The likelihood function in general


  • Show how to turn the density function into a likelihood function.

    Swap the roles of the data and the parameters in the conditioning:
    Here is the joint pdf of the data, conditional on θ:
    f(x1, ..., xn | θ)

    Here is the likelihood function that comes from it:
    L(θ | x1, ..., xn)

    The two take the same value; in the pdf the data vary for a fixed θ, while in the likelihood θ varies for the fixed, observed data.
  • Provide the order of progression, starting from "a random variable x, conditional on a set of parameters, θ" and ending at the log-likelihood function

    1. Define the density function: f(x|θ)
    2. Form the joint density function of the sample
    3. Turn the joint density into the likelihood function
    4. Take the log
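For an i.i.d. sample, the four steps can be written out as follows (a standard derivation, not copied from the slides):

```latex
% 1. Density of one observation:
f(x \mid \theta)
% 2. Joint density of the i.i.d. sample:
f(x_1, \dots, x_n \mid \theta) = \prod_{i=1}^{n} f(x_i \mid \theta)
% 3. Likelihood: same expression, read as a function of theta given the data:
L(\theta \mid x_1, \dots, x_n) = \prod_{i=1}^{n} f(x_i \mid \theta)
% 4. Log-likelihood: the product becomes a sum:
\ln L(\theta \mid x_1, \dots, x_n) = \sum_{i=1}^{n} \ln f(x_i \mid \theta)
```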
  • The big picture of ML estimation is to maximize the joint density function (which we have transformed into a log-likelihood function). Show the steps involved to maximize the function.

    1. Take the derivative of the log-likelihood with respect to the parameter, θ.
    2. Set the derivative equal to 0 (the first-order condition).
    3. Solve for θ; the solution is the ML estimate.
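A worked sketch of the three steps for an exponential density f(x|λ) = λ·exp(−λx) (my choice of example, not from the slides): the first-order condition n/λ − Σx_i = 0 gives λ̂ = 1/x̄, which a grid search over the log-likelihood confirms.

```python
import numpy as np

# Example density (assumption): exponential, f(x|lam) = lam * exp(-lam * x), so
#   log L(lam) = n*log(lam) - lam*sum(x)
# Steps: d(logL)/d(lam) = n/lam - sum(x) = 0  =>  lam_hat = n/sum(x) = 1/mean(x)
rng = np.random.default_rng(2)
x = rng.exponential(scale=0.5, size=1000)   # true lam = 1/scale = 2

lam_hat = 1.0 / x.mean()                    # closed-form ML estimate from the steps above

# Numerical check: lam_hat should maximize the log-likelihood over a grid
grid = np.linspace(0.5, 4.0, 701)
loglik = len(x) * np.log(grid) - grid * x.sum()
lam_grid = grid[np.argmax(loglik)]
```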
  • After the 3 steps involved in maximizing the joint density(log-likelihood) function, what is one (optional) additional step we are sometimes asked to take and what is its purpose?

    Having used the first derivative for the first-order condition, we sometimes also check that the second derivative is negative, to verify that we have found a maximum rather than a minimum.
  • 1.3.2 ML estimation of sample mean & variance, normal pdf


  • For the model x_i = µ + e_i, with e_i ∼ N(0, σ²), derive the ML estimate of the mean µ

    The first-order condition of the normal log-likelihood with respect to µ gives the sample mean:
    µ̂ = (1/n) Σ x_i = x̄
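A quick simulation check of this answer (data simulated under assumed values µ = 10, σ = 3): the first-order conditions of the normal log-likelihood give the sample mean for µ and, if σ² is also estimated, a variance that divides by n rather than n − 1.

```python
import numpy as np

# Simulated sample from x_i = mu + e_i, e_i ~ N(0, sigma^2), assumed mu=10, sigma=3
rng = np.random.default_rng(3)
x = rng.normal(loc=10.0, scale=3.0, size=5000)

# ML estimates from the normal log-likelihood's first-order conditions:
mu_hat = x.mean()                        # mu_hat = sample mean
sigma2_hat = np.mean((x - mu_hat) ** 2)  # divides by n, not n-1 (the ML variance is biased)
```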
  • 2 Simple Linear Regression with One Regressor: Inference

  • 2.1 LS assumptions for the linear model

  • In what way, specifically, do the classical assumptions from week 1 differ from the more realistic assumptions we discuss in week 2?

    1. Regressors are no longer assumed to be fixed
    2. Variances are no longer assumed to be constant
    3. Error terms are no longer assumed to be normally distributed
  • 2.2.1 Unbiasedness


  • Starting with the regression formulas for both Y_i and Ȳ, prove unbiasedness of the β1 estimator

    From Y_i = β0 + β1 X_i + e_i and Ȳ = β0 + β1 X̄ + ē, we get Y_i − Ȳ = β1(X_i − X̄) + (e_i − ē).
    Substituting into the OLS formula
    β̂1 = Σ(X_i − X̄)(Y_i − Ȳ) / Σ(X_i − X̄)²
    and using Σ(X_i − X̄)ē = 0 gives
    β̂1 = β1 + Σ(X_i − X̄)e_i / Σ(X_i − X̄)².
    Since E[e_i | X] = 0, the second term has expectation zero, so E[β̂1] = β1.
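The algebraic result can be illustrated with a short Monte Carlo sketch (simulated data; the true values β0 = 2 and β1 = 3 are assumptions): averaging β̂1 over many samples should come out very close to the true β1.

```python
import numpy as np

# Monte Carlo illustration of unbiasedness: the average of b1_hat over many
# simulated samples should be close to the true slope beta1.
rng = np.random.default_rng(4)
true_b0, true_b1 = 2.0, 3.0
estimates = []
for _ in range(2000):
    x = rng.normal(size=50)
    y = true_b0 + true_b1 * x + rng.normal(size=50)
    b1 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
    estimates.append(b1)
mean_b1 = float(np.mean(estimates))  # should be close to true_b1
```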
