Clicker Q

Clicker questions to go with Probability & Statistics by DeGroot and Schervish. Math 152 - Statistical Theory.


  1. The Central Limit Theorem (CLT) says:1
    1. The sample average (statistic) converges to the true average (parameter)
    2. The sample average (statistic) converges to some point
    3. The distribution of the sample average (statistic) converges to a normal distribution
    4. The distribution of the sample average (statistic) converges to some distribution
    5. I have no idea what the CLT says
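
A minimal simulation sketch of what the CLT describes (the population, sample size, and number of replications below are arbitrary choices, not from the text):

```r
# Simulate the sampling distribution of the mean from skewed (exponential) data.
# The CLT statement is about this distribution, not about any single average.
set.seed(47)
n <- 30                                            # arbitrary sample size
xbars <- replicate(5000, mean(rexp(n, rate = 1)))  # 5000 sample means
hist(xbars, breaks = 50, main = "Sampling distribution of the sample mean")
c(mean = mean(xbars), sd = sd(xbars), clt_sd = 1 / sqrt(n))  # sd(xbar) is near sigma/sqrt(n)
```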

  1. Which cab company was involved (see example 2.2 in the notes)?2
    1. Very likely the Blue Cab company
    2. Sort of likely the Blue Cab company
    3. Equally likely Blue and Green Cab companies
    4. Sort of likely the Green Cab company
    5. Very likely the Green Cab company
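
Example 2.2 is not reproduced here; the sketch below uses the classic version of the taxicab problem (85% Green cabs, 15% Blue, witness correct 80% of the time), so treat those numbers as assumptions and swap in the values from the notes:

```r
# Bayes' rule: P(Blue | witness says Blue). The base rates and witness
# reliability are the classic numbers, assumed here, not from example 2.2.
prior_blue  <- 0.15
prior_green <- 0.85
p_say_blue_if_blue  <- 0.80
p_say_blue_if_green <- 0.20
post_blue <- prior_blue * p_say_blue_if_blue /
  (prior_blue * p_say_blue_if_blue + prior_green * p_say_blue_if_green)
post_blue   # about 0.41, so it is somewhat more likely the cab was actually Green
```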

  1. Consider a continuous probability density function (pdf) given by \(f( x | \theta ).\) Which of the following is FALSE:3
    1. \(f( x | \theta ) = P(X = x | \theta)\)
    2. \(f( x | \theta )\) provides info for calculating probabilities of X.
    3. \(P(X = x) = 0\) if X is continuous.
    4. \(f( x | \theta ) = L(\theta | x)\) is the likelihood function

  1. To find the marginal distribution of X from the joint distribution of X & Y (assume everything is continuous), you should4
    1. differentiate the joint distribution with respect to X.
    2. differentiate the joint distribution with respect to Y.
    3. integrate the joint distribution with respect to X.
    4. integrate the joint distribution with respect to Y.
    5. I have no idea what a marginal distribution is.

  1. A continuous pdf (of a random variable \(X\) with parameter \(\theta\)) should5
    1. Integrate to a constant (\(dx\))
    2. Integrate to a constant (\(d\theta\))
    3. Integrate to 1 (\(dx\))
    4. Integrate to 1 (\(d\theta\))
    5. not need to integrate to anything special.
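
A quick numeric check of the distinction, using an exponential pdf with \(\theta\) as the rate (an arbitrary choice):

```r
# A pdf integrates to 1 over x for a fixed theta; integrating the same
# expression over theta for a fixed x is a different quantity.
theta <- 2
integrate(function(x) dexp(x, rate = theta), lower = 0, upper = Inf)$value   # 1    (dx)
integrate(function(th) dexp(2, rate = th),   lower = 0, upper = Inf)$value   # 0.25 (d theta), not 1
```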

  1. R / R Studio
    1. all good
    2. started, progress is slow and steady
    3. started, very stuck
    4. haven’t started yet
    5. what do you mean by “R”?

  1. In terms of the R for the homework…
    1. I was able to do the whole thing.
    2. I understood the code part, but I couldn’t get the Markdown file to compile.
    3. I didn’t understand the code at all.
    4. I couldn’t get R or R Studio installed.
    5. I haven’t tried to work on the homework yet.

  1. A beta distribution6
    1. has support on [0,1]
    2. has parameters \(\alpha\) and \(\beta\) which represent, respectively, the mean and variance
    3. is discrete
    4. has equal mean and variance
    5. has equal mean and standard deviation
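
A short check in R, with \(\alpha = 2\), \(\beta = 5\) chosen arbitrarily: the support is [0, 1], and \(\alpha, \beta\) are shape parameters rather than the mean and variance.

```r
a <- 2; b <- 5
curve(dbeta(x, a, b), from = 0, to = 1)   # the density lives on [0, 1]
a / (a + b)                               # mean = alpha / (alpha + beta)
a * b / ((a + b)^2 * (a + b + 1))         # variance, yet another formula
```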

  1. What types of distributions are the following?7
    1. prior = marginal & posterior = joint
    2. prior = joint & posterior = conditional
    3. prior = conditional & posterior = joint
    4. prior = marginal & posterior = conditional
    5. prior = joint & posterior = marginal

  1. Which of these are incorrect conclusions?8
    1. \(\theta | \underline{X} \sim\) Beta (4,12)
    2. \(\xi(\theta | \underline{X}) \sim\) Beta (4,12)
    3. \(\xi(\theta | \underline{X}) \propto\) Beta (4,12)
    4. \(\xi(\theta | \underline{X}) \propto \theta^{4-1} (1-\theta)^{12-1}\)
    5. \(\xi(\theta | \underline{X}) = \frac{1}{B(4,12)} \theta^{4-1}(1-\theta)^{12-1}\)

  1. What is the integrating constant for the pdf, \(h(w)\)?9
    1. \(\frac{\Gamma(w+k)}{\Gamma(w)\Gamma(k)}\)
    2. 1/[\(w^k \Gamma(k)\)]
    3. 1 / \(\sqrt{2\pi k^2}\)
    4. 1/[\(\Gamma(k/2)\)]
    5. 1/[\(2^{k/2} \Gamma(k/2)\)]

\[h(w) \propto w^{k/2-1}e^{-w/2} \ \ \ \ \ \ \ \ \ w>0\]
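
A numeric check of the normalizing constant (this kernel is the chi-square density with k degrees of freedom; k = 5 below is arbitrary):

```r
# The kernel w^(k/2 - 1) * exp(-w/2) integrates to 2^(k/2) * Gamma(k/2),
# so the constant that makes h(w) a pdf is 1 / [2^(k/2) * Gamma(k/2)].
k <- 5
kernel <- function(w) w^(k / 2 - 1) * exp(-w / 2)
integrate(kernel, lower = 0, upper = Inf)$value
2^(k / 2) * gamma(k / 2)                  # matches the integral above
```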


  1. Suppose the data come from an exponential distribution with a parameter whose prior is given by a gamma distribution. The prior is conjugate, so the posterior must be in what family?10
    1. exponential
    2. gamma
    3. normal
    4. beta
    5. Poisson
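
A minimal conjugate-updating sketch (rate parameterization; the data and the prior hyperparameters below are made up):

```r
# Exponential likelihood with rate theta and a Gamma(a0, b0) prior on theta:
# the posterior is Gamma(a0 + n, b0 + sum(x)), i.e., still a gamma distribution.
set.seed(1)
dat <- rexp(20, rate = 3)      # pretend these are the observed data
a0 <- 2; b0 <- 1               # assumed prior hyperparameters
a1 <- a0 + length(dat)         # posterior shape
b1 <- b0 + sum(dat)            # posterior rate
curve(dgamma(x, a1, rate = b1), from = 0, to = 8)   # posterior density for theta
```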

  1. A prior is improper if11
    1. it conveys no real information.
    2. it isn’t conjugate.
    3. it doesn’t integrate to one.
    4. it swears a lot.
    5. it isn’t on your distribution sheet.

  1. Given a prior: \(\theta \sim N(\mu_0, \nu_0^2)\)
    And a data likelihood: \(X | \theta \sim N(\theta, \sigma^2)\)
    You collect n data values; what is your best guess of \(\theta?\)12
    1. \(\overline{X}\)
    2. \(\mu_0\)
    3. \(\mu_1 = \frac{\sigma^2 \mu_0 + n \nu_0^2 \overline{X}}{\sigma^2 + n \nu_0^2}\)
    4. median of \(N(\mu_1, \nu_1^2 = \frac{\sigma^2 \nu_0^2}{\sigma^2 + n \nu_0^2})\)
    5. 47
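
A numeric version of the update (all of \(\mu_0\), \(\nu_0\), \(\sigma\), and the data are placeholder values):

```r
# Posterior mean mu1 = (sigma^2 * mu0 + n * nu0^2 * xbar) / (sigma^2 + n * nu0^2):
# a precision-weighted compromise between the prior mean and the sample mean.
mu0 <- 0; nu0 <- 0.5          # prior mean and prior sd (assumed)
sigma <- 1                    # known data sd (assumed)
set.seed(2)
x <- rnorm(25, mean = 1.5, sd = sigma)
n <- length(x); xbar <- mean(x)
mu1 <- (sigma^2 * mu0 + n * nu0^2 * xbar) / (sigma^2 + n * nu0^2)
c(prior = mu0, sample = xbar, posterior = mu1)   # mu1 sits between mu0 and xbar
```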

  1. The Bayes estimator is sensitive to13
    1. the posterior mean
    2. the prior mean
    3. the sample size
    4. the data values
    5. some of the above

  1. The range (output) of the Bayesian MSE includes:14
    1. theta
    2. the data

  1. The range (output) of the frequentist MSE includes:15
    1. theta
    2. the data

  1. To find the maximum likelihood estimator, we take the derivative of the likelihood16
    1. with respect to \(X\)
    2. with respect to \(\underline{X}\)
    3. with respect to \(\theta\)
    4. with respect to \(f\)
    5. with respect to \(\ln(f)\)
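
A numeric illustration that the likelihood is maximized over \(\theta\) with the data held fixed (exponential data used as a placeholder):

```r
# The log likelihood is a function of theta; the data x are fixed.
set.seed(8)
x <- rexp(40, rate = 2.5)
loglik <- function(theta) sum(dexp(x, rate = theta, log = TRUE))
optimize(loglik, interval = c(0.01, 20), maximum = TRUE)$maximum  # numeric MLE
1 / mean(x)                                                       # closed-form MLE, for comparison
```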

  1. Consider an MLE, \(\hat{\theta},\) and the related log likelihood function \(L = \ln(f).\) \(\delta(X)\) is another estimate of \(\theta\). Which statement is necessarily false:17
    1. L(\(\delta(X)\)) < L(\(\theta\))
    2. L(\(\hat{\theta}\)) < L(\(\theta\))
    3. L(\(\theta\)) < L(\(\delta(X)\))
    4. L(\(\delta(X)\)) < L(\(\hat{\theta}\))
    5. L(\(\theta\)) < L(\(\hat{\theta}\))

  1. The MLE is popular because it18
    1. maximizes \(R^2\)
    2. minimizes the sum of squared errors
    3. has desirable sampling distribution properties
    4. maximizes both the likelihood and the log likelihood
    5. always exists

  1. MOM is popular because it:19
    1. has desirable sampling properties
    2. is often straightforward to compute
    3. always produces values inside the parameter space (e.g., in [0,1] for prob)
    4. always exists

  1. The Central Limit Theorem (CLT) says:20
    1. The sample average (statistic) converges to the true average (parameter)
    2. The sample average (statistic) converges to some point
    3. The distribution of the sample average (statistic) converges to a normal distribution
    4. The distribution of the sample average (statistic) converges to some distribution
    5. I have no idea what the CLT says

  1. A sampling distribution is21
    1. the true distribution of the data
    2. the estimated distribution of the data
    3. the distribution of the population
    4. the distribution of the statistic in repeated samples
    5. the distribution of the statistic from your one sample of data

  1. The distribution of a random variable can be uniquely determined by22
    1. the cdf: F(x)
    2. the pdf (pmf): f(x)
    3. the moment generating function (mgf), if it exists: \(\Psi(t) = E[e^{tX}]\)
    4. the mean and variance of the distribution
    5. more than one of the above (which ones??)

  1. A moment generating function23
    1. gives the probability of the RV at any value of X
    2. gives all theoretical moments of the distribution
    3. gives all sample moments of the data
    4. gives the cumulative probability of the RV at any value of X

  1. The sampling distribution is important because24
    1. it describes the behavior (distribution) of the statistic
    2. it describes the behavior (distribution) of the data
    3. it gives us the ability to measure the likelihood of the statistic or more extreme under particular settings (i.e. null)
    4. it gives us the ability to make inferences about the population parameter
    5. more than one of the above (which ones??)

  1. The following result: \(\frac{\sum_{i=1}^n (X_i - \overline{X})^2}{\sigma^2} \sim \chi^2_{n-1}\) allows us to isolate and conduct inference on what parameter?25
    1. \(\overline{X}\)
    2. \(s\)
    3. \(\mu\)
    4. \(\sigma^2\)
    5. \(\chi\)

  1. The following result: \(\frac{\overline{X} - \mu}{s/\sqrt{n}} \sim t_{n-1}\) allows us to isolate and conduct inference on what parameter?26
    1. \(\overline{X}\)
    2. \(s\)
    3. \(\mu\)
    4. \(\sigma^2\)
    5. \(\chi\)

  1. What would you expect the standard deviation of the t statistic to be?27
    1. a little bit less than 1
    2. 1
    3. a little bit more than 1
    4. unable to tell because it depends on the sample size and the variability of the data
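
A simulation check (n = 10 is an arbitrary choice): dividing by \(s\) instead of \(\sigma\) inflates the spread slightly.

```r
set.seed(3)
n <- 10
tstats <- replicate(10000, {
  x <- rnorm(n)                          # true mean 0, true sd 1
  (mean(x) - 0) / (sd(x) / sqrt(n))      # the t statistic
})
sd(tstats)                               # a bit more than 1
sqrt((n - 1) / (n - 3))                  # theoretical sd of a t with n - 1 df
```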

  1. You have a sample of size n = 50. You sample with replacement 1000 times to get 1000 bootstrap samples. What is the sample size of each bootstrap sample?28
    1. 50
    2. 1000

  1. You have a sample of size n = 50. You sample with replacement 1000 times to get 1000 bootstrap samples. How many bootstrap statistics will you have?29
    1. 50
    2. 1000

  1. The bootstrap distribution of \(\hat{\theta}\) is centered around the30
    1. population parameter
    2. sample statistic
    3. bootstrap statistic
    4. bootstrap parameter

  1. The bootstrap theory relies on31
    1. Resampling with replacement from the original sample.
    2. Resampling from the original sample, leaving one observation out each time (e.g., cross validation)
    3. Estimating the population using the sample.
    4. Permuting the data values within the sample.

  1. Bias of a statistic refers to32
    1. The difference between a statistic and the actual parameter
    2. Whether or not questions were worded fairly.
    3. The difference between a sampling distribution mean and the actual parameter.

  1. The mean of a sample is 22.5. The mean of 1000 bootstrapped samples is 22.491. The bias of the bootstrap mean is33
    1. -0.009
    2. -0.0045
    3. -0.09
    4. 0.009
    5. 0.09
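
A minimal bootstrap sketch tying the last several questions together (the "observed" sample below is simulated):

```r
# Resample the data with replacement (same size n), recompute the statistic,
# and compare the bootstrap mean to the original statistic.
set.seed(4)
x <- rexp(50, rate = 1 / 20)                           # stand-in for the observed sample
boot_means <- replicate(1000, mean(sample(x, replace = TRUE)))
mean(x)                        # original sample statistic
mean(boot_means)               # bootstrap distribution centers near the statistic
mean(boot_means) - mean(x)     # estimated bias = bootstrap average minus the statistic
sd(boot_means)                 # bootstrap SE
```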

  1. The following result: \(\frac{\sum_{i=1}^n (X_i - \overline{X})^2}{\sigma^2} \sim \chi^2_{n-1}\) allows us to isolate and conduct inference on what parameter?34
    1. \(\overline{X}\)
    2. \(s\)
    3. \(\mu\)
    4. \(\sigma^2\)
    5. \(\chi\)

  1. The following result: \(\frac{\overline{X} - \mu}{s/\sqrt{n}} \sim t_{n-1}\)
    allows us to isolate and conduct inference on what parameter?35
    1. \(\overline{X}\)
    2. \(s\)
    3. \(\mu\)
    4. \(\sigma^2\)
    5. \(\chi\)

  1. Consider an asymmetric confidence interval for \(\sigma\) which is derived using:
    \(P(c_1 \leq \frac{\sum_{i=1}^{n}(X_i - \overline{X})^2}{\sigma^2} \leq c_2) = 0.95\)
    The resulting 95% interval with the shortest width has:36
    1. \(c_1\) and \(c_2\) as the .025 & .975 quantiles
    2. \(c_1\) set to zero
    3. \(c_2\) set to infinity
    4. \(c_1\) and \(c_2\) as different quantiles than (a) but that contain .95 probability.
    5. Find \(c_1\) and let \(c_2 = -c_1\)
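
For reference, here is how the pivot turns into an interval when \(c_1\) and \(c_2\) are the usual equal-tail quantiles (option a); the data are simulated placeholders, and the shortest-width discussion is left to the question.

```r
set.seed(5)
x <- rnorm(15, mean = 10, sd = 3)
n <- length(x)
ss <- sum((x - mean(x))^2)
c1 <- qchisq(0.025, df = n - 1)
c2 <- qchisq(0.975, df = n - 1)
sqrt(c(lower = ss / c2, upper = ss / c1))   # 95% equal-tail interval for sigma
```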

  1. A 90% CI for the average number of chocolate chips in a Chips Ahoy cookie is: [3.7 chips, 17.2 chips]
    What is the correct interpretation?37
    1. There is a 0.9 prob that the true average number of chips is between 3.7 & 17.2.
    2. 90% of cookies have between 3.7 & 17.2 chips.
    3. We are 90% confident that in our sample, the sample average number of chips is between 3.7 and 17.2.
    4. In many repeated samples, 90% of sample averages will be between 3.7 and 17.2.
    5. In many repeated samples, 90% of intervals like this one will contain the true average number of chips.
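
The repeated-samples interpretation can be checked by simulation (the population mean, sd, and sample size below are invented):

```r
# Build a 90% t interval from each of many samples and count how often
# the intervals capture the true mean (here 10).
set.seed(6)
true_mu <- 10
covers <- replicate(5000, {
  x <- rnorm(25, mean = true_mu, sd = 4)
  ci <- mean(x) + c(-1, 1) * qt(0.95, df = 24) * sd(x) / sqrt(25)
  ci[1] <= true_mu && true_mu <= ci[2]
})
mean(covers)   # close to 0.90
```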

  1. A 90% CI for the average number of chocolate chips in a Chips Ahoy cookie: [3.9 chips, \(\infty\))
    What is the correct interpretation?38
    1. There is a 0.9 prob that the true average number of chips is bigger than 3.9
    2. 90% of cookies have more than 3.9 chips
    3. We are 90% confident that in our sample, the sample average number of chips is bigger than 3.9.
    4. In many repeated samples, 90% of sample averages will be bigger than 3.9
    5. In many repeated samples, 90% of intervals like this one will contain the true average number of chips.

  1. Consider a Bayesian posterior interval for \(\mu\) of the form: \(\overline{X} \pm t^*_{n-1} s / \sqrt{n}\)
    What was the prior on \(\mu\)?39
    1. N(0,0)
    2. N(\(\overline{X}\),0)
    3. N(0, 1/0)
    4. N(\(\overline{X}\),1/0)
    5. N(1/0, 0)

Some review questions:

  1. If we need to find the distribution of a function of one random variable, \(g(X)\), the easiest route is probably:40
    1. find the pdf
    2. find the cdf
    3. find the MGF
    4. find the expected value and variance

  1. If we need to find the distribution of a sum of random variables, the easiest route is probably:41
    1. find the pdf
    2. find the cdf
    3. find the MGF
    4. expected value and variance

  1. FREQUENTIST: consider the sampling distribution of \(\hat{\theta}.\) The parameters in the sampling distribution are given by:42
    1. the data
    2. the parameters from the likelihood
    3. the prior parameters
    4. the statistic
    5. \(\theta\)

  1. BAYESIAN: consider the posterior distribution of \(\theta | \underline{X}.\) The parameters in the posterior distribution are a function of:43
    1. the data
    2. the parameters from the likelihood
    3. the prior parameters
    4. the statistic
    5. \(\theta\)

  1. A sample of size 8 had a mean of 22.5. It was bootstrapped 1000 times and the mean of the bootstrap distribution was 22.491. The standard deviation of the bootstrap was 2.334. The 95% BS SE confidence interval for the population mean is44
    1. 22.491 \(\pm\) z(.975) * 2.334
    2. 22.491 \(\pm\) z(.95) * 2.334
    3. 22.5 \(\pm\) z(.975) * 2.334
    4. 22.5 \(\pm\) z(.95) * 2.334
    5. 22.5 \(\pm\) z(.975) * 2.334 / \(\sqrt{8}\)
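
Plugging the numbers from the question into R:

```r
# BS SE interval: centered at the original statistic (22.5), using the
# bootstrap standard deviation as the standard error.
22.5 + c(-1, 1) * qnorm(0.975) * 2.334
```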

  1. Which is most accurate?45
    1. A BS SE confidence interval
    2. A bootstrap-t confidence interval
    3. A bootstrap percentile interval
    4. A bootstrap BCa interval

  1. What is the primary reason to bootstrap a CI (instead of creating a CI from calculus)?46
    1. larger coverage probabilities
    2. narrower intervals
    3. more resistant to outliers
    4. can be done for statistics with unknown sampling distributions
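
For reference, the boot package (a recommended package that ships with R) will compute percentile and BCa intervals; the data below are placeholders:

```r
library(boot)
set.seed(7)
x <- rexp(50, rate = 1 / 20)
bmean <- function(d, i) mean(d[i])               # statistic(data, resample indices)
out <- boot(x, statistic = bmean, R = 2000)
boot.ci(out, conf = 0.95, type = c("perc", "bca"))   # percentile and BCa intervals
```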

  1. What does the Fisher Information tell us?47
    1. the variability of the MLE from sample to sample.
    2. the bias of the MLE from sample to sample.
    3. the variability of the data from sample to sample.
    4. the bias of the data from sample to sample.
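
A numeric sketch of the link between the information and the variability of the MLE, using exponential data as a placeholder (optim's numerically computed hessian plays the role of the observed information):

```r
set.seed(9)
x <- rexp(100, rate = 2)
negloglik <- function(theta) -sum(dexp(x, rate = theta, log = TRUE))
fit <- optim(1, negloglik, method = "Brent", lower = 0.01, upper = 20, hessian = TRUE)
fit$par                     # MLE of the rate
1 / fit$hessian             # approximate Var(MLE) = 1 / observed information
fit$par^2 / length(x)       # large-sample variance, theta_hat^2 / n, for comparison
```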

  1. Why do we care about the variability of the MLE?48
    1. determines whether MOM or MLE is better.
    2. determines whether Bayes’ estimator or MLE is better.
    3. determines how precise the estimator is.
    4. allows us to do inference (about the population value).

  1. Why do we care about the sampling distribution of the MLE?49
    1. determines whether MOM or MLE is better.
    2. determines whether Bayes’ estimator or MLE is better.
    3. determines how precise the estimator is.
    4. allows us to do inference (about the population value).

  1. Consider an estimator, \(\hat{\theta}\), such that \(E[\hat{\theta}] = m(\theta)\).
    \(\hat{\theta}\) is unbiased for \(\theta\) if:50
    1. \(m(\theta)\) is a function of \(\theta\).
    2. \(m(\theta)\) is NOT a function of \(\theta\).
    3. \(m(\theta)= \theta\).
    4. \(m(\theta)= 0\).
    5. \(m(\theta)\) is the expected value of \(\hat{\theta}\).

  1. If \(\hat{\theta}\) is unbiased, \(m'(\theta)\) is51
    1. zero
    2. one
    3. \(\theta\)
    4. \(\theta^2\)
    5. some other function of \(\theta\), depending on \(m(\theta)\)

  1. The MLE is52
    1. consistent
    2. efficient
    3. asymptotically normally distributed
    4. all of the above

  1. Why don’t we set up our test as: always reject \(H_0?\)53
    1. type I error too high
    2. type II error too high
    3. level of sig too high
    4. power too high

  1. Why do we care about the distribution of the test statistic?54
    1. Better estimator
    2. To find the rejection region / critical region
    3. To minimize the power
    4. Because we love the Central Limit Theorem

  1. Given a statistic T = r(X), how do we find a (good) test?55
    1. Maximize power when \(H_1\) is true
    2. Minimize type I error
    3. Control type I error
    4. Minimize type II error
    5. Control type II error

  1. We can find the probability of type II error (at a given \(\theta \in \Omega_1)\) as56
    1. a value of the power curve (at \(\theta)\)
    2. 1 – P(type I error at \(\theta)\)
    3. \(\pi(\theta | \delta)\)
    4. 1- \(\pi(\theta | \delta)\)
    5. we can’t ever find the probability of a type II error

  1. Why don’t we use the power function to also control the type II error?57 (We want the power to be big in \(\Omega_1\), so we’d control it by keeping the power from getting too small.)
    1. \(\inf_{\theta \in \Omega_1} \pi(\theta | \delta)\) does not exist
    2. \(\inf_{\theta \in \Omega_1} \pi(\theta | \delta)\) =0
    3. \(\inf_{\theta \in \Omega_1} \pi(\theta | \delta)\) = always really big
    4. \(\inf_{\theta \in \Omega_1} \pi(\theta | \delta)\) =1
    5. \(\inf_{\theta \in \Omega_1} \pi(\theta | \delta)\) = always really small

  1. With two simple hypotheses, hypothesis testing simplifies because we can now control (i.e., compute):58
    1. the size of the test.
    2. the power of the test.
    3. the probability of type I error.
    4. the probability of type II error.
    5. a rejection region.

  1. The likelihood ratio is super awesome because59
    1. it provides the test statistic
    2. it provides the critical region
    3. it provides the type I error
    4. it provides the type II error
    5. it provides the power

  1. A uniformly most powerful (UMP) test60
    1. has the highest possible power in \(\Omega_1\).
    2. has the lowest possible power in \(\Omega_1\).
    3. has the same power over all \(\theta \in \Omega_1\).
    4. has the highest possible power in \(\Omega_1\) subject to controlling \(\alpha(\delta).\)
    5. is a test we try to avoid.

  1. A monotone likelihood ratio statistic is awesome because61
    1. it is the MLE
    2. it is easy to compute
    3. its distribution is known
    4. it is unbiased
    5. it is monotonic with respect to the likelihood ratio

  1. Likelihood Ratio Test62
    1. gives a statistic for comparing likelihoods
    2. is always UMP
    3. works only with some types of hypotheses
    4. works only with hypotheses about one parameter
    5. gives the distribution of the test statistic

  1. Increasing sample size63
    1. Increases power (over \(\Omega_1\))
    2. Decreases power (over \(\Omega_1\))

  1. Making significance level more stringent (\(\alpha_0\) smaller)64
    1. Increases power (over \(\Omega_1\))
    2. Decreases power (over \(\Omega_1\))

  1. A more extreme alternative is true65
    1. Increases power (over \(\Omega_1\))
    2. Decreases power (over \(\Omega_1\))
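
The three power questions above can be checked with power.t.test (two-sample t test by default; the sample sizes, effect sizes, and sd are arbitrary):

```r
power.t.test(n = 20, delta = 0.5, sd = 1, sig.level = 0.05)$power   # baseline
power.t.test(n = 80, delta = 0.5, sd = 1, sig.level = 0.05)$power   # larger n: more power
power.t.test(n = 20, delta = 0.5, sd = 1, sig.level = 0.01)$power   # smaller alpha: less power
power.t.test(n = 20, delta = 1.0, sd = 1, sig.level = 0.05)$power   # bigger true difference: more power
```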

  1. Suppose \(H_1: \mu_1 - \mu_2 \ne 0\) is TRUE. If you consider 100 CIs (for \(\mu_1 - \mu_2\)), the power of the test can be approximated by:66
    1. The proportion that contain the true mean.
    2. The proportion that do not contain the true mean.
    3. The proportion that contain zero.
    4. The proportion that do not contain zero.

  1. It is hard to find the power associated with the t-test because:67
    1. the non-central t-distribution is tricky.
    2. two-sided power is difficult to find.
    3. we don’t know the variance.
    4. the t-distribution isn’t integrable.
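
A sketch of why the noncentral t shows up: the power of a one-sided, one-sample t test computed directly from pt() with a noncentrality parameter (n, the effect size, and the sd are placeholders):

```r
n <- 20; delta <- 0.5; sdev <- 1; alpha <- 0.05
ncp  <- delta / (sdev / sqrt(n))               # noncentrality parameter
crit <- qt(1 - alpha, df = n - 1)              # one-sided rejection cutoff
pt(crit, df = n - 1, ncp = ncp, lower.tail = FALSE)   # power under this alternative
```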

  1. Consider the likelihood ratio statistic: \[\Lambda(x) = \frac{\sup_{\Omega_1} f(\underline{x} | \theta)}{\sup_{\Omega_0} f(\underline{x} | \theta)}\] Why do we assume that the MLE maximizes the numerator?68
    1. The MLE is always in the alternative space.
    2. The MLE is always in the null space.
    3. If the MLE is in the alternative space, we won’t reject \(H_0\).
    4. If the MLE is in the null space, we won’t reject \(H_0\).
    5. If the MLE is in the alternative space, we will reject \(H_0\).

  1. Consider the likelihood ratio statistic:69 \[\Lambda(x) = \frac{\sup_{\Omega_1} f(\underline{x} | \theta)}{\sup_{\Omega_0} f(\underline{x} | \theta)}\] Which of the following always holds?
    1. \(\Lambda(x) \geq 1\)
    2. \(\Lambda(x) \leq 1\)
    3. \(\Lambda(x) \geq 0\)
    4. \(\Lambda(x) \leq 0\)
    5. bounds on \(\Lambda(x)\) depend on hypotheses

  1. When using the chi-square goodness of fit test, the smaller the value of the chi-square test statistic, the more likely we are to reject the null hypothesis.70
    1. True
    2. False

  1. A chi-square test is71
    1. one-sided, and we only consider the upper end of the sampling distribution
    2. one-sided, and we consider both ends of the sampling distribution
    3. two-sided, and we only consider the upper end of the sampling distribution
    4. two-sided, and we consider both ends of the sampling distribution
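
A small goodness-of-fit example (the counts and null proportions are made up); only large values of the statistic, in the upper tail, count as evidence against \(H_0\):

```r
observed <- c(18, 22, 31, 29)
p0 <- c(0.2, 0.2, 0.3, 0.3)                      # hypothesized category probabilities
chisq.test(observed, p = p0)
# Equivalently, the p-value is the upper tail of a chi-square with k - 1 df:
expected <- sum(observed) * p0
stat <- sum((observed - expected)^2 / expected)
pchisq(stat, df = length(observed) - 1, lower.tail = FALSE)
```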

  1. To test whether the data are Poisson, why can’t we use the Poisson likelihood instead of the multinomial?72
    1. Likelihood under \(H_0\) is too hard to write down
    2. Likelihood under \(H_1\) is too hard to write down
    3. Don’t know the distribution of the corresponding test statistic
    4. Don’t have any data to use

  1. The \(\chi^2\) test statistic is being used to test whether the assumption of normality is reasonable for a given population distribution. The sample consists of 5000 observations and is divided into 6 categories (intervals). What are the degrees of freedom associated with the test statistic?73
    1. 4999
    2. 6
    3. 5
    4. 4
    5. 3

  1. For a chi-square test for independence, the null hypothesis states that the two variables74
    1. are mutually exclusive.
    2. form a contingency table with r rows and c columns.
    3. have (r –1) and (c –1) degrees of freedom where r and c are the number of rows and columns, respectively.
    4. are statistically independent.
    5. are normally distributed.
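
And a test of independence on a made-up 2 x 3 table:

```r
# H0: the row and column variables are independent.
tab <- matrix(c(20, 15, 25,
                30, 18, 12), nrow = 2, byrow = TRUE)
chisq.test(tab)          # df = (r - 1)(c - 1) = 1 * 2 = 2
```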

  1. You read a paper where a chi-square test produces a p-value of 0.999 (not 0.001). You think:75
    1. \(H_0\) is definitely true
    2. \(H_0\) is definitely not true
    3. The authors’ hypothesis is in the wrong direction.
    4. Maybe they falsified their data?

Footnotes

    1. The distribution of the sample average (statistic) converges to a normal distribution
    ↩︎
    1. Sort of likely the Green Cab company
    ↩︎
    1. \(f( x | \theta ) = P(X = x | \theta)\)
    ↩︎
    1. integrate the joint distribution with respect to Y.
    ↩︎
    1. Integrate to 1 (\(dx\))
    ↩︎
    1. has support on [0,1]
    ↩︎
    1. prior = marginal & posterior = conditional
    ↩︎
  1. Both (b) \(\xi(\theta | \underline{X}) \sim\) Beta (4,12) and (c) \(\xi(\theta | \underline{X}) \propto\) Beta (4,12) are incorrect: (b) because the object to the left of \(\sim\) must be a random variable, and (c) because the object to the right of \(\propto\) must be a function.↩︎

    1. 1/[\(2^{k/2} \Gamma(k/2)\)]
    ↩︎
    1. gamma
    ↩︎
    1. it doesn’t integrate to one.
    ↩︎
  2. \(\mu_1 = \frac{\sigma^2 \mu_0 + n \nu_0^2 \overline{X}}{\sigma^2 + n \nu_0^2}\)↩︎

    1. some of the above (the Bayes estimator is the posterior mean itself; it is sensitive to the other three.)
    ↩︎
    1. the data
    ↩︎
    1. theta
    ↩︎
  3. with respect to \(\theta\)↩︎

  4. L(\(\hat{\theta}\)) < L(\(\theta\))↩︎

    1. has desirable sampling distribution properties and (d) maximizes both the likelihood and the log likelihood (although (c) is really the reason it is popular)
    ↩︎
    1. is often straightforward to compute (it does not always exist, e.g., for the Cauchy, and it does not always produce estimates inside the parameter space.)
    ↩︎
    1. The distribution of the sample average (statistic) converges to a normal distribution
    ↩︎
    1. the distribution of the statistic in repeated samples
    ↩︎
    1. the cdf, the pdf/pmf, and the mgf
    ↩︎
    1. gives all theoretical moments of the distribution
    ↩︎
  5. (e): (a), (c), (d)↩︎

    1. \(\sigma^2\) (the first two are statistics, not parameters, we can’t isolate \(\mu\) because it isn’t involved, and \(\chi\) also isn’t a parameter)
    ↩︎
    1. \(\mu\) (the first two are statistics, not parameters, we can’t isolate \(\sigma^2\) because it isn’t involved, and \(\chi\) also isn’t a parameter)
    ↩︎
    1. a little bit more than 1 (dividing by \(s\) instead of \(\sigma\) adds variability to the distribution)
    ↩︎
    1. 50 observations in each bootstrap sample
    ↩︎
    1. 1000
    ↩︎
    1. the sample statistic
    ↩︎
    1. Resampling with replacement from the original sample. Although I suppose (c) is also true.
    ↩︎
    1. The difference between a sampling distribution mean and the actual parameter.
    ↩︎
    1. -0.009. Bias is what the statistic is (on average) minus the true value. Recall that we are using the sample as a proxy for the population, so the "truth" here is the sample. In the bootstrap setting, the average is taken over the bootstrapped values and the "true value" is the original sample mean.
    ↩︎
    1. \(\sigma^2\) (the first two are statistics, not parameters, we can’t isolate \(\mu\) because it isn’t involved, and \(\chi\) also isn’t a parameter)
    ↩︎
    1. \(\mu\) (the first two are statistics, not parameters, we can’t isolate \(\sigma^2\) because it isn’t involved, and \(\chi\) also isn’t a parameter)
    ↩︎
    1. \(c_2\) set to infinity
    ↩︎
    1. In many repeated samples, 90% of intervals like this one will contain the true average number of chips.
    ↩︎
    1. In many repeated samples, 90% of intervals like this one will contain the true average number of chips.
    ↩︎
    1. N(0,1/0). Or rather, to get the frequentist result, you need the joint improper priors to have \(\mu_0 = \lambda_0 = \beta_0 = 0\) and \(\alpha_0 = -1/2\).
    ↩︎
    1. The MGF is usually easiest if g is a linear combination. If not, you might need (b), finding the cdf: you find the cdf to get the pdf, which you may need in order to identify the distribution. (Note: a distribution cannot be identified from only its first two moments, so not (d).)
    ↩︎
    1. find the MGF (note: can’t identify a distribution using only the first two moments, (d))
    ↩︎
    1. the parameters from the likelihood
    ↩︎
    1. the data and (c) the prior parameters
    ↩︎
    1. 22.5 \(\pm\) z(.975) * 2.334
    ↩︎
    1. A bootstrap BCa interval (although out of the ones we’ve covered, (b) A bootstrap-t confidence interval is most accurate)
    ↩︎
    1. can be done for statistics with unknown sampling distributions
    ↩︎
    1. the variability of the MLE from sample to sample.
    ↩︎
    1. determines how precise the estimator is.
    ↩︎
    1. allows us to do inference (about the population value).
    ↩︎
    1. \(m(\theta)= \theta\).
    ↩︎
    1. one
    ↩︎
    1. all of the above
    ↩︎
    1. type I error too high
    ↩︎
    1. To find the rejection region / critical region
    ↩︎
    1. Control type I error
    ↩︎
    1. 1- \(\pi(\theta | \delta)\)
    ↩︎
    1. \(\inf_{\theta \in \Omega_1} \pi(\theta | \delta)\) = always really small
    ↩︎
    1. the power of the test, or (d) the probability of type II error (they are functions of one another).
    ↩︎
    1. it provides the test statistic
    ↩︎
    1. has the highest possible power in \(\Omega_1\) subject to controlling \(\alpha(\delta).\)
    ↩︎
    1. it is monotonic with respect to the likelihood ratio
    ↩︎
    1. gives the distribution of the test statistic
    ↩︎
    1. Increases power (over \(\Omega_1\))
    ↩︎
    1. Decreases power (over \(\Omega_1\))
    ↩︎
    1. Increases power (over \(\Omega_1\))
    ↩︎
    1. The proportion that do not contain zero.
    ↩︎
    1. the non-central t-distribution is tricky.
    ↩︎
    1. If the MLE is in the null space, we won’t reject \(H_0\).
    ↩︎
  6. \(\Lambda(x) \geq 1\)↩︎

    1. False
    ↩︎
    1. two-sided, and we only consider the upper end of the sampling distribution
    ↩︎
    1. Likelihood under \(H_1\) is too hard to write down (what likelihood would we use for the situation of "not Poisson"?)
    ↩︎
    1. 5
    ↩︎
    1. are statistically independent.
    ↩︎
    1. Maybe they falsified their data?
    ↩︎
