Douglas C. Montgomery - Introduction to Linear Regression Analysis


Introduction to Linear Regression Analysis: summary and description


A comprehensive and current introduction to the fundamentals of regression analysis, Introduction to Linear Regression Analysis, 6th Edition, skillfully blends theory and application in both the conventional and less common uses of regression analysis in today's cutting-edge scientific research. The text equips readers to understand the basic principles needed to apply regression model-building techniques in various fields of study, including engineering, management, and the health sciences. The new edition focuses on four key areas of improvement over the fifth edition:
new exercises and data sets;
new material on generalized regression techniques;
the inclusion of JMP software in key areas;
careful condensing of the text where possible.

Introduction to Linear Regression Analysis — excerpt


2.21 Consider the wine quality of young red wines data in Table B.19. The winemakers believe that the sulfur content has a negative impact on the taste (and thus the overall quality) of the wine. Perform a thorough analysis of these data. Do the data support the winemakers’ belief?

2.22 Consider the methanol oxidation data in Table B.20. The chemist believes that the ratio of inlet oxygen to inlet methanol controls the conversion process. Perform a thorough analysis of these data. Do the data support the chemist’s belief?

2.23 Consider the simple linear regression model y = 50 + 10x + ε, where ε is NID(0, 16). Suppose that n = 20 pairs of observations are used to fit this model. Generate 500 samples of 20 observations, drawing one observation for each level of x = 1, 1.5, 2, …, 10 for each sample.
a. For each sample, compute the least-squares estimates of the slope and intercept. Construct histograms of these estimates. Discuss the shape of these histograms.
b. For each sample, compute an estimate of E(y|x = 5). Construct a histogram of the estimates you obtained. Discuss the shape of the histogram.
c. For each sample, compute a 95% CI on the slope. How many of these intervals contain the true value β1 = 10? Is this what you would expect?
d. For each estimate of E(y|x = 5) in part b, compute the 95% CI. How many of these intervals contain the true value E(y|x = 5) = 100? Is this what you would expect?
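For readers who want to run this simulation, here is a minimal Python sketch of the Monte Carlo study, using NumPy. The variable names, seed, and hard-coded t critical value are my own choices, not the book's; note also that the stated levels x = 1, 1.5, …, 10 give 19 design points.

```python
# Monte Carlo sketch for Problem 2.23 (a sketch, not the book's solution).
import numpy as np

rng = np.random.default_rng(2023)
x = np.arange(1.0, 10.5, 0.5)          # levels x = 1, 1.5, ..., 10 (19 points)
n = len(x)
beta0, beta1, sigma = 50.0, 10.0, 4.0  # true model; Var(eps) = 16
t_crit = 2.110                         # approx t_{0.975, 17} for n - 2 = 17 df

Sxx = np.sum((x - x.mean()) ** 2)
slopes, intercepts, fits_at_5 = [], [], []
slope_covered = 0

for _ in range(500):
    y = beta0 + beta1 * x + rng.normal(0.0, sigma, size=n)
    b1 = np.sum((x - x.mean()) * (y - y.mean())) / Sxx   # least-squares slope
    b0 = y.mean() - b1 * x.mean()                        # least-squares intercept
    mse = np.sum((y - b0 - b1 * x) ** 2) / (n - 2)
    half_width = t_crit * np.sqrt(mse / Sxx)             # part c: 95% CI on slope
    slope_covered += (b1 - half_width <= beta1 <= b1 + half_width)
    slopes.append(b1)
    intercepts.append(b0)
    fits_at_5.append(b0 + 5.0 * b1)                      # part b: estimate of E(y|x=5)

coverage = slope_covered / 500
print(f"95% slope CI empirical coverage: {coverage:.3f}")
# Histograms for parts a and b can be drawn with matplotlib.pyplot.hist.
```

The empirical coverage should come out near 0.95, and the histograms of the estimates should look approximately normal, as the sampling theory in the chapter predicts.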

2.24 Repeat Problem 2.23 using only 10 observations in each sample, drawing one observation from each level x = 1, 2, 3, …, 10. What impact does using n = 10 have on the questions asked in Problem 2.23? Compare the lengths of the CIs and the appearance of the histograms.

2.25 Consider the simple linear regression model y = β0 + β1x + ε, with E(ε) = 0, Var(ε) = σ2, and the errors ε uncorrelated.
a. Show that [equation missing from excerpt].
b. Show that [equation missing from excerpt].

2.26 Consider the simple linear regression model y = β0 + β1x + ε, with E(ε) = 0, Var(ε) = σ2, and the errors ε uncorrelated.
a. Show that [equation missing from excerpt].
b. Show that E(MSRes) = σ2.

2.27 Suppose that we have fit the straight-line regression model relating y to x1, but the response is also affected by a second variable x2 such that the true regression function is E(y) = β0 + β1x1 + β2x2.
a. Is the least-squares estimator of the slope in the original simple linear regression model unbiased?
b. Show the bias in the least-squares slope estimator.

2.28 Consider the maximum-likelihood estimator of σ2 in the simple linear regression model. We know that this estimator is biased for σ2.
a. Show the amount of bias in this estimator.
b. What happens to the bias as the sample size n becomes large?

2.29 Suppose that we are fitting a straight line and wish to make the standard error of the slope as small as possible. Suppose that the “region of interest” for x is −1 ≤ x ≤ 1. Where should the observations x1, x2, …, xn be taken? Discuss the practical aspects of this data collection plan.

2.30 Consider the data in Problem 2.12 and assume that steam usage and average temperature are jointly normally distributed.
a. Find the correlation between steam usage and monthly average ambient temperature.
b. Test the hypothesis that ρ = 0.
c. Test the hypothesis that ρ = 0.5.
d. Find a 99% CI for ρ.
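The correlation inference asked for in parts b–d (and again in Problems 2.34 and 2.37) follows a standard recipe: a t test for ρ = 0, and Fisher's z-transformation for hypotheses and CIs at other values of ρ. The sketch below uses synthetic stand-in data, since the steam data of Problem 2.12 are not reproduced in this excerpt; substitute the actual observations.

```python
# Sketch of correlation inference via the t test and Fisher's z-transformation.
# The data below are made-up stand-ins, not the Problem 2.12 data.
import math

import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=40)
y = 0.8 * x + rng.normal(scale=0.6, size=40)   # a correlated pair of variables
n = len(x)

r = np.corrcoef(x, y)[0, 1]                    # a. sample correlation

# b. Test H0: rho = 0 with t0 = r*sqrt(n-2)/sqrt(1-r^2), compared to t_{alpha/2, n-2}
t0 = r * math.sqrt(n - 2) / math.sqrt(1 - r ** 2)

# c. Test H0: rho = 0.5 via z = arctanh(r), approximately N(arctanh(rho), 1/(n-3))
z0 = (math.atanh(r) - math.atanh(0.5)) * math.sqrt(n - 3)

# d. CI for rho: margin on the z scale, then back-transform with tanh
z_crit = 2.576                                  # z_{0.995} for a 99% interval
lo = math.tanh(math.atanh(r) - z_crit / math.sqrt(n - 3))
hi = math.tanh(math.atanh(r) + z_crit / math.sqrt(n - 3))
print(f"r = {r:.3f}, t0 = {t0:.2f}, z0 = {z0:.2f}, 99% CI = ({lo:.3f}, {hi:.3f})")
```

Back-transforming the endpoints with tanh keeps the interval inside (−1, 1), which a naive symmetric interval around r would not guarantee.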

2.31 Prove that the maximum value of R2 is less than 1 if the data contain repeated (different) observations on y at the same value of x.

2.32 Consider the simple linear regression model y = β0 + β1x + ε, where the intercept β0 is known.
a. Find the least-squares estimator of β1 for this model. Does this answer seem reasonable?
b. What is the variance of the least-squares estimator of the slope found in part a?
c. Find a 100(1 − α) percent CI for β1. Is this interval narrower than the corresponding interval for the case where both slope and intercept are unknown?

2.33 Consider the least-squares residuals ei = yi − ŷi, i = 1, 2, …, n, from the simple linear regression model. Find the variance of the residuals, Var(ei). Is the variance of the residuals a constant? Discuss.

2.34 Consider the baseball regression model from Section 2.8 and assume that wins and ERA are jointly normally distributed.
a. Find the correlation between wins and team ERA.
b. Test the hypothesis that ρ = 0.
c. Test the hypothesis that ρ = 0.5.
d. Find a 95% CI for ρ.

2.35 Consider the baseball data in Table B.22. Fit a regression model to team wins using total runs scored as the predictor. How does that model compare to the one developed in Section 2.8 using team ERA as the predictor?

2.36 Table B.24 contains data on median family home rental price and other data for 51 US cities. Fit a linear regression model using the median home rental price as the response variable and the median price per square foot as the predictor variable.
a. Test for significance of regression.
b. Find a 95% CI on the slope in this model.
c. Does this predictor do an adequate job of explaining the variability in home rental prices?
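As a template for parts a–c, here is a sketch of the fit, significance test, and slope CI with made-up stand-in numbers (Table B.24 itself is not reproduced in this excerpt); the actual rental-price data should be substituted.

```python
# Simple linear regression workflow: fit, t test for significance, CI on slope.
# The ten (price per sq ft, rent) pairs below are illustrative stand-ins only.
import math

import numpy as np

sqft_price = np.array([0.9, 1.1, 1.3, 1.5, 1.8, 2.0, 2.4, 2.7, 3.0, 3.5])
rent = np.array([950, 1040, 1210, 1350, 1490, 1620, 1900, 2080, 2300, 2700])
n = len(rent)

Sxx = np.sum((sqft_price - sqft_price.mean()) ** 2)
Sxy = np.sum((sqft_price - sqft_price.mean()) * (rent - rent.mean()))
b1 = Sxy / Sxx                                  # least-squares slope
b0 = rent.mean() - b1 * sqft_price.mean()       # least-squares intercept

resid = rent - (b0 + b1 * sqft_price)
mse = np.sum(resid ** 2) / (n - 2)              # residual mean square

# a. Significance of regression: t0 = b1 / se(b1), compared to t_{alpha/2, n-2}
se_b1 = math.sqrt(mse / Sxx)
t0 = b1 / se_b1

# b. 95% CI on the slope; t_{0.975, 8} = 2.306 for these n - 2 = 8 df
t_crit = 2.306
ci = (b1 - t_crit * se_b1, b1 + t_crit * se_b1)

# c. R^2 as a rough measure of how much variability the predictor explains
r2 = 1 - np.sum(resid ** 2) / np.sum((rent - rent.mean()) ** 2)
print(f"b1 = {b1:.1f}, t0 = {t0:.2f}, 95% CI = ({ci[0]:.1f}, {ci[1]:.1f}), R2 = {r2:.3f}")
```

With 51 cities the degrees of freedom would be 49, so the critical value t_{0.975, 49} would replace the 2.306 used here.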

2.37 Consider the rental price data in Table B.24. Assume that median home rental price and median price per square foot are jointly normally distributed.
a. Find the correlation between home rental price and home price per square foot.
b. Test the hypothesis that ρ = 0.
c. Test the hypothesis that ρ = 0.5.
d. Find a 95% CI for ρ.

2.38 You have fit a linear regression model to a sample of 20 observations. The total sum of squares is 100 and the regression sum of squares is 80. The estimate of the error variance is
a. 1.5
b. 1.2
c. 2.0
d. 1.88
e. None of the above.

2.39 You have fit a simple linear regression model to a sample of 25 observations. The value of the t-statistic for testing that the slope is zero is 2.75. An upper bound on the P-value for this test is
a. 0.05
b. 0.025
c. 0.01
d. None of the above.

2.40 A linear regression model with an intercept term will always pass through the centroid of the data.
a. True
b. False

2.41 The variance of the predicted response in a linear regression model is a minimum at the average value of the predictor variable.
a. True
b. False

2.42 The confidence interval on the mean response at a particular value of the predictor variable is always wider than the prediction interval on a new observation at the same point.
a. True
b. False

2.43 The method of least squares ensures that the estimators of the slope and intercept in a linear regression model are best linear unbiased estimators.
a. True
b. False

2.44 For any simple linear regression model that has an intercept, the sum of the residuals is always zero.
a. True
b. False

End of the excerpt.

The text is provided by LitRes.

