Description: Optimal Unbiased Estimation of Variance Components by James D. Malley
Format: Paperback
Language: English
Condition: Brand New
Publisher Description: "The clearest way into the Universe is through a forest wilderness." John Muir. As recently as 1970 the problem of obtaining optimal estimates for variance components in a mixed linear model with unbalanced data was considered a miasma of competing, generally weakly motivated estimators, with few firm guidelines and many simple, compelling but unanswered questions. Then in 1971 two significant beachheads were secured: the results of Rao [1971a, 1971b] and his MINQUE estimators, and, related to these but not originally derived from them, the results of Seely [1971], obtained as part of his introduction of the notion of quadratic subspace into the literature of variance component estimation. These two approaches were ultimately shown to be intimately related by Pukelsheim [1976], who used a linear model for the components given by Mitra [1970] and, in so doing, provided a mathematical framework for estimation which permitted the immediate application of many of the familiar Gauss-Markov results, methods which had earlier been so successful in the estimation of the parameters in a linear model with only fixed effects. Moreover, this usually enormous linear model for the components can be displayed as the starting point for many of the popular variance component estimation techniques, thereby unifying the subject in addition to generating answers.
Author Biography: James D. Malley is a Research Mathematical Statistician in the Mathematical and Statistical Computing Laboratory, Division of Computational Bioscience, Center for Information Technology, at the National Institutes of Health.
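For readers unfamiliar with the topic, the "linear model for the components" mentioned in the publisher description is commonly presented along the following lines. This is a hedged sketch in standard textbook notation (the symbols y, X, beta, U_i, b_i, sigma_i^2 are assumed here, not quoted from the book), with the fixed effects treated as known for simplicity:

% Sketch of the standard mixed linear model and the Mitra/Pukelsheim-style
% linearization alluded to above (standard notation, assumed, not quoted from the book).
\[
  y = X\beta + \sum_{i=1}^{k} U_i b_i, \qquad
  \operatorname{E}(b_i) = 0, \qquad
  \operatorname{Cov}(b_i) = \sigma_i^2 I,
\]
\[
  \operatorname{Cov}(y) = \sum_{i=1}^{k} \sigma_i^2 V_i, \qquad
  V_i = U_i U_i^{\mathsf T}.
\]
% Applying vec to the matrix of squares and cross-products of the (centered)
% observations yields a model that is linear in the unknown components:
\[
  \operatorname{E}\!\left[\operatorname{vec}\!\left\{(y - X\beta)(y - X\beta)^{\mathsf T}\right\}\right]
  = \sum_{i=1}^{k} \sigma_i^2 \operatorname{vec}(V_i),
\]
% so familiar Gauss-Markov (least squares) machinery can be brought to bear
% on the estimation of sigma_1^2, ..., sigma_k^2.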
Table of Contents: One: The Basic Model and the Estimation Problem.- 1.1 Introduction.- 1.2 An Example.- 1.3 The Matrix Formulation.- 1.4 The Estimation Criteria.- 1.5 Properties of the Criteria.- 1.6 Selection of Estimation Criteria.- Two: Basic Linear Technique.- 2.1 Introduction.- 2.2 The vec and mat Operators.- 2.3 Useful Properties of the Operators.- Three: Linearization of the Basic Model.- 3.1 Introduction.- 3.2 The First Linearization.- 3.3 Calculation of var(y).- 3.4 The Second Linearization of the Basic Model.- 3.5 Additional Details of the Linearizations.- Four: The Ordinary Least Squares Estimates.- 4.1 Introduction.- 4.2 The Ordinary Least Squares Estimates: Calculation.- 4.3 The Inner Structure of the Linearization.- 4.4 Estimable Functions of the Components.- 4.5 Further OLS Facts.- Five: The Seely-Zyskind Results.- 5.1 Introduction.- 5.2 The General Gauss-Markov Theorem: Some History and Motivation.- 5.3 The General Gauss-Markov Theorem: Preliminaries.- 5.4 The General Gauss-Markov Theorem: Statement and Proof.- 5.5 The Zyskind Version of the Gauss-Markov Theorem.- 5.6 The Seely Condition for Optimal Unbiased Estimation.- Six: The General Solution to Optimal Unbiased Estimation.- 6.1 Introduction.- 6.2 A Full Statement of the Problem.- 6.3 The Lehmann-Scheffé Result.- 6.4 The Two Types of Closure.- 6.5 The General Solution.- 6.6 An Example.- Seven: Background from Algebra.- 7.1 Introduction.- 7.2 Groups, Rings, Fields.- 7.3 Subrings and Ideals.- 7.4 Products in Jordan Rings.- 7.5 Idempotent and Nilpotent Elements.- 7.6 The Radical of an Associative or Jordan Algebra.- 7.7 Quadratic Ideals in Jordan Algebras.- Eight: The Structure of Semisimple Associative and Jordan Algebras.- 8.1 Introduction.- 8.2 The First Structure Theorem.- 8.3 Simple Jordan Algebras.- 8.4 Simple Associative Algebras.- Nine: The Algebraic Structure of Variance Components.- 9.1 Introduction.- 9.2 The Structure of the Space of Optimal Kernels.- 9.3 The Two Algebras Generated by Sp(σ²).- 9.4 Quadratic Ideals in Sp(σ²).- 9.5 Further Properties of the Space of Optimal Kernels.- 9.6 The Case of Sp(σ²) Commutative.- 9.7 Examples of Mixed Model Structure Calculations: The Partially Balanced Incomplete Block Designs.- Ten: Statistical Consequences of the Algebraic Structure Theory.- 10.1 Introduction.- 10.2 The Jordan Decomposition of an Optimal Unbiased Estimate.- 10.3 Non-Negative Unbiased Estimation.- Concluding Remarks.- References.
Promotional: Springer Book Archives
Details:
ISBN: 0387964495
Author: James D. Malley
Short Title: OPTIMAL UNBIASED ESTIMATION OF
Pages: 146
Language: English
ISBN-10: 0387964495
ISBN-13: 9780387964492
Media: Book
Format: Paperback
Series Number: 39
Year: 1986
Imprint: Springer-Verlag New York Inc.
Place of Publication: New York, NY
Country of Publication: United States
Illustrations: 1 illustration, black and white; X, 146 p. 1 illus.
DOI: 10.1604/9780387964492; 10.1007/978-1-4615-7554-2
AU Release Date: 1986-12-01
NZ Release Date: 1986-12-01
US Release Date: 1986-12-01
UK Release Date: 1986-12-01
Publisher: Springer-Verlag New York Inc.
Edition Description: Softcover reprint of the original 1st ed. 1986
Series: Lecture Notes in Statistics
Publication Date: 1986-12-01
DEWEY: 519.544
Audience: Professional & Vocational
We've got this: At The Nile, if you're looking for it, we've got it. With fast shipping, low prices, friendly service and well over a million items, you're bound to find what you want, at a price you'll love!
TheNile_Item_ID: 96394813
Price: 115.04 AUD
Location: Melbourne
End Time: 2024-12-27T01:21:03.000Z
Shipping Cost: 9.74 AUD
Item Specifics
Restocking fee: No
Return shipping will be paid by: Buyer
Returns Accepted: Returns Accepted
Item must be returned within: 30 Days
ISBN-13: 9780387964492
Book Title: Optimal Unbiased Estimation of Variance Components
Number of Pages: 146 Pages
Publication Name: Optimal Unbiased Estimation of Variance Components
Language: English
Publisher: Springer-Verlag New York Inc.
Item Height: 244 mm
Subject: Mathematics
Publication Year: 1986
Type: Textbook
Item Weight: 289 g
Author: James D. Malley
Item Width: 170 mm
Format: Paperback