RISS Academic Research Information Service


      Statistics using IBM SPSS : an integrative approach

      https://www.riss.kr/link?id=M14455448

      • Authors

        Sharon Lawner Weinberg; Sarah Knapp Abramowitz

      • Publication

        New York : Cambridge University Press, 2015

      • Year of Publication

        2015

      • Language

        English

      • Subject Headings
      • DDC

        519.50285 (DDC 23rd ed.)

      • ISBN

        9781107461222 (Paperback)
        1107461227 (Paperback)

      • Material Type

        Monograph

      • Country of Publication

        United States

      • Title/Author Statement

        Statistics using IBM SPSS : an integrative approach / Sharon Lawner Weinberg, Sarah Knapp Abramowitz

      • Edition

        Third edition

      • Physical Description

        xix, 606 pages : illustrations ; 26 cm

      • General Notes

        Previous edition: Statistics using SPSS : an integrative approach / Sharon L. Weinberg, Sarah Knapp Abramowitz (Cambridge University Press, 2008)
        Includes bibliographical references (pages 588-591) and index

      • Holding Institutions
        • National Library of Korea
        • Pusan National University Central Library

      Additional Information

      Table of Contents

      • CONTENTS
      • Preface = xv
      • Acknowledgments = xix
      • 1 Introduction = 1
      • The Role of the Computer in Data Analysis = 1
      • Statistics : Descriptive and Inferential = 2
      • Variables and Constants = 3
      • The Measurement of Variables = 3
      • Discrete and Continuous Variables = 8
      • Setting a Context with Real Data = 11
      • Exercises = 12
      • 2 Examining Univariate Distributions = 20
      • Counting the Occurrence of Data Values = 20
      • When Variables Are Measured at the Nominal Level = 20
      • Frequency and Percent Distribution Tables = 20
      • Bar Graphs = 21
      • Pie Graphs = 23
      • When Variables Are Measured at the Ordinal, Interval, or Ratio Level = 25
      • Frequency and Percent Distribution Tables = 25
      • Stem-and-Leaf Displays = 27
      • Histograms = 30
      • Line Graphs = 33
      • Describing the Shape of a Distribution = 36
      • Accumulating Data = 38
      • Cumulative Percent Distributions = 38
      • Ogive Curves = 38
      • Percentile Ranks = 39
      • Percentiles = 40
      • Five-Number Summaries and Boxplots = 43
      • Summary of Graphical Selection = 49
      • Exercises = 49
      • 3 Measures of Location, Spread, and Skewness = 65
      • Characterizing the Location of a Distribution = 65
      • The Mode = 65
      • The Median = 69
      • The Arithmetic Mean = 70
      • Interpreting the Mean of a Dichotomous Variable = 72
      • The Weighted Mean = 73
      • Comparing the Mode, Median, and Mean = 74
      • Characterizing the Spread of a Distribution = 76
      • The Range and Interquartile Range = 79
      • The Variance = 80
      • The Standard Deviation = 83
      • Characterizing the Skewness of a Distribution = 84
      • Selecting Measures of Location and Spread = 86
      • Applying What We Have Learned = 86
      • Exercises = 90
      • 4 Re-expressing Variables = 99
      • Linear and Nonlinear Transformations = 99
      • Linear Transformations : Addition, Subtraction, Multiplication, and Division = 100
      • The Effect on the Shape of a Distribution = 101
      • The Effect on Summary Statistics of a Distribution = 102
      • Common Linear Transformations = 106
      • Standard Scores = 107
      • z-Scores = 109
      • Using z-Scores to Detect Outliers = 111
      • Using z-Scores to Compare Scores in Different Distributions = 112
      • Relating z-Scores to Percentile Ranks = 115
      • Nonlinear Transformations : Square Roots and Logarithms = 115
      • Nonlinear Transformations : Ranking Variables = 123
      • Other Transformations : Recoding and Combining Variables = 124
      • Recoding Variables = 124
      • Combining Variables = 126
      • Data Management Fundamentals – The Syntax File = 126
      • Exercises = 130
      • 5 Exploring Relationships between Two Variables = 138
      • When Both Variables Are at Least Interval-Leveled = 138
      • Scatterplots = 139
      • The Pearson Product Moment Correlation Coefficient = 147
      • Interpreting the Pearson Correlation Coefficient = 152
      • The Correlation Scale Itself Is Ordinal = 153
      • Correlation Does Not Imply Causation = 153
      • The Effect of Linear Transformations = 154
      • Restriction of Range = 154
      • The Shape of the Underlying Distributions = 155
      • The Reliability of the Data = 155
      • When at Least One Variable Is Ordinal and the Other Is at Least Ordinal : The Spearman Rank Correlation Coefficient = 155
      • When at Least One Variable Is Dichotomous : Other Special Cases of the Pearson Correlation Coefficient = 157
      • The Point Biserial Correlation Coefficient : The Case of One at Least Interval and One Dichotomous Variable = 157
      • The Phi Coefficient : The Case of Two Dichotomous Variables = 162
      • Other Visual Displays of Bivariate Relationships = 167
      • Selection of Appropriate Statistic/Graph to Summarize a Relationship = 170
      • Exercises = 171
      • 6 Simple Linear Regression = 183
      • The "Best-Fitting" Linear Equation = 183
      • The Accuracy of Prediction Using the Linear Regression Model = 190
      • The Standardized Regression Equation = 191
      • R as a Measure of the Overall Fit of the Linear Regression Model = 191
      • Simple Linear Regression When the Independent Variable Is Dichotomous = 196
      • Using r and R as Measures of Effect Size = 199
      • Emphasizing the Importance of the Scatterplot = 199
      • Exercises = 201
      • 7 Probability Fundamentals = 210
      • The Discrete Case = 210
      • The Complement Rule of Probability = 212
      • The Additive Rules of Probability = 213
      • First Additive Rule of Probability = 213
      • Second Additive Rule of Probability = 214
      • The Multiplicative Rule of Probability = 215
      • The Relationship between Independence and Mutual Exclusivity = 218
      • Conditional Probability = 218
      • The Law of Large Numbers = 220
      • Exercises = 220
      • 8 Theoretical Probability Models = 223
      • The Binomial Probability Model and Distribution = 223
      • The Applicability of the Binomial Probability Model = 228
      • The Normal Probability Model and Distribution = 232
      • Using the Normal Distribution to Approximate the Binomial Distribution = 238
      • Exercises = 239
      • 9 The Role of Sampling in Inferential Statistics = 245
      • Samples and Populations = 245
      • Random Samples = 246
      • Obtaining a Simple Random Sample = 247
      • Sampling with and without Replacement = 249
      • Sampling Distributions = 250
      • Describing the Sampling Distribution of Means Empirically = 251
      • Describing the Sampling Distribution of Means Theoretically : The Central Limit Theorem = 255
      • Central Limit Theorem (CLT) = 255
      • Estimators and Bias = 259
      • Exercises = 260
      • 10 Inferences Involving the Mean of a Single Population When σ Is Known = 264
      • Estimating the Population Mean, μ, When the Population Standard Deviation,σ, Is Known = 264
      • Interval Estimation = 266
      • Relating the Length of a Confidence Interval, the Level of Confidence, and the Sample Size = 269
      • Hypothesis Testing = 270
      • The Relationship between Hypothesis Testing and Interval Estimation = 278
      • Effect Size = 279
      • Type II Error and the Concept of Power = 280
      • Increasing the Level of Significance, α = 284
      • Increasing the Effect Size, δ = 284
      • Decreasing the Standard Error of the Mean, σ_X̄ = 284
      • Closing Remarks = 285
      • Exercises = 286
      • 11 Inferences Involving the Mean When σ Is Not Known : One- and Two-Sample Designs = 290
      • Single Sample Designs When the Parameter of Interest Is the Mean and σ Is Not Known = 290
      • The t Distribution = 291
      • Degrees of Freedom for the One-Sample t-Test = 292
      • Violating the Assumption of a Normally Distributed Parent Population in the One-Sample t-Test = 293
      • Confidence Intervals for the One-Sample t-Test = 294
      • Hypothesis Tests : The One-Sample t-Test = 298
      • Effect Size for the One-Sample t-Test = 300
      • Two Sample Designs When the Parameter of Interest Is μ, and σ Is Not Known = 303
      • Independent (or Unrelated) and Dependent (or Related) Samples = 304
      • Independent Samples t-Test and Confidence Interval = 305
      • The Assumptions of the Independent Samples t-Test = 307
      • Effect Size for the Independent Samples t-Test = 315
      • Paired Samples t-Test and Confidence Interval = 317
      • The Assumptions of the Paired Samples t-Test = 318
      • Effect Size for the Paired Samples t-Test = 323
      • The Bootstrap = 323
      • Summary = 327
      • Exercises = 328
      • 12 Research Design : Introduction and Overview = 346
      • Questions and Their Link to Descriptive, Relational, and Causal Research Studies = 346
      • The Need for a Good Measure of Our Construct, Weight = 346
      • The Descriptive Study = 347
      • From Descriptive to Relational Studies = 348
      • From Relational to Causal Studies = 348
      • The Gold Standard of Causal Studies : The True Experiment and Random Assignment = 350
      • Comparing Two Kidney Stone Treatments Using a Non-randomized Controlled Study = 351
      • Including Blocking in a Research Design = 352
      • Underscoring the Importance of Having a True Control Group Using Randomization = 353
      • Analytic Methods for Bolstering Claims of Causality from Observational Data (Optional Reading) = 357
      • Quasi-Experimental Designs = 359
      • Threats to the Internal Validity of a Quasi-Experimental Design = 360
      • Threats to the External Validity of a Quasi-Experimental Design = 361
      • Threats to the Validity of a Study : Some Clarifications and Caveats = 361
      • Threats to the Validity of a Study : Some Examples = 362
      • Exercises = 363
      • 13 One-Way Analysis of Variance = 367
      • The Disadvantage of Multiple t-Tests = 367
      • The One-Way Analysis of Variance = 369
      • A Graphical Illustration of the Role of Variance in Tests on Means = 369
      • ANOVA as an Extension of the Independent Samples t-Test = 370
      • Developing an Index of Separation for the Analysis of Variance = 371
      • Carrying Out the ANOVA Computation = 372
      • The Between Group Variance (MSB) = 372
      • The Within Group Variance (MSW) = 373
      • The Assumptions of the One-Way ANOVA = 374
      • Testing the Equality of Population Means : The F-Ratio = 375
      • How to Read the Tables and to Use the SPSS Compute Statement for the F Distribution = 375
      • ANOVA Summary Table = 380
      • Measuring the Effect Size = 380
      • Post-hoc Multiple Comparison Tests = 384
      • The Bonferroni Adjustment : Testing Planned Comparisons = 394
      • The Bonferroni Tests on Multiple Measures = 396
      • Exercises = 399
      • 14 Two-Way Analysis of Variance = 404
      • The Two-Factor Design = 404
      • The Concept of Interaction = 407
      • The Hypotheses That Are Tested by a Two-Way Analysis of Variance = 412
      • Assumptions of the Two-Way Analysis of Variance = 413
      • Balanced versus Unbalanced Factorial Designs = 414
      • Partitioning the Total Sum of Squares = 414
      • Using the F-Ratio to Test the Effects in Two-Way ANOVA = 415
      • Carrying Out the Two-Way ANOVA Computation by Hand = 416
      • Decomposing Score Deviations about the Grand Mean = 420
      • Modeling Each Score as a Sum of Component Parts = 421
      • Explaining the Interaction as a Joint (or Multiplicative) Effect = 421
      • Measuring Effect Size = 422
      • Fixed versus Random Factors = 426
      • Post-hoc Multiple Comparison Tests = 426
      • Summary of Steps to Be Taken in a Two-Way ANOVA Procedure = 432
      • Exercises = 437
      • 15 Correlation and Simple Regression as Inferential Techniques = 445
      • The Bivariate Normal Distribution = 445
      • Testing Whether the Population Pearson Product Moment Correlation Equals Zero = 448
      • Using a Confidence Interval to Estimate the Size of the Population Correlation Coefficient, ρ = 451
      • Revisiting Simple Linear Regression for Prediction = 454
      • Estimating the Population Standard Error of Prediction, σ_Y|X = 455
      • Testing the b-Weight for Statistical Significance = 456
      • Explaining Simple Regression Using an Analysis of Variance Framework = 459
      • Measuring the Fit of the Overall Regression Equation : Using R and R² = 462
      • Relating R² to σ²_Y|X = 463
      • Testing R² for Statistical Significance = 463
      • Estimating the True Population R² : The Adjusted R² = 464
      • Exploring the Goodness of Fit of the Regression Equation : Using Regression Diagnostics = 465
      • Residual Plots : Evaluating the Assumptions Underlying Regression = 467
      • Detecting Influential Observations : Discrepancy and Leverage = 470
      • Using SPSS to Obtain Leverage = 472
      • Using SPSS to Obtain Discrepancy = 472
      • Using SPSS to Obtain Influence = 473
      • Using Diagnostics to Evaluate the Ice Cream Sales Example = 474
      • Using the Prediction Model to Predict Ice Cream Sales = 478
      • Simple Regression When the Predictor Is Dichotomous = 478
      • Exercises = 479
      • 16 An Introduction to Multiple Regression = 491
      • The Basic Equation with Two Predictors = 492
      • Equations for b, β, and R_Y·12 When the Predictors Are Not Correlated = 493
      • Equations for b, β, and R_Y·12 When the Predictors Are Correlated = 494
      • Summarizing and Expanding on Some Important Principles of Multiple Regression = 496
      • Testing the b-Weights for Statistical Significance = 501
      • Assessing the Relative Importance of the Independent Variables in the Equation = 503
      • Measuring the Drop in R² Directly : An Alternative to the Squared Part Correlation = 504
      • Evaluating the Statistical Significance of the Change in R² = 504
      • The b-Weight as a Partial Slope in Multiple Regression = 505
      • Multiple Regression When One of the Two Independent Variables Is Dichotomous = 508
      • The Concept of Interaction between Two Variables That Are at Least Interval-Leveled = 514
      • Testing the Statistical Significance of an Interaction Using SPSS = 516
      • Centering First-Order Effects to Achieve Meaningful Interpretations of b-Weights = 521
      • Understanding the Nature of a Statistically Significant Two-Way Interaction = 521
      • Interaction When One of the Independent Variables Is Dichotomous and the Other Is Continuous = 524
      • Exercises = 528
      • 17 Nonparametric Methods = 539
      • Parametric versus Nonparametric Methods = 539
      • Nonparametric Methods When the Dependent Variable Is at the Nominal Level = 540
      • The Chi-Square Distribution (χ²) = 540
      • The Chi-Square Goodness-of-Fit Test = 542
      • The Chi-Square Test of Independence = 547
      • Assumptions of the Chi-Square Test of Independence = 550
      • Fisher's Exact Test = 552
      • Calculating the Fisher Exact Test by Hand Using the Hypergeometric Distribution = 554
      • Nonparametric Methods When the Dependent Variable Is Ordinal-Leveled = 558
      • Wilcoxon Sign Test = 558
      • The Mann-Whitney U Test = 561
      • The Kruskal-Wallis Analysis of Variance = 565
      • Exercises = 567
      • Appendix A : Data Set Descriptions = 573
      • Appendix B : Generating Distributions for Chapters 8 and 9 Using SPSS Syntax = 586
      • Appendix C : Statistical Tables = 587
      • Appendix D : References = 588
      • Appendix E : Solutions to Exercises = 592
      • Appendix F : The Standard Error of the Mean Difference for Independent Samples : A More Complete Account (Optional) = 593
      • Index = 595

      Online Book Information

      Online Bookstore Purchase Information

      Aladin
      Statistics Using IBM SPSS : An Integrative Approach (Paperback, 3rd Revised edition)
      In stock. List price 163,310 KRW; sale price 133,910 KRW (18% off); print edition; 6,700 reward points

      Yes24
      Statistics Using IBM SPSS: An Integrative Approach
      In stock. List price 165,650 KRW; sale price 157,360 KRW (5% off); print edition; 4,730 reward points (3%)

      • Reward points accrue only for members of the respective online bookstore.
      • The discount rates and reward points above may not match the information provided by the online bookstores.
      • RISS does not guarantee, and assumes no responsibility for, products purchased from these online bookstores.

      Book Description

      Source: NAVER

      Statistics Using IBM SPSS: An Integrative Approach

      Written in a clear and lively tone, Statistics Using IBM SPSS provides a data-centric approach to statistics with integrated SPSS (version 22) commands, ensuring that students gain both a strong conceptual understanding of statistics and practical facility with statistical software. The third edition features a new chapter on research design.
