# Discovering Statistics Using IBM SPSS Statistics

- Andy Field - University of Sussex, UK

With an exciting new look, new characters to meet, and its unique combination of humour and step-by-step instruction, this award-winning book is the statistics lifesaver for everyone. From initial theory through to regression, factor analysis and multilevel modelling, Andy Field animates statistics and SPSS software with his famously bizarre examples and activities.

What’s brand new:

- A radical new design with original illustrations and even more colour
- A maths diagnostic tool to help students establish which areas they need to revise and improve
- A revamped online resource that uses video, case studies, datasets, testbanks and more to help students negotiate project work, master data management techniques, and apply key writing and employability skills
- New sections on replication, open science and Bayesian thinking
- Now fully up to date with the latest versions of IBM SPSS Statistics®

All the online resources above (video, case studies, datasets, testbanks) can be easily integrated into your institution's virtual learning environment or learning management system. This allows you to customize and curate content for use in module preparation, delivery and assessment. For instructions on how to upload the resources you want, please visit the Instructors' page or alternatively, contact your local SAGE sales representative.

Please note that ISBN 9781526445780 comprises the paperback edition of the Fifth Edition and the student version of IBM SPSS Statistics.

### Contents

- What the hell am I doing here? I don’t belong here
- The research process
- Initial observation: finding something that needs explaining
- Generating and testing theories and hypotheses
- Collecting data: measurement
- Collecting data: research design
- Reporting data
- What is the SPINE of statistics?
- Statistical models
- Populations and samples
- S is for standard error
- P is for parameters
- E is for estimating parameters
- I is for (confidence) interval
- N is for null hypothesis significance testing (NHST)
- Reporting significance tests
- Problems with NHST
- NHST as part of wider problems with science
- A phoenix from the EMBERS
- Sense, and how to use it
- Preregistering research and open science
- Effect sizes
- Bayesian approaches
- Reporting effect sizes and Bayes factors
- Versions of IBM SPSS Statistics
- Windows, MacOS and Linux
- Getting started
- The Data Editor
- Entering data into IBM SPSS Statistics
- Importing data
- The SPSS Viewer
- Exporting SPSS output
- The Syntax Editor
- Saving files
- Opening files
- Extending IBM SPSS Statistics
- The art of presenting data
- The SPSS Chart Builder
- Histograms
- Boxplots (box–whisker diagrams)
- Graphing means: bar charts and error bars
- Line charts
- Graphing relationships: the scatterplot
- Editing graphs
- What is bias?
- Outliers
- Overview of assumptions
- Additivity and linearity
- Normally distributed something or other
- Homoscedasticity/homogeneity of variance
- Independence
- Spotting outliers
- Spotting normality
- Spotting linearity and heteroscedasticity/heterogeneity of variance
- Reducing bias
- When to use non-parametric tests
- General procedure of non-parametric tests in SPSS
- Comparing two independent conditions: the Wilcoxon rank-sum test and Mann–Whitney test
- Comparing two related conditions: the Wilcoxon signed-rank test
- Differences between several independent groups: the Kruskal–Wallis test
- Differences between several related groups: Friedman’s ANOVA
- Modelling relationships
- Data entry for correlation analysis
- Bivariate correlation
- Partial and semi-partial correlation
- Comparing correlations
- Calculating the effect size
- How to report correlation coefficients
- An introduction to the linear model (regression)
- Bias in linear models?
- Generalizing the model
- Sample size in regression
- Fitting linear models: the general procedure
- Using SPSS Statistics to fit a linear model with one predictor
- Interpreting a linear model with one predictor
- The linear model with two or more predictors (multiple regression)
- Using SPSS Statistics to fit a linear model with several predictors
- Interpreting a linear model with several predictors
- Robust regression
- Bayesian regression
- Reporting linear models
- Looking at differences
- An example: are invisible people mischievous?
- Categorical predictors in the linear model
- The t-test
- Assumptions of the t-test
- Comparing two means: general procedure
- Comparing two independent means using SPSS Statistics
- Comparing two related means using SPSS Statistics
- Reporting comparisons between two means
- Between groups or repeated measures?
- The PROCESS tool
- Moderation: interactions in the linear model
- Mediation
- Categorical predictors in regression
- Using a linear model to compare several means
- Assumptions when comparing means
- Planned contrasts (contrast coding)
- Post hoc procedures
- Comparing several means using SPSS Statistics
- Output from one-way independent ANOVA
- Robust comparisons of several means
- Bayesian comparison of several means
- Calculating the effect size
- Reporting results from one-way independent ANOVA
- What is ANCOVA?
- ANCOVA and the general linear model
- Assumptions and issues in ANCOVA
- Conducting ANCOVA using SPSS Statistics
- Interpreting ANCOVA
- Testing the assumption of homogeneity of regression slopes
- Robust ANCOVA
- Bayesian analysis with covariates
- Calculating the effect size
- Reporting results
- Factorial designs
- Independent factorial designs and the linear model
- Model assumptions in factorial designs
- Factorial designs using SPSS Statistics
- Output from factorial designs
- Interpreting interaction graphs
- Robust models of factorial designs
- Bayesian models of factorial designs
- Calculating effect sizes
- Reporting the results of two-way ANOVA
- Introduction to repeated-measures designs
- A grubby example
- Repeated measures and the linear model
- The ANOVA approach to repeated-measures designs
- The F-statistic for repeated-measures designs
- Assumptions in repeated-measures designs
- One-way repeated-measures designs using SPSS
- Output for one-way repeated-measures designs
- Robust tests of one-way repeated-measures designs
- Effect sizes for one-way repeated-measures designs
- Reporting one-way repeated-measures designs
- A boozy example: a factorial repeated-measures design
- Factorial repeated-measures designs using SPSS Statistics
- Interpreting factorial repeated-measures designs
- Effect sizes for factorial repeated-measures designs
- Reporting the results from factorial repeated-measures designs
- Mixed designs
- Assumptions in mixed designs
- A speed-dating example
- Mixed designs using SPSS Statistics
- Output for mixed factorial designs
- Calculating effect sizes
- Reporting the results of mixed designs
- Introducing MANOVA
- Introducing matrices
- The theory behind MANOVA
- MANOVA using SPSS Statistics
- Interpreting MANOVA
- Reporting results from MANOVA
- Following up MANOVA with discriminant analysis
- Interpreting discriminant analysis
- Reporting results from discriminant analysis
- The final interpretation
- When to use factor analysis
- Factors and components
- Discovering factors
- An anxious example
- Factor analysis using SPSS Statistics
- Interpreting factor analysis
- How to report factor analysis
- Reliability analysis
- Reliability analysis using SPSS Statistics
- Interpreting reliability analysis
- How to report reliability analysis
- Analysing categorical data
- Associations between two categorical variables
- Associations between several categorical variables: loglinear analysis
- Assumptions when analysing categorical data
- General procedure for analysing categorical outcomes
- Doing chi-square using SPSS Statistics
- Interpreting the chi-square test
- Loglinear analysis using SPSS Statistics
- Interpreting loglinear analysis
- Reporting the results of loglinear analysis
- What is logistic regression?
- Theory of logistic regression
- Sources of bias and common problems
- Binary logistic regression
- Interpreting logistic regression
- Reporting logistic regression
- Testing assumptions: another example
- Predicting several categories: multinomial logistic regression
- Hierarchical data
- Theory of multilevel linear models
- The multilevel model
- Some practical issues
- Multilevel modelling using SPSS Statistics
- Growth models
- How to report a multilevel model
- A message from the octopus of inescapable despair

### Supplements

This book turned my hatred of stats and SPSS into love.

**MSc in Applied Quantitative Methods**

Brilliant book; well written, with easy-to-remember examples. Stats with a sense of humour.

**Psychology, Fachhochschule des Mittelstands (FHM)**

This is a great book for learners with minimal previous experience in statistics. The examples and case studies provided throughout the book are very effective and learners will find them memorable. Easy to use, practical and a must have for anyone serious about learning to analyse their data using SPSS.

**Centre for International Development, Wolverhampton University**

Highly informative and accessible style of writing. It will be a delight for students to discover.

**School of Modern Languages & Applied Linguistics, University of Limerick**

Excellent new edition.

**Department of Psychology and Counselling, Chichester University**