Captain Psychology

April 1, 2023 by ktangen

Practice ANOVA

Analysis of Variance

Item 1

Year in school and car accidents.

10th     11th    12th
2        13        4
9        17        8
3        14        2
1          9        1
7         1         4

 

SS     df    ms

Between     ____  ___   ____

Within        ____  ___   ____

Total          ____  ___   ____

F =

Which grade has the most car accidents:

 

Item 2

House color and people’s stay (in years).

Blue    Green   Peach
8        11         4
7         9          8
3         7          9
1       18          2
9       12          4

 

SS     df    ms

Between     ____  ___   ____

Within        ____  ___   ____

Total          ____  ___   ____

F =

Which color of house is lived in the longest (in years)?

 

Item 3

Which is the best coffee (most cups ordered):
Blue-Label   Green-Label     Red-Label
13              1              5
4                1              2
10              2              2
13              2              2
11               2              6
3                4              4
SS df ms
Between
Within
Total
F =
Critical value of F =

Item 4

Which candle lasts the most days?
Non-Scented     Low-Scented   High-Scented
3                         5                    8
9                         1                    2
5                         2                    6
11                         5                    4
                                 SS                 df               ms
Between
Within
Total
F =
Critical value of F =
Which candle lasts the most days?

ANSWERS

Item 1

Year in school and car accidents.

10th     11th    12th
2        13        4
9        17        8
3        14        2
1          9        1
7         1         4

 

SS      df    ms

Between     150.53    2   75.27

Within        228.80  12   19.07

Total          379.33  14   27.10

F =   3.95

Which grade has the most car accidents? The critical value of F(2, 12) at the .05 level is 3.89, so F = 3.95 is significant. You can now do multiple t-tests to discover which means are significantly different from each other; the 11th grade has the highest mean.
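Hand calculations like these can be checked with a short script. Below is a minimal sketch in plain Python (no stats libraries); the function name and layout are my own, but the data are Item 1's scores and the formulas are the raw-score ones used in these answers:

```python
# One-way ANOVA using raw-score formulas:
#   SS total   = sum(X^2) - (sum X)^2 / N
#   SS between = sum over groups of (group sum)^2 / n  -  (sum X)^2 / N
#   SS within  = SS total - SS between

def one_way_anova(groups):
    """Return (ss_between, ss_within, ss_total, f) for a list of groups."""
    all_scores = [x for g in groups for x in g]
    n_total = len(all_scores)
    correction = sum(all_scores) ** 2 / n_total       # (sum X)^2 / N

    ss_total = sum(x ** 2 for x in all_scores) - correction
    ss_between = sum(sum(g) ** 2 / len(g) for g in groups) - correction
    ss_within = ss_total - ss_between

    df_between = len(groups) - 1                      # k - 1
    df_within = n_total - len(groups)                 # N - k
    f = (ss_between / df_between) / (ss_within / df_within)
    return ss_between, ss_within, ss_total, f

grades = [[2, 9, 3, 1, 7],      # 10th
          [13, 17, 14, 9, 1],   # 11th
          [4, 8, 2, 1, 4]]      # 12th
ssb, ssw, sst, f = one_way_anova(grades)
print(round(ssb, 2), round(ssw, 2), round(sst, 2), round(f, 2))
# → 150.53 228.8 379.33 3.95
```

The printed values match the table above, which is a handy way to verify your arithmetic before looking up the critical value of F.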

 

Item 2

House color and people’s stay (in years).

Blue    Green   Peach
8        11         4
7         9          8
3         7          9
1       18          2
9       12          4

SS     df    ms

Between     116.13   2   58.07

Within        151.60  12  12.63

Total          267.73   14   19.12

F =   4.60

Which color of house is lived in the longest (in years)? The critical value of F(2, 12) at the .05 level is 3.89, so F = 4.60 is significant. You can now do multiple t-tests to discover which means are significantly different from each other; green houses have the highest mean stay.

Item 3

Which is the best coffee (most cups ordered):

Blue-Label   Green-Label     Red-Label
13              1              5
4              1              2
10              2              2
13              2              2
11              2              6
3              4              4

Sum        54         12          21                87
Sum X2 584         30          89              703
N              6           6            6                18
SS           98          6          15.50          119.50

                       SS          df         ms
Between       163.00       2       81.50
Within          119.50     15         7.97
Total             282.50     17       16.62

 F = 10.23

Critical value of F = 3.68. F is significant.

Which is the best coffee? You’ll have to do t-tests to find out. But looking at the numbers above, you can guess that Blue Label is most likely to be the big winner. You’ll know more after you do the post tests.

 

Item 4

Which candle lasts the most days?

Non-Scented     Low-Scented   High-Scented
3                 5                 8
9                 1                 2
5                 2                 6
11                 5                 4

Sum         28              13                20             61
Sum X2  236              55              120           411
N               4                 4                  4             12
SS           40              12.75            20         72.75

SS        df        ms

Between         28.17      2        14.08

Within            72.75      9         8.08

Total               100.92  11        9.17

F = 1.74

 Critical value of F = 4.26. F is not significant.

Which candle lasts the most days? There is no significant difference between them; choose any.

 

Filed Under: Statistics


Stat Notes

Simulation

  1. Describe the clinic’s clients
  2. Does Daniel qualify for anger management training? (checklist)
  3. Thomas is acting up in school (base rate; ABA reversal)
  4. Intervention team (case study)
  5. Customer satisfaction (survey)
  6. Knowledge (test construction)
  7. (speeded)
  8. (mastery; criterion referenced)
  9. Carlos wants to take advanced physics
    Rosa wants in a gifted program (z)
  10. Transform a distribution (normalize)
  11. Fire Dept wants to hire 3 people (reliability)
  12. School wants to use personality test to hire teachers (validity)
  13. Wants to know how well a child is doing in school (ethics)
  14. Predict school performance from IQ (regress)
  15. Predict clinic income for next year (time series)
  16. Predict summer employment (curvilinear)
  17. Does giving clients homework have a significant impact on therapy? (ANOR)
  18. Does magnetic therapy work? (t-test)
  19. (1-way)
  20. Gender & counselor style (factorial)
  21. (multiple regression)
  22. (MANOVA)

 

 

1 Theory. Square 1 is thinking up a theory.

2 Lit Review. Square 2 is seeing what others have to say on the matter


3 Select variables. Theories are not directly testable, so using your theory and literature review, you select variables that can be tested.

4 Operational definitions. After you build your theory, conduct a lit review, and select your variables, it is time to generate operational definitions.

5 Pick a design. Which design is best for you? Here are some of your options.

6 Who to study. You have determined the general parameters of the study; now it is time to figure out who you are going to study.

7 Random selection. Now that you know who to study, you must decide how to choose them.

8 Prepare your materials. After choosing your subjects, it’s time to get everything ready for them. You have to write your tests, create your slides, build your maze and setup your equipment.

9 Write a proposal. When everything is set, you need to get permission from your Institutional Review Board (IRB) to conduct your study. To do so, you write a proposal.

10 Conduct study. The first 9 squares were getting you ready for this. In Square 10, you get informed consent from your participants and collect the data.

11 Data table, matrix. What do you do when faced with a pile of numbers? Here’s where you begin: Square 11.


Square 12: Levels of Measurement

July 11, 2009 by kltangen
Filed under Square One, Videos


After you have organized your data, you need to consider what the numbers mean. Are they being used as names, rankings, quasi-numbers or ratios? Which level of measurement is involved?

In other words, you’re at Square 12.


Square 13: Graph It



Before you do any major number-crunching, it’s a good idea to get an overview of the data. It’s easy to see things by using histograms, pie charts and frequency distributions.

When you’re ready to make pretty pictures, you’re in Square 13.

14 Central Tendency. The 14th episode is about how to find the center of a distribution and why you’d want to. If you’re doing descriptive statistics, you’re at Square 14.

 


 

Summary of Measurement

November 5, 2008 by kltangen
Filed under Summaries

NOMINAL

Used as a name
Makes no mathematical assumptions.
0, 12 and 1 have no inherent order or value.
Examples:
The # on a race car
Bank ID number
Airplane model #
Part numbers
The # on the side of your horse

ORDINAL

Used to report rank or order.
Assumes the numbers can be arranged in order. Allows descriptions of 1st, 2nd and 3rd place but steps need not be the same size. Winning a close race receives the same score as an easy win.
Examples:
Finish order in contest
College sports ranking
Rating scales
The finish order of your horse

INTERVAL

Used to count conceptual characteristics (IQ, aggression, etc.)
Assumes numbers indicate equal units. Allows distinctions to be made between difficult and easy races but does not allow “twice as much” comparisons. 0 does not mean lack of intelligence, etc.
Examples:
The # of test items passed.
Temperature in Fahrenheit
Temperature in Celsius
The # of hurdles your horse jumps

RATIO

Used to measure physical characteristics.
Assumes 0 is absolute (indicates lack of entity being measured). Allows 2:1, 3:2, “twice as much” and “half as much” comparisons. 0 means no time has elapsed or no distance has been traveled, etc.
Examples:
Distance, time and weight
Temperature in Kelvin
Miles per gallon
How fast your horse runs

 

 

Summary of Central Tendency


ITEM A

11
3
12
1
3
6
4
3

Mean = 5.38
Median = 3.5
Mode = 3

Positively-skewed distribution. Mean will be higher than median and mode. Median is better representative of where most scores are located.

 

ITEM B

9
8
8
7
6
5
8
1
6

Mean = 6.44
Median = 7
Mode = 8

Negatively-skewed distribution. Mean will be lower than median and mode. Median is better representative of where most scores are located.

 

ITEM C

5
5
5
5
5
5
5

Mean = 5
Median = 5
Mode = 5

This is a constant. Everyone has the same score.

 

 

Summary of Dispersion


If everyone has the same score, there is no dispersion from the mean. If everyone has a different score, dispersion is at its maximum but there is no commonality in the scores. In a normal distribution, there are both repeated scores (height) and dispersion (width).

Percentiles, quartiles and stanines imply that distributions look like plateaus. Scores are assumed to be spread out evenly, like lines on a ruler. People are nicely organized in equal-sized containers.

SS, variance and standard deviation imply that distributions look like a mountain. Scores are assumed to be clustered in the middle; people are more alike than different. People are mostly together at the bottom of the bowl with a few sticking to the sides.

You can describe an entire distribution as 3 steps (standard deviations) to the left and 3 steps to the right of the mean. The percentages go 2, 14, 34, 34, 14, and 2. This is believed to be true of all normally distributed variables, regardless of what they measure.

 

 

Summary of z-Score


A z-score indicates how many steps a person is from the mean. A raw score below the mean corresponds to a negative z-score; a score above the mean has a positive z. The standard deviation indicates how big each step is. Approximately 68% of the scores lie within one standard deviation of the mean. That is, the majority of the distribution is from z = -1 to z = +1.

 There are 5 primary applications of z-scores:

a. locating an individual score

b. using z as a standard. Individual raw scores are converted to z-scores and compared to a set standard. Two common standards are z = 1.65, which represents a 1-tailed area of 95%, and z = ±1.96 (between which is a 2-tailed area of 95%).

c. standardizing a distribution and smoothing its data.

d. making linear transformations of variables; converting the mean and standard deviation to numbers that are easier to remember or handle.

e. comparing 2 raw score distributions with different means and standard deviations.

Summary of Correlation


  • To measure the strength of relationship between two variables, it would be best to use a correlation
  • A correlation can only be between -1 and +1.
  • The closer the correlation coefficient is to 1 (either + or -), the stronger the relationship.
  • The sign indicates the direction of relationship.
  • The coefficient of determination is calculated by squaring r. The coefficient of determination shows how much area the two variables share; the percentage of variance explained (accounted for).
  • The coefficient of nondetermination is calculated by subtracting the coefficient of determination from 1. The coefficient of nondetermination shows how much the two variables don’t share; the percentage of unexplained variance.
  • To calculate the correlation between two continuous variables, the Pearson product-moment coefficient is used. To calculate the correlation between two discrete variables, the phi coefficient is used. To calculate the correlation between one discrete and one continuous variable, the point biserial coefficient is used.
  • Correlations are primarily a measure of consistency, reliability, and repeatability.
  • Correlations are based on two paired-observations of the same subjects.
  • A cause-effect relationship has a strong correlation, but a strong correlation doesn’t guarantee a cause-effect relationship. In a correlation, A can cause B, B can cause A, or both A and B can be caused by another variable. Inferences of cause-effect based on correlations are dangerous. A correlation shows that a relationship is not likely to be due to chance, but it cannot indicate which variable was cause and which was effect.
  • Test-retest coefficients are correlations.
  • In order to make good predictions between two variables, a strong correlation is necessary.
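These ideas can be sketched in a few lines of plain Python using the raw-score Pearson formula; the paired scores below are made-up example numbers:

```python
import math

def pearson_r(x, y):
    """Raw-score Pearson product-moment correlation."""
    n = len(x)
    sxy = sum(a * b for a, b in zip(x, y)) - sum(x) * sum(y) / n
    ssx = sum(a ** 2 for a in x) - sum(x) ** 2 / n
    ssy = sum(b ** 2 for b in y) - sum(y) ** 2 / n
    return sxy / math.sqrt(ssx * ssy)

x = [1, 2, 3, 4, 5]            # two paired observations of the same subjects
y = [2, 4, 5, 4, 5]
r = pearson_r(x, y)
determination = r ** 2         # percentage of variance explained
nondetermination = 1 - r ** 2  # percentage of variance unexplained
print(round(r, 2), round(determination, 2))
# → 0.77 0.6
```

Note that determination and nondetermination always sum to 1, matching the bullet points above.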

Summary of Regression


The variable with the smallest standard deviation is the easiest to predict. The less dispersion, the easier to predict.

Without knowing anything else about a variable, the best predictor of it is its mean.

The angle of a regression line is called the slope. Slope is calculated by dividing the Sxy by the SSx.

The point where the regression line crosses the criterion axis is called the intercept.

Predicting the future based on past experience is best done with a regression.

Predicting scores between known values is called interpolation.

Predicting scores beyond known values is called extrapolation.

Regression works best when a relationship is strong and linear.

Regression works best when the correlation is strong.

The error around a line of prediction is consistent along the whole line. It doesn’t vary or waver along the line, so there is only 1 standard error of estimate for the entire line.

The error around a line of prediction can be estimated with the standard error of estimate.

Plus or minus one SEE accounts for 68% of the prediction errors.

A regression is based on paired-observations on the same subjects.

Pre- and Post-test performance is best analyzed by using a regression.
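The slope and intercept definitions above translate directly into code. This is a minimal sketch (the example scores are invented), using slope = Sxy / SSx and the fact that the regression line passes through the means:

```python
def regression_line(x, y):
    """Return (slope, intercept) for predicting y (criterion) from x (predictor)."""
    n = len(x)
    sxy = sum(a * b for a, b in zip(x, y)) - sum(x) * sum(y) / n   # Sxy
    ssx = sum(a ** 2 for a in x) - sum(x) ** 2 / n                 # SSx
    slope = sxy / ssx                                              # angle of the line
    intercept = sum(y) / n - slope * sum(x) / n   # where it crosses the criterion axis
    return slope, intercept

x = [1, 2, 3, 4, 5]   # predictor (made-up data)
y = [2, 4, 5, 4, 5]   # criterion
slope, intercept = regression_line(x, y)
print(round(slope, 2), round(intercept, 2))
# → 0.6 2.2
```

Predicting within the range of x here would be interpolation; predicting for x = 10 would be extrapolation.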

 

Summary of Advanced Procedures


The General Linear Model is “general” because it includes a broad group of procedures, including correlation, regression and the more complex linear models. And it includes both continuous and discrete variables. It’s linear because it assumes that the relationship between model components is consistent. When one variable goes up (has larger numbers), the other variable consistently reacts. The reaction can go the same way (positive) or the opposite way (negative). But the assumption is that changes in one variable will be accompanied by changes in the other variable.

Another assumption is that causation may not be proved but it can be inferred. Although random assignment might increase one’s confidence in cause-effect conclusions, causation can be inferred based simply on consistency. Such an assumption can be risky but we do it all the time. We assume that the earth gets warm because the sun rises. We’ve never randomly assigned the sun to rising and not-rising conditions. But we feel quite confident in our conclusion that the sun causes the heat, and not the other way around.

Here are nine applications of the General Linear Model

Continuous Models compare:

  a. frequency distribution: one variable (predictor or criterion)
  b. correlation: two regressions
  c. regression: single predictor, single criterion (same as F test or t-squared)
  d. multiple regression: multiple predictors, single criterion (same as ANOVA)
  e. multivariate analysis: multiple predictors, multiple criteria
  f. causal modeling: multiple measures of a factor

Discrete Models compare:

  a. t-test: 2 means; 1 independent variable
  b. one-way ANOVA: 3 or more means; 1 independent variable
  c. factorial ANOVA: multiple means on 2+ independent variables

 

 

stem-leaf graph

histogram

box plot

scattergram

mean

median

mode

range

variance

standard deviation

quartiles

interquartile range

correlation coefficient

population

sample

probability theory

binomial distribution

normal distribution

t-test and t distribution

F test and F distributions

chi-squared probability distributions

estimation procedures

confidence interval

hypothesis testing

t-tests

analysis of variance

goodness of fit

contingency tables

 

 

 


March 30, 2023 by ktangen

ANOR

Day 10

It is helpful to have an overview of designs more advanced than those covered in a typical statistics course. Complex models build on the principles we already discussed. Although their calculation is beyond the scope of this discussion (that’s what computers are for), here is an introduction to procedures that use multiple predictors, multiple criteria and multivariate techniques to test interactions between model components.

Until now, our models have been quite simple. One individual, one group, or one variable predicting another. We have explored the levels of measurement, the importance of theories and how to convert theoretical constructs into model variables. We have taken a single variable, plotted its frequency distribution and described its central tendency and dispersion. We have used percentiles and z-scores to describe the location of an individual score in relation to the group.

In addition to single variable models, we studied two variable models, such as correlations, regressions, t-tests and one-way ANOVAs. We have laid a thorough foundation of research methods, experimental design, and descriptive and inferential statistics.

Despite their simplicity, these procedures are very useful. You can use a correlation to measure the reliability and validity of a test, machine or system of management, training or production. You can use a linear regression to date a rare archaeological find, predict the winner of a race or analyze a trend in the stock market. You can use the t-test to test a new drug against a placebo or compare 2 training conditions. You can use the 1-way ANOVA to test several psychotherapies, compare levels of a drug or brands of computers.

Also, the procedures you’ve studied so far can be combined into more complex models. The most complex models have more variables, but they are variations of the themes you’ve already encountered.

ANOR

Analysis of regression (ANOR) tests a regression to see how straight a line the data form. It is a goodness-of-fit test: it tests how well the data fit our straight line.

Starting off, we assume our data looks like chance: not an organized pattern, but a circle with no linearity. Our null hypothesis is that our data has no significant resemblance to a straight line. We assume our data will not match (fit) our model (a straight line). We will keep that assumption until it is clear that the data fits the model. But the fit has to be good; it has to be significant.

We are using X to predict Y. We are hoping the variations in Y can be explained by the variations in X. Prediction is based on commonality. When X and Y are highly correlated, it is easy to make predictions from one variable to another. When there is little or no correlation, X is not a good predictor of Y; they are operating independently.

In statistics talk, an ANOR partitions the variance into mean squares Regression (what we understand) and mean squares Error (what we can’t explain). Mean squares is another name for variance. We are going to make a ratio of understood variance to not-understood variance. We will compare this ratio with the values in an F table.
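Here is a minimal sketch of that partition in plain Python, using the raw-score quantities (Sxy, SSx, SSy) from the regression notes; the x and y scores are made-up numbers:

```python
# ANOR: partition SSy (total variation in the criterion) into
# SS regression (explained by the line) and SS error (unexplained),
# then form F = MS regression / MS error.

def anor_f(x, y):
    n = len(x)
    sxy = sum(a * b for a, b in zip(x, y)) - sum(x) * sum(y) / n
    ssx = sum(a ** 2 for a in x) - sum(x) ** 2 / n
    ssy = sum(b ** 2 for b in y) - sum(y) ** 2 / n   # total SS in the criterion
    ss_regression = sxy ** 2 / ssx                   # what the line explains
    ss_error = ssy - ss_regression                   # what is left over
    ms_regression = ss_regression / 1                # df = 1 (one predictor)
    ms_error = ss_error / (n - 2)                    # df = n - 2
    return ms_regression / ms_error

x = [1, 2, 3, 4, 5]   # predictor (made-up data)
y = [2, 4, 5, 4, 5]   # criterion
print(round(anor_f(x, y), 2))
# → 4.5
```

The ratio is then compared against the critical values of F, just as the text describes.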

Interactions

Interactions can be good or bad. Some heart medications work better when given together. For example, Digoxin and calcium channel blockers go together because they work on different channels. Together they are better than each would be separately. But other heart medications (phenylpropanolamine with MAO inhibitors) can result in fast pulse, increased blood pressure, and even death. This is why we’re often warned not to mix drugs without checking with our doctor.

The ability to check how variables interact is the primary advantage of complex research designs and advanced statistical techniques. Although a 1-way ANOVA can test whether different levels of aspirin help relieve headaches, a factorial ANOVA can test both aspirin and gender as predictors of headaches. Or aspirin, gender, time of day, caffeine, and chicken soup. Any number of possible explanations and combinations of explanations can be tested with the techniques of multiple regression, MANOVA, factorial ANOVA and causal modeling.

 

Factorial ANOVA

A factorial ANOVA tests the impact of 2 or more independent variables on one dependent variable. It tests the influence of many discrete variables on one continuous variable. It has multiple independent variables and one dependent variable. It can test interactions between variables.

ANOVA

A factorial ANOVA is like combining 1-way ANOVAs together. The purpose of combining the designs is to test for interactions. A 1-way ANOVA can test to see if different levels of salt will influence compliments, but what happens if the soft drink is both salty and sweet?

Factorial designs

A 1-way ANOVA model tests multiple levels of 1 independent variable. Let’s assume the question is whether stress affects how well people work multiplication problems. Subjects are randomly assigned to a treatment level (high, medium and low, for example) of one independent variable (stress, for example). And their performance on one dependent variable (number of errors) is measured.

If stress impacts performance, you would expect errors to increase with the level of stress. The variation between the cells is due to the treatment given. Variation within each cell is thought to be due to random chance.

A 2-way ANOVA has 2 independent variables. Here is a design which could look at gender (male; female) and stress (low, medium and high):

It is called a 2×3 (“two by three”) factorial design. If each cell contained 10 subjects, there would be 60 subjects in the design. A design for amount of student debt (low, medium and high) and year in college (frosh, soph, junior and senior) would have 1 independent variable (debt) with 3 levels and 1 independent variable (year in school) with 4 levels.

This is a 3×4 factorial design. Notice that each number (3, 4, etc.) tells you how many levels are in an independent variable. The number of numbers tells you how many independent variables there are. A 2×4 has 2 independent variables. A 3×7 has 2 independent variables (one with 3 levels and one with 7 levels). A 2x3x4 factorial design has 3 independent variables.

Factorial designs can do something 1-way ANOVAs can’t. Factorial designs can test the interaction between independent variables. Taking pills can be dangerous and driving can be dangerous; but it often is the interaction between variables that interests us the most.

Analyzing a 3×4 factorial design involves 3 steps: columns, rows and cells. The factorial ANOVA tests the columns of the design as if each column was a different group. Like a 1-way ANOVA, this main effect tests the columns as if the rows didn’t exist.

The second main effect (rows) is tested as if each row was a different group. It tests the rows as if the columns didn’t exist. Notice that each main effect is like doing a separate 1-way ANOVA on that variable.

The cells also are tested to see if one cell is significantly larger (or smaller) than the others. This is a test of the interaction and checks to see if a single cell is significantly different from the rest. If one cell is significantly higher or lower than the others, it is the result of a combination of the independent variables.
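The three looks (columns, rows, cells) can be sketched in plain Python. The factor names, cell scores, and group sizes below are all hypothetical, and this only computes the means each test compares, not the full significance test:

```python
# A 2x3 factorial layout: gender (2 levels) by stress (3 levels).
# Each cell holds the scores of the subjects in that condition (made-up data).
cells = {
    ("male", "low"): [3, 4], ("male", "medium"): [5, 6], ("male", "high"): [9, 8],
    ("female", "low"): [2, 3], ("female", "medium"): [6, 5], ("female", "high"): [4, 5],
}

def mean(xs):
    return sum(xs) / len(xs)

# Main effect of stress: collapse over gender (test columns as if rows didn't exist)
stress_levels = ["low", "medium", "high"]
col_means = {s: mean([x for (g, lvl), xs in cells.items() if lvl == s for x in xs])
             for s in stress_levels}

# Main effect of gender: collapse over stress (test rows as if columns didn't exist)
row_means = {g: mean([x for (gen, lvl), xs in cells.items() if gen == g for x in xs])
             for g in ("male", "female")}

# Interaction: compare individual cell means against what the two
# main effects alone would predict.
cell_means = {k: mean(v) for k, v in cells.items()}
print(col_means, row_means)
```

Each main effect is the factorial analogue of running a separate 1-way ANOVA on that variable, just as the text says.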

Multiple Regression

An extension of simple linear regression, multiple regression is based on observed data. In the case of multiple regression, two or more predictors are used; there are multiple predictors and a single criterion.

Let’s assume that you have selected 3 continuous variables as predictors and 1 continuous variable as criterion. You might want to know if gender, stress and time of day impact typing performance.

Each predictor is tested against the criterion separately. If a single predictor appears to be primarily responsible for changes in the criterion, its influence is measured. Every combination of predictors is also tested. So both main effects and interactions can be tested. If this sounds like a factorial ANOVA, you’re absolutely correct.

You could think of Multiple Regression and ANOVA as siblings. Factorial ANOVAs use discrete variables; Multiple Regression uses continuous variables. If you were interested in using income as one of your predictors (independent variables), you could use discrete categories of income (high, medium and low) and test for significance with an ANOVA. If you wanted to measure income as a continuous variable (actual income earned), the procedure would be a Multiple Regression.

You also could think of Multiple Regression as the parent of ANOVA. Analysis of Variance is actually a specific example of Multiple Regression; it is the discrete variable version. Analysis of Variance uses categorical predictors. Multiple Regression can use continuous or discrete predictors (in any combination); it is not restricted to discrete predictors.

Both factorial ANOVA and Multiple Regression produce an F statistic. Both produce an F score that is compared to the critical values in an F table. Significance is ascribed if the calculated value is larger than the standard given in the table.

Both procedures have only one outcome measure. There may be many predictors in a study but there is only one criterion. You may select horse weight, jockey height, track condition, past winnings and phase of the moon as predictors of a horse race, but only one outcome measure is used. Factorial ANOVA and Multiple Regression are multiple-predictor, single-criterion procedures.

Multivariate Analysis

Sometimes called MANOVA (pronounced man-o-va), multivariate analysis is actually an extension of multiple regression. Like multiple regression, multivariate analysis has multiple predictors. In addition to multiple predictors, multivariate analysis allows multiple outcome measures.

Now it is possible to use gender, income and education as predictors of happiness AND health. You are no longer restricted to only a single criterion. With multivariate analysis, the effects and interactions of multiple predictors can be examined. And their impact on multiple outcomes can be assessed.

The analysis of a complex multiple-predictor, multiple-criteria model is best left to a computer, but the underlying process is the calculation of correlations and linear regressions. As variables are selected for the model, a decision is made whether each is a predictor or a criterion. Obviously, aside from the experimenter’s theory, the choice of predictor or criterion is arbitrary. In multivariate analysis, a variable such as annual income could be either a predictor or a criterion.

Complex Modeling

There are a number of statistical procedures at the high end of modeling. Relax! You don’t have to calculate them. I just want you to know about them.

In particular, I want to make the point that there is nothing scary about the complex models. They are involved and require lots of tedious calculations, but that’s why God gave us computers. Since we are blessed to have stupid but remarkably fast mechanical slaves, we should let them do the number crunching.

It is enough for us to know that a complex model—at its heart—is a big bundle of correlations and regressions. Complex models hypothesize directional and nondirectional relationships between variables. Each factor may be measured by multiple measures. Intelligence might be defined as the combination of 3 different intelligence tests, for example.

And income might be a combination of both salary plus benefits minus vacation. And education might be years in school, number of books read and number of library books checked out. The model, then, becomes the interaction of factors that are more abstract than single variable measures.

Underlying the process, however, are principles and procedures you already know. Complex models might try to determine if one more predictor helps or hurts but the model is evaluated just like a correlation: percentage of variance accounted for by the relationships.



Calc SS

Sum of Squares

You need three things to calculate SS (sum of squares): the sum of the scores (the total of the X column), the sum of the squared scores (the total of the X² column), and the number of scores, N (the columns have to agree; no fair having columns of different length). Sum of Squares is easy to calculate. Plus it’s one of the most useful measures of dispersion.

Like range, variance and standard deviation, Sum of Squares (SS for short) is a measure of dispersion. The more inconsistent the scores are (less homogeneous) the larger the dispersion. The more homogenous the scores (alike), the smaller the dispersion.

Using the raw-score formula SS = ΣX² - (ΣX)²/N, let’s go through it step by step. Assume this is the distribution at issue:

X
12
6
5
4
5
10
3

First, each number is squared, and put into another column:

X       X2
12     144
6       36
5       25
4       16
5       25
10    100
3         9

Second, we sum each column. The sum of the first column is 45. This is called the sum of X. The sum of the second column is the sum of X-squared. Remember, we squared the scores and then added them up. The sum of the squared-X’s is 355.

Third, we square the sum of X (45 times itself = 2025) and divide it by N (number of scores). Since N = 7, we divide 2025 by 7 (which equals 289.29).

Fourth, we recall the sum of the X² (355) and subtract 289.29 from it. So 355 minus 289.29 = 65.71. The Sum of Squares is 65.71.
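The four steps above, as a short function (same distribution, same raw-score formula):

```python
# SS = sum(X^2) - (sum X)^2 / N

def sum_of_squares(scores):
    n = len(scores)
    sum_x = sum(scores)                       # step 2: sum of X = 45
    sum_x2 = sum(x ** 2 for x in scores)      # steps 1-2: square, then sum = 355
    correction = sum_x ** 2 / n               # step 3: (sum X)^2 / N = 289.29
    return sum_x2 - correction                # step 4: subtract

scores = [12, 6, 5, 4, 5, 10, 3]
print(round(sum_of_squares(scores), 2))
# → 65.71
```

The same function works for any column of scores, which is handy when computing SS for each group in an ANOVA.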

 

 

 


March 30, 2023 by ktangen

Calc z Scores

You need three things to calculate a z-score: your score (X), the mean (bar-X) and the standard deviation (s).
A z-score tells you how many standard deviations your score is from the mean. If z is positive, your score is above the mean; if z is negative, your score is below the mean; if your score is at the mean, z = 0. If you are ½ standard deviation above the mean, your z-score is +.5. If you are one standard deviation below the mean, your z-score is -1.
Take your score (X) and subtract the mean from it. X with a bar over it is the symbol for the mean, and it’s pronounced bar-x (like the ranch in an old cowboy movie).
Divide the result of your subtraction by the standard deviation.
That’s it. It’s easier than mixing muffins, and there’s no preheating the oven.
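As code, the recipe is a one-liner; the score, mean, and standard deviation below are made-up numbers for illustration:

```python
def z_score(x, mean, sd):
    """How many standard deviations x sits from the mean."""
    return (x - mean) / sd    # subtract the mean, divide by s

print(z_score(85, 70, 10))    # 1.5 steps above the mean
# → 1.5
print(z_score(55, 70, 10))    # 1.5 steps below the mean
# → -1.5
```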


Calc Chi Square
