**1) Download and install R for Windows**

**2) Install the psych package**

Run the following line of code at the R console:

```
install.packages("psych")
```

**3) Load psych package**

Run this code:

```
library(psych)
```

**4) Run the test**

Enter code like this:

`paired.r(.2,.3,.93,100)`

The code above tests whether a correlation of 0.20 is statistically different from a correlation of 0.30, given that the two predictor variables are correlated 0.93 and assuming a sample size of 100.

The output in R looks like this:

```
> paired.r(.2,.3,.93,100)
$test
[1] "test of difference between two correlated correlations"

$t
[1] -2.832009

$p
[1] 0.005614689
```

It could be written up as:

The correlation between IV1 and DV (r = 0.30) was significantly larger than the correlation between IV2 and DV (r = 0.20), n = 100, p = .006.

Richard Moulding points out that for non-R users there is an SPSS macro for Meng's test of correlations by Anthony Hayes at

http://psyphz.psych.wisc.edu/~shackman/meng.sps

http://sites.google.com/site/richardmouldinghomepage/

How do you determine the 0.93 (the correlation between the two predictor variables)?

If you have three variables (X, Y, Z), then there are three possible correlations: X with Y, X with Z, and Y with Z.

If you have raw data, just run a correlation matrix on the three variables.

If you are interpreting an existing correlation matrix, just look up the three relevant correlations.
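As a sketch in base R (the data below are simulated purely for illustration; the variable names are made up), computing the matrix and pulling out the three relevant correlations looks like this:

```r
# Simulated raw data for three variables (illustration only)
set.seed(42)
x <- rnorm(100)
y <- 0.5 * x + rnorm(100)
z <- 0.5 * x + rnorm(100)
d <- data.frame(x, y, z)

# Correlation matrix of the three variables
R <- cor(d)
round(R, 2)

# The three correlations needed by paired.r() are:
# R["x", "y"], R["x", "z"], and R["y", "z"]
# e.g. paired.r(R["x", "y"], R["x", "z"], R["y", "z"], nrow(d))
```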

What is the basis for this calculation? Or how do I cite the procedure in a paper? Perhaps more importantly, I may need to explain (or at least support) it during my defense :) My limited understanding is that with independent samples a Fisher z test is used, but with dependent samples there is some controversy about how to do this.

Howell's "Statistical Methods for Psychology" provides further explanation with references.

Thanks for providing the R code for testing the difference between multiple correlations.

Is this test suitable for Spearman rank (rs) correlations?

Zar (1996) advised that hypothesis tests can be carried out on rs correlations, but suggests that the standard error of z be calculated as sqrt(1.06/(n-3)) rather than sqrt(1/(n-3)).

Many thanks!

Jeromy,

Does this calculation work for partial correlations as well? I have 3 variables and have run partial correlations on the three possible relationships between them. I want to assess whether the partial correlation between variables 1 and 2 controlling for 3 is significantly larger than the partial correlation between variables 1 and 3 controlling for 2.

Thanks

Hello, this is very interesting, as I am currently trying to see if one correlation is greater than another when looking at the same sample (so nonindependent correlations). What exactly are these computed values in the example: -2.832009 and 0.005614689?

How should they be read? Is it that the first value being less than the other means that the first correlation entered is indeed the smaller one? And where does the p in the example come from? Thanks

t represents a t-statistic from Hotelling's test for dependent correlations, and p is in this case the two-tailed p-value associated with that t-statistic. Thus, in a typical context, if p < .05 you might conclude that there is a statistically significant difference between the correlations.
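To see where that p comes from: it is the two-tailed probability of the t-statistic with n - 3 degrees of freedom, which can be reproduced in base R from the example above:

```r
# Values from the example output above
t_stat <- -2.832009
n <- 100

# Two-tailed p-value with df = n - 3
p <- 2 * pt(-abs(t_stat), df = n - 3)
p  # approximately 0.0056, matching the $p in the output
```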

Hi,

I posted this question a little while ago and wonder if anyone can offer me some guidance.

Particularly, I'd like to know if this test is suitable for Spearman Rank (rs) correlations?

Zar (1996) advised that hypothesis tests can be carried out on rs correlations, but suggests that the standard error of z be calculated as sqrt(1.06/(n-3)) rather than sqrt(1/(n-3)).

Thanks!
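For what it's worth, here is a sketch of what Zar's adjustment amounts to for two *independent* Spearman correlations (the rank correlations and sample sizes below are made-up numbers, and this is not the dependent-correlations case that paired.r handles):

```r
# Zar's (1996) adjusted z-test for two independent Spearman correlations
# (illustrative sketch only; rs1, rs2, n1, n2 are made-up numbers)
rs1 <- 0.60; n1 <- 50
rs2 <- 0.35; n2 <- 60

# Fisher z-transform of each rank correlation
z1 <- atanh(rs1)
z2 <- atanh(rs2)

# Standard error using Zar's 1.06/(n - 3) variance rather than 1/(n - 3)
se <- sqrt(1.06 / (n1 - 3) + 1.06 / (n2 - 3))

z_stat <- (z1 - z2) / se
p <- 2 * pnorm(-abs(z_stat))
c(z = z_stat, p = p)
```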

Hi Jeromy,

Thanks for the post. I was wondering if you could answer a related question. I would like to test the difference between two correlations, but the structure of my data is different from what you describe here.

My subjects were measured for two behavioral variables. I measured subjects twice, once after experimental treatment and once after control treatment (order was balanced). I would like to see if the correlation between the two behavioral variables in the control treatment differs from the correlation in the experimental treatment. From my understanding, Fisher's Z test is inappropriate since my behavioral variables are not normally distributed and the repeated measures create non-independence. I've searched quite a bit for how to analyze the data this way, but no luck. Do you have suggestions as to what the proper analysis is and how to perform it in R?

Thanks

I believe that Olkin and Finn provide a formula. See particularly equation 5. I'm not sure whether an R implementation exists.

Olkin, I., & Finn, J. D. (1995). Correlations redux. Psychological Bulletin, 118, 155.

Thanks for this post. I know that this is a bit old, but I will give it a try and see if I get an answer.

I want to test whether two dependent correlations are statistically different. I have three variables: x, y, and z. x is categorical (1, 2, 3, 4), and y and z are numerical. I computed the xy and xz Spearman rank correlations. I also know the Spearman rank correlation of yz.

My question is: is r.test appropriate if categorical variables are involved?

Thanks!

I meant paired.r instead of r.test.

If there are three quantitative variables, x, y, and z, how do I perform a dependent correlation analysis? Thanks.

That's what this post is about. What do you want to know?

Concerning Hotelling's t-test for "correlated correlations", I have some questions about the formula.

1. r: what does "r" mean, e.g. in r12?

2. df=N-3: What does "N" mean? Does N mean the sample size?

3. √ [N-3]: What does "√" mean?

4. √ 2: What does "√" mean?

1. r means a correlation; r12 means the correlation between variable 1 and variable 2.

2. N means the total sample size.

3 and 4. The symbol √ means square root.
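Putting those symbols together: the t that paired.r() reports appears to be Williams's (1959) modification of Hotelling's test for two correlated correlations, and it can be reproduced directly from the formula. The helper name williams_t is just for this sketch:

```r
# Williams's t-test for two correlated (dependent) correlations:
# does r12 differ from r13, given r23 and total sample size N?
williams_t <- function(r12, r13, r23, N) {
  rbar <- (r12 + r13) / 2
  # determinant of the 3 x 3 correlation matrix
  detR <- 1 - r12^2 - r13^2 - r23^2 + 2 * r12 * r13 * r23
  t <- (r12 - r13) *
    sqrt(((N - 1) * (1 + r23)) /
         (2 * detR * (N - 1) / (N - 3) + rbar^2 * (1 - r23)^3))
  p <- 2 * pt(-abs(t), df = N - 3)
  list(t = t, df = N - 3, p = p)
}

# Reproduces the example from the post:
williams_t(.2, .3, .93, 100)  # t = -2.832009, p = 0.005614689
```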

How can this be adapted to compare coefficients of determination?

I have an R package that uses bootstrapping to compare r-squared values when the sample is the same.

https://github.com/jeromyanglim/personalityfacets

Anglim, J., & Grant, S. L. (2014). Incremental criterion prediction of personality facets over factors: Obtaining unbiased estimates and confidence intervals. Journal of Research in Personality, 53, 148-157.

https://scholar.google.com.au/citations?view_op=view_citation&hl=en&user=mF0H9gUAAAAJ&citation_for_view=mF0H9gUAAAAJ:0aBXIfxlw9sC