Department of Statistics, Virginia Tech

This homework is due on **Tuesday, October 10th at 2pm** (the start of class). Please turn in all your work. This homework covers sign tests and variations, and the start of 2x2 contingency tables.

*Six students went on a diet in an attempt to lose weight, with the following results:*

```
weights <- data.frame(
  name=c("Abdul", "Ed", "Jim", "Max", "Phil", "Ray"),
  before=c(174, 191, 188, 182, 201, 188),
  after=c(165, 186, 183, 178, 203, 181))
```

*Is the diet an effective means of losing weight?*

Here we shall use a sign test to pit \(\mathcal{H}_0: \mathbb{P}(+) \geq \mathbb{P}(-)\) versus \(\mathcal{H}_1: \mathbb{P}(+) < \mathbb{P}(-)\), defining \(X_i\) to be the weight before and \(Y_i\) to be the weight after.

*(9 pts) Perform the test “by hand”.*

The first step is to count the number of plusses.

```
t1 <- sum(weights$before < weights$after)
n <- sum(weights$before != weights$after)
c(t1, n)
```

`## [1] 1 6`
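As a quick sanity check (not part of the test itself), tabulating the signs of the paired differences shows where that single plus comes from:

```
## a "+" (i.e., +1) means weight was gained on the diet
table(sign(weights$after - weights$before))
```

Five minuses and one plus (Phil gained weight), matching \(t_1 = 1\) above.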

Then, using that \(T_1 \sim \mathrm{Bin}(6, 1/2)\), we can calculate the \(p\)-value as \(\mathbb{P}(T_1 \leq t_1)\).

`pbinom(t1, n, 1/2)`

`## [1] 0.109375`

- We fail to reject \(\mathcal{H}_0\) at the 5% level: there is not enough evidence to conclude that the diet is an effective means of losing weight.

*(8 pts) Perform the test “using a software library”.*

Here, we could just use the above \(t_1\) and \(n\) calculations and then feed them into the `binom.test` function, as we did in class, or we could get a little clever and write our own library function.
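For reference, the direct route (using the \(t_1\) and \(n\) computed above) would simply be:

```
## one-sided sign test via the exact binomial test
binom.test(t1, n, 0.5, alternative="less")$p.value
```

which returns the same 0.109375. The more reusable option is a small wrapper.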

```
sign.test <- function(x, y, alternative="two.sided")
{
  n <- sum(x != y)   ## number of non-tied pairs
  t1 <- sum(x < y)   ## number of plusses
  p.value <- binom.test(t1, n, 0.5, alternative=alternative)$p.value
  return(list(n=n, alternative=alternative, t1=t1, p.value=p.value))
}
```

Here it is in action on our problem.

`sign.test(weights$before, weights$after, alternative="less")`

```
## $n
## [1] 6
##
## $alternative
## [1] "less"
##
## $t1
## [1] 1
##
## $p.value
## [1] 0.109375
```

- Same result.

*Two different additives were compared to see which one is better for improving the durability of concrete. One hundred small batches of concrete were mixed under various conditions and, during the mixing, each batch was divided into two parts. One part received additive A and the other part received additive B.*

*After the concrete hardened, the two parts in each batch were crushed against each other, and an observer determined which part appeared to be the most durable. In 77 cases the concrete with additive A was rated more durable, whereas in 23 cases the concrete with additive B was rated more durable. Is there a significant difference between the effects of the two additives?*

Again we will use the sign test to make the two-tailed comparison of \(\mathcal{H}_0: \mathbb{P}(+) = \mathbb{P}(-)\) versus \(\mathcal{H}_1: \mathbb{P}(+) \ne \mathbb{P}(-)\), where we define \(X_i\) to be the rating for additive A and \(Y_i\) to be the rating for additive B.

*(8 pts) Perform the test “by hand”.*

The observed value of the test statistic is

```
t1 <- 23
n <- 100
c(t1, n)
```

`## [1] 23 100`

Here, the observed value \(t_1\) is below the null mean of \(n/2 = 50\), so we can cut-and-paste the two-tailed binomial code for this situation.

```
pstar <- 0.5
rhs <- seq(ceiling(n*pstar), n)
tU <- sum(dbinom(rhs, n, pstar) <= dbinom(t1, n, pstar))
phi <- pbinom(t1, n, pstar) + pbinom(n - tU, n, pstar, lower.tail=FALSE)
phi
```

`## [1] 5.513581e-08`

- We reject \(\mathcal{H}_0\) at the 5% level and conclude that there is a significant difference between the additives.
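Since \(n = 100\) is large, the usual normal approximation to the binomial (an extra check, not part of the required by-hand test) lands in the same neighborhood:

```
## large-sample sign test: standardize t1 under Bin(n, 1/2)
z <- (t1 - n/2) / sqrt(n/4)
2 * pnorm(-abs(z))
```

The approximate \(p\)-value is of the same order of magnitude (\(\approx 10^{-8}\)) as the exact one.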

*(8 pts) Perform the test “using a software library”.*

Since we don’t have the raw data we can’t use our `sign.test` code from above. Instead it is just as simple to call `binom.test` directly.

`binom.test(t1, n, 0.5)`

```
##
## Exact binomial test
##
## data: t1 and n
## number of successes = 23, number of trials = 100, p-value =
## 5.514e-08
## alternative hypothesis: true probability of success is not equal to 0.5
## 95 percent confidence interval:
## 0.1517316 0.3248587
## sample estimates:
## probability of success
## 0.23
```

- Same result.

*One hundred thirty-five citizens were selected at random and were asked to state their opinion regarding U.S. foreign policy. Forty-three were opposed to the foreign policy. After several weeks, during which they received an informative newsletter, they were again asked their opinion; 37 were opposed, and 30 of the 37 were persons who originally were not opposed to the U.S. foreign policy. Is the change in numbers of people opposed to the U.S. foreign policy significant?*

Here, the McNemar test for significance of change is appropriate. We have \(\mathcal{H}_0: \mathbb{P}(X_i = 0) = \mathbb{P}(Y_i = 0)\) versus \(\mathcal{H}_1: \mathbb{P}(X_i = 0) \ne \mathbb{P}(Y_i = 0)\), for all \(i\). Here we are treating \(X_i\) as before and \(Y_i\) as after.

According to the problem description, the contingency table at play is

| | \(Y_i = 0\) | \(Y_i = 1\) |
|---|---|---|
| \(X_i = 0\) | \(a=7\) | \(b=36\) |
| \(X_i = 1\) | \(c=30\) | \(d=62\) |

*(9 pts) Perform the test “by hand”.*

We extract the relevant table entries in R as follows.

```
b <- 36
c <- 30
n <- b + c
c(b, c, n)
```

`## [1] 36 30 66`

Since \(n > 20\), we calculate

```
t1 <- (b-c)^2/(b+c)
t1
```

`## [1] 0.5454545`

Then, the \(p\)-value may be calculated as \(\mathbb{P}(T_1 > t_1)\), where \(T_1 \sim \chi^2_1\).

`pchisq(t1, 1, lower.tail=FALSE)`

`## [1] 0.4601809`

- We fail to reject the null hypothesis at the 5% level and conclude that there is no significant change in the numbers of people opposed to the policy.

*(8 pts) Perform the test “using a software library”.*

`mcnemar.test(matrix(c(7,30,36,62),nrow=2), correct=FALSE)`

```
##
## McNemar's Chi-squared test
##
## data: matrix(c(7, 30, 36, 62), nrow = 2)
## McNemar's chi-squared = 0.54545, df = 1, p-value = 0.4602
```

- Same result.
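When \(n = b + c\) is small (here we used the \(\chi^2\) form because \(n > 20\)), the exact version of McNemar's test is a two-tailed binomial test on \(b\) out of the \(b + c\) discordant pairs; even with \(n = 66\) it serves as a cross-check:

```
## exact McNemar: condition on the n = b + c discordant pairs
binom.test(b, b + c, 0.5)$p.value
```

It gives a \(p\)-value of similar magnitude and the same conclusion.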

*In a certain city the mortality rate per 100,000 citizens due to automobile accidents for each of the last 15 years was:*

`mortality <- c(17.3, 17.9, 18.4, 18.1, 18.3, 19.6, 18.6, 19.2, 17.7, 20.0, 19.0, 18.8, 19.3, 20.2, 19.9)`

*Is there any basis for the statement that the mortality is increasing?*

Here we will use the Cox & Stewart test for trend with the following hypotheses, \(\mathcal{H}_0:\) there is no upward trend, versus, \(\mathcal{H}_1:\) there is an upward trend.

First let's plot the data to see if we think an upward trend is plausible.

`plot(mortality, main="checking for trend", xlab="year", ylab="mortality rate per 100,000")`

- Looks plausible.

*(8 pts) Perform the test “by hand”.*

Now we perform an upper-tail test of increasing trend where we create pairs in the following way.

```
x <- mortality[1:floor(length(mortality)/2)]  ## first half
y <- mortality[(ceiling(length(mortality)/2)+1):length(mortality)]  ## second half, skipping the middle value since n is odd
rbind(x,y)
```

```
## [,1] [,2] [,3] [,4] [,5] [,6] [,7]
## x 17.3 17.9 18.4 18.1 18.3 19.6 18.6
## y 17.7 20.0 19.0 18.8 19.3 20.2 19.9
```

The observed value of the test statistic is the number of pairs where \(x < y\), and \(n\) is the number of unequal pairs.

```
t <- sum(x < y)
n <- sum(x != y)
c(t, n)
```

`## [1] 7 7`

Under the null hypothesis that the probability of plusses is at most \(1/2\), i.e., \(\mathcal{H}_0: p \leq 1/2\), the null distribution is \(T \sim \mathrm{Bin}(7, 1/2)\). Under the alternative \(\mathcal{H}_1: p > 1/2\), the \(p\)-value may be calculated as follows.

`pbinom(t-1, n, 1/2, lower.tail=FALSE)`

`## [1] 0.0078125`

- Therefore we reject \(H_0\) at the 5% level and conclude that the mortality rate is increasing.
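Equivalently, since the Cox & Stuart statistic is just a sign test applied to these pairs, `binom.test` reproduces the \(p\)-value:

```
## upper-tailed binomial test on the t = 7 plusses out of n = 7 pairs
binom.test(t, n, 0.5, alternative="greater")$p.value
```

This returns \(0.5^7 = 0.0078125\), matching the by-hand calculation.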

*(8 pts) Perform the test “using a software library”.*

```
library(randtests)
cox.stuart.test(mortality, alternative='right.sided')
```

```
##
## Cox Stuart test
##
## data: mortality
## statistic = 7, n = 7, p-value = 0.007812
## alternative hypothesis: increasing trend
```

- Same result.

*One hundred men and one hundred women were asked to try a new toothpaste and to state whether they liked or did not like the new taste. Thirty-two men and twenty-six women said they did not like the new taste. Does this indicate a difference in preferences between men and women in general?*

The two-sample proportion test is appropriate here.

```
O <- rbind(c(32, 68), c(26, 74))
n <- rowSums(O)
N <- sum(n)
C <- colSums(O)
```

and in a pretty table.

| | Not Like | Like | Total |
|---|---|---|---|
| Men | 32 | 68 | 100 |
| Women | 26 | 74 | 100 |
| Total | 58 | 142 | 200 |

Here we test \(\mathcal{H}_0: p_1 = p_2\), versus \(\mathcal{H}_1: p_1 \ne p_2\), a two-sided test.

*(9 pts) Perform the test “by hand”.*

The observed value of the test statistic may be calculated in R as follows.

```
t1 <- sqrt(N) * (O[1,1]*O[2,2] - O[1,2]*O[2,1]) / sqrt(n[1]*n[2]*C[1]*C[2])
t1
```

`## [1] 0.9349924`
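As a sanity check, this agrees with the familiar pooled two-proportion \(z\) statistic:

```
## pooled estimate and z statistic for H0: p1 = p2
p1 <- O[1,1]/n[1]
p2 <- O[2,1]/n[2]
pbar <- C[1]/N  ## pooled proportion of "not like"
(p1 - p2) / sqrt(pbar * (1 - pbar) * (1/n[1] + 1/n[2]))
```

which is again about \(0.935\).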

Since this is a two-tailed test, we have a choice about whether to leverage asymptotic standard normality of \(T_1\) or Chi-squared for \(T_1^2\) when calculating the \(p\)-value.

`2 * pnorm(-abs(t1))`

`## [1] 0.3497922`

`pchisq(t1^2, 1, lower.tail=FALSE)`

`## [1] 0.3497922`

- But, of course, they give the same result.
- At the 5% level this is not enough evidence to reject \(\mathcal{H}_0\). We conclude that there is no difference in preferences between men and women.

*(8 pts) Perform the test “using a software library”.*

`prop.test(x=O, correct=FALSE)`

```
##
## 2-sample test for equality of proportions without continuity
## correction
##
## data: O
## X-squared = 0.87421, df = 1, p-value = 0.3498
## alternative hypothesis: two.sided
## 95 percent confidence interval:
## -0.06549893 0.18549893
## sample estimates:
## prop 1 prop 2
## 0.32 0.26
```

- Identical result.

*A florist maintains that certain venues have a preference for a certain type of flower to decorate for weddings. In order to test this claim, she had 10 available decorative gardenias and 10 available decorative tulips. Venue A had 8 weddings on the list and Venue B had 12 weddings on its list. Venue A used gardenias on 5 different occasions. Does this indicate that there is a significant difference in the use of gardenias across venues?*

Fisher’s exact test is appropriate here. The 2x2 contingency table corresponding to the description above is below.

| | Gardenias | Tulips | Total |
|---|---|---|---|
| Venue A | 5 | 3 | 8 |
| Venue B | 5 | 7 | 12 |
| Total | 10 | 10 | 20 |

And from the table we can extract out the relevant quantities in R.

```
N <- 20
r <- 8
c <- 10
t2 <- x <- 5
```

Below we test \(\mathcal{H}_0 : p_1 = p_2\), versus \(\mathcal{H}_1: p_1 \ne p_2\), a two-tailed test, leveraging that \(T_2\) follows a hypergeometric distribution with \(N=20\), \(r=8\) and \(c= 10\).

*(9 pts) Perform the test “by hand”.*

The mean of our hypergeometric is \(cr/N = 4\), so \(t_2\) is above that value. Cutting and pasting from any of our many two-tailed binomial examples earlier in class, but replacing binomials with the appropriate hypergeometric, gives the following \(p\)-value calculation.

```
lhs <- seq(0, floor(c*r/N))
eps <- sqrt(.Machine$double.eps) ## numerical wiggle room
tL <- sum(dhyper(lhs, r, N-r, c) <= dhyper(t2, r, N-r, c) * (1 + eps))
phyper(tL - 1, r, N-r, c) + phyper(t2 - 1, r, N-r, c, lower.tail=FALSE)
```

`## [1] 0.6499166`

- At the 5% level we fail to reject \(H_0\), and conclude that there is no difference in the use of gardenias across venues.

*(8 pts) Perform the test “using a software library”.*

`fisher.test(rbind(c(x, r-x),c(c-x, N-r-c+x)))`

```
##
## Fisher's Exact Test for Count Data
##
## data: rbind(c(x, r - x), c(c - x, N - r - c + x))
## p-value = 0.6499
## alternative hypothesis: true odds ratio is not equal to 1
## 95 percent confidence interval:
## 0.2725276 21.9391352
## sample estimates:
## odds ratio
## 2.234096
```