Department of Statistics, Virginia Tech

This homework is due on **Thursday, September 15th at 3:30pm** (the start of class). Please turn in all your work. This homework primarily covers the middle part of Chapter 3 from the text.

*It is known that 20% of a certain species of insect exhibit a particular characteristic A. Eighteen insects of that species are obtained from an unusual environment, and none of these have characteristic A. Is it reasonable to assume that insects from that environment have the same probability of 0.20 of having that characteristic, as the species in general has?*

A two-tailed binomial test is appropriate with \(\mathcal{H}_0 : p=0.2\) versus \(\mathcal{H}_1: p\neq 0.2\) at the \(\alpha = 0.05\) level. There are \(n=18\) observations, and we treat insects with characteristic A as “class 1” and those without it as “class 2”. The observed value of the test statistic \(T\) is \(t=0\), and the null distribution is \(T \sim \mathrm{Bin}(18, 0.2)\).

The relevant quantities are recorded in R as follows.

```
n <- 18      ## sample size
pstar <- 0.2 ## hypothesized probability of characteristic A
t <- 0       ## observed number with characteristic A
```

*(10 pts) Perform the test “by hand”.*

Since the observed value is less than the null mean, the \(p\)-value may be calculated in R as follows.

```
## upper-tail outcomes at or above the null mean
rhs <- seq(ceiling(n*pstar), n)
## how many of them are no more likely than the observed t
tU <- sum(dbinom(rhs, n, pstar) <= dbinom(t, n, pstar))
## lower tail at t plus those tU most extreme upper-tail outcomes
phi <- pbinom(t, n, pstar) + pbinom(n - tU, n, pstar, lower.tail=FALSE)
phi
```

`## [1] 0.03429454`

Therefore we reject the null hypothesis: it is not reasonable to assume that insects from that environment have the same probability of 0.20 of having characteristic A that the species in general has.
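As a cross-check on the two-piece construction above, the same two-tailed \(p\)-value can be computed in one shot as the total probability of all outcomes no more likely than the observed \(t = 0\) (this is also how `binom.test` defines its two-sided \(p\)-value):

```r
## sum the probabilities of all outcomes no more likely than t = 0
n <- 18; pstar <- 0.2; t <- 0
d <- dbinom(0:n, n, pstar)
p.val <- sum(d[d <= dbinom(t, n, pstar)])
p.val  ## 0.03429454, as above
```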

*(5 pts) Perform the test “using a software library”.*

Quite simply:

`binom.test(t, n, pstar)`

```
##
## Exact binomial test
##
## data: t and n
## number of successes = 0, number of trials = 18, p-value = 0.03429
## alternative hypothesis: true probability of success is not equal to 0.2
## 95 percent confidence interval:
## 0.000000 0.185302
## sample estimates:
## probability of success
## 0
```

This confirms our calculation from part (a).

*In a dice game a pair of dice were thrown 180 times. The event “seven” occurred on 38 of those times. If these dice are fair, the probability of “seven” is one-sixth. If they are loaded, the probability is higher. Is the probability of “seven” what it should be if the dice were fair? Use the appropriate one-tailed test.*

Here the appropriate test is \(\mathcal{H}_0: p \leq 1/6\) versus \(\mathcal{H}_1: p > 1/6\), an upper-tailed test, at the \(\alpha = 0.05\) level. The relevant quantities are \(n=180\) observations with “class 1” being “landing a seven” and “class 2” being “not landing a seven”. The observed value of the test statistic is \(t=38\) with null distribution \(T\sim \mathrm{Bin}(n,1/6)\). In R:

```
n <- 180
pstar <- 1/6
t <- 38
```

*(10 pts) Perform the test “by hand”.*

The \(p\)-value may be calculated as follows.

```
## upper-tail p-value P(T >= t) under the null
phi <- pbinom(t-1, n, pstar, lower.tail=FALSE)
phi
```

`## [1] 0.06996735`

We fail to reject the null hypothesis at the \(\alpha = 0.05\) level and conclude that there is not enough evidence to say that the dice are loaded.
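As a sanity check (not required by the problem), the large-sample normal approximation with continuity correction gives a similar answer:

```r
## normal approximation to the upper-tailed binomial p-value
n <- 180; pstar <- 1/6; t <- 38
z <- (t - 0.5 - n*pstar) / sqrt(n*pstar*(1-pstar))  ## z = 1.5
pnorm(z, lower.tail=FALSE)  ## about 0.067, near the exact 0.0700
```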

*(5 pts) Perform the test “using a software library”.*

Quite simply:

`binom.test(t, n, pstar, alternative="greater")`

```
##
## Exact binomial test
##
## data: t and n
## number of successes = 38, number of trials = 180, p-value =
## 0.06997
## alternative hypothesis: true probability of success is greater than 0.1666667
## 95 percent confidence interval:
## 0.1621618 1.0000000
## sample estimates:
## probability of success
## 0.2111111
```

This confirms the result we calculated by hand above.

*A random sample of tenth-grade boys resulted in the following 20 observed weights.*

`weight <- c(142, 134, 98, 119, 131, 103, 154, 122, 93, 137, 86, 119, 161, 144, 158, 165, 81, 117, 128, 103)`

*Test the hypothesis that the median weight is 103.*

The relevant hypotheses are \(\mathcal{H}_0:\) the median is \(103\), versus \(\mathcal{H}_1:\) the median is not \(103\), a two-sided test which we will perform at the \(\alpha=0.05\) level. The observed values of the test statistics, and other relevant quantities, may be calculated in R as follows.

```
n <- length(weight)
pstar <- 0.5
t1 <- sum(weight <= 103) ## number at or below the hypothesized median
t2 <- sum(weight < 103)  ## number strictly below
c(n, t1, t2)
```

`## [1] 20 6 4`

where \(T_1, T_2 \sim \mathrm{Bin}(20, 1/2)\).

*(10 pts) Perform the test “by hand”.*

The \(p\)-value may be calculated in R as follows.

```
## twice the smaller of the two one-sided tails
phi <- 2*min(pbinom(t1, n, pstar), 1-pbinom(t2-1, n, pstar))
phi
```

`## [1] 0.1153183`

With this result we do not have enough evidence to reject the null hypothesis that the median is 103.
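To see where the doubling comes from, the two one-sided tails can be computed separately (recomputing the quantities above so this chunk stands alone). Two statistics are needed because two observations are tied with the hypothesized median of 103, making \(t_1 \neq t_2\):

```r
## the two one-sided tails behind the two-sided p-value
weight <- c(142, 134, 98, 119, 131, 103, 154, 122, 93, 137,
            86, 119, 161, 144, 158, 165, 81, 117, 128, 103)
n <- length(weight); pstar <- 0.5
t1 <- sum(weight <= 103)             ## 6 at or below 103
t2 <- sum(weight < 103)              ## 4 strictly below 103
lower <- pbinom(t1, n, pstar)        ## P(T1 <= 6), about 0.058
upper <- 1 - pbinom(t2-1, n, pstar)  ## P(T2 >= 4), about 0.999
p.two <- 2*min(lower, upper)
p.two  ## 0.1153183, as above
```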

*(5 pts) Perform the test “using a software library”.*

We did not go over a library for this test in class, but a little googling turned up one, which is pasted (with minor edits) below. In the version below I have made a modification to use an exact `binom.test` when `t1 == t2`.

```
quantile.test <- function(x, xstar=0, quantile=.5, alternative="two.sided")
{
  n <- length(x)
  p <- quantile
  t1 <- sum(x <= xstar)
  t2 <- sum(x < xstar)
  if (alternative=="less") {
    p.value <- 1-pbinom(t2-1, n, p)
  } else if(alternative=="greater") {
    p.value <- pbinom(t1, n, p)
  } else if(alternative=="two.sided") {
    if(t1 == t2) p.value <- binom.test(t1, n, p)$p.value
    else p.value <- 2*min(1-pbinom(t2-1,n,p), pbinom(t1,n,p))
  }
  list(xstar=xstar, alternative=alternative, t1=t1, t2=t2, p.value=p.value)
}
```

Upon inspection, it is clear that this code is doing exactly the same thing as we did by hand above (except in the exact `t1 == t2` case), only it handles all three alternatives at once.

Here is the library applied to our case.

`quantile.test(weight, 103, pstar)`

```
## $xstar
## [1] 103
##
## $alternative
## [1] "two.sided"
##
## $t1
## [1] 6
##
## $t2
## [1] 4
##
## $p.value
## [1] 0.1153183
```

Identical, as expected.

*Test the hypothesis that the third decile is no greater than 100. A decile is any of the nine values that divide the sorted data into ten equal parts, so that each part represents 1/10 of the sample population.*

This is a one-sided test of \(\mathcal{H}_0 :\) the 30% population quantile is no greater than \(100\), versus \(\mathcal{H}_1:\) the 30% population quantile is greater than \(100\), which we perform at the \(\alpha = 0.05\) level. The relevant quantities (which have changed from above) are

```
pstar <- 0.3
t1 <- sum(weight <= 100)
t1
```

`## [1] 4`

where \(T_1 \sim \mathrm{Bin}(20, 0.3)\).

*(10 pts) Perform the test “by hand”.*

The \(p\)-value may be calculated as follows.

`pbinom(t1, n, pstar)`

`## [1] 0.2375078`

Based on this result we fail to reject the null hypothesis and conclude that there is not enough evidence to say that the 30% population quantile is greater than 100.
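The same \(p\)-value also falls out of base R's `binom.test` with the `"less"` alternative, since “the 30% quantile exceeds 100” manifests as too few observations at or below 100:

```r
## one-sided quantile test as a lower-tail binomial test on the count
n <- 20; pstar <- 0.3; t1 <- 4
p.less <- binom.test(t1, n, pstar, alternative="less")$p.value
p.less  ## 0.2375078, matching the hand calculation
```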

*(5 pts) Perform the test “using a software library”.*

As above, …

`quantile.test(weight, 100, pstar, alternative="greater")`

```
## $xstar
## [1] 100
##
## $alternative
## [1] "greater"
##
## $t1
## [1] 4
##
## $t2
## [1] 4
##
## $p.value
## [1] 0.2375078
```

… an identical result.

*It is desired to design a given automobile to allow enough headroom to accommodate comfortably all but the tallest 5% of the people who drive. Former studies indicate that the 95th percentile was 70.3 inches. In order to see if the former studies are still valid, a random sample of size 100 is selected. It is found that the 12 tallest persons in the sample have the following heights.*

`heights <- c(72.6, 70.0, 71.3, 70.5, 70.8, 76.0, 70.1, 72.5, 71.1, 70.6, 71.9, 72.8)`

*Is it reasonable to use 70.3 as the 95th percentile?*

Here we are interested in testing \(\mathcal{H}_0:\) the 95th population quantile is \(70.3\) versus \(\mathcal{H}_1:\) the 95th population quantile is not \(70.3\), a two-sided test.

```
mh <- min(heights)
mh
```

`## [1] 70`

From the problem description we know that the remaining 88 people have heights less than 70.0 (the smallest of the 12 listed heights), and hence less than 70.3; so we have the following observed values of the test statistics

```
n <- 100
pstar <- 0.95
t1 <- 88 + sum(heights <= 70.3)
t2 <- 88 + sum(heights < 70.3)
c(n, t1, t2)
```

`## [1] 100 90 90`

whose null distribution is \(T_1,T_2 \sim \mathrm{Bin}(100, 0.95)\).

*(10 pts) Perform the test “by hand”.*

The \(p\)-value may be calculated in R as follows.

```
phi <- 2*min(pbinom(t1, n, pstar), 1-pbinom(t2-1, n, pstar))
phi
```

`## [1] 0.05637659`

Based on this calculation, there is not enough evidence to reject \(\mathcal{H}_0\) at the \(\alpha = 0.05\) level.

*(5 pts) Perform the test “using a software library”.*

Again there is no built-in library for this, and the `quantile.test` function above is not directly applicable because we don’t have the raw data. But we can easily modify its code to pass in our own counts.

```
quantile.test.counts <- function(t1, t2, n, quantile=.5, alternative="two.sided")
{
  p <- quantile
  if (alternative=="less") {
    p.value <- 1-pbinom(t2-1, n, p)
  } else if(alternative=="greater") {
    p.value <- pbinom(t1, n, p)
  } else if(alternative=="two.sided") {
    if(t1 == t2) p.value <- binom.test(t1, n, p)$p.value
    else p.value <- 2*min(1-pbinom(t2-1,n,p), pbinom(t1,n,p))
  }
  list(alternative=alternative, t1=t1, t2=t2, p.value=p.value)
}
```

Now here goes …

`quantile.test.counts(t1, t2, n, pstar)`

```
## $alternative
## [1] "two.sided"
##
## $t1
## [1] 90
##
## $t2
## [1] 90
##
## $p.value
## [1] 0.03410882
```

… in this case a slightly different result: since \(t_1 = t_2 = 90\), the “library” version uses the exact `binom.test` calculation rather than the conservative doubled-tail calculation we used by hand.

(10 pts)

*What must the sample size be to be 90% sure that at least 95% of the population lies within the sample range? Use the exact calculation.*

Here is the code given in lecture for calculating sample sizes at given quantiles.

```
bintol <- function(n, r, m, q)
{
  i <- 0:(r+m-1)
  sum(choose(n, i) * (1-q)^i * q^(n-i))
}

solve.bintol.n <- function(level, r, m, q)
{
  ## convert from level to alpha
  alpha <- 1-level
  n <- 1
  while(1) {
    p <- bintol(n, r, m, q)
    if(p < alpha) break ## we crossed the threshold
    else n <- n+1
  }
  return(n)
}
```

And we use it as follows to answer the question.

`solve.bintol.n(0.9, 1, 1, 0.95)`

`## [1] 77`

So we need at least 77 samples to be 90% sure we have captured 95% of the population range.
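A quick check (redefining `bintol` so this chunk stands alone) confirms that \(n=77\) is the first sample size where the failure probability drops below \(\alpha = 0.1\):

```r
## failure probability for an (r, m) tolerance interval at coverage q
bintol <- function(n, r, m, q)
{
  i <- 0:(r+m-1)
  sum(choose(n, i) * (1-q)^i * q^(n-i))
}
c(bintol(76, 1, 1, 0.95), bintol(77, 1, 1, 0.95))  ## straddles alpha = 0.1
```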

*Use the approximation.*

Here is the code from lecture for the same calculation via approximation.

```
approx.bintol.n <- function(level, r, m, q)
{
  x1ma <- qchisq(level, 2*(r+m))
  (1/4)*x1ma*(1+q)/(1-q) + (1/2)*(r+m-1)
}
```

Using it on the problem at hand gives the following result.

`approx.bintol.n(0.9, 1, 1, 0.95)`

`## [1] 76.34954`

Rounding to the nearest integer gives 76, one short of the exact answer; since a sample size must be rounded up, the approximation also points to 77.
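Taking the ceiling of the approximation (redefined here so this chunk stands alone) indeed recovers the exact answer:

```r
## chi-square large-sample approximation to the tolerance sample size
approx.bintol.n <- function(level, r, m, q)
{
  x1ma <- qchisq(level, 2*(r+m))
  (1/4)*x1ma*(1+q)/(1-q) + (1/2)*(r+m-1)
}
ceiling(approx.bintol.n(0.9, 1, 1, 0.95))  ## 77, matching the exact result
```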

(10 pts)

*A fitness gym has measured the percentage of fat in 86 of its members. At least what percent of its members have fat percentages between the smallest and the largest of the percentages measured on the 86 members in the sample, with 95% certainty?*

For this problem we will need the library we wrote in lecture for finding the quantile given the sample size. That code is pasted below.

```
solve.bintol.q <- function(level, n, r, m)
{
  alpha <- 1-level
  f <- function(q, alpha, n, r, m) {
    alpha - bintol(n, r, m, q)
  }
  out <- uniroot(f, c(0,1), alpha=alpha, n=n, r=r, m=m)
  return(out$root)
}
```

Now we can use it on the problem at hand with \(r=m=1\) as follows.

```
q <- solve.bintol.q(0.95, 86, 1, 1)
q
```

`## [1] 0.9460137`

So, with 95% certainty, at least about 94.6% of the members’ fat percentages fall between the smallest and largest measured values.
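Plugging the solved quantile back into `bintol` (redefined here so this chunk stands alone) confirms the root: the failure probability at \(q \approx 0.946\) is essentially \(\alpha = 0.05\).

```r
## failure probability for an (r, m) tolerance interval at coverage q
bintol <- function(n, r, m, q)
{
  i <- 0:(r+m-1)
  sum(choose(n, i) * (1-q)^i * q^(n-i))
}
bintol(86, 1, 1, 0.9460137)  ## approximately 0.05
```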

*At least what percent of its members have fat percentage between \(X^{(2)}\) and \(X^{(85)}\) with 95% certainty?*

Likewise, we can use the library to get this result with \(r=m=2\) as follows.

```
q2 <- solve.bintol.q(0.95, 86, 2, 2)
q2
```

`## [1] 0.9123098`

If following the book, you’ll have to use the approximation instead:

```
approx.bintol.q <- function(level, n, r, m)
{
  x1ma <- qchisq(level, 2*(r+m))
  temp <- 4*n - 2*(r+m-1)
  (temp - x1ma)/(temp + x1ma)
}
approx.bintol.q(0.95, 86, 2, 2)
```

`## [1] 0.912266`

This gives nearly the same result. We conclude that at least about 91% of the members will fall in that range.

*(5 pts) An engineer writing the acceptance specs for a load of steel reinforcement rods would like to specify that at least 90% of the rods are between the sixth longest and the sixth shortest rods in a random sample she selects. In order to have 99% confidence in this statement, what should the sample size be?*

Again we can use our library to calculate the desired sample size as follows.

`solve.bintol.n(0.99, 6, 6, 0.9)`

`## [1] 210`

If following the book we will have to make an approximation because there are no tables for \(r=m=6\).

`approx.bintol.n(0.99, 6, 6, 0.9)`

`## [1] 209.6541`

This gives the same result after rounding up.

We conclude that at least 210 samples will be needed to be 99% sure that the sixth smallest and largest will cover the 90% range.
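As a final check, note that `bintol(n, 6, 6, 0.9)` reduces to `pbinom(11, n, 0.1)`, and this failure probability indeed crosses \(\alpha = 0.01\) between \(n = 209\) and \(n = 210\):

```r
## bintol(n, 6, 6, 0.9) is the probability that 12 or fewer of the
## n observations fall outside the central 90%, i.e. pbinom(11, n, 0.1)
c(pbinom(11, 209, 0.1), pbinom(11, 210, 0.1))  ## straddles alpha = 0.01
```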