# Contingency tables

I have some wishes for the contingency table program. Best (but a lot of work, I guess) would be a full log-linear models analysis program. However, even without this, there are some things that could be added that would be useful.

1. With two-by-two tables it might be useful to include the calculation of odds ratios – these are widely used in medical research.

2. McNemar's test for correlated proportions in 2 x 2 tables could also be included (Everitt, 1977, pp. 20–22).

3. With r x c tables it might be useful to include the calculation of adjusted standardized residuals. These are suggested by Haberman (1973) (see also Everitt, 1977) and are very useful since they can be compared with the standard normal deviate. I often use them as "post hoc" tests to locate the source of association in a table rather than the approach that you call "the divided chi-square".

4. With ordered r x c tables (i.e., tables where the categories have an intrinsic order) it can be useful to include a test for linear trend (see Everitt, 1977, pp. 51–56).

5. Some measures of the strength of the association in a table may be useful (many journals seem to want Cramér's V, though I find it hard to interpret). Besides odds ratios, Goodman and Kruskal's lambdas or Kendall's tau (much used in "ordinal statistics") would be nice.
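To make the suggestions above concrete, here is a minimal pure-Python sketch of a few of them: the odds ratio, McNemar's chi-square, Haberman's adjusted standardized residuals, and Cramér's V. The function names are my own (hypothetical), and the formulas follow the standard textbook definitions rather than any particular program's implementation.

```python
import math

def odds_ratio(table):
    """Odds ratio for a 2 x 2 table [[a, b], [c, d]]: (a*d) / (b*c)."""
    (a, b), (c, d) = table
    return (a * d) / (b * c)

def mcnemar_chi2(table):
    """McNemar's chi-square (with continuity correction) for a paired
    2 x 2 table; uses only the discordant cells b and c."""
    (_, b), (c, _) = table
    return (abs(b - c) - 1) ** 2 / (b + c)

def adjusted_residuals(table):
    """Haberman's adjusted standardized residuals for an r x c table:
    d_ij = (O_ij - E_ij) / sqrt(E_ij * (1 - row_i/N) * (1 - col_j/N)).
    Approximately standard normal under independence."""
    rows = [sum(r) for r in table]
    cols = [sum(c) for c in zip(*table)]
    n = sum(rows)
    result = []
    for i, r in enumerate(table):
        line = []
        for j, obs in enumerate(r):
            e = rows[i] * cols[j] / n
            var = e * (1 - rows[i] / n) * (1 - cols[j] / n)
            line.append((obs - e) / math.sqrt(var))
        result.append(line)
    return result

def cramers_v(table):
    """Cramér's V = sqrt(chi2 / (N * (min(r, c) - 1))), in [0, 1]."""
    rows = [sum(r) for r in table]
    cols = [sum(c) for c in zip(*table)]
    n = sum(rows)
    chi2 = sum((obs - rows[i] * cols[j] / n) ** 2 / (rows[i] * cols[j] / n)
               for i, r in enumerate(table) for j, obs in enumerate(r))
    return math.sqrt(chi2 / (n * (min(len(rows), len(cols)) - 1)))
```

For a 2 x 2 table, all four adjusted residuals have the same absolute value, which is one easy sanity check on the formula.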

## References

Everitt, B. S. (1977). The analysis of contingency tables. London: Chapman and Hall.

Haberman, S. J. (1973). The analysis of residuals in cross-classified tables. Biometrics, 29, 205–220.

## Comments

Sorry to keep bothering you, but since I had mentioned a number of additions for the contingency table routines, I thought it might be worth mentioning another relatively simple one that can be very useful to researchers dealing with judgement data (common in psychiatry and psychology): Cohen's kappa. David Howell (1997) describes Cohen's kappa as follows:

"An important statistic that is not based on chi-square but that does use contingency tables is kappa (κ), commonly known as Cohen's kappa. This statistic measures interjudge agreement and is often used when we wish to examine the reliability of ratings." (p. 160).

Howell (1997) describes the calculation of kappa on pages 160–161 of his book.
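As a rough sketch of the calculation (my own hypothetical function name, following the standard definition κ = (p_o − p_e) / (1 − p_e) rather than Howell's worked example): the table is square, with cell [i][j] counting cases where judge 1 chose category i and judge 2 chose category j.

```python
def cohens_kappa(table):
    """Cohen's kappa for a square agreement table."""
    n = sum(sum(row) for row in table)
    rows = [sum(r) for r in table]
    cols = [sum(c) for c in zip(*table)]
    # Observed agreement: proportion of cases on the main diagonal.
    p_o = sum(table[i][i] for i in range(len(table))) / n
    # Chance agreement: expected diagonal proportion from the marginals.
    p_e = sum(rows[i] * cols[i] for i in range(len(table))) / n ** 2
    return (p_o - p_e) / (1 - p_e)
```

Kappa is 1 for perfect agreement and 0 when agreement is no better than chance.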

### References

Cohen, J. (1960). A coefficient of agreement for nominal scales. Educational and Psychological Measurement, 20, 37–46.

Howell, D. C. (1997). Statistical methods for psychology (Fourth edition). Belmont, CA: Duxbury Press.

Alan Roberts

statistiXL