Relation between FLIGHT/VIDAS computer and booklet forms

We had a major concern when we created the FLIGHT/VIDAS 20-item paper-and-pencil version of our original computer-administered health literacy scale: would the two forms be equivalent? Although a number of studies have shown that most tests are equivalent across administration formats (Gwaltney et al. 2008), there was always the possibility that, for some reason, the two versions of FLIGHT/VIDAS might not be. We have been addressing this issue in our continuation study, FLIGHT/VIDAS II, by asking people who had previously completed the computer version to also complete the booklet. Although the number of participants has so far been small (N = 15; all English speakers), results suggest a close relation between performance on the two versions. The correlation (Spearman rho) between scores on the two forms is 0.92 (p < 0.001). Because a correlation based on so few cases can be strongly influenced by one or two participants, we also examined a scatterplot of the paired test scores. The plot suggests a high degree of relation between individuals’ performance on the two measures.
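If you want to run this kind of equivalence check on your own data, here is a minimal sketch in Python using scipy and matplotlib. The score arrays below are hypothetical placeholders for illustration only, not our participants’ data.

```python
# Minimal sketch: Spearman correlation between paired scores on two
# test forms, plus a scatterplot to make influential cases visible.
# The score arrays are hypothetical, not FLIGHT/VIDAS data.
import numpy as np
from scipy.stats import spearmanr
import matplotlib.pyplot as plt

computer_scores = np.array([12, 15, 9, 18, 14, 11, 17, 13, 16, 10])  # hypothetical
booklet_scores = np.array([13, 16, 8, 18, 13, 12, 17, 14, 15, 11])   # hypothetical

rho, p_value = spearmanr(computer_scores, booklet_scores)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.4f}")

# With a small N, always look at the plot, not just the coefficient.
plt.scatter(computer_scores, booklet_scores)
plt.xlabel("Computer version score")
plt.ylabel("Booklet version score")
plt.title("Paired scores on the two test forms")
plt.show()
```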

Reference:

Gwaltney CJ, Shields AL, Shiffman S (2008). Equivalence of electronic and paper-and-pencil administration of patient-reported outcome measures: A meta-analytic review. Value in Health, 11, 322-333.


Self-Report and Objective Measurement of Health Literacy

There is a lot of interest in measuring health literacy. A key problem, though, is how to do so. Objective measures of health literacy take time to administer and score. Several such measures are available, but even when you use one, it can be hard to know what a score means: cutoff scores don’t always agree across measures, and the validity of those cutoffs isn’t always established.

In this post, I want to highlight a problem that I think isn’t always appreciated; it came up in a discussion at the Health Literacy Annual Research Conference (HARC) in Bethesda this past November (2015). A number of people have observed that there is a relation between self-reported and objectively measured health literacy. Some have jumped from the observation of this relation (usually established by a correlation coefficient) to the conclusion that we can screen people for low health literacy simply by asking them whether they have difficulty understanding health information. But a correlation establishes only a relation; it says nothing about the extent to which the two measures agree on someone’s level of health literacy.
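A toy example makes the distinction concrete: if one measure is just a shifted copy of another, the correlation between them is perfect, yet the two can classify entirely different people as “low” relative to a fixed cutoff. All numbers below are invented purely for illustration.

```python
# Toy demonstration: two measures can be perfectly correlated yet
# disagree completely about who falls below a screening cutoff.
import numpy as np
from scipy.stats import pearsonr

objective = np.array([40, 50, 60, 70, 80, 90])  # hypothetical objective scores
self_report = objective + 25                    # same ordering, shifted upward

r, _ = pearsonr(objective, self_report)
print(f"Pearson r = {r:.2f}")  # prints 1.00: a perfect linear relation

cutoff = 65  # hypothetical "low health literacy" cutoff
print("Low by objective measure:  ", int((objective < cutoff).sum()))    # 3 of 6
print("Low by self-report measure:", int((self_report < cutoff).sum()))  # 0 of 6
```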

In the FLIGHT/VIDAS project, we asked our participants three of the self-report questions developed by Lisa Chew. We also objectively measured our participants’ general literacy with a well-established, validated, and normed standard academic test, the Passage Comprehension subtest of the Woodcock-Johnson Psycho-Educational Battery. Of our almost 500 participants, nearly half had reading levels at or below the 8th grade level. Of that same group, though, only 33 said they “often” or “always” had problems understanding written health information. Perhaps most important, 298 of the participants said they “never” had problems.
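To quantify how well a self-report question works as a screener, you would cross-tabulate it against the objective measure and compute sensitivity and specificity. We haven’t reported the full cross-tabulation here, so the 2x2 counts in the sketch below are hypothetical placeholders that show only the form of the calculation.

```python
# Sketch of a screening evaluation. The 2x2 counts are hypothetical;
# the real cross-tabulation is not reported in the text above.
true_pos = 30    # low literacy, reports problems (hypothetical)
false_neg = 210  # low literacy, reports no problems (hypothetical)
false_pos = 3    # adequate literacy, reports problems (hypothetical)
true_neg = 250   # adequate literacy, reports no problems (hypothetical)

# Sensitivity: what fraction of low-literacy readers the question catches.
sensitivity = true_pos / (true_pos + false_neg)
# Specificity: what fraction of adequate readers it correctly passes over.
specificity = true_neg / (true_neg + false_pos)
print(f"Sensitivity = {sensitivity:.2f}")
print(f"Specificity = {specificity:.2f}")
```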

Given the large discrepancy between what people say and how they actually perform on a test, I think it’s important to recognize the limitations of self-report when assessing our patients’ health literacy.
