Table 2

Definitions of key terms

Reliability: an indicator of the tool's consistency

Validity: determines whether the tool measures what it was designed to measure


Internal consistency: measures the average correlation between all items on a tool

Intrarater reliability: an indicator of the test's stability over time when it is administered by the same rater

Interrater reliability: indicates the consistency of a tool when it is administered by different raters

Construct validity: investigates whether the tool correlates with a theorized construct

Criterion validity: can be divided into two categories: concurrent and predictive. Concurrent criterion validity measures the correlation of the tool with other tools that measure the same concepts, preferably a "gold standard" when one exists. Predictive criterion validity examines whether the tool can predict future outcomes.

Content validity: assesses whether the tool targets all of the relevant topics related to the concept being measured and contains no irrelevant items

Face validity: an assessment of whether the tool appears to measure the intended concept
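The internal-consistency definition above (the average correlation between all items on a tool) can be sketched numerically. The function and example data below are hypothetical, not from the article; they simply illustrate computing the mean of the off-diagonal entries of an item-by-item correlation matrix.

```python
import numpy as np

def average_inter_item_correlation(scores):
    """Illustrative sketch: scores is a 2-D array with rows =
    respondents and columns = items. Returns the mean of the
    off-diagonal entries of the item-by-item correlation matrix,
    a simple indicator of internal consistency."""
    corr = np.corrcoef(np.asarray(scores, dtype=float), rowvar=False)
    n_items = corr.shape[0]
    # Mask out the diagonal (each item's correlation with itself is 1)
    off_diagonal = corr[~np.eye(n_items, dtype=bool)]
    return off_diagonal.mean()

# Made-up example: three items scored by five respondents
scores = [
    [1, 2, 1],
    [2, 3, 2],
    [3, 4, 3],
    [4, 5, 5],
    [5, 5, 4],
]
print(round(average_inter_item_correlation(scores), 3))  # → 0.947
```

A high average inter-item correlation suggests the items measure the same underlying concept; note that established coefficients such as Cronbach's alpha build on this quantity but are not computed here.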


* Some articles discuss multiple types of reliability and validity; therefore, totals do not correspond with the total number of articles in the sample

Glenny and Stolee BMC Geriatrics 2009 9:52   doi:10.1186/1471-2318-9-52
