
Validity refers to the extent to which the evidence you have gathered and processed can be considered true. It is most often applied to quantitative methodologies (the qualitative equivalent is trustworthiness, which I will discuss in a later blog), and there are several different kinds of validity that your research needs to be built around, each affecting the confidence with which you can state your sliver of truth.
Content validity is based on the extent to which a measurement reflects the specific field of content on which you are focused. It depends on the careful selection of the items to include in a test, survey, or set of observations, and those items are chosen after a thorough examination of the subject. For example, suppose researchers aim to study strategies for coping with stress and create a survey to measure people’s ability to cope with stress. If they focus only on social support as a means of coping and then draw conclusions about coping mechanisms in general, the study would have limited content validity because the results exclude other possible coping mechanisms; however, what is said about social support strategies may well be valid. So, to ensure content validity, one needs to have thoroughly explored the concepts and constructs in one’s field of study, evaluated their relevance, and defined and provided a rationale for the constructs included for the purposes of one’s own research. And because not all concepts in a given field can be included in a single research project, all research is limited.
Face validity concerns whether the measures used appear to measure what they are supposed to measure. One has to assess to what extent an instrument is, on the face of it, a good measure of a construct. Unlike content validity, face validity does not depend on theory; it is an intuitive assessment, an estimate of whether the survey or semi-structured questions asked, or the items measured, will answer the research question. For example, if you are attempting to measure the efficacy of social support for reducing stress, asking people how often they interact with family members, friends, and colleagues (with fixed response options) and then asking them to evaluate the level of support received on a 7-point Likert-type scale appears, on the face of it, to have validity.
Criterion-related validity, also called instrumental validity, involves evaluating the accuracy of a measure or procedure by comparing it with another, already validated measure or procedure. There are two types of criterion-related validity: concurrent validity and predictive validity.
- Concurrent validity refers to the degree to which the construct on which you are focused correlates with other measures of the same construct measured at the same time in the same research. For example, imagine an impromptu speech test has been shown to be an accurate test of English proficiency. By comparing scores on a new written comprehension test with scores from the impromptu speech test in the same research project, you can assess to what degree the written test also accurately reflects proficiency in English. If, on average, those who score high on the impromptu speech test also score high on the written comprehension test, the written test can be said to possess criterion validity. So, one uses an already validated measure to evaluate the validity of a new measure of the same phenomenon or construct, in this instance, English proficiency (see the sketch after this list).
- Predictive validity refers to the degree to which a construct correlates with behavior in the future. For example, someone who scores high on agreeableness in a new personality test is later observed to express modesty, kindness, and a willingness to help others in various contexts. If that is the case, you can be more confident that your new personality test did, in fact, measure agreeableness.
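To make the concurrent validity idea concrete, here is a minimal sketch of how such a check might look in practice, assuming hypothetical scores from the same participants on both tests; it simply computes a Pearson correlation between the already validated measure and the new one.

```python
# A minimal sketch of a concurrent validity check (hypothetical data).
from scipy.stats import pearsonr

# Scores from the already validated measure (impromptu speech test)
speech_scores = [62, 74, 81, 55, 90, 68, 77, 83, 59, 71]

# Scores from the new measure being validated (written comprehension test)
written_scores = [60, 70, 85, 52, 88, 65, 80, 86, 61, 69]

r, p_value = pearsonr(speech_scores, written_scores)
print(f"Pearson r = {r:.2f}, p = {p_value:.3f}")

# A strong positive correlation would be taken as evidence that the
# written test possesses criterion-related (concurrent) validity.
```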
Construct validity focuses on the agreement between a theoretical concept or construct and a specific measuring device or procedure. It involves linking empirical and theoretical evidence for the construct. For example, a researcher constructing a new personality test might spend a great deal of time defining various personality traits so that, when measuring agreeableness, the measure is sufficiently distinct from measures of passive-aggressive traits. Construct validity can be broken down into two sub-categories: convergent validity and discriminant validity.
- Convergent validity is the degree to which a measure correlates with other measures with which it is theoretically predicted to correlate, that is, where measures that should be related demonstrate agreement among ratings gathered independently of one another. For example, scores on one instrument for measuring the depth of depression correlate with scores on other measures of the depth of a person’s depression.
- Discriminant validity is the degree to which a measure does not correlate with other variables with which, theoretically, it should not correlate. For example, one would expect people who score high for agreeableness not to score high on a scale measuring aggression; in fact, one might expect the two to be inversely related.
So, when testing for construct validity, it is important to evaluate to what extent the instrument correlates with other instruments that measure the same thing, as well as to ensure that it does not measure a construct to which it should have no theoretical relationship, as the short sketch below illustrates.
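As a rough illustration of that two-sided check, the sketch below uses invented scores to correlate a hypothetical new agreeableness scale with an established agreeableness measure (convergent evidence, where a strong positive correlation is expected) and with an aggression measure (discriminant evidence, where a weak or negative correlation is expected).

```python
# A minimal sketch of convergent and discriminant validity checks
# (all scores and scale names are hypothetical).
from scipy.stats import pearsonr

new_agreeableness = [4.1, 3.5, 4.8, 2.9, 3.7, 4.4, 2.5, 3.9, 4.6, 3.2]
established_agreeableness = [4.0, 3.6, 4.7, 3.1, 3.5, 4.5, 2.7, 4.0, 4.4, 3.0]
aggression_scale = [1.8, 2.6, 1.5, 3.4, 2.4, 1.7, 3.8, 2.1, 1.6, 3.0]

convergent_r, _ = pearsonr(new_agreeableness, established_agreeableness)
discriminant_r, _ = pearsonr(new_agreeableness, aggression_scale)

# Convergent evidence: the two agreeableness measures should correlate strongly.
print(f"Convergent correlation:   r = {convergent_r:.2f}")
# Discriminant evidence: agreeableness and aggression should not correlate
# positively (here they are expected to be inversely related).
print(f"Discriminant correlation: r = {discriminant_r:.2f}")
```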
So, that’s just the half of it with respect to validity in quantitative research designs. In the next blog I will consider internal and external validity, the various threats to those types of validity, and how to meet those challenges.
If nothing else at this point, I hope you have gained insight into how research is a deliberate and considered process. One does not just do research; one conducts research: It is a carefully orchestrated process in search of a sliver of truth.