Simplifying Sampling Strategies

Ideally, to know the answer to a research question, for example, to what extent people’s buying behavior is influenced by an advertisement, one would need access to everyone who viewed that advertisement.


Accessing everyone concerned, however, is well-nigh impossible; therefore, when conducting research, one accesses a sample or portion of those who viewed the advertisement.

The question that arises then is how representative that sample is, because if it is not representative of the population who have seen the advertisement, the ability to generalize the results of the research will be limited. That brings us to a discussion of the various types of samples and their limitations for generalizing the results of research.

There are basically two types of sampling: probability sampling and nonprobability sampling. For the most part, quantitative methods, or methods that crunch numbers, require a probability sample.

Probability Sampling

Four kinds of probability sampling exist (Shin, 2020), all of which require a sampling frame, in other words, a database of elements that pertain to the population on which the research is focused and from whom (or which, if you are doing research in the life sciences) you will draw a sample.

In probability sampling, the size of the sample counts. A general rule is that the smaller the sampling frame, the higher the percentage of participants chosen, because many statistical tests require a certain number of responses (frequencies) to be executed. For example, a chi-square test requires an expected frequency of more than 5 in every cross-tabulated cell, so larger samples are essential (Gravetter & Wallnau, 2005).
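The expected-frequency rule can be checked directly before running the test. As a minimal sketch with invented counts (the `expected_frequencies` helper and the 2x2 advertisement table below are illustrative, not real data), each expected cell count is the row total times the column total divided by the grand total:

```python
# Check the chi-square expected-frequency rule for a cross-tabulation.
# Expected count for a cell = (row total * column total) / grand total.

def expected_frequencies(table):
    """Return the expected count for every cell of a contingency table."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand_total = sum(row_totals)
    return [[r * c / grand_total for c in col_totals] for r in row_totals]

observed = [[12, 8],   # saw the ad: bought / did not buy (illustrative)
            [5, 15]]   # did not see the ad: bought / did not buy

expected = expected_frequencies(observed)
ok = all(cell > 5 for row in expected for cell in row)
print(expected)  # each cell should exceed 5 for chi-square to be appropriate
print(ok)
```

If any expected cell falls at or below 5, the remedy suggested in the text is a larger sample rather than a different test.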

The most representative sample is a simple random sample, in which every person in the population in which you are interested has an equal chance of being chosen to participate in the research. For example, if you were interested in the needs of homeowners, you might access a list of all ratepayers in a city; the ratepayers would be the sampling frame. Likewise, if I were researching the extent to which patients are satisfied with their treatment at a specific hospital, I would access the database of everyone who has visited that hospital over the last three years (the sampling frame), assign a number to each of those patients, and then choose a sample using a random number table or generator. That ensures that every patient has an equal chance of being chosen.
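As a sketch of drawing such a sample with a random generator (the patient IDs below are invented for illustration), Python's `random.sample` draws without replacement, so every element of the frame has an equal chance of selection:

```python
import random

# A toy sampling frame: patient IDs from a hospital database (illustrative).
sampling_frame = [f"patient_{n:04d}" for n in range(1, 501)]

# Seed only so the example is reproducible; omit the seed in real use.
rng = random.Random(42)

# Draw 50 patients without replacement: equal chance for everyone,
# and no patient is chosen twice.
sample = rng.sample(sampling_frame, k=50)

print(len(sample))        # 50
print(len(set(sample)))   # 50 -- no duplicates
```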

Even simple random sampling, however, may not be as random as one thinks. Some patients may have died over the last three years and others moved without providing a forwarding address. In other instances, the sampling frame may not be ideal. For example, if one is trying to establish the market for fridges in a particular area and the sampling frame to which one has access is ratepayers, ratepayers may have a different demographic from those who do not pay rates by virtue of not being able to afford to buy a home, so in the end, the data represents only the market for fridges among people who own homes. If one wants a more accurate view, one must find a different sampling frame, for example, people who access electricity because that would include both home and apartment dwellers.

A second option is systematic random sampling, where one samples, for example, every 5th or 10th person on a list in a database. I might use a data provider’s list of smartphone numbers and call every 10th number, or begin with a map of the suburbs and visit every fifth house on every fifth block to establish whether the residents have seen the advertisement, and if so, to what extent their buying behavior was influenced by it. Arguably, approaching every fifth person entering a mall, shopping center, or even store would also be considered systematic random sampling.
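A sketch of the every-k-th rule (the phone list and `systematic_sample` helper are invented for illustration): pick a random start within the first k entries, then take every k-th entry thereafter.

```python
import random

# Systematic sampling: random starting point, then every k-th element.
def systematic_sample(frame, k, rng=random):
    start = rng.randrange(k)   # random start within the first k entries
    return frame[start::k]     # every k-th element thereafter

phone_list = [f"+1-555-{n:04d}" for n in range(1000)]
sample = systematic_sample(phone_list, k=10, rng=random.Random(7))
print(len(sample))  # 100 numbers: every 10th entry from a random start
```

The random start matters: without it, the first element would always be chosen and the sample would be fixed in advance.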

The random nature of systematic random sampling may be compromised by people choosing not to answer an unknown number, putting the phone down, or slamming the door in one’s face. And there is no guarantee people will respond to emails requesting their participation, so very often a systematic random sample is not random but a volunteer sample, in other words, a sample of people willing to participate in the research.

So, given that simple and systematic random samples may not be possible, because a suitable sampling frame may not exist and/or those chosen decline to participate, the next best bet is a stratified random sample. Here one divides the population into groups with similar attributes (Health Knowledge, n.d.), for example, people living in standalone homes and people living in apartments, and randomly samples each group. Or, if I am exploring the effects of an advertisement for a particular fridge, I might access an electrical company’s customer database and randomly sample only those who buy or use a certain number of units of electricity, because it takes a certain number of units to run a fridge in addition to other electrical appliances. Stratified random sampling is also useful if one wants to make comparisons. For example, in a study of the health outcomes of nursing staff in a country with seven hospitals, each with different numbers of nursing staff, it would be appropriate to sample from each hospital proportionally, so that the hospitals with more nursing staff constitute a larger proportion of the sample. And if I am going to use chi-square, I had best ensure the samples from smaller hospitals are large enough that, on any cross-tabulation, the expected frequency is more than 5.
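Proportional allocation across strata can be sketched as follows (the hospitals, staff counts, and `stratified_sample` helper are invented for illustration): each stratum contributes to the sample in proportion to its share of the population.

```python
import random

# Proportional stratified sampling: sample each stratum (hospital)
# in proportion to its size. Staff rosters are illustrative.
strata = {
    "Hospital A": [f"A_nurse_{i}" for i in range(300)],
    "Hospital B": [f"B_nurse_{i}" for i in range(120)],
    "Hospital C": [f"C_nurse_{i}" for i in range(80)],
}

def stratified_sample(strata, total_n, rng=random):
    population = sum(len(members) for members in strata.values())
    sample = []
    for name, members in strata.items():
        # Proportional allocation; rounding may need adjustment in practice.
        n = round(total_n * len(members) / population)
        sample.extend(rng.sample(members, n))
    return sample

sample = stratified_sample(strata, total_n=50, rng=random.Random(1))
# 300:120:80 out of 500 -> 30, 12, and 8 participants respectively
print(len(sample))
```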

A final probability strategy is cluster sampling. For example, if I am conducting research in education about the efficacy of a particular Math module, there may be five classes at one school using that module and seven at another school, and I might choose just one class from each school based on the assumption that the classes not chosen would demonstrate the same dynamics as the classes I chose for my sample.
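Cluster sampling can be sketched as randomly choosing whole classes and then including every student in the chosen classes (the class rosters below are invented for illustration):

```python
import random

# Cluster sampling: randomly choose whole clusters (classes), then
# include every member of the chosen clusters. Rosters are illustrative.
classes = {f"class_{i}": [f"class_{i}_student_{j}" for j in range(25)]
           for i in range(12)}  # e.g., 5 classes at one school + 7 at another

rng = random.Random(3)
chosen = rng.sample(list(classes), k=2)        # pick 2 clusters at random
sample = [s for c in chosen for s in classes[c]]
print(len(sample))  # 2 clusters x 25 students = 50
```

Note the contrast with stratified sampling: there, a few members are drawn from every group; here, every member is taken from a few groups.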

Non-Probability Sampling

Non-probability sampling includes convenience sampling, quota sampling, purposive sampling, and snowball sampling.

Quota sampling is a strategy most often used by market researchers, who are given a quota of specific types of people to recruit. For example, if the research question is who is buying nonfiction books, interviewers might be asked to recruit a certain number of adolescents, young adults, and adults over the age of 40 based on the proportion of those categories in the general population, so that, ideally, the sample represents the proportions of those age groups in the population.
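The quota logic can be sketched as recruiting until each group's quota is filled and turning further volunteers in that group away (the quotas, age bands, and stream of volunteers below are invented for illustration):

```python
# Quota sampling: accept volunteers only while their group's quota
# remains unfilled. Quotas and volunteers are illustrative.

quotas = {"13-19": 2, "20-39": 3, "40+": 5}  # mirrors population proportions

def fill_quotas(volunteers, quotas):
    recruited = {group: [] for group in quotas}
    for person, group in volunteers:
        if len(recruited[group]) < quotas[group]:
            recruited[group].append(person)
    return recruited

volunteers = [("p1", "20-39"), ("p2", "40+"), ("p3", "13-19"),
              ("p4", "40+"), ("p5", "20-39"), ("p6", "13-19"),
              ("p7", "13-19"), ("p8", "40+"), ("p9", "20-39"),
              ("p10", "40+"), ("p11", "40+"), ("p12", "40+")]

recruited = fill_quotas(volunteers, quotas)
print({g: len(v) for g, v in recruited.items()})
# {'13-19': 2, '20-39': 3, '40+': 5} -- p7 and p12 arrive after their
# quotas are full and are turned away
```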

Convenience sampling is often used in the social sciences and humanities because (a) participants are protected by the ethic of informed consent and (b) participants have the right to withdraw from participation at any point and without prejudice. Given those constraints, convenience sampling is the easiest way to recruit available and willing participants.

With convenience samples, representativeness is severely compromised because those who volunteer to participate may have very different profiles from those who do not. For example, social media has become a popular means of distributing surveys, but one has to bear in mind that not everyone uses Facebook and/or Instagram and that those who volunteer to participate may be people with time on their hands rather than busy professionals. That may skew the sample towards people who are not employed or are underemployed, and the less random the sample, the more unreliable the statistical manipulations. However, while convenience samples may not be appropriate for asking how many and to what extent, they can answer the what, how, and why questions, or assist with describing components and processes and explaining their connections.

A third non-probability strategy is purposive sampling, which means earmarking specific individuals who would make suitable participants and inviting them to participate. This strategy is most often used in qualitative research, the assumption being that the person chosen can comment on the focus of the research. For example, if one is exploring the meaning of boredom, it would be pointless to include in one’s sample people who claim that they are never bored. Purposive sampling is the most time- and cost-effective strategy for recruitment, but the least representative and generalizable, and the data it yields is the most time-consuming to process. One can apply software that helps, but software only helps: it does not distill the understanding for one.

Snowball sampling is generally used in the social sciences to access groups that are difficult to reach. For example, before it became trendy to be part of the LGBT+ community, one would ask an interviewee to nominate two members of the community who would be willing to share their experiences or opinions, and those two would in turn nominate two more, and thus the sample would grow. The danger of such samples is that one ends up examining a subculture of the culture on which one is focused.

Defining the Inclusion and Exclusion Criteria

When writing about the sampling strategy chosen, it is critically important to define both the inclusion and exclusion criteria for your sample. For example, if one is going to explore the effects of secondary trauma among neighborhood watch volunteers, it is critical that those participating (a) have experienced secondary trauma within a particular timeframe, (b) are active members of a neighborhood watch, and (c) are volunteers and not paid security personnel. Being paid security personnel would be a reason to exclude someone from the sample.

Likewise, if one is exploring the impact of being terminated from one’s employment for not having had the corporate-mandated jab, one would not interview those not working in a corporation that mandated the jab, those who obeyed the mandate, or those who have not had their employment terminated for refusing the jab. None of those potential participants would be able to speak about the experience of being terminated for that particular reason. Likewise, it would be pointless to ask people who have not viewed an advertisement how it affected them. So, if one were using a survey method, the first filter question might be, “Have you viewed the said advertisement?” If not, the survey would be terminated for that person. Of course, information about how many people did and did not view the advertisement would be useful, but the latter could not offer an opinion about an advertisement they have not seen.

Some Conclusions

Samples, and the strategies used to choose them, are important because applying statistics, making valid claims about what the data says, and then generalizing the findings of the research all depend on a sample accurately representing the population in which you are interested and about which you are making claims. At the same time, sampling in the humanities and social sciences is subject to sampling bias that makes representativeness questionable because, ethically, a researcher has little control over who chooses to participate and/or drop out. Moreover, a sample may not be as random as assumed, because the return rate for a questionnaire, even one sent with a self-addressed and stamped envelope or by email, might be skewed towards those who have the time and motivation to complete it.

There are ways and means of evaluating to what extent a sample is representative after the fact. One way is to compare the demographic data of the sample (age, education, income, gender, etc.) to the demographics of the general population about which you intend to generalize, if such data is available. That not only underlines the importance of collecting at least some basic demographic data but also allows one to understand which categories of the population may skew the results. Knowing, for example, that people over the age of 60 are over-represented in a sample allows a researcher to temper the interpretation of the processed data. On the other hand, if one can show that the demographics of the sample match those of the population on which one is focusing, it strengthens one’s ability to generalize the results to that population. So, collecting the relevant demographic facts about the sample is not just about being able to introduce the sample; it also allows one to assess the degree to which the sample represents the population in which one is interested.
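The after-the-fact comparison can be sketched as follows (the age bands, population percentages, and sample below are invented for illustration): compute each group's share of the sample and flag groups that diverge markedly from the population.

```python
# Compare the demographic profile of a sample to known population
# proportions to gauge representativeness. All figures are illustrative.

population_pct = {"under_40": 0.55, "40_to_60": 0.30, "over_60": 0.15}
sample_ages = ["under_40"] * 31 + ["40_to_60"] * 24 + ["over_60"] * 45

n = len(sample_ages)
sample_pct = {g: sample_ages.count(g) / n for g in population_pct}

for group in population_pct:
    diff = sample_pct[group] - population_pct[group]
    flag = " <-- over-represented" if diff > 0.10 else ""
    print(f"{group}: sample {sample_pct[group]:.0%} vs population "
          f"{population_pct[group]:.0%}{flag}")
# over_60 at 45% of the sample against 15% of the population signals
# a sample that skews older, so interpretations should be tempered.
```

The 10-percentage-point threshold here is arbitrary; what counts as a meaningful divergence depends on the research question.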

Finally, it would be well to remember that sampling and the associated statistics, even for a probability sample, are based on probabilities, not certainties. So, even when people insist that one attend to the science, as if science offers truth, bear in mind that even hard science is not about proof or truth but about what is most probably true, all things considered.


Gravetter, F. J., & Wallnau, L. B. (2005). Essentials of statistics for the behavioral sciences (5th ed.). Wadsworth.

Health Knowledge. (n.d.). Methods of sampling from a population.

Shin, T. (2020, Oct. 25). Four types of random sampling techniques explained with visuals. Towards Data Science.

Proposing Research: A Brief Summary

If you have been following the posts shared over the past several months, you should at least understand the process involved when writing a proposal for conducting research.

Photo by Joshua Sortino on Unsplash

You would know that it all begins with a research question and then a series of informed decisions about how that research question could be answered.

Through a review of the literature, you will have identified the language you will use to frame and convey your understanding of the phenomenon on which you have focused: the field, paradigm, theories, concepts, and/or models you will use to make sense of the data to be collected.

You would also be aware that the way you answer the question, the method, with all its assumptions and limitations, needs to be able to answer the research question posed.

Finally, you will have realized that you have to write in a way that demonstrates the literature reviewed and the method chosen are based on an informed and considered decision-making process.

Essentially, when writing a proposal, you are presenting an argument to convince the audience the research question is worth answering and the method by which you will answer it will deliver valid and/or trustworthy results that will add value to the field’s knowledge base and/or humanity in general.

None of this can be done, of course, without acknowledging others who have raised the same kinds of questions and/or used the same kinds of methods, which brings us to the issue of academic styles, and inevitably, citations and reference entries.

The primary academic styles are APA, Harvard (all 64 versions registered on Zotero), Chicago (footnotes or in-text), and MLA. Consider the following examples of how the same journal article would be notated in the different styles:


APA:
Butler, K. (2001). Defining diaspora, refining a discourse. Diaspora, 10(2), 189‒219.

  • As Butler (2001) explained, “Quote” (p. 190).
  • (Butler, 2001, p. 190).


Harvard:
Butler, K 2001, ‘Defining diaspora, refining a discourse’, Diaspora, vol. 10, no. 2, pp. 189‒219, <>.

  • As Butler (2001, p. 190) explained, “Quote.”
  • (Butler 2001, p. 190).

In some forms of Harvard, the page number would be introduced with a colon:

  • As Butler (2001:190) explained, “Quote.”
  • (Butler 2001:190).


Chicago (author-date):
Butler, K. 2001. “Defining diaspora, refining a discourse.” Diaspora 10, no. 2: 189‒219.

  • As Butler (2001, 190) explained, “Quote.”
  • (Butler 2001, 190).


MLA:
Butler, K. “Defining diaspora, refining a discourse.” Diaspora, vol. 10, no. 2, 2001, pp. 189‒219.

  • As Butler explained, “Quote” (190).
  • (Butler 190).

Notice that all styles document the same basic details in the References (or for MLA, Works Cited) lists, but the order of the information and the punctuation, or absence thereof, differ. The basic information is

  • Author (which could be an organization, for example, the World Health Organization),
  • date of publication,
  • name of the work,
  • name of the container (if a journal or news source, include the volume and issue numbers as well as page numbers) or publisher,
  • the URL or DOI number.

Moreover, all but MLA expect that when citing an author, the date of publication be documented, at least the first time that author is mentioned in a paragraph. And all but Harvard would format the list with a hanging indent.

To make it even more confusing, each journal and institution makes minute changes to the main styles to “make it their own.” Given that, the best course of action is to seek out the style guide for the journal or institution, and failing that, to be absolutely consistent in how citations and reference entries are presented.

Validity in Qualitative Studies

The validity of a qualitative approach and its methods is determined by how trustworthy the data is. Establishing that can take some convincing, because quantitative methods remain the dominant paradigm even in the social sciences, and conclusions based on qualitative methods are often not considered to have much validity: the data is anecdotal, narrative in nature, and subjective.

Jo Szczepanska on Unsplash: A common means of processing raw qualitative data into themes is to use different colored Post-it notes

The late Dreyer Kruger (1979), a leading phenomenologist in his time, impressed upon us undergraduates in the Psychology Department at Rhodes University the importance of being “rigorous, systematic, and methodical” in the explication of experiential texts, or people’s descriptions of what it means to be anything a human being has the potential to be or experience. In qualitative approaches, one is not chasing answers to how much and how many variables affect a phenomenon but attempting to elucidate what it means to be subjected to a phenomenon, be it being a leader or follower, a narcissist or borderline, a perpetrator or victim of a crime, or a marketer or customer loyal to a particular brand.

Rigorous means being thoughtful, deliberate, and diligent about interrogating the personal and theoretical lenses with which one approaches the data and the biases within those lenses. Human beings inevitably see the world from a point of view, however broad, and one must be honest about the limitations of one’s vision.

Systematic means keeping a record of the decisions made and their implementation at the level of the data and interpretation so that one has an audit trail that can be followed, and one explicates rather than analyzes because one is attempting to understand the complexity of the whole, not break the whole down into its component parts.

Methodical means one has a goal. The focus in qualitative research is directed to answering a particular research question, and one pursues that goal in a step-by-step manner, consciously and deliberately and with both forethought and afterthought. It means revisiting data already explicated as new themes emerge, to consider whether each new theme is evident in, or at least does not contradict, the experiences in already processed transcripts. So, rather than being methodical in a linear sense, one is methodical in a circular fashion, allowing what arises initially to inform one’s interpretation and allowing what arises later to inform one’s earlier interpretations.

Because the participant is recognized to be a subject, and the insight sought is about the world or phenomenon from his or her point of view, one engages the participant in the research as a subject (rather than an object). One means of establishing validity in qualitative research is therefore to take the transcript, preferably a readable summary, and in a best-case scenario the distilled common themes, back to the participant to check in. The intent is to ensure the participant does not feel his or her meaning was misrepresented. What I call collaborative validity is also an attempt to give back to those who made themselves available for the research.

Another form of validity is intersubjective validity, which occurs at the level of processing the texts. For example, one might ask colleagues or friends to identify themes independently and then compare the themes in order to both challenge and come to some degree of consensus about what is essential for understanding the phenomenon, be the focus on burnout, comfort shopping, or loyalty to a brand or organization.

Triangulation is therefore also an important means for ensuring validity in qualitative research in so much as one involves several participants (data triangulation), processers of the data (investigator triangulation), and lenses (theoretical triangulation).

Finally, reflexivity, or disciplined self-reflection about one’s lenses and the process, “is perhaps the most distinctive feature of qualitative research” (Banister et al., 1994, p. 149). In qualitative research, the influence of the researcher’s life experience on the construction of knowledge is centralized rather than marginalized, and reflexivity involves not only being honest about one’s personal and theoretical lenses but also continuously and critically examining the process of the research to reveal biases, values, and assumptions that have a bearing on one’s interpretation. This is most often done by keeping a journal that documents what one did, when, and why, and the decisions made with their rationales. Including a brief summary of this audit trail allows the reader to evaluate the validity of the conclusions to the research.

So, if you choose to apply a qualitative method in your research, be prepared to raise your level of self-awareness; disclose and critically interrogate your own theoretical lenses and personal biases; reflect upon and defend every research decision made; and engage in an ever-deepening spiral of understanding of the phenomenon you chose to examine. 


Banister, P., Burman, E., Parker, I., Taylor, M., & Tindall, C. (1994). Qualitative methods in psychology: A research guide. Open University Press.

Kruger, D. (1979). An introduction to phenomenological psychology. Juta.

The Value of Validity II

Internal and External Validity

In the previous blog, I attended to specific types of validity that need to be addressed when applying a quantitative research design, including content validity, face validity, criterion-related validity, and construct validity.

Photo by Jo Szczepanska on Unsplash

As if that is not enough, one also has to design a methodology around internal validity, the extent to which conclusions about relationships between variables are likely to be true given the measures used, the research setting, and the whole research design, and external validity, the extent to which one may generalize from the sample studied to the target population defined, as well as to other populations in time and space.

Experimental techniques involve measuring the effect of an independent variable on a dependent variable under highly controlled conditions, for example, one measures how stressed a participant is before and after an intervention. Such designs usually allow for high degrees of internal validity. There are a number of extraneous factors, however, that may threaten the internal validity of even an experimental design:

  • History factors pertain to specific events that occur between first and second measurements in addition to the experimental variables. For example, when seeking to measure the effectiveness of a post-traumatic stress intervention, a traumatic event between the pre-intervention and post-intervention tests may affect the degree to which the intervention offered can be said to be effective.
  • Maturation factors pertain to processes that occur within participants due to the passage of time, as opposed to specific events. For example, participants becoming hungry and tired between pre- and post-tests on the same day may well affect the results. If measuring the effectiveness of meditation for lowering stress levels, for instance, by the time the post-test is executed, the participants may be feeling exhausted or bored, or be worrying about what is going on at home due to their prolonged absence, and that may affect their responses.
  • Testing factors pertain to the effects of taking a test upon the scores of a second test, particularly if the same test is used to compare pre-test and post-test scores in, for example, language proficiency tests where one gives the test, gives a lesson, and then uses the same test to assess the change in participants’ proficiency. In this instance, it might be preferable to use and compare the results of two tests that have been shown to have high convergent validity.
  • Instrumentation factors pertain to changes in the calibration of a measurement tool. For example, when using a peak flow meter to measure the force of the breath of a person suffering from asthma, it is advisable to use the same brand of peak flow meter because different brands are calibrated differently. In other words, if one measured the effectiveness of a medication for treating asthma using a different brand of peak flow meter before and after, the pre- and post-medication measures obtained would likely be misleading. Likewise, an observer may have read more about a topic between pre- and post-intervention observations and note aspects in the post-treatment phase that he or she would not have thought to note in the pre-treatment phase. There may thus be changes noted that are not a product of the intervention or treatment but of the observer’s increased knowledge.
  • Statistical regression factors occur where participants with extreme scores are included in the analysis. Most often in statistical analyses, especially a Pearson product-moment correlation, outliers, or participants with extreme scores, would be excluded from the analysis.
  • Selection factor biases occur due to differential selection of participants, which is why most quantitative research designs ideally use random samples and/or discuss the criteria for selection in detail upfront. For example, recruiting volunteers from social media may skew the sample toward a particular demographic because not everyone participates on social media platforms: Generation X rather than Baby Boomers, and/or people who are unemployed or underemployed and have the time to complete questionnaires. Moreover, the characteristics of those who volunteer and those who do not may differ. Selection bias threatens the generalizability of the data unless one is looking at a construct that is peculiar to the demographic.
  • Experimental mortality pertains to the differential loss of respondents from the comparison groups. For example, if one is examining adolescent development in a longitudinal research design, the chances are that over the five years’ duration of the research, some adolescents who participated in the first stages of the study may move away or lose interest.

Four factors might jeopardize external validity, or the representativeness of one’s research findings:

  • Reactive or interaction effect of testing is where a pretest might increase the scores on a post-test because practice makes perfect. This threat may be overcome, at least to some degree, by comparing the pretest and post-test means for the sample, or by using an equivalent test with high convergent validity.
  • Reactive effects of experimental arrangements may also affect the external validity of one’s findings. Experimental settings are often artificial, and one cannot ignore the Hawthorne effect: when people know they are being observed, contributing to research data, or having their personality assessed, their behavior may change. Moreover, we may ask participants to answer the questions as honestly as possible, but that does not guarantee they will. It may also be the case that questions are interpreted differently by different people. For a question like “Do you often feel angry?”, how often is often? My often may not be your often. Even for the statement “I feel angry most of the time,” what does most mean? Ultimately, the results of any experimental design, even in hard science, can be questioned on the grounds that the experimental situation can only ever approximate reality.
  • Multiple-treatment interference occurs when the effects of earlier treatments are not erasable. Moreover, a participant may be undergoing treatments other than the one being tested. One might be testing the merits of meditation as a stress reliever, but the participant may also be engaged in therapy as well as a host of other means of alleviating stress, so one cannot be sure that it was the meditation that reduced their stress level or, indeed, whether being in therapy interfered with the efficacy of the meditation, because therapy often only works if a certain degree of anxiety is present on the part of the patient.
  • The interaction effects of selection biases and the experimental variable may also threaten the validity of a research design. Clearly, selection biases may negatively affect both internal and external validity, so it is critical to think carefully about how you will select participants and to what extent those selection criteria will limit the generalizability of the research findings. For example, it may be true that students are more likely to embrace remote work, but if the only participants selected are students, one cannot generalize the findings to people on the edge of retirement. In most instances, perfectly random selection is not possible because it would require a list of everyone in the population of interest from which to select a random sample.

Finally, there is ecological validity, or the extent to which the results of the research can be applied to real-life situations. For example, an actual driving test would have more ecological validity than a simulated driving test.

In most instances, a methodology chapter would include a section devoted to discussing the particular threats to the validity of the research design and the extent to which the findings may be limited by the selection of participants, the methods of measurement chosen, and the context in which the research will be or was undertaken. The conclusion to the research would remind the reader of those threats so that the reader can take the limitations into account when generalizing the findings.

The most important aspect to remember when discussing validity issues is to discuss only those that pertain to your research. For example, a once-off cross-sectional design would not be subject to maturation factors or participants dropping out, whereas discussion of these issues is critical in a longitudinal design. So be clear about which types of validity apply to your research, and focus on the threats posed to your particular research design and selection criteria in order to make those issues, and how you will deal with them, explicit.

Remember, too, that there is no perfect research design, so it is a question of being aware of what kinds of threats to the validity of your findings exist, making your reader aware of those threats, developing strategies to minimize those threats, and then being honest about the extent to which your findings can be depended upon for making decisions.  

The Value of Validity I

Validity refers to the extent to which the evidence you have gathered and processed can be considered true. Several different kinds of validity must be borne in mind when conducting research, and they will affect the confidence with which you can state your sliver of truth. Validity is most often applied to quantitative methodologies; the equivalent in qualitative research is trustworthiness, which I will discuss in a later blog.

Photo by Pop & Zebra on Unsplash

Content validity is based on the extent to which a measurement reflects the specific field of content on which you are focused. It depends on the careful selection of items to include in a test, survey, or set of observations, and those items are chosen after a thorough examination of the subject. For example, suppose researchers aim to study strategies for coping with stress and create a survey to measure people’s ability to cope with stress. If the researchers focus only on social support as a means for coping with stress and then draw conclusions about coping mechanisms in general, the study would have limited content validity because the results exclude other possible coping mechanisms. However, what is said about social support strategies may well be valid. So, to ensure content validity, one needs to have thoroughly explored the concepts and constructs in one’s field of study, evaluated their relevance, and defined and provided a rationale for the constructs included for the purposes of one’s own research. And because not all concepts in a given field can be included in a single research project, all research is limited.  

Face validity concerns whether the measures used appear to measure what they are supposed to measure. One has to assess to what extent an instrument is a good measure of a construct (or not). Unlike content validity, face validity does not depend on theory; it is an intuitive assessment, an estimate of whether the survey or semi-structured questions asked or items measured will answer the research question. For example, if you are attempting to measure the efficacy of social support for reducing stress, asking people how often they interact with family members, friends, and colleagues with fixed options and then asking for their evaluation of the level of support received on a 7-point Likert-type scale appears, on the face of it, to have validity.

Criterion-related validity, also called instrumental validity, means evaluating the accuracy of a measure or procedure by comparing it with another already validated measure or procedure. There are two types of criterion-related validity: Concurrent and predictive validity.

  1. Concurrent validity refers to the degree to which the construct on which you are focused correlates with other measures of the same construct measured at the same time in the same research. For example, imagine an impromptu speech test has been shown to be an accurate test of English proficiency. By comparing scores on a written comprehension test with scores on the impromptu speech test in the same research project, one can assess the degree to which the written comprehension test also accurately reflects proficiency in English. If, on average, there is a high correlation between scores on the impromptu speech test and the written comprehension test, the written test can be said to possess criterion validity. So, one uses an already validated measure to evaluate the validity of a new measure of the same phenomenon or construct, in this instance, English proficiency.
  2. Predictive validity refers to the degree to which a construct correlates with behavior in the future, for example, someone who scores high on agreeableness in a new personality test is later observed to express modesty, kindness, and a willingness to help others in various contexts. If that is the case, you can be sure that your new personality test did, in fact, measure agreeableness.  

Construct validity focuses on the agreement between a theoretical concept or construction and a specific measuring device or procedure. It involves linking empirical and theoretical evidence for the construct. For example, a researcher constructing a new personality test might spend a great deal of time defining various personality traits so that, for example, when measuring agreeableness, the measure is sufficiently distinct from measures of passive-aggressive traits. Construct validity can be broken down into two sub-categories: convergent validity and discriminant validity.

  1. Convergent validity is where measures that should be theoretically related demonstrate agreement among ratings that are gathered independently of one another. It refers to the degree to which a measure is correlated with other measures with which it is predicted to correlate, at least theoretically. For example, scores on one instrument for measuring the depth of depression correlate with other measures that test for the depth of a person’s depression.
  2. Discriminant validity is the degree to which a measure does not correlate with other variables with which, theoretically, it should not correlate. For example, one would expect people who score high for agreeableness not to score high on a scale measuring aggression. In fact, one might expect the two to be inversely related.

So, when testing for construct validity, it is important to evaluate the extent to which the instrument correlates with other instruments that measure the same thing as well as to ensure that it does not measure a construct to which it should have no theoretical relationship.

So, that’s just the half of it with respect to validity in quantitative research designs. In the next blog I will consider internal and external validity and the various threats to those types of validity and how to meet those challenges.

If nothing else at this point, I hope you have gained insight into how research is a deliberate and considered process. One does not just do research; one conducts research: It is a carefully orchestrated process in search of a sliver of truth.

A Personal Point about Plagiarism

The life of an editor is (almost) always interesting. One gets to read in a vast array of fields and about a vast array of topics in those fields. Ideally, one has the privilege of witnessing the development and distillation of a considered or evidence-based opinion on the part of one’s client, and often both.

Photo by Surface on Unsplash

I say ideally because I have become aware of how many postgraduate students and even academics pay other people, so-called ghost writers, to do the course papers, literature reviews, methodological designs, and sometimes the entire research project for them. Just this week I respectfully declined what could have been a lucrative project with what appears to be an agency when I pointed out that where I come from, what the agent was asking me to do would be considered unethical: I am not registered for that course at that university. I suggested that if the student felt inclined, he or she could approach me directly on Upwork, and I would be willing to help the student better articulate his or her answers to the questions posed in the course handout. I pointed out also that the question paper made it clear that a student paying someone to complete the question paper is about as close as one gets to plagiarizing.

Here is why it is not a good idea to have a ghost writer write your dissertation, thesis, or paper for you:

  • In the long term, the value of the piece of paper you receive is scuttled: Being awarded a degree allows others to assume you are adept at sifting through, processing, and distilling information (text and data) and can apply those skills to come to considered or evidence-based opinions that will set a course of action. Some of you with a ghost’s piece of paper are going to land responsible positions, positions that affect and influence many people—powerful positions: public service, executive status, government. If you have not taken the time and trouble to learn the skills involved in sifting through, processing, and distilling information to reach a considered or evidence-based opinion, you will flounder, and your followers will flounder with you. Using a ghost writer in an academic context is setting yourself up to fail.
  • A second reason is that, as it becomes more and more evident that students employ people to read their degrees for them and those students then flounder at what they are assumed to have mastered, the worth of the piece of paper is also devalued for the students who actually earned their degrees.
  • A third reason rests on the degree to which the practice is fraudulent. On a deeper level, we reveal ourselves in how we write, in the words we choose and the way we put our sentences together as well as what we choose to address and how. If someone else writes a student’s paper, chapter, or entire dissertation, there will be a notable incongruence between how the person awarded the degree presents in person and what he or she claims to know in writing.
  • The final and perhaps most important reason is that the graduate with the voice of a ghost has also done him or herself out of an opportunity to become empowered. Having someone else process and give voice to the distillation process means you have never taken up the challenge of developing your own voice. If you are going to be counted among those who made a difference in this world, you need to have a voice, your own voice, not the voice of a ghost.

That said, the story has a beautiful ending. Just hours after respectfully declining the offer, I received a flurry of invites from students who are willing to earn their pieces of paper but needed help with learning how to think like a researcher, distilling and articulating their understanding of others’ considered and evidence-based opinions, and writing up their own understanding (and considered and evidence-based opinions) in their own voices, or simply needed a trained eye to make sure they had not missed something.

And so, my faith in the next generation is restored. Let’s hope those students who actually earned their degrees and took up the challenge of developing their own voices are the ones appointed to responsible positions because there is an awful lot to be reconstructed going forward, and it is going to take a generation who can think clearly and voice their considered opinions.


Motivating for the Method

In the course of doing the literature review, you will encounter a wide variety of research designs and probably be thinking about the methodological design you could use to answer your research question. Remember that particular methodologies answer particular kinds of questions, and each method has its own limitations.

Image by Glenn Carstens Peters on Unsplash


Quantitative research designs are focused on answering questions about what, how much, how many, and to what extent already identified variables are related. Sophisticated designs can test the strength of relationships between variables or even establish causal relationships. For example, if I had access to a database that documented people’s blood types, vaccination history, and health events, I could test the correlation between blood types, exposure to vaccines with fetal matter, and adverse vaccine reactions. I could crunch those numbers and then interpret the results for statistically significant relationships. So, quantitative research designs are about the measure of things and processes.

Qualitative methodologies, on the other hand, are focused on the meaning of things. The aim is to reach greater understanding of the problem or phenomenon by identifying what is essential for understanding it. The motivating questions are about how and why something works or does not work, and the “data” are generally narrative text rather than counts and measures. For example, rather than asking how many people are bored with their jobs, I might ask, “What does it mean to feel bored with a job?” Or rather than asking how many leaders in public service are transformational leaders, or to what extent, I might ask, “What does it mean to be a transformational leader in public service?” Aspects of the essences identified can later be tested in quantitative research, but testing the relationships between themes or aspects of the experience identified in the explication of the narrative would not be the focus of a purely qualitative methodology.

When it comes to distilling the meaning of a phenomenon or even relationships between the themes explicated, you need to be clear from the outset whether you are approaching the narrative with a pre-existing lens or theory or allowing the essence to emerge by intentionally suspending your preconceptions about the phenomenon and relationships within it. In other words, there are two ways of approaching qualitative data: either you are honest about the lens through which you are attempting to gain understanding, or you attempt to suspend your preconceptions based on the literature review and personal experience and allow answers to the research question and resolution of the problem to arise from the data itself. Arguably, human beings are perspectival; we always experience phenomena from a perspective or point of view, and the researcher, as a human being, can therefore only be honest about the preconceptions with which he or she arrived at an answer to the research question.

Since I completed my PhD, apps like NVivo have been created to make the processing of narrative text easier, but here is the dilemma: While it is convenient to have an app like NVivo count the themes and categories you are either looking for via your theoretical lens or that emerge from the narratives, NVivo sometimes misses a truth that just one person uttered but that could be subsumed under lesser truths expressed by all the participants. For example, only one participant in my PhD sample included the metaphor of Mother as Snake Woman, but the essence the image evoked, namely, the persecutory mother, was a strong theme expressed in all participants’ dream series and descriptions of their relationships with their mothers gathered through amplification of their dream series. Strictly speaking, if you use NVivo or any of its alternatives to process the narratives, you have reduced the narrative to measures and counts and are, technically, using a mixed-method design. So, bear in mind that qualitative methodologies are not about how many people expressed the same idea but about the depth of understanding reached in the explication of what participants said. Moreover, these apps process the data rather than distill the meaning.

Mixed-method designs can occur on several levels other than the data gathered and the strategies used for processing that data, for example, counting themes that emerge from a series of narratives. A mixed-method design can be primary. Social surveys, for instance, can, from the outset, collect both statistics and narrative data. In trying to assess mine employees’ trust of management in the 1980s, one question was, “To what extent do you believe management attends to your physical welfare?” We used a Likert-type scale to measure how many (white and black) miners perceived that statement about management to be true and to what degree they thought it true, or how much. That’s a very simple quantitative design.
However, an open-ended follow-up question, “Why do you say this?”, was also asked. That question evokes narrative data that not only alerts the researcher to what aspects of their physical welfare participants deem important but also to what aspects of their physical welfare they hold management responsible for. So, in this example, a mixed-method data collection strategy was used: We collected both quantitative and qualitative data, but the processing of the narrative or qualitative data was quantitative: The number of times a theme emerged was counted, and the themes were listed in order of frequency of occurrence. So, bear in mind that, for the most part, research is about making strategic decisions in your efforts to answer the research question and solve the research problem. Choosing and developing the methodological design is about finding the best means to approach and gather information about the topic and process it to answer the research question.
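For readers who like to see the mechanics, the quantitative processing of such narrative data, counting how often each theme emerged and listing themes by frequency, can be sketched in a few lines of Python. The themes and coded responses below are invented purely for illustration; in practice, a human coder would first tag each open-ended answer with its themes.

```python
from collections import Counter

# Hypothetical coded responses: each open-ended answer has been tagged
# by a human coder with the themes identified in it.
coded_responses = [
    ["safety_equipment", "ventilation"],
    ["safety_equipment"],
    ["medical_care", "safety_equipment"],
    ["ventilation", "medical_care"],
    ["safety_equipment"],
]

# Count how many responses mention each theme and list them by frequency,
# mirroring the quantitative processing of qualitative data described above.
theme_counts = Counter(theme for response in coded_responses for theme in response)
for theme, count in theme_counts.most_common():
    print(f"{theme}: {count}")
```

Note that this is exactly the reduction discussed above: the narrative has become counts, which is why such processing makes a design mixed-method rather than purely qualitative.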

Besides the research question, a number of other issues will influence your choice of research design. One consideration is access to reliable data, be that medical records or people who can describe their experience. Access may also depend on the sensitivity of the information the research requires. For example, in a society where same-sex partnerships are illegal and even persecuted, one is unlikely to have people volunteer to share their experience of their same-sex relationships, and such people are unlikely to be honest in a general survey. Another consideration is the amount of time, money, and energy available to do the research. Still another is what is possible at your institution. For instance, it would be silly to propose doing research about the meaning of boredom and creativity if the department specializes in neuropsychology. But you could, if you have access to the equipment, measure brain wave activity and other physiological responses among bored and creative people.

So a methodological design may well be a mosaic of quantitative and qualitative data collection procedures and processing strategies that offer the best chance of answering the research question, and the kind of methodology for answering the question may well be constrained by considerations related to the amount of time, money, and energy that can be invested in the research. For example, a five-year longitudinal design following the degree to which participants’ stress levels decreased after a workshop about preventing burnout is not achievable if your research report must be submitted within six months of being approved. And regardless of what methodological design you choose, you have to show that the design is as good as if not better than any other design you could have chosen given your research question and circumstances.

Figuring Out the Feedback

In most institutions, your literature review, once you have whittled down the word count and submitted your efforts, would be evaluated, or at the very least, you would be offered some feedback. The feedback, depending on your supervisor/mentor/promoter, might be disappointing, encouraging, or even devastating.

Photo by Olena Sergienko on Unsplash

The feedback may be disappointing for any number of reasons. It may speak but say nothing because your supervisor/mentor/promoter lacks interest in the topic, is afraid of offending you, or simply does not have the time to think too deeply about how your research might develop. It may be encouraging for any number of reasons, for example, the supervisor/mentor/promoter engages with your topic, offers insights and direction, and lets you know you have made a great start. It may be devastating for any number of reasons, for example, your approach is met with skepticism or your focus is brushed off as irrelevant.

Whatever the feedback, use it to your advantage. If you are disappointed, find an external sounding board off whom you can bounce your ideas. If encouraged, bless the universe and use the energy to forge ahead. If devastated, address the issues or develop an argument that shows your supervisor/mentor/promoter that you considered the options to which you were alerted, and convince him or her of the relevance of the focus when making your corrections. For example, perhaps you have to read more and integrate another perspective or make your rationale more explicit and convincing. Alternatively, you might have to argue that a suggested extension to what you proposed is beyond the scope of your study. In either instance, your work will be strengthened by considering and integrating the questions raised by the feedback.

For example, one of my external examiners noted that during the period I was writing up my PhD, a new theorist had emerged whose work I should integrate. I ended up not only integrating that theorist in a way that supported my approach to the topic but also writing another chapter that brought the research to a conclusion in a more thorough fashion. Sure, I was devastated because I thought I was done, but then I was challenged to present an improved product, which I did. Sometimes our thinking is not finished when we meet the deadline to submit, and the extra time given by the evaluation when receiving a pass with conditions can prove a blessing in disguise because that time allows for an even finer distillation of the sliver of truth being offered.

Whatever the quality of the feedback, you ignore the feedback at your peril. There is nothing worse for a busy supervisor/mentor/promoter locked into a publish or perish world than to invest effort in a student and see no fruit. So, regardless of the quality or nature of the feedback you receive, use it to strengthen your offering. See it as a problem to be solved on the way to resolving the problem that motivated your initial research question.

Whittling Down the Word Count

So, you have just rendered your best effort on a paper, thesis, or dissertation, only to realize that you are scores if not hundreds of words over the stipulated word count. What do you do now if you are not to compromise your content?

Photo by Nick Morrison on Unsplash

While it is best to be aware of that limitation from the outset and cut your cloth accordingly, fitting what you need to say about what you have done and found into a word limit can be challenging.

I recall how, having completed an edit, the client suddenly realized there was a word limit and that her thesis needed to be two-thirds its current length. The complexity of her topic meant the content included was necessary and even essential.

And so began the process of whittling down the report of her research by a third. Here are some of the strategies you might apply if you are faced with the same issue. They include

  • deleting words that do not add meaning to a sentence (superfluous words),
  • replacing wordy phrases with a single word that captures the essence of what you mean,
  • reconstructing sentences, which might include
    • losing unnecessary prepositions, articles, and determiners,
    • learning how to use the possessive form correctly,
    • avoiding the expletive form or sentences that begin with “It is/there is (are)”,
    • shifting from the singular to the plural, and
    • using more judicious punctuation. 

Remember that when writing for academic and business purposes, you are not writing poetry or prose. Academic writing aims for succinctness of expression. The aim is to convey information as concisely as possible. That means making sure that each word in a sentence counts and is essential to the meaning of the sentence.

Consider some examples of the first strategy:

  • The reason why I was late is because my car would not start. (13 words)
  • The reason I was late is because my car would not start. (12 words)

Reason and why mean the same, so “why” is a superfluous word in this instance.

  • I thought that I could settle with a high school diploma, but I see education is everything. (17 words)
  • I thought I could settle with a high school diploma, but I see education is everything. (16 words)

The word “that” can often be deleted from a sentence without compromising the meaning of a sentence.

A second strategy is replacing wordy phrases with a single word. Consider the following sentences:

  • Themes were distilled further in order to extract the essence of the experience. (13 words)
  • Themes were distilled further to extract the essence of the experience. (11 words)

The phrase “in order” does not add meaning to the sentence, and in general, it is best not to use two or three words where just one would effectively convey what you mean. Transition words, or those words and phrases used to smooth the links between ideas, like consequently, therefore, ultimately, etc., should be used only if the link would not otherwise be obvious to the reader.  

A third set of strategies involves reconstructing sentences. This strategy requires more work, but it leads to a more refined expression of what you mean.

Consider the following examples:

  • Transformational leaders seek to include followers in their decisions, to create a shared vision that can be pursued, and to trust their followers will do what is required. (28 words)
  • Transformational leaders include followers in their decisions, create a shared vision that can be pursued, and trust their followers will do what is required. (24 words)

The phrase “seek to” is nice to have rather than essential, and if every item in a list begins with the same word (here, “to”), only the first instance is required to introduce the list.

  • Gathering of the data included employing a social survey to determine how associates of Chimera perceived the brand. (18 words)
  • Gathering the data included employing a social survey to determine how Chimera’s associates perceived the brand. (16 words)
  • A social survey was used to determine how Chimera’s associates perceived the brand. (13 words)
  • The means of execution involved figuring out how associates of Band Brand perceived the brand. (15 words)
  • Execution involved figuring out how Band Brand’s associates perceived the brand. (11 words)  

Notice how the first example includes three revisions. First, the “of” is superfluous. Second, using the possessive form lets go of another preposition. Third, the final version is a more direct statement. The second example removes a wordy phrase and uses the possessive form to lose four words.  

  • There are teachers who believe that children should be spanked if they disobey instructions. (14 words)
  • Some teachers believe children should be spanked if they disobey instructions. (11 words)
  • There is a difference between a and b. (8 words)
  • A difference exists between a and b. (7 words)
  • A and b are different. (5 words)

Avoiding the expletive form not only results in a more active sentence but can also save a great many words. Finally, shifting from the singular to the plural in a sentence removes the need for articles and for “him or her” if you want to express yourself in a politically and linguistically correct fashion.

  • Each participant was provided with an explanation of what was expected of him or her. (15 words)
  • Participants were provided with explanations of what was expected of them. (11 words)
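As a rough illustration, the simplest of these strategies (replacing wordy phrases with single words and deleting superfluous words) can even be scripted. The phrase list below is illustrative only, not exhaustive, and any automated change should still be reviewed by a human eye before it stands.

```python
import re

# Illustrative substitutions drawn from the strategies discussed above.
REPLACEMENTS = [
    (r"\bin order to\b", "to"),
    (r"\bthe reason why\b", "the reason"),
    (r"\bdue to the fact that\b", "because"),
]

def tighten(sentence):
    """Apply each phrase replacement in turn."""
    for pattern, replacement in REPLACEMENTS:
        sentence = re.sub(pattern, replacement, sentence, flags=re.IGNORECASE)
    return sentence

def word_count(text):
    """A simple whitespace-based word count."""
    return len(text.split())

before = "Themes were distilled further in order to extract the essence."
after = tighten(before)
print(word_count(before), word_count(after))  # prints 10 8
```

The harder strategies, such as reconstructing sentences or shifting to the plural, resist automation precisely because they require judgment about meaning, which is why whittling down a word count remains an editorial craft.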

So, let’s look at a passage in which I was asked to reduce a post from 1113 characters to under 1000 characters.   

Until September 2021, I was reviewing in excess of 100 student essays per month for largely an American audience. To get a sense of the American ‘zeitgeist’, I made it my business to read the comments under media reports, both (Blue) and “alternative” (Red). Texas will be challenging in that respect. I would go so far as to suggest that some of the main proponents (amongst ‘we people’, not the career politicians) are obsessed with the idea, for example, that Kamala Harris and Belinda Gates, just about every powerful Blue female, is a born-man who transformed into a woman, based mostly on a strong jaw and broad shoulders. It comes second to the obsession with clones.

It is a real learning edge for the Red. They forget that God experiments—being the Creator—and that biology might not be as simple as just xx and xy.

However, I have noticed a shift from shock and horror to the kind of giggle adolescents give when first learning about sex on the playground (back in my day, anyhow), so maybe there is a shift you can coach forward. One way of doing that, with strong facilitators, would be to connect them with their inner child as they attend the lessons their children will attend and have a debriefing afterward. If they can feel (and I mean feel) the lesson isn’t predatory propaganda, it might calm their fears (and allow them to grow).

Applying the strategies described above, I easily met the requirement:

Until September 2021, I was reviewing some 100 student essays per month for a largely American audience. To get a sense of the American ‘zeitgeist’, I made it my business to read the comments under media reports. Texas will be challenging in that respect. I would go so far as to suggest that some of the main proponents (amongst ‘we people’, not career politicians) are obsessed with the idea, for example, that Kamala Harris, Belinda Gates and Michelle Obama are born-men transformed into women based on strong jaw lines and broad shoulders.

It is a real learning edge. They forget that God experiments—being the Creator—and that biology might not be as simple as just xx and xy.

Lately, I have noticed a shift from shock and horror to the giggle adolescents give when first learning about sex on the playground (back in my day, anyhow). Maybe there is a shift you can coach forward. One option, with strong facilitators, would be to connect parents with their inner child as they attend the lessons their children will attend and include a debriefing afterward. If they can feel (and I mean feel) the lesson isn’t predatory propaganda, it might calm their fears.

In this simple exercise, you can see how I

  • Replaced phrases with a single word (“in excess of” became “some”)
  • Reworked an example to show the same more succinctly
  • Sacrificed nice-to-haves rather than essentials
  • Used the plural rather than singular to avoid the use of articles.

So, do not despair if you need to reduce your word count. Instead, allow your thinking to expand fully, and once you have documented your thinking, you can contract it by applying the strategies above to whittle down your word count.

(© Michelle L. Crowley 30/12/2021)

Getting Your Head Around the Literature Review

There can be nothing more frustrating than being unable to remember, while writing your literature review, who said what when. Citing the sources of your information about your topic and acknowledging those who have helped you build your ideas is a primary pillar of credibility in presenting academic work.

It is easy to avoid that frustration and the time it wastes if, from the outset, you develop a system for documenting the literature as you read and access it.

The following system assumes you are clear about and have formulated your research question, at least provisionally, and have developed a list of keywords for accessing the relevant literature based on your research question and the context in which you asked it.

Each time you access a work that is relevant, and by relevant, I mean the paper would contribute to your and the reader’s understanding of the topic, take a minute or two to

  • Copy the details that would be included in the references list, for example, the author or authors, date of publication, title of the work, the container in which the work can be found, and the DOI number, if available, and make sure you include the URL. Some sites make that really easy.
    ◦ At this point, it does not matter in what order the information is documented, so do not waste your time perfecting the font and order of the information. You just want to ensure that, when you construct your References list in the style you have been asked to use, all the necessary information is in one place and at your fingertips.
  • Copy the Abstract, if available. An abstract includes information about the focus of the work, the methods employed, and the findings.
    • If no abstract is evident, note the focus of the work, the methods employed, and the findings of the authors. Key words will do, and perhaps even one or two pertinent quotes.

The critical step…

The critical step for making who said what when available is to include a comment on where the information might be used in your document, be it a section, a theme, or even a theme within a section. Where in the literature review might that author’s views be accommodated or serve to support your approach to the topic?

Bear in mind that the themes for some works might span both a section of the literature review and the methodology chapter or section: The methodological design must also be defended. Moreover, in the methodology chapter (or section) of your document, you also need to discuss why you did not choose alternative methods for answering your research question.

Applying this strategy, you will end up with what some might call a basic and useful annotated bibliography in a Word file. In the course of doing that, you will also have begun developing the framework for writing up the literature review. In other words, the beauty of this strategy is the following:

  • In identifying themes, you will begin developing and revisiting the initial framework or template for conveying your conceptual understanding of your topic and research focus. It enables the necessary distillation process.
  • You can use Find in Word to search for themes to identify who said what when: The citation information is literally at your fingertips as you begin using the framework to write your literature review.
  • The content of your Reference list is in place. You will not waste time hunting down an obscure source that supports your argument.
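The Word file described above can also be kept as structured records, which makes the "Find who said what when" step mechanical. Here is a minimal Python sketch of that idea; the entry fields mirror the details listed earlier, but all names, titles, and themes in it are invented for illustration, not drawn from any real bibliography:

```python
# A minimal annotated-bibliography store: each entry keeps the reference
# details, the abstract (or summary), and the themes under which the work
# might be cited in the literature review.

from dataclasses import dataclass, field

@dataclass
class Entry:
    authors: str
    year: int
    title: str
    container: str           # journal, book, or website the work appears in
    doi_or_url: str
    abstract: str
    themes: list[str] = field(default_factory=list)

def by_theme(entries: list[Entry], theme: str) -> list[Entry]:
    """Return every entry tagged with the given theme (like Find in Word)."""
    return [e for e in entries if theme in e.themes]

# Invented example entries:
library = [
    Entry("Smith, J.", 2019, "Blood groups and immunity", "Journal of X",
          "https://doi.org/10.0000/example", "Reviews blood-group biology.",
          themes=["blood types", "introduction"]),
    Entry("Jones, A.", 2021, "Survey methods compared", "Methods Press",
          "https://doi.org/10.0000/example2", "Compares survey designs.",
          themes=["methodology"]),
]

for e in by_theme(library, "methodology"):
    print(e.authors, e.year)   # prints: Jones, A. 2021
```

Searching by theme then returns the citation details ready-made for the relevant section of the review.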

Consider, for example, a research question about the extent to which adverse vaccine reactions and blood type are correlated.

Notice that I am proposing that there is a correlation, so my alternative hypothesis is that there is a correlation between blood type and adverse vaccine effects. The null hypothesis would be that blood type makes no difference with respect to adverse vaccine effects.
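Because blood type and adverse reaction are both categorical, this hypothesis pair could eventually be evaluated with a chi-square test of independence on a contingency table. The sketch below works through the arithmetic in plain Python; the counts are entirely invented for illustration and carry no empirical weight:

```python
# Chi-square test of independence for the null hypothesis that blood type
# makes no difference to adverse vaccine reactions.
# All counts are invented for illustration.

table = [[30, 70],     # O-Neg:  adverse, no reaction
         [20, 180]]    # Other:  adverse, no reaction

row_totals = [sum(row) for row in table]
col_totals = [sum(col) for col in zip(*table)]
grand_total = sum(row_totals)

chi_sq = 0.0
for i, row in enumerate(table):
    for j, observed in enumerate(row):
        # Expected count under independence: (row total * column total) / N
        expected = row_totals[i] * col_totals[j] / grand_total
        chi_sq += (observed - expected) ** 2 / expected

print(round(chi_sq, 2))   # prints 19.2
```

With one degree of freedom, a statistic above the 3.84 critical value (α = .05) would lead to rejecting the null hypothesis; note that every expected count here exceeds 5, the usual minimum for the test to be valid.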

My literature review would need to focus on defending the assumption that there is a correlation, because a keyword search reveals some research on vulnerability to infection by various pathogens, including COVID-19, by blood type, but little on vulnerability to adverse vaccine effects by blood type, with one exception: a 1965 paper that suggests a link with respect to the smallpox vaccine. None of the authors seems to have published in that field since, and the one who continued to publish shifted focus to how we think and to statistical models for making sense of medical data.

I therefore need to access literature that allows me to argue that attempting to answer the research question is worthwhile and that, theoretically at least, it is possible that blood type and adverse vaccine reactions are correlated, especially for vaccines that include fetal matter.

So, for example, I would need to discuss blood types and their characteristics. I would need to point out, for example, that people with O-Neg blood are universal donors but can react adversely to transfusions of any other blood type. I would need to show that some vaccines include fetal matter, discuss which vaccines and what fetal matter they include, and review any research results pertaining to that. I would then have to argue, based on genetic theory and the creation of Dolly, the first cloned sheep, that matter from an individual cell of any organic entity contains the blueprint for the entire entity, which would include the entity’s blood type. So, if it is true that matter from an individual cell contains the blueprint for the entire entity, including fetal tissue in vaccines may explain why some people, particularly those with an O-Neg blood type, have adverse reactions to vaccines that include fetal matter.

Notice how, in the process of accessing the literature, my research question is refined to vaccines containing fetal matter, and my conceptual framework expands to include a discussion of the mechanics of cloning and the theory that supports it. As I read and come to understand the field in which I will be conducting my research, the framework is refined so that I can convince the reader that the research is worthwhile and may yield fruit. I can show, at least in theory, why some people may have adverse reactions after being vaccinated and others not, and develop the grounds for establishing whether blood type is a significant factor in adverse reactions to vaccines, or at least to those containing fetal matter. The next step would be to do the research to establish whether the data support my hypothesis. Even if the research does not support the hypothesis, that is a valid finding, because I will then know with relative certainty that blood type makes no difference to whether people suffer adverse vaccine reactions, and I can consider alternative explanations for those reactions.

So, when reading around your topic for the purposes of the literature review, from the outset, consider

  • Documenting the references-entry information for each work you have read.
  • Copying the Abstract or summarizing the work.
  • Identifying the work’s relevance for your research question.
  • Categorizing the themes evident in each entry.
  • Using the themes to develop the framework by which you will convey (write about) your conceptual understanding of your field and topic.

Once your literature file and framework are in place, you can more confidently begin with the writing.

(© Michelle L. Crowley 14/01/2022)