Category: Research Methods
Summary:
An upper-level undergraduate or entry-level graduate introduction to real research in the technical communication field. The definitions, principles, and examples are fundamentally sound. Hughes & Hayhoe cover research phases, literature reviews, quantitative and qualitative studies, and surveys, with analysis of each of those methods. The authors include enough detail for readers to feel they understand the fundamentals of a concept, but not enough to write a thesis or dissertation without further training or study. One glaring oversight is the lack of any in-depth treatment of, or examples for, mixed methods studies; the subject is glossed over when it probably deserved a chapter of its own.
Citation-worthy:
"In essence, the linking of actions, decisions, or advocacy to observable data is what research is all about. In this book, the term research is used to mean the systematic collection and analysis of observations for the purpose of creating new knowledge that can inform actions and decisions. Let us look at what this definition implies:
– Research is systematic — repeatable, with protocols and safeguards.
– Research involves the collection and analysis of data — it is important to note that these are two separate activities.
– Research creates new knowledge — research should advance our collective knowledge of our field
– Research should inform actions and decisions" (Hughes & Hayhoe, 2010, p. 4).
"Methods:
– Quantitative: Primarily involves the collection of data that is expressed in numbers and their analysis using inferential statistics. This is the usual method employed in empirical research involving hypothesis testing and statistical analysis.
– Qualitative: primarily involves the collection of qualitative data (data represented by text, pictures, video tape, and so forth) and its analysis using ethnographic approaches. This method is often used in case studies and usability tests in which the data consists of the words and actions of the test users.
– Critical theory: relies on the deconstruction of "texts" and the technologies that deliver them, looking for social or political agendas or evidence of class, race, or gender domination. This method is usually employed in postmodern research.
– Literature review: primarily involves the review and reporting on the research of others, often including the analysis and integration of that research through frequency counts and meta-analyses. This method can be applied to any research goal that aims at integrating prior research.
– Mixed methods: research approaches that combine a mixture of methods, usually quantitative and qualitative. Mixed methods are often found in research involving usability tests, which are a rich source of quantitative data, such as time to complete tasks or frequency of errors, and qualitative data, such as user comments, facial expressions, or actions" (Hughes & Hayhoe, 2010, p. 11).
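The mixed-methods usability scenario in the last bullet lends itself to a quick illustration. The following is a minimal Python sketch, not an example from Hughes & Hayhoe: the participants, field names, and values are invented. It shows how quantitative measures such as task time and error counts can sit in the same record as qualitative observations, with descriptive statistics computed only on the numeric fields.

```python
# Hypothetical sketch: one record per usability-test participant, mixing
# quantitative measures (task time, error count) with qualitative notes.
from dataclasses import dataclass
from statistics import mean

@dataclass
class UsabilitySession:
    participant: str
    task_time_sec: float   # quantitative: time to complete the task
    errors: int             # quantitative: frequency of errors
    comments: str           # qualitative: user comments and observed actions

sessions = [
    UsabilitySession("P1", 142.0, 3, "Hesitated at the search field; frowned."),
    UsabilitySession("P2", 98.5, 1, "Said the menu labels were 'obvious'."),
    UsabilitySession("P3", 171.2, 4, "Backtracked twice; asked for help."),
]

# Descriptive statistics on the quantitative half of the data...
print("Mean task time (s):", mean(s.task_time_sec for s in sessions))
print("Mean error count:  ", mean(s.errors for s in sessions))

# ...while the qualitative half is kept for later thematic coding.
for s in sessions:
    print(s.participant, "-", s.comments)
```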
"The actual phases of conducting a research project very closely parallel the structure of the final report:
– Identifying a research goal
– Formulating the research questions
– Reviewing the literature
– Designing the study
– Acquiring approvals for the research
– Collecting the data
– Analyzing the data
– Reporting the results" (Hughes & Hayhoe, 2010, p. 25).
"One of the most important reasons for reviewing the literature is the heuristic function of this task: discovering what others have already done in your area of interest will give you insights into aspects that you want to investigate and suggest contributions that you can make to the field.… Knowing what research has already been done on your topic of interest also helps you discover what remains to be studied. What are the gaps in existing studies that still need to be filled in? What are the areas that have not been explored at all" (Hughes & Hayhoe, 2010, p. 39)?
"Validity can be divided into two types: internal validity and external validity. Internal validity addresses the question, "Did you measure the concept you wanted to study?" External validity addresses the question, "Did what you measured in the test environment reflect what would be found in the real word world?"… Validity is controlled through test design:
– Internal validity can be managed by taking care when you operationalize a variable to make sure that you are measuring a true indicator of what you want to study. Ask yourself whether other factors produce or affect the measurements you intend to take that are not related to the attribute or quality you're studying. If so, your study might lack internal validity.
– External validity can be managed by taking care when you set up the test that the conditions you create in your test environment match those in the general environment as much as possible. This design element includes ensuring that the sample group itself is a fair representation of the general population of interest. (We discuss the effect that sampling can have on test design later in the chapter)" (Hughes & Hayhoe, 2010, p. 59).
"Reliability describes the likelihood that the results would be the same if the study were repeated, either with a different sample or with different researchers" (Hughes & Hayhoe, 2010, p. 60).
"We talk about two different kinds of statistics: descriptive and inferential. Descriptive statistics describe a specific set of data. For example, you could record the education level of the attendees at a technical communication conference and calculates that their average education level was 16.8 years. That average would be a descriptive statistic because it describes that specific data set.
Inferential statistics, on the other hand, make inferences about a larger population based upon sample data. For example, we might learn that the average educational level for attendees at an instructional designers conference was 16.2 years (another descriptive statistic). It would be tempting to infer from those two descriptive statistics that the average education level of technical communicators in general is higher than the average for instructional designers in general. But you cannot make such leaps unless inferential statistical techniques are applied to help you gauge the reliability of those inferences" (Hughes & Hayhoe, 2010, p. 60).
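To make the descriptive/inferential distinction concrete, here is a minimal Python sketch. The education-level samples are invented (they only roughly echo the 16.8 and 16.2 averages in the quoted example), and the Welch two-sample t-test is my own choice of inferential technique, not one prescribed by Hughes & Hayhoe.

```python
# Hypothetical sketch contrasting descriptive and inferential statistics.
# The education-level samples below are invented for illustration only.
from statistics import mean
from scipy import stats

tech_comm = [16, 18, 17, 16, 18, 17, 16, 17]     # years of education, sample A
instr_design = [16, 16, 17, 15, 17, 16, 16, 17]  # years of education, sample B

# Descriptive: each mean describes only its own data set.
print("Tech comm sample mean:    ", mean(tech_comm))
print("Instr. design sample mean:", mean(instr_design))

# Inferential: a two-sample t-test gauges whether the observed difference
# is likely to hold for the larger populations, not just these samples.
t_stat, p_value = stats.ttest_ind(tech_comm, instr_design, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```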
"Two basic principles underlie all of these formulas, two principles even a beginning researcher can and should grasp.
Principle one: the smaller the variance in the data, the more reliable the inference.…
Principle two: the bigger the sample size, the more reliable the inference" (Hughes & Hayhoe, 2010, pp. 62-63).
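Both principles show up in the standard error of the mean, s / √n, which shrinks as the sample variance falls and as the sample size grows. The sketch below is my own illustration with invented numbers, not an example from the book.

```python
# Minimal sketch: the standard error of the mean, s / sqrt(n), reflects both
# principles, shrinking as variance falls or as sample size grows.
# The sample values are hypothetical.
from math import sqrt
from statistics import stdev

def standard_error(sample):
    return stdev(sample) / sqrt(len(sample))

low_variance  = [16, 17, 16, 17, 16, 17]
high_variance = [12, 21, 14, 19, 13, 22]

# Principle one: less spread in the data yields a smaller standard error.
print("Low-variance SE: ", round(standard_error(low_variance), 2))
print("High-variance SE:", round(standard_error(high_variance), 2))

# Principle two: the same spread over a bigger sample shrinks the SE.
print("Bigger-sample SE:", round(standard_error(high_variance * 4), 2))
```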
"In a quantitative study we assess the internal validity by essentially asking "Did you measure the concept you wanted to study?" In a qualitative study, where data is not measurable, we look more to the credibility of the data; that is, do the participants truly represent the population or phenomenon of interest and how typical are their behavior and comments" (Hughes & Hayhoe, 2010, p. 79)?
"In quantitative research, external validity addresses the question "Does the phenomenon you are measuring in the test environment reflect what would be found in the real world?" In qualitative research the question is essentially the same, with the substitution of "observing" for "measuring." The question could also be stated, "How natural or authentic was the environment in which the study took place?" Sometimes, "natural" is difficult to achieve, in which case the emphasis needs to be on "authentic." Authentic means that the context of the behavior being studied is consistent with the context of the real-world situation in which the actual behavior would occur" (Hughes & Hayhoe, 2010, p. 79).
"As with the quantitative counterpart, reliability, dependability refers to the confidence with which the conclusions reached in a research project could be replicated by different researchers. In qualitative studies, more so than with quantitative studies, it is more difficult to turn off the researcher's own subjectivity; therefore, qualitative researchers must be careful to ensure that their conclusions have emerged from the data and not from the researcher's own preconceptions or biases. Unlike quantitative studies, which can rely on procedural, statistical methods to do this, qualitative studies must rely on more humanistic protocols. The literature on qualitative research discusses many techniques that can be applied to verifying the conclusions produced by such research. In this discussion, we try to summarize them into the following categories:
– Depth of engagement
– Diversity of perspectives and methods
– Staying grounded in the data" (Hughes & Hayhoe, 2010, p. 80).