Skyhooks, Cranes, and the Construct Dump: A Comment on and Extension of Boster (2023)
Copyright © 2023 by the Korean Society for Journalism and Communication Studies
Abstract
This commentary responds to Frank J. Boster's essay titled "Too many? Too few? Just right? Construct proliferation and need for a construct dump," which recently appeared in Asian Communication Research. The authors agree with Boster's call to reassess constructs in communication research and present additional evaluation criteria. In particular, the commentary emphasizes the importance of multilevel explanations and proposes a distinction between constructs that provide empty explanations (skyhooks) and constructs that provide grounded explanations (cranes). By incorporating behavioral and biological measures alongside psychometric remedies, the communication research community can strengthen construct validity and advance the field's explanatory power.
Keywords:
construct, construct validity, measurement, neuroimaging, behavior

This commentary is in response to an invited essay titled “Too Many? Too Few? Just Right? Construct Proliferation and Need for a Construct Dump,” which appeared in a recent issue of Asian Communication Research (Boster, 2023). The author, Frank J. Boster, will be known to many as a fellow of the ICA, a longstanding editor, a leading persuasion researcher, and a social scientific methods expert. We highly recommend reading his timely text, which discusses key concerns regarding the validity of constructs in communication research and the criteria necessary to establish a construct’s validity. His core conclusion is that an (unspecified) number of our discipline’s constructs should be abandoned, or as he calls it, relegated to the construct dump. We found ourselves largely in agreement with Boster’s core argument. At the same time, we would like to offer some additional criteria that can help determine which constructs belong on the construct dump.
Boster’s argument is built on four points. First, many constructs violate the assumption of unidimensionality.1 Second, it is inappropriate to specify a construct as something that exists at distinct points along the continua of several lower-order unidimensional constructs.2 Third, construct proliferation can result in multiple specifications of the same construct. And finally, it is incorrect to specify opposing ends of a continuum as unique constructs. When making these points, Boster is careful to note how each reduces our capacity to understand and explain the world. Solutions to his first two points require specifying several lower-order constructs and relegating the multidimensional construct to the construct dump. This process results in construct proliferation. By comparison, his final two points require construct consolidation by relegating redundant constructs to the construct dump.
Boster’s overall message is positive, as are its aims and ambitions. The suggested remedies are valuable, actionable, and can be accomplished with existing psychometric and statistical toolkits. We could stop here, but this would only endorse an article we agree with and not continue the conversation. With this commentary, we want to expand on Boster’s argument by recommending a multilevel approach that includes additional criteria for determining which constructs belong in the dump. We argue that explanations of communication phenomena must span across multiple levels and that this raises important challenges. To support our case, we discuss how a single-level approach can lead to a proliferation of skyhook-constructs that provide only empty explanations (Dennett, 2009). We conclude by noting how a consideration of neurocognitive and biobehavioral explanations can distinguish skyhooks from crane-constructs, which are grounded in reality and thus able to carry the weight of real explanations for communication phenomena.
Robust Explanations Require Multiple Levels
The construct of constructs has a long and vexed history. Without delving too far into the philosophical minefield, the reason we postulate constructs is that the way things work is not obvious: the structures and forces behind communication are latent and thus hard to measure and theorize (Cronbach & Meehl, 1955; Lovasz & Slaney, 2013; MacCorquodale & Meehl, 1948; Slaney & Racine, 2013). The construct - or rather the hypothetical construct - was proposed as a solution to make elusive psychological phenomena measurable. The concept of a construct was imported into communication research primarily via social and personality psychology. Still, the ‘construct construct’ remains challenging, and debates about construct validity, operationalization, reliability, and dimensionality persist (Bechtoldt, 1959; Colliver et al., 2012; Lissitz, 2009).
With that in mind, we freely admit to what Boster jokingly calls ‘physics envy,’ although the word admiration is perhaps more accurate. Indeed, physics excels at measuring unobservable constructs, and this excellence is crucial to determining which constructs should be relegated to the construct dump. For instance, unobservable constructs such as the luminiferous aether in physics (a postulated medium to propagate light) or phlogiston in chemistry (a fire-like element that was assumed to be released during combustion) provide illustrative examples of constructs relegated to the construct dump.3 Other unobservable constructs (e.g., mass, energy, temperature) persist because they can be measured with high precision and because they offer tremendous explanatory power. Social scientists often lament that physics is “easier” because physicists constrain their inquiry to inanimate objects that are devoid of complex social and cognitive phenomena. If physics were easy, this might indeed breed envy, but it is not. Quantum physics, for example, is remarkably complicated, and we recommend that anyone suffering from physics envy talk to a quantum physicist. That will cure it. But if complexity does not account for the success of physics, what else might?
One notable strength of physics is its ability to handle multilevel explanations. In physics, it is taken for granted that things are organized into, and exert influence along, part-whole hierarchies - from the atom to the molecule and all the way up to the galaxy. Reflecting this organization, multilevel explanations are pretty much baked into physics and the natural sciences more broadly (Churchland & Sejnowski, 2016; Petersen & Sporns, 2015; Wilson, 1999). A coherent web of explanatory mechanisms spans from nuclear physics to structural chemistry, to organic chemistry, to biology, and so forth. In our view, this acknowledgment of the necessity of multilevel explanations, paired with a strong measurement culture, is a core strength.
In communication, multilevel explanations are less developed. Of course, the notion of multiple levels is also a key organizational force in the field: we clearly distinguish between the individual, the group, and the societal level of analysis (McLeod et al., 2010). However, this organization does not translate well into our explanations, which focus either on individual and psychological phenomena (like attitudes, beliefs, and intentions) or on macro-level sociological topics (like social class, media ecology, or public opinion). This also leads to instances where researchers who typically work at one level argue that their level also accounts for phenomena more commonly attributed to other levels, or that the two levels share sufficiently similar characteristics to render previous level distinctions irrelevant (e.g., the distinction between interpersonal and mass communication; Lang, 2013; O’Sullivan & Carr, 2018; Walther, 1996). There are also other challenges associated with multilevel explanation. For instance, between the levels, there are often fundamental gaps and dichotomous incompatibilities, such as the gap between neural reactions and psychological phenomena (the mind-body problem) or between the biological and social realms (the nature-nurture debate; e.g., Cacioppo et al., 2000; Churchland & Sejnowski, 2016; Epstein, 2014; Sherry, 2004). As a result, working within a single level is much more convenient and common. However, this also leads to problems, which we discuss next.
Single-Level Explanations and the Risk of Skyhook Explanations
What is a skyhook? The idea of skyhooks was popularized by Daniel Dennett in his 1996 book Darwin’s Dangerous Idea (Dennett, 1996) and pitted against the idea of cranes. Both skyhooks (which float in the air) and cranes (which are anchored in the ground) are called on to lift a weight, but only the latter can actually do so. For Dennett, skyhooks are a metaphor for empty explanations because they are untethered and necessitate a deus ex machina. By comparison, cranes offer carefully specified multilevel mechanistic explanations that are integrated with previous explanations. Dennett uses the skyhook idea to contrast explanations for the vast complexity of living organisms. In doing so, he notes that divine intervention is a skyhook, whereas evolution is a crane. Science’s long history is filled with skyhooks. The soul is a skyhook for how consciousness works. The four humors (blood, phlegm, yellow bile, and black bile) specified in Hippocrates’ humoral theory are skyhooks. Pathogens, as specified in germ theory, are a crane. We could go on, but the core point has been made. Science is the process of either identifying skyhooks and turning them into cranes, or throwing skyhooks into the construct dump. The next section discusses how communication scientists can achieve this ambition.
Measurement Guides the Way to the Construct Dump
Boster’s main remedies for resolving the four issues he identifies are psychometric and statistical in nature. We agree but also suggest that there are additional criteria to consider. In essence, Boster emphasizes that we need to demonstrate a construct’s unidimensionality via statistical procedures, which we agree is important. However, the examples he offers are organized around survey-based measurement. Although he does note that his approach also applies to physiological and behavioral measures, these measures are only mentioned briefly, and little detail is offered. As researchers who work extensively in this area, we want to discuss how these measures can be used to identify constructs for the construct dump.
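To make the psychometric side of this remedy concrete, below is a minimal sketch (in Python, with simulated data for a hypothetical six-item scale; neither the scale nor the numbers come from Boster or from any published study) of one common first-pass check for unidimensionality: inspecting the eigenvalue structure of the inter-item correlation matrix. A dominant first eigenvalue is consistent with a single underlying dimension; in practice, a formal confirmatory factor analysis or parallel analysis would follow.

```python
import numpy as np

rng = np.random.default_rng(2023)

# Simulate responses to a hypothetical six-item scale driven by one latent factor
n_respondents, n_items = 500, 6
latent = rng.normal(size=(n_respondents, 1))
loadings = rng.uniform(0.5, 0.8, size=(1, n_items))
items = latent @ loadings + rng.normal(scale=0.6, size=(n_respondents, n_items))

# Eigen-decompose the inter-item correlation matrix
corr = np.corrcoef(items, rowvar=False)
eigenvalues = np.sort(np.linalg.eigvalsh(corr))[::-1]

# A dominant first eigenvalue is consistent with (but does not prove) a single
# underlying dimension; a formal CFA or parallel analysis would be the next step
print("eigenvalues:", np.round(eigenvalues, 2))
print("first-to-second eigenvalue ratio:", round(eigenvalues[0] / eigenvalues[1], 2))
```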
To understand why we believe a broader measurement approach is needed, imagine a perfect world in which all possible dimensions of self-report have been measured with high reliability and validity. From a psychometric perspective, this would mean “mission accomplished.” We would know how each construct correlates with all others (Campbell & Fiske, 1959). Those who are into structural equation approaches might even compute a giant model. But what would we have explained? Arguably not much because we still would not understand what the constructs are (Borsboom et al., 2009); and for the SEM model, a core question would still remain: “How do the arrows work?” (Cummins, 2000).4 Our “perfect world” would still consist only of very high-level descriptions and it would be difficult to distinguish skyhooks from cranes.
Remember that cranes are constructs that can be linked across levels of explanation, whereas skyhooks exist only at one explanatory level and resist multilevel linkages. How do we link high-level explanations, measured via subjective reports, with lower-level explanations measured via behavioral and biological approaches? Measurement, the foundation of any science, provides an answer. “To measure is to know,” said Lord Kelvin to underscore that claims to knowledge require an evidential basis.5 In this sense, the quality of our measures is tightly linked with the phenomena we can observe and how well we can explain them. If our explanations, and the measures we use to test them, are confined to one level only, we will be unable to look beyond that level, a bit like wearing vertical blinders. Well-grounded behavioral and neurophysiological measures (which help link explanations across levels) were once part of the arsenal of communication science but have all but disappeared (in parallel with the rise of tools that made online data collection and statistical analysis very easy). Similarly, even though the field has seen at least three waves promoting cognitive, psychophysiological, and neurophysiological methods, these approaches are not widely adopted (Huskey & Schmälzle, 2023). It is time to return to a broader explanatory focus, which necessitates a broader measurement focus. This will help distinguish skyhooks from cranes.
The core of our argument is that if an idea remains a skyhook after attempts to find a hypothetical crane, that construct must go to the construct dump. Said differently, if a proposed construct (measured via self-report) cannot be linked across levels via behavioral and biological correlates, that construct should be abandoned. An anecdote about the construct of a spinal cord soul serves as a case in point. In the 19th century, experiments revealed that the limbs of decapitated frogs would still withdraw long after the animal’s death. This led to philosophical speculations about a ‘spinal cord soul,’ which was assumed to have purposive qualities independent of the brain (Jeannerod, 2006). Ultimately, the debate was resolved when the somatic mechanisms of reflexes were more fully understood by linking constructs across levels of explanation (along a complex pathway that includes molecular, neuroskeletal, neural, and behavioral components). Admittedly, the social constructs we seek to understand, including those Boster discusses (e.g., Machiavellianism, mavenness), are at least as complex as, and possibly more complex than, the reflexes of frogs. Nevertheless, this example illustrates how behavioral and biological data advance theory and guard against skyhooks. The spinal cord soul is a skyhook. The somatic reflex is a crane.6
Note that relegating constructs to the dump can be a lengthy process and sometimes happens gradually rather than through a single refuting experiment. In the case of the spinal cord soul, the critical experiments took place over almost a century, and even though spinal reflexes are nowadays well understood, old ideas continue to reverberate in some echo chambers. That it took so long to overcome the false construct of a spinal cord soul, however, was largely due to the very limited measurement capabilities of the time. Had the right measurements been available, a single experiment might have resolved the issue, at least at one level of explanation. Other stories paint a similar picture: the discovery of neurotransmitters, for instance, ended a debate between two opposing construct camps, namely those advocating for a chemical (“soup”) versus an electrical (“spark”) mechanism of neural communication (Valenstein, 2005). Likewise, measurement also confirmed the theorized Higgs boson - after a search that took 40 years (Aad et al., 2015).
Further support for this strategy comes from modern neuroimaging, a technique that measures brain activity. Given that our thoughts, feelings, and actions all arise from the brain’s coordinated activity, methods that let us measure brain activity are promising (even though they are still limited in several ways; Falk et al., 2015; Schmälzle, 2022; Weber, Eden et al., 2015). Indeed, neuroimaging has already increased our understanding of cognitive constructs like memory and attention as well as social-affective topics like moral reasoning, empathy, and others (Engel, 2008; Hopp et al., 2022; Lieberman, 2010; Mather et al., 2013; Poeppel et al., 2020). These constructs are now all evaluated according to their biological plausibility and examined from an increasingly mechanistic cognitive neuroscience perspective (Craver, 2002; Dubois et al., 2020).
This endeavor extends naturally to communication (Huskey, Bue et al., 2020). For example, consider the case of the elaboration likelihood model (ELM), one of the most prominent theories of persuasion (Carpenter, 2020; Petty & Cacioppo, 1986). At the core of the ELM lies the distinction between central and peripheral routes, which are postulated to i) exist, ii) determine whether an argument is processed elaboratively or not, and iii) shape subsequent attitudinal and behavioral modification. The ELM has been criticized and other models have been proposed (Boster & Carpenter, 2021; Kruglanski & Thompson, 1999; Stiff & Boster, 1987), but the “dual route” notion is very much alive, even beyond the ELM (Evans & Stanovich, 2013; Kahneman, 2013; Vezich et al., 2016).
We dare to ask: Are those routes real? Can they be linked to lower-level (and dissociable) neural substrates? Can we find evidence for a crane? If yes, then different routes should evoke distinguishable brain activity patterns. But how should we go about looking for evidence of those patterns and what framework should we use when interpreting our results? There is no central or peripheral message processing part of the brain. Instead, both types of message processing rely on a host of lower-level cognitive primitives (e.g., attention, memory, motivation, emotion7) and these cognitive primitives, as well as their relationship with behavioral and self-report measures, must be established before any neuroimaging investigation can even begin. If there is no way to link higher-level constructs in the ELM with their lower-level constitutive parts, then we must treat those constructs as skyhooks, and relegate them to the construct dump.
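To illustrate what evidence for distinguishable brain activity patterns could look like analytically, here is a minimal, purely illustrative sketch (all data simulated; the trial counts, feature counts, and classifier are our own hypothetical choices, not a description of the studies cited below): if trials from two hypothesized processing conditions can be classified above chance from their activation patterns under cross-validation, the patterns are dissociable, whereas accuracy near chance would count against a dual-route claim at the neural level.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)

# Hypothetical design: 80 trials per message-processing condition, 200 features
# (e.g., voxel- or region-level activation estimates per trial)
n_trials, n_features = 80, 200
pattern = rng.normal(scale=0.3, size=n_features)  # condition-specific signal

condition_a = rng.normal(size=(n_trials, n_features)) + pattern
condition_b = rng.normal(size=(n_trials, n_features)) - pattern

X = np.vstack([condition_a, condition_b])
y = np.array([0] * n_trials + [1] * n_trials)

# Cross-validated decoding accuracy; values near 0.5 would indicate that the
# two conditions do not evoke distinguishable activity patterns
accuracy = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5).mean()
print(f"cross-validated decoding accuracy: {accuracy:.2f}")
```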
Encouragingly, there is some evidence that constructs specified by the ELM are cranes. The process of identifying cognitive primitives for central and peripheral message processing has already been undertaken (Weber et al., 2013) and has therefore laid a foundation for subsequent inquiry at a lower (neural) level. Functional magnetic resonance imaging research, using an ELM framework, shows that high- and low-drug-risk individuals neurally process persuasive messages differently (Huskey et al., 2017; Huskey, Turner et al., 2020; Weber, Huskey et al., 2015). While these results are preliminary and substantially more work is necessary (Cacioppo et al., 2016), the point is that neural data can provide evidence to support or refute the ELM’s dual processing model. This same approach can be applied to many other constructs and theories - reactance, attitudes, expectancy violation, and so forth (Clayton, 2022; Huskey, Bue et al., 2020; Wilcox et al., 2020).
Finally, before closing, we want to reflect briefly on the status of behavioral measures. Verbal, paraverbal, and nonverbal behaviors - like speech content, speech rate, prosody, eye contact, and body posture - have never been easier to capture, process, and analyze in large quantities. Granted, not all such measurements are theoretically relevant, but we contend that many behavioral measures are, because they can often be used without invoking latent variables.8 For example, consider the case of eye-tracking. Many researchers who use this method seem to feel a need to describe eye-tracking as an indicator of some latent construct, for example, attention.9 There seems to be an assumption that this is necessary, more theoretical, and generally superior. However, it is possible to conceptualize and use eye-tracking metrics directly, namely as measures of visual information acquisition behavior. Recasting eye-tracking in such an operational way circumvents construct validity concerns and debates about the right label (e.g., Weidman et al., 2017), yet it still allows for the same analyses as under the ‘indicator-of’ regime. This reasoning also applies to other behavioral measures beyond eye-tracking: the recent wave of research on language (a.k.a. verbal behavior), such as LIWC and other word-counting methods, relies on a similar logic, although this is somewhat obfuscated by the construct-like labels attached to the categories. Digital trace data, which show explicitly what people do, present yet another opportunity.
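As a minimal sketch of what using eye-tracking metrics directly can mean in practice, the hypothetical example below aggregates fixation records into dwell time and fixation counts per area of interest (AOI); the AOI names and durations are invented for illustration, and the resulting metrics are treated as behavior rather than as indicators of a latent construct.

```python
from collections import defaultdict

# Hypothetical fixation records: (area of interest, fixation duration in ms)
fixations = [
    ("headline", 220), ("image", 540), ("headline", 180),
    ("body_text", 910), ("image", 260), ("body_text", 430),
]

# Aggregate directly into operational behavioral metrics, with no latent construct
dwell_time_ms = defaultdict(int)
fixation_count = defaultdict(int)
for aoi, duration in fixations:
    dwell_time_ms[aoi] += duration
    fixation_count[aoi] += 1

# Total dwell time and fixation count per AOI can be analyzed as-is, for
# example compared across message versions or experimental conditions
for aoi, total in dwell_time_ms.items():
    print(f"{aoi}: {total} ms across {fixation_count[aoi]} fixations")
```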
Behavioral measures promise to guard against skyhooks by reducing conceptual vagueness and enabling straightforward measurement. Thus, as we argued above for biological measures, constructs should be retained or abandoned based on whether they can be linked to a behavioral correlate. To give an example, we can refer to work by Levine et al. (2012), who studied whether verbal aggressiveness (measured via a scale and conceptualized as a trait construct) showed correspondence with actual verbally aggressive behavior, finding almost none. Another example is the relationship between self-reported social media use and logged social media use, for which it was found that self-report scales did not correspond well with actual behavior (Parry et al., 2021). Strictly speaking, the last example only demonstrates the invalidity of the measure and not the construct itself, because the underlying construct was already defined behaviorally (social media use). As yet another example, our field often uses “perceived X”-type constructs (e.g., perceived message effectiveness; O’Keefe, 2018) as if they are separate theoretical entities. It is important to demonstrate whether these constructs hold up against behavioral data. Constructs that correlate with behavioral data should be retained, whereas constructs that do not should be relegated to the construct dump.10
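The retention criterion just described can be stated very simply in analytic terms. The sketch below uses simulated data (the numbers are not drawn from any of the studies cited above) to correlate hypothetical self-report scores with a logged behavioral measure of the same construct; weak or absent correspondence would flag the self-report construct, or at least its operationalization, as a candidate for the dump.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(42)
n = 300

# Hypothetical logged behavior (e.g., minutes of daily use from device logs)
logged_minutes = rng.gamma(shape=2.0, scale=60.0, size=n)

# Hypothetical self-report of the same behavior, with bias and noise added
self_reported_minutes = 0.4 * logged_minutes + rng.normal(scale=80.0, size=n) + 100

# Correspondence between the self-report measure and the behavioral record
r, p = pearsonr(self_reported_minutes, logged_minutes)
print(f"self-report vs. logged behavior: r = {r:.2f}, p = {p:.3f}")
```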
Summary and Conclusions
In summary, we support Boster’s comments on constructs and the vital task of exploring their properties for quality assurance. In our view, the problems he diagnoses result from a design flaw of the ‘construct construct’ itself. Our core point, and extension of Boster’s argument, is that searching for cranes (multilevel explanations that incorporate biological and behavioral measures) effectively guards against the untethered mentalism skyhooks offer. Boster points out that good constructs should be unidimensional. We agree, but we also note that unidimensionality exists only at one level and that we must be careful to specify constructs that can be linked across multiple levels. Behavioral and biological methods, and the theorizing necessary to incorporate these methods, provide the linkages that can turn skyhooks into cranes. Integrating such methods is easier than one may think. Low-cost physiological measures, mobile eye trackers, and even simple recordings of people’s verbal and nonverbal communication have never been more available and can be incorporated fairly easily into the communication researcher’s toolbox. Together with the psychometric remedies Boster suggests, we are well-equipped to make physicists jealous of the theorizing and explanatory power of quantitative communication science in the 21st century!
Acknowledgments
We thank Christopher Carpenter for valuable feedback on the first draft of this manuscript, which prompted several edits that made the paper clearer.
References
- Aad, G., Abbott, B., Abdallah, J., Abdinov, O., Aben, R., Abolins, M., AbouZeid, O. S., Abramowicz, H., Abreu, H., Abreu, R., Abulaiti, Y., Acharya, B. S., Adamczyk, L., Adams, D. L., Adelman, J., Adomeit, S., Adye, T., Affolder, A. A., Agatonovic-Jovin, T., … Woods, N. (2015). Combined measurement of the Higgs Boson Mass in pp collisions at √s=7 and 8 TeV with the ATLAS and CMS experiments. Physical Review Letters, 114(19), 191803. [https://doi.org/10.1103/PhysRevLett.114.191803]
- Bechtoldt, H. P. (1959). Construct validity: A critique. American Psychologist, 14(10), 619–629. [https://doi.org/10.1037/h0040359]
- Borsboom, D., Cramer, A. O. J., Kievit, R. A., Scholten, A. Z., & Franić, S. (2009). The end of construct validity. In R. W. Lissitz (Ed.), The concept of validity: Revisions, new directions, and applications (Vol. 263, pp. 135–170). IAP Information Age Publishing.
- Boster, F. J. (2023). Too many? Too few? Just right? Construct proliferation and need for a construct dump. Asian Communication Research, 20(2), Advance online publication. [https://doi.org/10.20879/acr.2023.20.011]
- Boster, F. J., & Carpenter, C. J. (2021). Critical questions in persuasion research. Cognella.
- Brehm, J. W. (1966). A theory of psychological reactance. Academic Press.
- Cacioppo, J. T., Berntson, G. G., Sheridan, J. F., & McClintock, M. K. (2000). Multilevel integrative analyses of human behavior: Social neuroscience and the complementing nature of social and biological approaches. Psychological Bulletin, 126(6), 829–843. [https://doi.org/10.1037/0033-2909.126.6.829]
- Cacioppo, J. T., Cacioppo, S., & Petty, R. E. (2016). The neuroscience of persuasion: A review with an emphasis on issues and opportunities. Social Neuroscience, 13(2), 129–172. [https://doi.org/10.1080/17470919.2016.1273851]
- Campbell, D. T., & Fiske, D. W. (1959). Convergent and discriminant validation by the multitrait-multimethod matrix. Psychological Bulletin, 56(2), 81–105. [https://doi.org/10.1037/h0046016]
- Carpenter, C. J. (2020). Elaboration likelihood model. In J. Van den Bulck (Ed.), The international encyclopedia of media psychology. Wiley-Blackwell. [https://doi.org/10.1002/9781119011071.iemp0070]
- Churchland, P. S., & Sejnowski, T. J. (2016). The computational brain. MIT Press.
- Clayton, R. B. (2022). On the psychophysiological and defensive nature of psychological reactance theory. Journal of Communication, 72(4), 461–475. [https://doi.org/10.1093/joc/jqac016]
- Colliver, J. A., Conlee, M. J., & Verhulst, S. J. (2012). From test validity to construct validity … and back? Medical Education, 46(4), 366–371. [https://doi.org/10.1111/j.1365-2923.2011.04194.x]
- Craver, C. F. (2002). Interlevel experiments and multilevel mechanisms in the neuroscience of memory. Philosophy of Science, 69(S3), S83–S97. [https://doi.org/10.1086/341836]
- Cronbach, L. J., & Meehl, P. E. (1955). Construct validity in psychological tests. Psychological Bulletin, 52(4), 281–302. [https://doi.org/10.1037/h0040957]
- Cummins, R. (2000). “How does it work?” versus “What are the laws?”: Two conceptions of psychological explanation. Explanation and Cognition, 117–144. [https://doi.org/10.7551/mitpress/2930.003.0009]
- Dennett, D. (2009). Darwin’s “strange inversion of reasoning”. Proceedings of the National Academy of Sciences, 106(supplement_1), 10061–10065. [https://doi.org/10.1073/pnas.0904433106]
- Dennett, D. C. (1996). Darwin’s dangerous idea: Evolution and the meanings of life. Penguin UK.
- Dubois, J., Oya, H., Tyszka, J. M., Howard, M., III, Eberhardt, F., & Adolphs, R. (2020). Causal mapping of emotion networks in the human brain: Framework and initial findings. Neuropsychologia, 145, 106571. [https://doi.org/10.1016/j.neuropsychologia.2017.11.015]
- Engel, S. A. (2008). Computational cognitive neuroscience of the visual system. Current Directions in Psychological Science, 17(2), 68–72. [https://doi.org/10.1111/j.1467-8721.2008.00551.x]
- Epstein, J. M. (2014). Agent zero. Princeton University Press.
- Evans, J. S. B. T., & Stanovich, K. E. (2013). Dual-process theories of higher cognition: Advancing the debate. Perspectives on Psychological Science, 8(3), 223–241. [https://doi.org/10.1177/1745691612460685]
- Falk, E. B., Cascio, C. N., & Coronel, J. C. (2015). Neural prediction of communication-relevant outcomes. Communication Methods and Measures, 9(1-2), 30–54. [https://doi.org/10.1080/19312458.2014.999750]
- Hommel, B., Chapman, C. S., Cisek, P., Neyedli, H. F., Song, J.-H., & Welsh, T. N. (2019). No one knows what attention is. Attention, Perception & Psychophysics, 81(7), 2288–2303. [https://doi.org/10.3758/s13414-019-01846-w]
- Hopp, F., Amir, O., Fisher, J., Grafton, S., Sinnott-Armstrong, W., & Weber, R. (2022). Moral foundations elicit shared and dissociable cortical activation modulated by political ideology. Research Square. [https://doi.org/10.21203/rs.3.rs-2133317/v1]
- Huskey, R., & Schmälzle, R. (2023). Finding middle ground in cognitive media psychology [Unpublished Manuscript].
- Huskey, R., Bue, A. C., Eden, A., Grall, C., Meshi, D., Prena, K., Schmälzle, R., Scholz, C., Turner, B. O., & Wilcox, S. (2020). Marr’s tri-level framework integrates biological explanation across communication subfields. Journal of Communication, 70(3), 356–378. [https://doi.org/10.1093/joc/jqaa007]
- Huskey, R., Mangus, J. M., Turner, B. O., & Weber, R. (2017). The persuasion network is modulated by drug-use risk and predicts anti-drug message effectiveness. Social Cognitive and Affective Neuroscience, 12(12), 1902–1915. [https://doi.org/10.1093/scan/nsx126]
- Huskey, R., Turner, B. O., & Weber, R. (2020). Individual differences in brain responses: New opportunities for tailoring health communication campaigns. Frontiers in Human Neuroscience, 14, 565973. [https://doi.org/10.3389/fnhum.2020.565973]
- James, W. (1890). The principles of psychology. Macmillan.
- Jeannerod, M. (2006). The origin of voluntary action: History of a physiological concept. Comptes Rendus Biologies, 329(5-6), 354–362. [https://doi.org/10.1016/j.crvi.2006.03.017]
- Kahneman, D. (2013). Thinking, fast and slow. Farrar, Straus and Giroux.
- Kruglanski, A. W., & Thompson, E. P. (1999). Persuasion by a single route: A view from the unimodel. Psychological Inquiry, 10(2), 83–109. [https://doi.org/10.1207/s15327965pl100201]
- Lang, A. (2013). Discipline in crisis? The shifting paradigm of mass communication research. Communication Theory, 23(1), 10–24. [https://doi.org/10.1111/comt.12000]
- Levine, T. R., Kotowski, M. R., Beatty, M. J., & Van Kelegom, M. J. (2012). A meta-analysis of trait-behavior correlations in argumentativeness and verbal aggression. Journal of Language and Social Psychology, 31(1), 95–111. [https://doi.org/10.1177/0261927x11425037]
- Lieberman, M. D. (2010). Social cognitive neuroscience. In S. T. Fiske, D. T. Gilbert, & G. Lindzey (Eds.), Handbook of social psychology (pp. 143–193). John Wiley & Sons. [https://doi.org/10.1002/9780470561119.socpsy001005]
- Lissitz, R. W. (2009). The concept of validity: Revisions, new directions and applications. IAP.
- Lovasz, N., & Slaney, K. (2013). What makes a hypothetical construct “hypothetical”? Tracing the origins and uses of the “hypothetical construct” concept in psychological science. New Ideas in Psychology, 31(1), 22–31. [https://doi.org/10.1016/j.newideapsych.2011.02.005]
- Ma, H., Gottfredson O’Shea, N., Kieu, T., Rohde, J. A., Hall, M. G., Brewer, N. T., & Noar, S. M. (2023). Examining the longitudinal relationship between perceived and actual message effectiveness: A randomized trial. Health Communication. Advance online publication. [https://doi.org/10.1080/10410236.2023.2222459]
- MacCorquodale, K., & Meehl, P. E. (1948). On a distinction between hypothetical constructs and intervening variables. Psychological Review, 55(2), 95–107. [https://doi.org/10.1037/h0056029]
- Mather, M., Cacioppo, J. T., & Kanwisher, N. (2013). How fMRI can inform cognitive theories. Perspectives on Psychological Science, 8(1), 108–113. [https://doi.org/10.1177/1745691612469037]
- McElreath, R. (2020). Statistical rethinking: A Bayesian course with examples in R and STAN. CRC Press.
- McLeod, J. M., Kosicki, G. M., & McLeod, D. M. (2010). Levels of analysis and communication science. In C. R. Berger, M. E. Roloff, & D. R. Roskos-Ewoldsen (Eds.), Handbook of communication science (pp. 183–200). Sage.
- O’Keefe, D. J. (2018). Message pretesting using assessments of expected or perceived persuasiveness: Evidence about diagnosticity of relative actual persuasiveness. Journal of Communication, 68(1), 120–142. [https://doi.org/10.1093/joc/jqx009]
- O’Sullivan, P. B., & Carr, C. T. (2018). Masspersonal communication: A model bridging the mass-interpersonal divide. New Media & Society, 20(3), 1161–1180. [https://doi.org/10.1177/1461444816686104]
- Parry, D. A., Davidson, B. I., Sewall, C. J., Fisher, J. T., Mieczkowski, H., & Quintana, D. S. (2021). A systematic review and meta-analysis of discrepancies between logged and self-reported digital media use. Nature Human Behaviour, 5(11), 1535–1547. [https://doi.org/10.1038/s41562-021-01117-5]
- Pearl, J., & Mackenzie, D. (2018). The book of why: The new science of cause and effect. Penguin.
- Petersen, S. E., & Sporns, O. (2015). Brain networks and cognitive architectures. Neuron, 88(1), 207–219. [https://doi.org/10.1016/j.neuron.2015.09.027]
- Petty, R. E., & Cacioppo, J. T. (1986). Communication and persuasion: Central and peripheral routes to attitude change. Springer.
- Poeppel, D., Mangun, G. R., & Gazzaniga, M. S. (2020). The cognitive neurosciences. MIT Press.
- Schmälzle, R. (2022). Theory and method for studying how media messages prompt shared brain responses along the sensation-to-cognition continuum. Communication Theory, 32(4), 450–460. [https://doi.org/10.1093/ct/qtac009]
- Sherry, J. L. (2004). Media effects theory and the nature/nurture debate: A historical overview and directions for future research. Media Psychology, 6(1), 83–109. [https://doi.org/10.1207/s1532785xmep0601_4]
- Slaney, K. L., & Racine, T. P. (2013). What’s in a name? Psychology’s ever evasive construct. New Ideas in Psychology, 31(1), 4–12. [https://doi.org/10.1016/j.newideapsych.2011.02.003]
- Stiff, J. B., & Boster, F. J. (1987). Cognitive processing: Additional thoughts and a reply to Petty, Kasmer, Haugtvedt, and Cacioppo. Communication Monographs, 54(3), 250–256. [https://doi.org/10.1080/03637758709390230]
- Thomson, W. (1889). Popular lectures and addresses. Macmillan and Company. https://archive.org/details/popularlecturesa01kelvuoft/page/72
- Valenstein, E. S. (2005). The war of the soups and the sparks: The discovery of neurotransmitters and the dispute over how nerves communicate. Columbia University Press.
- Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, Ł. U., & Polosukhin, I. (2017). Attention is all you need. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, & R. Garnett (Eds.), Advances in neural information processing systems 30 (pp. 5998–6008). Curran Associates.
- Vezich, I. S., Falk, E. B., & Lieberman, M. D. (2016). Persuasion neuroscience: New potential to test dual-process theories. In E. Harmon-Jones & M. Inzlicht (Eds.) Social neuroscience (pp. 34–58). Routledge.
- Walther, J. B. (1996). Computer-mediated communication: Impersonal, interpersonal, and hyperpersonal interaction. Communication Research, 23(1), 3–43. [https://doi.org/10.1177/009365096023001001]
- Weber, R., Eden, A., Huskey, R., Mangus, M., & Falk, E. (2015). Bridging media psychology and cognitive neuroscience. Journal of Media Psychology, 27(3), 146–156. [https://doi.org/10.1027/1864-1105/a000163]
- Weber, R., Huskey, R., Mangus, J. M., Westcott-Baker, A., & Turner, B. O. (2015). Neural predictors of message effectiveness during counterarguing in antidrug campaigns. Communication Monographs, 82(1), 4–30. [https://doi.org/10.1080/03637751.2014.971414]
- Weber, R., Westcott-Baker, A., & Anderson, G. (2013). A multilevel analysis of antimarijuana public service announcement effectiveness. Communication Monographs, 80(3), 302–330. [https://doi.org/10.1080/03637751.2013.788254]
- Weidman, A. C., Steckler, C. M., & Tracy, J. L. (2017). The jingle and jangle of emotion assessment: Imprecise measurement, casual scale usage, and conceptual fuzziness in emotion research. Emotion, 17(2), 267–295. [https://doi.org/10.1037/emo0000226]
- Wilcox, S., Dorrance Hall, E., Holmstrom, A. J., & Schmälzle, R. (2020). The emerging frontier of interpersonal communication and neuroscience: Scanning the social synapse. Annals of the International Communication Association, 44(4), 368–384. [https://doi.org/10.1080/23808985.2020.1843366]
- Wilson, E. O. (1999). Consilience: The unity of knowledge. Vintage. (Original work published 1998)
- Wright, S. (1934). The method of path coefficients. Annals of Mathematical Statistics, 5(3), 161–215. [https://doi.org/10.1214/aoms/1177732676]