Opportunities and Challenges to Conversations with Generative AI: Integrating Theoretical Perspectives from Clark and Pennebaker
Copyright ⓒ 2024 by the Korean Society for Journalism and Communication Studies
Abstract
This paper introduces the integration of psycholinguistic frameworks of language use and psychology of language research to understand conversational dynamics with generative Artificial Intelligence (AI). The argument advanced here is that psycholinguistics research from Clark (1996), particularly the idea of language as joint action and the establishment of common ground, combined with Pennebaker’s (2011) approach toward understanding the psychological meaning behind words, offers opportunities and challenges for analyzing human-AI interactions. After articulating the benefits and potential risks of combining these perspectives, the paper concludes by proposing future research directions, including comparative studies of human-human versus human-AI turn-taking patterns and longitudinal analyses of online community conversations. This theoretical integration provides a foundation for understanding emerging forms of human-AI communication while acknowledging the need for new theoretical frameworks specific to AI.
Keywords:
language, language use, psychology of language, psycholinguistics, generative Artificial Intelligence

There are two books that changed how I think about the world. These books originate from different traditions of psychological science and set me on a path of continual discovery. The first, Using Language (Clark, 1996), is a psycholinguistics text that articulates the functions of language in everyday life. It asks fundamental questions such as: (1) What do people do with language? (2) Why do these activities happen? and (3) What are the implications of using or misusing language? One of the many powers of Using Language is its ability to conceptualize everyday occurrences. For example, when someone asks “Can I ask you a question?” instead of asking the question outright, we learn this is a type of pre-request that serves various social and psychological functions for the communicator (e.g., a soft “on-ramp” before an actual request is stated to the receiver; Schegloff, 2007). In short, Using Language is a primer for understanding the functional role that language plays for humans, taking seriously both lexical (e.g., words) and non-lexical communication data (e.g., non-words, but still message-level characteristics like pointing). I read Clark’s magnum opus as an undergraduate in a language and technology course, and it remains top-of-shelf for me to this day.
The second book that has inspired my current research program is The Secret Life of Pronouns (Pennebaker, 2011). For students who are interested in verbal behavior and natural language processing, Jamie Pennebaker’s resource is the first one I hand them. Instead of asking Clarkian questions like what people do with language and why they communicate to achieve various goals, Pennebaker looks underneath the words to understand what language reflects about people and the human condition, rather than how it functions upon them. What are the verbal correlates of extraversion (Ireland & Mehl, 2014; Schwartz et al., 2013), how is depression revealed in language (Eichstaedt et al., 2018; Rude et al., 2004), and what linguistic markers indicate high- versus low-status people in society (Kacewicz et al., 2014; Markowitz, 2018)? These questions underlie interests within the psychology of language perspective, which has spread virally to disciplines including law, education, economics, and others. In a somewhat paradoxical manner, words are instrumental and therefore secondary to the psychology of language perspective: they just happen to be the lens through which we learn what people are thinking, feeling, and experiencing psychologically. What words represent and reveal about the individual mind are the primary interests of those who follow this empirical tradition (Boyd & Markowitz, 2024). Put another way, it is less critical that specific self-references (e.g., I, me, myself) may be used at high or low rates by a person on a climate change subreddit; rather, it matters more what the high or low rate of self-references indicates about the person’s internal processing of climate change and their attention toward such issues.
A careful look at these two traditions reveals some surface-level similarities. After all, both care about language and verbal behavior with an interest in understanding the psychological dimensions of words. But there are meaningful contrasts between the two regarding their underlying values and assumptions. Psycholinguists care about what language is used for, what it does, and how it works to accomplish various communication tasks. Psychology of language scholars, on the other hand, care about what language suggests about people and their internal processing. These perspectives appear disconnected (e.g., it is rare for a literature review to contain references from both traditions), but as the current paper argues, the contributions of Clark and Pennebaker are complementary and deserve simultaneous treatment. Against this backdrop, the purpose of this essay is to articulate how psycholinguistics research and psychology of language research work well together to inform how we think about language use and conversations, with a keen focus on such processes with nonhumans given the rise of generative Artificial Intelligence (AI). The current essay therefore adds to our theoretical toolkit for understanding the possible challenges and opportunities associated with making meaning from verbal exchanges with new technology.
A Primer on Psycholinguistic and Psychology of Language Perspectives
Clark’s psycholinguistics perspective focuses on language use as a collaborative and social process between communicators. While this work draws on influential theories in linguistics and the philosophy of language, including speech act theory (e.g., how language assists with actions; Austin, 1975; Searle, 1969) and semiotics (e.g., how signs indicate meaning and understanding; Peirce, 1865), Clark’s approach attends to language use as inherently involving individual and social processes toward the creation of shared meaning (see also Levelt, 1989). Clark’s work is also informed by concepts from pragmatics (e.g., studying intentions and beliefs; Potts, 2004), conversation analysis (e.g., studying social interactions to evaluate how people make meaning from communication; Psathas, 1995), and social psychology, integrating them into a framework to understand how and why people use language in everyday settings.
One of Clark’s central principles is the idea of language use as a joint action between two (or more) people. That is, language and communication are not merely the transmission of information from person to person, but instead, the coordination of activities, mental states, and goals to achieve a communication task. To communicate means to act jointly, and to act jointly, two people must communicate through verbal and nonverbal means.
Another foundational idea from Clark is common ground. People often use language to accomplish coordination tasks based on the joint knowledge, beliefs, and suppositions that exist between two people (Stalnaker, 1978). The process of grounding involves constant feedback and adjustment, with communicators offering and accepting evidence of mutual understanding. For example, in conversation, listeners might nod (Clark & Krych, 2004), offer verbal acknowledgments like “uh-huh” to signal agreement with or understanding of the speaker (Schegloff, 1982), or use disfluencies (e.g., elongated versions of um or uh) to signal uncertainty or speaking delays (Clark & Fox Tree, 2002). These techniques help speakers tailor their messages and ensure comprehension. Most social scientists are also likely familiar with Clark’s work on grounding, which highlights how different media—including the features and affordances therein—affect language use and understanding (Clark & Brennan, 1991). For example, early grounding research inspired a systematic description of how features and affordances of digital media relate to one’s ability to have computer-mediated conversations (e.g., the impact of synchronicity and contemporality on conversational dynamics; Rice et al., 2017; Ronzhyn et al., 2023). Altogether, according to Clark (1996), communication works well when two entities can anticipate, share, and collaborate during conversation, and joint action is central to accomplishing communicative tasks. We can use one’s verbal and nonverbal behavior to infer what they intended to communicate to another person, further creating a profile of speech acts that help people coordinate tasks and goals (cf. Austin, 1975; Searle, 2002).
The psychology of language tradition, on the other hand, argues that words are reflective markers of one’s attention (for reviews, see Boyd & Markowitz, 2024; Boyd & Schwartz, 2021; Pennebaker, 2011). A person who uses high rates of negative emotion words, for example, does not necessarily feel more negative than a person who uses low rates of negative emotion words. Instead, the communicator with high rates of negative emotion words is simply attending to or focused on a negative emotional state. This difference might seem trivial, but how one describes effects reported in the psychology of language tradition is consequential. Words-as-attention assesses how we think about and process psychological events as revealed by verbal behavior (Stone & Dunphy, 1966), which is different from our feeling system (Barrett, 2017). To definitively say that a person felt negative during an utterance, we would need in-situ, instantaneous insights from the person to triangulate with their words, suggesting their psychological states were aligned with negativity instead of other alternative explanations (e.g., generally heightened arousal).
Words-as-attention is the central organizing framework for most psychology of language studies (Boyd & Markowitz, 2024; Boyd & Schwartz, 2021; Pennebaker, 2011), and social and psychological insights about the human condition using this model span a range of disciplines. Language data are all around us and our records of verbal communication are ever-growing. We take for granted the idea that we can now analyze language data at scale with computers, but it was not always this way. The Harvard General Inquirer (Stone et al., 1962) was one of the first computerized systems to analyze language content, but it fell out of favor due to its complexity, and computers were not yet widely available for empirical use. Thus, it was decades before computers and automated text analysis resurfaced as an efficient, effective, and practical tool to collect and analyze verbal data for social and psychological insights.
Indeed, a popular tool that has emerged from the psychology of language tradition is Linguistic Inquiry and Word Count (LIWC; Pennebaker et al., 2022). LIWC relies on human-validated dictionaries of social, psychological, and part-of-speech categories to identify the degree to which a text contains a focus on specific concepts (e.g., how much a text focuses on money, the self, or cognition). There are many other tools, including Coh-Metrix (McNamara et al., 2014), SPLICE (Moffitt et al., 2012), and WMatrix (Rayson, 2003, 2008), which aim to achieve the same goal: to use computers to make social and psychological inferences about people through words. Such automated text analysis tools remove entry barriers for social scientists who, compared to computer scientists, may not have the programming skills or theoretical background to analyze language computationally. Many of these tools can be used “out of the box,” which has broadened their appeal and made text analysis mainstream in the social sciences. Altogether, if there are words, it is likely that there are tools available for people to analyze them regardless of one’s programming background.
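To make these mechanics concrete, consider a minimal sketch of the dictionary-based approach that such tools share. The mini-dictionaries and category names below are illustrative stand-ins, not the validated (and proprietary) word lists that LIWC actually uses:

```python
import re

# Illustrative mini-dictionaries; validated tools such as LIWC use
# human-curated lists with far more entries per category.
DICTIONARIES = {
    "self_references": {"i", "me", "my", "mine", "myself"},
    "cognitive_processing": {"think", "know", "because", "but", "realize"},
    "negative_emotion": {"sad", "hate", "hurt", "angry", "worthless"},
}

def category_rates(text: str) -> dict:
    """Return each category's share of total words, as a percentage."""
    words = re.findall(r"[a-z']+", text.lower())
    total = len(words) or 1  # guard against empty input
    return {
        category: 100 * sum(word in lexicon for word in words) / total
        for category, lexicon in DICTIONARIES.items()
    }

print(category_rates("I think I know why I feel sad, but I hate admitting it."))
```

The output expresses each category as a percentage of total words, which is how dictionary-based tools typically normalize comparisons across texts of different lengths.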
The purpose of discussing such tools in the current paper is to benchmark the history of verbal behavior and computation that now provides the basis for generative AI and large language models (LLMs). Instead of simply being able to analyze text (Markowitz, 2024a; Rathje et al., 2024), however, LLMs can seamlessly interact with humans, producing language-based responses to prompts that appear, feel, and are perceived as natural (Demszky et al., 2023; Ho et al., 2018). The use of generative AI for everyday interactions is underexplored and under-theorized. The field has emerging evidence about the social and psychological value of generative AI responses to human prompts (Demszky et al., 2023; Messeri & Crockett, 2024; Tey et al., 2024), but conversations are different due to their interactive and dependent nature (e.g., a person’s language output is the direct result of the information they received from a speaker; Yeomans et al., 2022, 2023). Next, an argument is advanced to combine psycholinguistic and psychology of language perspectives to understand their implications for the study of conversational dynamics with AI.
Positive Implications for Language Use and Conversations with Generative AI
Combining the psycholinguistic perspective of Clark with the psychology of language approach of Pennebaker offers several benefits for understanding conversational dynamics with generative AI. Broadly, their integration provides a framework for analyzing multiple layers of human-AI interaction that neither framework alone could achieve. Clark’s focus on language as joint action and the importance of common ground can help researchers understand how humans and AI systems coordinate their communicative efforts, and if such efforts fail, why issues may have occurred (Clark, 1994). For example, we can examine how an AI chatbot like ChatGPT attempts to establish and maintain common ground with human users, adapting its language and knowledge base to match the user’s perceived understanding and background. If the experience is underwhelming, ineffective, or undesirable, one possibility is the lack of grounding and joint action that humans ordinarily require for coordination.
Second, an approach that combines psycholinguistics and psychology of language allows for a deeper analysis of the content and style of AI-generated responses. Pennebaker’s emphasis on what language reveals about one’s internal state(s) can be applied to assess the “psychology” or “psychological output” of AI systems (Demszky et al., 2023; Yuan et al., 2024). For instance, researchers could use LIWC or similar tools to analyze the language patterns of AI responses, potentially uncovering insights about the model’s training data or biases, or biases of the scholars who have helped to develop such models (Dancy & Saucier, 2022; Markowitz, 2024b). This could be particularly valuable in understanding how different prompt engineering techniques or model architectures influence the “personality” or “cognitive style” of AI outputs relative to humans (Giorgi et al., 2023; Kosinski, 2023).
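As a hypothetical illustration of such a pipeline, the sketch below generates responses to differently framed prompts and scores them with a category_rates() function like the earlier dictionary sketch. It assumes the OpenAI Python client; the model name and prompts are placeholders, and a real study would substitute validated dictionaries such as LIWC’s:

```python
# Hypothetical pipeline: compare category rates of AI responses across
# prompt framings. Assumes the OpenAI Python client (pip install openai)
# and a category_rates() function like the earlier dictionary sketch.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompts = {
    "neutral": "Describe your week in a few sentences.",      # placeholder
    "stressful": "Describe a very stressful week in a few sentences.",
}

for label, prompt in prompts.items():
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    text = response.choices[0].message.content
    print(label, category_rates(text))
```

Systematic differences in category rates across framings, models, or prompt engineering techniques would provide one window into the “psychological output” of such systems.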
The integration of these perspectives can also reveal the evolving nature of human-AI relationships. The concept of a joint action can be extended to explore how humans and AI engage in meaning-making to achieve communicative goals together. At the same time, Pennebaker’s focus on the psychological implications of language use can help us understand how interacting with AI might influence human cognition and emotion. For example, researchers could examine how prolonged interactions with AI chatbots affect users’ psychological processing via temporal dynamics in their language patterns, potentially revealing changes in thinking styles or emotional states. It is also possible that long-term experiences with AI may shape how people think and feel about the technology itself. Therefore, conversations with AI can serve as a vehicle that both shapes and reveals perceptions of the technology at a meta-perceptual level.
A fourth benefit of the combined psycholinguistics and psychology of language approach is its potential to inform how effective and natural AI conversational interfaces are designed. By applying Clark’s insights on grounding and common ground, developers could create AI systems that are better at managing conversational flow (e.g., response latency, or the appropriate amount of time to respond to a message; Kalman et al., 2006; McLaughlin & Cody, 1982), adapting to different communication settings, and repairing misunderstandings. Misunderstandings are a particularly tricky issue in human-AI conversations because, according to Clark (1996), they are typically communicated through a message recipient’s disfluencies, uncertainties, incorrect statements, and nonverbal communication that might signal some communication problem. Many of these meta-communicative devices, such as disfluencies, however, are never leveraged by generative AI (e.g., ChatGPT does not say words like “um” or “uh” and instead provides fast and cogent responses). Generative AI can correct mistakes only at the direction of a human (e.g., a human telling a chatbot that its information is incorrect or that it has misunderstood the task). Therefore, while the psycholinguistics and psychology of language perspectives can certainly inform each other, human-AI conversation is still at too infantile a stage to include the management elements of human-human conversation.
Simultaneously, incorporating Pennebaker’s work on language as a reflective marker of psychological states could help to develop AI models that are more attuned to users’ social and psychological states. That is, for users who have particular emotional or cognitive needs, improving the social and psychological responsiveness of AI assistants is an open opportunity for the AI development community and researchers. This latter point is worth emphasizing alongside research on Language Style Matching (LSM) that has been pioneered in psychology of language studies. The LSM approach uses style words (e.g., articles, prepositions) to measure joint attention—the degree to which two people are mutually engaged and attending to each other’s psychological states. Prior work suggests joint attention at the style word level is predictive of several positive downstream interpersonal outcomes in dating and non-romantic situations alike (e.g., Gonzales et al., 2010; Ireland & Henderson, 2014; Ireland & Nalabandian, 2022; Ireland & Pennebaker, 2010; Ireland et al., 2011). Therefore, fine-tuning AI models to attend to style words might produce positive interpersonal effects that attract humans to such systems. It is an open empirical question how much and when style matching is appropriate between humans and AI, and which entity tends to match the other first. Future work would benefit from taking AI linguistic fine-tuning seriously within conversations and style words, moving beyond the adjacent interest of prompt engineering that typically appears in non-conversational contexts (Bozkurt & Sharma, 2023; Zhou et al., 2023). A minimal sketch of the LSM computation appears below.

Finally, this integrated perspective could further contribute to our understanding of the differences and similarities between human-human and human-AI communication. By applying both Clark’s and Pennebaker’s frameworks to analyze conversations with AI, researchers can identify which aspects of human communication are successfully replicated by AI systems (e.g., politeness; Ribino, 2023), and which are uniquely human. This could lead to important insights about the nature of intelligence, cognition, and the fundamental characteristics of human language use. For instance, scholars might explore whether AI systems exhibit the same patterns of style word use that Pennebaker associates with various psychological states or experiences in humans (Boyd & Schwartz, 2021; Tausczik & Pennebaker, 2010), or how closely AI adheres to the principles of collaborative communication and joint action outlined by Clark (1996).
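As referenced above, here is a minimal sketch of LSM as it is commonly operationalized in this literature (e.g., Gonzales et al., 2010): for each style category, matching is one minus the normalized absolute difference in the two speakers’ usage rates, averaged across categories. The function word lists below are illustrative stand-ins rather than the full validated categories:

```python
import re

# Illustrative style-word lists; published LSM work uses the full LIWC
# function word categories (articles, prepositions, pronouns, etc.).
STYLE_CATEGORIES = {
    "articles": {"a", "an", "the"},
    "prepositions": {"in", "on", "to", "of", "with", "for"},
    "personal_pronouns": {"i", "you", "we", "it", "they"},
}

def rate(text: str, lexicon: set) -> float:
    """Percentage of words in `text` that fall in `lexicon`."""
    words = re.findall(r"[a-z']+", text.lower())
    return 100 * sum(word in lexicon for word in words) / (len(words) or 1)

def lsm(text_a: str, text_b: str) -> float:
    """Average style matching across categories (0 = none, 1 = perfect).

    Per category c: LSM_c = 1 - |r_a - r_b| / (r_a + r_b + 0.0001),
    the formulation common in the LSM literature.
    """
    scores = []
    for lexicon in STYLE_CATEGORIES.values():
        r_a, r_b = rate(text_a, lexicon), rate(text_b, lexicon)
        scores.append(1 - abs(r_a - r_b) / (r_a + r_b + 0.0001))
    return sum(scores) / len(scores)

print(lsm("I went to the store with a friend.",
          "We walked to the park in the rain."))
```

Applied to human-AI transcripts, a score like this could track which entity converges toward the other, and when.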
Negative Implications for Language Use and Conversations with Generative AI
While combining psycholinguistic and psychology of language perspectives offers many benefits toward the advancement of conversation theory, there are also potential drawbacks and concerns to consider. First, because both theoretical perspectives are grounded in human communication and language use, there is a risk of anthropomorphizing AI models when applying them (Salles et al., 2020; Troshani et al., 2021). Clark’s concepts of joint action and common ground, for instance, assume a level of shared understanding and intentionality that may not be present in AI systems (e.g., Markowitz et al., 2024). Applying these concepts could misrepresent the agency of AI’s behavior and its capabilities (e.g., mistakenly attributing human-like cognitive processing or emotional understanding to an AI tool whose output is purely statistical).
Another concern is the potential misapplication of the Pennebakerian approach to analyzing AI-generated language. The psychology of language perspective assumes that language patterns reflect one’s internal psychological states, but it is unclear who the bearer of such psychological states (“one”) is when discussing AI systems, as large language models do not have internal states that are analogous to human psychology. Analyzing AI outputs using tools like LIWC might produce results that seem to reveal meaningful trends in an AI model’s “psychology,” but such results are mere reflections of the training data and therefore misleading (unless the explicit purpose is to evaluate how humans and AI communicate or produce text differently). For instance, if a scholar discovers high use of cognitive processing words (e.g., but, know) in AI-generated text on a subreddit about trauma, this might be interpreted as the AI attempting to “work through” how to process such information psychologically (e.g., Boyd et al., 2020; Markowitz, 2022; Vine et al., 2020). However, much simpler counter-explanations are available: the cognitive processing terms may reflect the training data or a human-generated prompt, or the AI model may simply be mimicking the statistical pattern, documented in existing empirical research, that highly distressing events tend to be associated with an increase in cognitive processing terms. Future research should attempt to disentangle these alternative explanations or acknowledge these possibilities when interpreting reported effects.
Finally, there is some risk that focusing on established psycholinguistic or psychology of language frameworks might limit innovation in developing new theories tailored to AI language use. The unique characteristics of generative AI—the ability to process vast amounts of data, the lack of genuine lived experience (Markowitz et al., 2024), and the potential for rapid iteration and improvement—may require new theoretical approaches to understand conversational dynamics with such systems. By relying on existing human-centered theories, we might miss opportunities to develop more appropriate frameworks of human-AI interactions. Merging existing language models with behavioral theories, as suggested by prior work (Boyd & Markowitz, 2024), might present one way out of this conundrum.
Conclusions and Musings for the Future
This paper has explored the integration of a seminal psycholinguistic perspective (Clark, 1996) with psychology of language research (Pennebaker, 2011) to understand the emerging field of conversational dynamics with generative AI. By combining these two theoretical frameworks, we perhaps gain a more comprehensive understanding of the multi-layered nature of human-AI interactions, and the power that language has in revealing how people process experiences facilitated by the technology through words. The synthesis of Clark’s concepts of joint action and common ground with Pennebaker’s focus on language as a reflection of psychological states offers several promising avenues for research. It allows us to examine how AI models attempt to establish and maintain common ground, adapt their communication styles, and potentially reflect human psychological processes through language. Moreover, it provides a foundation for analyzing the content and style of AI-generated responses that can explicate the evolving nature of human-AI relationships.
How can scholars put these ideas into practice for their own research programs? Below are two study ideas that could be executed. The first incorporates both the Clark and Pennebaker perspectives in a comparative analysis of turn-taking and repair. In this study, researchers could recruit participants to engage with either another human or an AI, discussing various topics for about fifteen minutes. The conversations could be analyzed for turn-taking patterns, repair strategies, and the development of common ground at the language level. At its most basic level, this study draws on joint action and grounding principles: by examining how humans and AI systems manage turn-taking and employ repair strategies, researchers can gain insights into the collaborative nature of communication. This study could also incorporate Pennebaker’s psychology of language perspective by examining the content and style of these interactions. What are the social and psychological signals of turn-taking, and what verbal indicators suggest that a repair will be necessary? These questions could be addressed by adopting the Pennebaker approach alongside the Clark approach. Together, they offer a rich exploration of both the mechanics of conversation (Clark) and the psychological underpinnings of language use (Pennebaker) in human-AI interactions.
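As one hypothetical way to operationalize the Clarkian side of this design, the sketch below computes simple turn-level descriptives (turn length and the presence of repair-initiating markers) from a dyadic transcript. The marker list is an illustrative subset; a real study would use validated coding schemes from conversation analysis:

```python
import re

# Illustrative repair initiators; actual studies would code repairs with
# validated conversation-analytic schemes rather than a keyword list.
REPAIR_INITIATORS = {"what?", "huh?", "sorry?", "wait,", "i mean"}

transcript = [  # hypothetical human-AI exchange
    ("human", "Can I ask you a question about climate policy?"),
    ("ai", "Of course. What would you like to know?"),
    ("human", "Wait, I mean the economics of it, not the science."),
]

for speaker, turn in transcript:
    words = re.findall(r"[a-z']+", turn.lower())
    repairs = sum(marker in turn.lower() for marker in REPAIR_INITIATORS)
    print(f"{speaker}: {len(words)} words, {repairs} repair marker(s)")
```

Comparing these descriptives across human-human and human-AI dyads would quantify whether, and how, repair work differs by partner type.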
Another study might use data from a subreddit (or other online community) to analyze posts over time, evaluating how those within an online community establish and maintain conversational common ground, how they manage repair strategies linguistically, and how shared references (e.g., norms) develop. Applying the psychology of language approach, automated text analysis tools could reveal how such Clarkian dimensions manifest in response to real-world events (Cohn et al., 2004; Markowitz, 2022), using words as markers of one’s psychological experience and processing of the world. For example, it might be worthwhile to examine how people manage conversational repairs during affectively heightened or polarizing times (e.g., times of great political unrest) compared to periods of stability. This observational approach is common in psychology of language studies, but an often-missing component is the explication of community norms that highlight how disclosures maintain and impact others within the group, and the upkeep of common ground in real time (and over time).
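The sketch below illustrates one hypothetical version of this design: grouping posts by whether they precede or follow a focal real-world event and comparing a cognitive processing rate across periods. The DataFrame columns, event date, and posts are placeholders, and category_rates() refers to a function like the earlier dictionary sketch:

```python
import pandas as pd

# Hypothetical posts; a real study would pull an online community's
# archive with a timestamp and text for each post.
posts = pd.DataFrame({
    "created_utc": pd.to_datetime(["2024-01-05", "2024-02-10", "2024-03-02"]),
    "text": [
        "I think the new rules help.",
        "But why did the mods change them?",
        "We know the norms now, and I feel better.",
    ],
})

EVENT_DATE = pd.Timestamp("2024-02-01")  # placeholder focal event

# Score each post with a category_rates() function like the earlier sketch.
posts["cognitive"] = posts["text"].apply(
    lambda t: category_rates(t)["cognitive_processing"])
posts["period"] = (posts["created_utc"] >= EVENT_DATE).map(
    {False: "before", True: "after"})

print(posts.groupby("period")["cognitive"].mean())
```

Extending this to monthly bins and multiple language categories would trace how grounding and repair-related language shift with community events.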
Altogether, the integration of Clark’s psycholinguistic perspective and Pennebaker’s psychology of language approach has promise to offer a powerful framework for understanding and analyzing human-AI communication. This paper aimed to provide researchers with a conceptual toolkit to begin examining the mechanics of conversation and the psychological underpinnings of words in AI interactions. While this combined approach presents exciting opportunities for advancing our understanding of human-AI dynamics, it also comes with potential pitfalls, such as the risk of anthropomorphizing AI systems or misinterpreting AI-generated language patterns. As AI technology evolves, becomes more sophisticated, and is integrated into daily life (e.g., seamlessly fused across applications on a single device), it is crucial that researchers co-create and develop new theoretical understandings of how older models might apply to AI.
Disclosure Statement
No potential conflict of interest was reported by the authors.
References
- Austin, J. L. (1975). How to do things with words. Harvard University Press.
- Barrett, L. F. (2017). How emotions are made: The secret life of the brain. Houghton Mifflin Harcourt Publishing Company.
- Boyd, R. L., & Markowitz, D. M. (2024). Verbal behavior and the future of social science. American Psychologist. [https://doi.org/10.1037/amp0001319]
- Boyd, R. L., & Schwartz, H. A. (2021). Natural language analysis and the psychology of verbal behavior: The past, present, and future states of the field. Journal of Language and Social Psychology, 40(1), 21–41. [https://doi.org/10.1177/0261927X20967028]
- Boyd, R. L., Blackburn, K. G., & Pennebaker, J. W. (2020). The narrative arc: Revealing core narrative structures through text analysis. Science Advances, 6(32), eaba2196. [https://doi.org/10.1126/sciadv.aba2196]
- Bozkurt, A., & Sharma, R. C. (2023). Generative AI and prompt engineering: The art of whispering to let the genie out of the algorithmic world. Asian Journal of Distance Education, 18(2), 2.
- Clark, H. H. (1994). Managing problems in speaking. Speech Communication, 15(3–4), 243–250. [https://doi.org/10.1016/0167-6393(94)90075-2]
- Clark, H. H. (1996). Using language. Cambridge University Press.
- Clark, H. H., & Brennan, S. E. (1991). Grounding in communication. In L. B. Resnick & J. M. Levine (Eds.), Perspectives on socially shared cognition (pp. 127–149). American Psychological Association. [https://doi.org/10.1037/10096-006]
- Clark, H. H., & Fox Tree, J. E. (2002). Using uh and um in spontaneous speaking. Cognition, 84(1), 73–111. [https://doi.org/10.1016/S0010-0277(02)00017-3]
- Clark, H. H., & Krych, M. A. (2004). Speaking while monitoring addressees for understanding. Journal of Memory and Language, 50(1), 62–81. [https://doi.org/10.1016/j.jml.2003.08.004]
- Cohn, M. A., Mehl, M. R., & Pennebaker, J. W. (2004). Linguistic markers of psychological change surrounding September 11, 2001. Psychological Science, 15(10), 687–693. [https://doi.org/10.1111/j.0956-7976.2004.00741.x]
- Dancy, C. L., & Saucier, P. K. (2022). AI and blackness: Toward moving beyond bias and representation. IEEE Transactions on Technology and Society, 3(1), 31–40. [https://doi.org/10.1109/TTS.2021.3125998]
- Demszky, D., Yang, D., Yeager, D. S., Bryan, C. J., Clapper, M., Chandhok, S., Eichstaedt, J. C., Hecht, C., Jamieson, J., Johnson, M., Jones, M., Krettek-Cobb, D., Lai, L., Jones Mitchell, N., Ong, D. C., Dweck, C. S., Gross, J. J., & Pennebaker, J. W. (2023). Using large language models in psychology. Nature Reviews Psychology, 2(11), 688–701. [https://doi.org/10.1038/s44159-023-00241-5]
- Eichstaedt, J. C., Smith, R. J., Merchant, R. M., Ungar, L. H., Crutchley, P., Preoţiuc-Pietro, D., Asch, D. A., & Schwartz, H. A. (2018). Facebook language predicts depression in medical records. Proceedings of the National Academy of Sciences of the United States of America, 115(44), 11203–11208. [https://doi.org/10.1073/pnas.1802331115]
- Giorgi, S., Markowitz, D. M., Soni, N., Varadarajan, V., Mangalik, S., & Schwartz, H. A. (2023). “I slept like a baby”: Using human traits to characterize deceptive ChatGPT and human text. Proceedings of the IACT’23 Workshop (pp. 23–37). SIGIR.
- Gonzales, A. L., Hancock, J. T., & Pennebaker, J. W. (2010). Language Style Matching as a predictor of social dynamics in small groups. Communication Research, 37(1), 3–19. [https://doi.org/10.1177/0093650209351468]
- González-Bailón, S., & Paltoglou, G. (2015). Signals of public opinion in online communication: A comparison of methods and data sources. The ANNALS of the American Academy of Political and Social Science, 659(1), 95–107. [https://doi.org/10.1177/0002716215569192]
- Ho, A., Hancock, J., & Miner, A. S. (2018). Psychological, relational, and emotional effects of self-disclosure after conversations with a chatbot. Journal of Communication, 68(4), 712–733. [https://doi.org/10.1093/joc/jqy026]
- Ireland, M. E., & Henderson, M. D. (2014). Language style matching, engagement, and impasse in negotiations. Negotiation and Conflict Management Research, 7(1), 1–16. [https://doi.org/10.1111/ncmr.12025]
- Ireland, M. E., & Mehl, M. (2014). Natural language use as a marker of personality. In T. M. Holtgraves (Ed.), The Oxford handbook of language and social psychology (pp. 201–218). Oxford University Press.
- Ireland, M. E., & Nalabandian, T. (2022). Language coordination in writing and conversation. In M. Dehghani & R. L. Boyd (Eds.), Handbook of language analysis in psychology (pp. 65–101). The Guilford Press.
- Ireland, M. E., & Pennebaker, J. W. (2010). Language style matching in writing: Synchrony in essays, correspondence, and poetry. Journal of Personality and Social Psychology, 99(3), 549–571. [https://doi.org/10.1037/a0020386]
- Ireland, M. E., Slatcher, R. B., Eastwick, P. W., Scissors, L. E., Finkel, E. J., & Pennebaker, J. W. (2011). Language style matching predicts relationship initiation and stability. Psychological Science, 22(1), 39–44. [https://doi.org/10.1177/0956797610392928]
- Kacewicz, E., Pennebaker, J. W., Davis, M., Jeon, M., & Graesser, A. C. (2014). Pronoun use reflects standings in social hierarchies. Journal of Language and Social Psychology, 33(2), 125–143. [https://doi.org/10.1177/0261927x13502654]
- Kalman, Y. M., Ravid, G., Raban, D. R., & Rafaeli, S. (2006). Pauses and response latencies: A chronemic analysis of asynchronous CMC. Journal of Computer-Mediated Communication, 12(1), 1–23. [https://doi.org/10.1111/j.1083-6101.2006.00312.x]
- Kosinski, M. (2023). Theory of mind may have spontaneously emerged in large language models. arXiv. [https://doi.org/10.48550/arXiv.2302.02083]
- Levelt, W. J. M. (1989). Speaking: From intention to articulation. MIT Press.
- Markowitz, D. M. (2018). Academy Awards speeches reflect social status, cinematic roles, and winning expectations. Journal of Language and Social Psychology, 37(3), 376–387. [https://doi.org/10.1177/0261927x17751012]
- Markowitz, D. M. (2022). Psychological trauma and emotional upheaval as revealed in academic writing: The case of COVID-19. Cognition and Emotion, 36(1), 9–22. [https://doi.org/10.1080/02699931.2021.2022602]
- Markowitz, D. M. (2024a). Can generative AI infer thinking style from language? Evaluating the utility of AI as a psychological text analysis tool. Behavior Research Methods, 56(4), 3548–3559. [https://doi.org/10.3758/s13428-024-02344-0]
- Markowitz, D. M. (2024b). From complexity to clarity: How AI enhances perceptions of scientists and the public’s understanding of science. PNAS Nexus, 3(9), pgae387. [https://doi.org/10.1093/pnasnexus/pgae387]
- Markowitz, D. M., Hancock, J. T., & Bailenson, J. N. (2024). Linguistic markers of inherently false AI communication and intentionally false human communication: Evidence from hotel reviews. Journal of Language and Social Psychology, 43(1), 63–82. [https://doi.org/10.1177/0261927X231200201]
- McLaughlin, M. L., & Cody, M. J. (1982). Awkward silences: Behavioral antecedents and consequences of the conversational lapse. Human Communication Research, 8(4), 299–316. [https://doi.org/10.1111/j.1468-2958.1982.tb00669.x]
- McNamara, D. S., Graesser, A. C., McCarthy, P. M., & Cai, Z. (2014). Automated evaluation of text and discourse with Coh-Metrix. Cambridge University Press.
- Messeri, L., & Crockett, M. J. (2024). Artificial intelligence and illusions of understanding in scientific research. Nature, 627(8002), 49–58. [https://doi.org/10.1038/s41586-024-07146-0]
- Moffitt, K. C., Giboney, J. S., Ehrhardt, E., Burgoon, J. K., Nunamaker, J. F., Jensen, M., & Meservy, T. (2012). Structured Programming for Linguistic Cue Extraction (SPLICE). Proceedings of the HICSS-45 Rapid Screening Technologies, Deception Detection and Credibility Assessment Symposium (pp. 103–108). Computer Society Press.
- Peirce, C. S. (1865). On a new list of categories. Proceedings of the American Academy of Arts and Sciences, 7, 287–298. [https://doi.org/10.2307/20179567]
- Pennebaker, J. W. (2011). The secret life of pronouns: What our words say about us. Bloomsbury Press.
- Pennebaker, J. W., Boyd, R. L., Booth, R. J., Ashokkumar, A., & Francis, M. E. (2022). Linguistic inquiry and word count: LIWC-22 [Computer software]. Pennebaker Conglomerates. https://www.liwc.app
- Potts, C. (2004). The logic of conventional implicatures. Oxford University Press.
- Psathas, G. (1995). Conversation analysis. SAGE Publications, Inc. [https://doi.org/10.4135/9781412983792]
- Rathje, S., Mirea, D.-M., Sucholutsky, I., Marjieh, R., Robertson, C. E., & Van Bavel, J. J. (2024). GPT is an effective tool for multilingual psychological text analysis. Proceedings of the National Academy of Sciences, 121(34), e2308950121. [https://doi.org/10.1073/pnas.2308950121]
- Rayson, P. (2008). From key words to key semantic domains. International Journal of Corpus Linguistics, 13(4), 519–549.
- Rayson, P. E. (2003). Matrix: A statistical method and software tool for linguistic analysis through corpus comparison [Unpublished doctoral dissertation]. Lancaster University.
- Ribino, P. (2023). The role of politeness in human-machine interactions: A systematic literature review and future perspectives. Artificial Intelligence Review, 56(1), 445–482. [https://doi.org/10.1007/s10462-023-10540-1]
- Rice, R. E., Evans, S. K., Pearce, K. E., Sivunen, A., Vitak, J., & Treem, J. W. (2017). Organizational media affordances: Operationalization and associations with media use. Journal of Communication, 67(1), 106–130. [https://doi.org/10.1111/jcom.12273]
- Ronzhyn, A., Cardenal, A. S., & Batlle Rubio, A. (2023). Defining affordances in social media research: A literature review. New Media & Society, 25(11), 3165–3188. [https://doi.org/10.1177/14614448221135187]
- Rude, S., Gortner, E.-M., & Pennebaker, J. (2004). Language use of depressed and depression-vulnerable college students. Cognition and Emotion, 18(8), 1121–1133. [https://doi.org/10.1080/02699930441000030]
- Salles, A., Evers, K., & Farisco, M. (2020). Anthropomorphism in AI. AJOB Neuroscience, 11(2), 88–95. [https://doi.org/10.1080/21507740.2020.1740350]
- Schegloff, E. A. (1982). Discourse as an interactional achievement: Some uses of ‘uh huh’ and other things that come between sentences. In D. Tannen (Ed.), Analyzing discourse: Text and talk (pp. 71–93). Georgetown University Press.
- Schegloff, E. A. (2007). Sequence organization in interaction: A primer in conversation analysis. Cambridge University Press. [https://doi.org/10.1017/CBO9780511791208]
- Schwartz, H. A., Eichstaedt, J. C., Kern, M. L., Dziurzynski, L., Ramones, S. M., Agrawal, M., Shah, A., Kosinski, M., Stillwell, D., Seligman, M. E. P., & Ungar, L. H. (2013). Personality, gender, and age in the language of social media: The open-vocabulary approach. PLoS ONE, 8(9), e73791. [https://doi.org/10.1371/journal.pone.0073791]
- Searle, J. R. (1969). Speech acts: An essay in the philosophy of language. Cambridge University Press.
- Searle, J. R. (2002). Consciousness and language. Cambridge University Press.
- Stalnaker, R. C. (1978). Assertion. In P. Cole (Ed.), Syntax and semantics: Vol. 9. Pragmatics (pp. 315–332). Brill. [https://doi.org/10.1163/9789004368873_013]
- Stone, P. J., & Dunphy, D. C. (1966). Trends and issues in content analysis research. In P. J. Stone, D. C. Dunphy, M. S. Smith, & D. M. Ogilvie (Eds.), The general inquirer: A computer approach to content analysis (pp. 20–66). The MIT Press.
- Stone, P. J., Bales, R. F., Namenwirth, J. Z., & Ogilvie, D. M. (1962). The general inquirer: A computer system for content analysis and retrieval based on the sentence as a unit of information. Behavioral Science, 7(4), 484–498. [https://doi.org/10.1002/bs.3830070412]
- Tausczik, Y. R., & Pennebaker, J. W. (2010). The psychological meaning of words: LIWC and computerized text analysis methods. Journal of Language and Social Psychology, 29(1), 24–54. [https://doi.org/10.1177/0261927X09351676]
- Tey, K. S., Mazar, A., Tomaino, G., Duckworth, A. L., & Ungar, L. H. (2024). People judge others more harshly after talking to bots. PNAS Nexus, 3(9), pgae397. [https://doi.org/10.1093/pnasnexus/pgae397]
- Troshani, I., Rao Hill, S., Sherman, C., & Arthur, D. (2021). Do we trust in AI? Role of anthropomorphism and intelligence. Journal of Computer Information Systems, 61(5), 481–491. [https://doi.org/10.1080/08874417.2020.1788473]
- Vine, V., Boyd, R. L., & Pennebaker, J. W. (2020). Natural emotion vocabularies as windows on distress and well-being. Nature Communications, 11(1), 4525. [https://doi.org/10.1038/s41467-020-18349-0]
- Yeomans, M., Boland, F. K., Collins, H. K., Abi-Esber, N., & Brooks, A. W. (2023). A practical guide to conversation research: How to study what people say to each other. Advances in Methods and Practices in Psychological Science, 6(4). [https://doi.org/10.1177/25152459231183919]
- Yeomans, M., Schweitzer, M. E., & Brooks, A. W. (2022). The Conversational Circumplex: Identifying, prioritizing, and pursuing informational and relational motives in conversation. Current Opinion in Psychology, 44, 293–302. [https://doi.org/10.1016/j.copsyc.2021.10.001]
- Yuan, H., Che, Z., Li, S., Zhang, Y., Hu, X., & Luo, S. (2024). The high dimensional psychological profile and cultural bias of ChatGPT. arXiv. [https://doi.org/10.48550/arXiv.2405.03387]
- Zhou, Y., Muresanu, A. I., Han, Z., Paster, K., Pitis, S., Chan, H., & Ba, J. (2023). Large language models are human-level prompt engineers. arXiv. [https://doi.org/10.48550/arXiv.2211.01910]