Asian Communication Research
[ Original Research ]
Asian Communication Research - Vol. 23, No. 1, pp.69-93
ISSN: 1738-2084 (Print) 2765-3390 (Online)
Print publication date 30 Apr 2026
Received 29 Jan 2026 Revised 24 Mar 2026 Accepted 08 Apr 2026
DOI: https://doi.org/10.20879/acr.2026.23.006

YouTube Recommendation Algorithm Perception, AI Regulation Support, and Digital Competence: A Three-Wave Panel Study of Bidirectional Effects and Moderation Effects

Hongjin Shim1 ; Eun-Yi Kim2
1Department of Artificial Intelligence Policy Research, Korea Information Society Development Institute
2Department of Media and Communication, Incheon National University

Correspondence to: Eun-Yi Kim, Department of Media and Communication, Incheon National University, 119 Academy-ro, Yeonsu-gu, Incheon, 22012, Republic of Korea. Email: eykim@inu.ac.kr

Copyright ⓒ 2026 by the Korean Society for Journalism and Communication Studies

Abstract

While existing research has focused on how algorithm experiences drive regulation demand, we know little about whether regulation support can reshape algorithm perceptions in return. Using three-wave panel data (2022-2024, N = 2,269) from the Intelligent Information Society User Panel Survey (KCC & KISDI, 2022-2024), we examined bidirectional relationships between YouTube algorithm perception and regulation support. We found temporal asymmetries between transparency and accountability regulations. For forward effects—where negative experiences trigger regulation demand—transparency operated from the outset and strengthened over time, while accountability emerged only later. Reverse effects—where regulation expectations reshape evaluations—traced opposite trajectories: transparency gradually improved perceptions, whereas accountability faded. We also found dimension-specific digital competence patterns. Users low in rights protection competence initially demanded stronger regulation but converged with high-competence users, while reverse effects diverged, benefiting only high-competence users. Critically, users low in critical understanding competence who supported accountability regulations exhibited decreased harm perception. Our findings advance algorithmic governance research and inform competence-tailored regulation communication strategies.

Keywords:

algorithm perception, regulation support, digital competence, bidirectional causality, YouTube, transparency, accountability

Recommendation algorithms now shape how millions of people access information, consume content, and form opinions. YouTube exemplifies this transformation: its algorithms drive roughly 70% of watch time on the platform (Macready & Stanton, 2025). While these systems enhance user experience through customization, they have also raised serious concerns about filter bubbles, algorithmic bias, and misinformation exposure—concerns that have intensified calls for government intervention through algorithm regulation (Cheong, 2024; Shim & Park, 2024).

Governments worldwide have responded with comprehensive regulations. The U.S. reintroduced its Algorithmic Accountability Act (2023), the EU adopted its AI Act (2024), Japan enacted its AI Promotion Act (2025), and South Korea’s Framework Act on AI takes effect in January 2026 after intensive discussions from 2022 to 2024. This momentum reflects widespread adoption—68% of South Koreans use AI-based services as of 2024 (KISDI, 2025).

We focus on YouTube because it sits at the center of this regulatory environment. With 48 million monthly active users in South Korea (Shin, 2025), YouTube is both the country’s most popular content platform and a primary target of transparency and accountability regulations (Reynolds & Hallinan, 2024), making it ideal for examining user perceptions and regulation demands.

Existing research has primarily examined one direction: how negative algorithmic experiences drive regulatory demands—the forward effect (Eslami et al., 2019; Lin, 2025; Wu et al., 2024). However, this bottom-up focus overlooks the possibility that regulatory support itself may reshape algorithm perceptions. Cognitive psychology shows that expectations significantly alter system evaluations (Clark, 2013; Gregory, 1974), and recent studies demonstrate bidirectional attitude-behavior relationships (Taylor & Choi, 2024). Yet we know little about such reverse effects in algorithmic governance.

Moreover, research often treats algorithmic regulations as monolithic, but transparency and accountability operate in distinct ways (Ferrari et al., 2025; Mensah, 2023). Transparency enables direct observation through visible changes like interface updates. Accountability operates through indirect mechanisms (complaint systems, audits, penalties) that remain largely invisible to users (Ferrari et al., 2025; Goodman & Trehu, 2022). These observability differences raise a noteworthy question: do these regulatory types generate different forward and reverse effects?

Finally, even if forward and reverse effects differ by regulatory type, their impact may vary across individuals. Understanding who benefits from regulatory dynamics requires examining digital competence. While studies emphasize its moderating role (Gagrčin et al., 2024), treating it as a single, undifferentiated construct is an oversimplification that obscures crucial distinctions. Algorithmic literacy encompasses multiple dimensions: critical evaluation skills, digital rights knowledge, and technical understanding (Dogruel et al., 2022; Oeldorf-Hirsch & Neubaum, 2025). These dimensions may differently moderate forward and reverse effects, with profound implications for education and policy.

This study addresses these gaps using three-wave panel data collected from South Korean internet users (2022-2024). We pursue three objectives. First, we explore whether algorithm perception and regulation support influence each other bidirectionally, examining forward effects (perception drives attitudes) and reverse effects (attitudes reshape perception). Second, we assess whether transparency and accountability regulations operate differently based on their observability. Third, we identify how digital competence dimensions—general competence, rights protection, and critical understanding—moderate these relationships. These analyses reveal how regulation emerges from experience and how discourse reshapes that experience.

Bidirectional Causality between Algorithm Perception and Regulation Support

The emergence of regulations is based on positive or negative perceptions of the regulation subject. Regulations can be enacted to prevent adverse effects or malfunctions, such as those in the media, even without user demand or social discourse. Typically, however, regulations undergo a process in which regulatory discourse, such as public opinion, is formed based on social demand arising from negative user experiences with the regulated subject. Only then are they regulated (Lunt & Livingstone, 2012). For example, if users directly experience issues with YouTube’s recommendation algorithm while using the platform and negative perceptions gradually accumulate, demands for the algorithm’s regulation could rise. Thus, regulation can be enacted through a “bottom-up process” that begins with user awareness of problems, progresses through social discourse, and reaches a point where regulation is deemed necessary.

According to prior studies, users who experienced the opacity of algorithms provided by platforms perceived the algorithm negatively (Eslami et al., 2019; Gagrčin et al., 2024). Users’ trust in recommendation systems varies significantly depending on their perception of algorithms on social media and other platforms (Wu et al., 2024). Negative perceptions of algorithm manipulation have also triggered resistance behaviors toward algorithms (Hu et al., 2024; Lin, 2025). This discussion further suggests that a forward effect operates between users’ media perceptions and regulations. A forward effect is a general, logical causal relationship in which attitudes form in the direction of one’s experience. For example, people who experience traffic accidents tend to support safety regulations. It should be noted that the forward effect prediction rests on an assumption that warrants explicit acknowledgement: that users’ algorithm perceptions do not systematically improve between waves. This assumption is plausible given that algorithmic harm concerns dominated policy and public discourse throughout this period (Cheong, 2024; Shim & Park, 2024), but it represents a boundary condition on the predicted direction.

The forward effect explains the relationship between YouTube’s recommendation algorithm and related regulations. As described, the forward effect clearly illustrates how negative experiences with the algorithm lead to negative attitudes and regulation support, resulting in calls for regulation. However, theoretical gaps remain within the forward effect. The forward effect cannot rule out the possibility that regulatory discourse, such as policy debates, might instead reshape users’ perceptions of the recommendation algorithm. In other words, the forward effect fails to account for a reverse effect, whereby users’ expectations regarding newly introduced institutions could lead them to perceive the platform positively.

This theoretical gap becomes more problematic in light of recent policy trends. From 2022 to 2025, the regulation of algorithms became a central focus of major policy discussions worldwide (OECD, 2024). If policymakers translate public concerns into policy immediately, without considering how regulation support toward recommendation algorithms could alter users’ perceptions of these platforms, they are essentially relying on a one-way feedback loop, the forward effect. Conversely, users’ changing attitudes toward algorithm regulation can influence their perception of algorithms. This could create an alternative feedback route: a top-down process in which regulation expectations lead to more favorable recognition of recommendation algorithms through a reverse effect. For example, GPAI, a global AI cooperation body, has elevated algorithm regulation from a technical issue to a matter of global consensus by emphasizing risk-based regulation, as seen in the EU AI Act; this illustrates the reverse effect. The reverse effect prediction assumes that regulatory expectations are at least partially met. Where expectations go unmet, null or negative perceptual updating remains plausible. This condition is more likely to be satisfied for transparency regulation support, whose effects are user-facing, than for accountability regulation support, whose institutional mechanisms remain invisible to users.

How Algorithm Perception Drives Regulation Support

We have seen that users may have negative experiences with media and that these experiences can lead to demands for related regulations. So, when users perceive YouTube’s recommendation algorithm negatively, what regulatory responses do they demand or support? Regulations concerning AI systems, such as recommendation algorithms, primarily address algorithmic transparency and accountability principles (Mensah, 2023). In this study, we use the term regulation support to refer to users’ normative beliefs about whether platform providers should implement transparency or accountability actions, with transparency regulation support and accountability regulation support as the two sub-constructs examined.

These two regulation principles are functionally distinct. Transparency regulation support refers to obligations on platform providers to disclose algorithmic decision-making processes, data usage practices, and recommendation criteria (Larsson & Heintz, 2020). By directly addressing the information asymmetry that underlies negative algorithm perception, transparency enables users to understand why specific content is recommended and how their personal data informs those decisions (Zarouali et al., 2021). Accountability regulation support, by contrast, entails governance principles that require platforms to be answerable to external bodies through audits, complaint systems, and periodic reporting (Reynolds & Hallinan, 2024). Unlike transparency, which enables direct observation through visible changes, accountability operates indirectly through institutional intermediaries whose workings remain invisible at the user interface level (Ferrari et al., 2025; Goodman & Trehu, 2022). In this sense, accountability regulation support’s effect is real but experienced indirectly, if at all, by ordinary users. More importantly, this functional asymmetry directly predicts earlier reverse effects for transparency regulation support but weaker, delayed effects for accountability regulation support.

This distinction suggests that negative perceptions of YouTube’s algorithm may prompt demands for both transparency and accountability, which operate on different principles; even identical complaints may thus lead to different policy demands. Although user demands for regulation may differ depending on the type of regulation, these demands are generally triggered by users’ awareness of the harm caused by algorithms, and such demands are indeed emerging. Algorithmic harms—including filter bubbles, misinformation, and privacy violations—create conditions where negative perceptions translate into regulation support. This would create a forward effect in the context of YouTube’s recommendation algorithm. Thus, this study poses the following research question and hypotheses.

  • RQ1. Does negative YouTube algorithm perception at Time t predict increased support for (a) transparency regulation and (b) accountability regulation at Time t+1?
  • H1a. Negative YouTube algorithm perception at Time t will positively predict transparency regulation support at Time t+1.
  • H1b. Negative YouTube algorithm perception at Time t will positively predict accountability regulation support at Time t+1.

How Regulation Support Shapes Algorithm Perception

This study focuses on the reverse effect that follows the forward effect. The reverse effect is based on a top-down process and is grounded in cognitive psychology. As Gregory (1974) explains, human cognition is not merely a passive process of receiving sensory input but an active process of construction. When individuals hold strong expectations about a system, those expectations significantly influence their evaluation of it (Clark, 2013; Raftopoulos, 2001). Similarly, regulation support such as transparency or accountability can make a platform’s recommendation algorithm appear more positive when expectations are met. This occurs through users’ active interpretive processes rather than mere sensory reception.

Do all types of regulation reconstruct perceptions of recommendation algorithms in the same way, then? More specifically, do user expectations regarding transparency and accountability regulations lead to the same evaluations of algorithm perception? As discussed above, Ferrari et al. (2025) highlighted a potential asymmetry between the effects of transparency and accountability, emphasizing the inherent observability of these two regulation types. When users support transparency regulations and increase their scrutiny of platforms, they can perceive the tangible effects of these regulations through interface changes, enhanced privacy protections, and public statements. This tangible experience enables positive perceptions of the algorithm to be updated based on the evidence users observe and their expectations of transparency.

In comparison, accountability regulations make it difficult for users to experience the substance of the regulations first-hand. Those supporting accountability regulations cannot observe whether complaint handling systems function properly, whether algorithmic audits are initiated according to standards, or whether platforms face penalties such as fines for violating user protection regulations. Consequently, unlike with transparency, users who support accountability may fail to update their positive perception of the algorithm if there is no observable evidence. This is because their expectations of accountability and the available evidence do not align, even if substantial regulatory progress has been made. In line with this discussion, we pose the following research question and hypotheses to empirically test the reverse effect.

  • RQ2. Does support for (a) transparency regulation and (b) accountability regulation at Time t predict improved algorithm perceptions at Time t+1? Do effects differ between transparency and accountability regulation support?
  • H2a. Transparency regulation support at Time t will positively predict YouTube algorithm perception at Time t+1.
  • H2b. Accountability regulation support at Time t will show weaker effects on YouTube algorithm perception at Time t+1 compared to transparency regulation support.

Digital Competence as a Multidimensional Construct and the Role of Moderation

As discussed earlier, research on digital competence requires breaking down users’ technical abilities into specific dimensions rather than consolidating them into a single, unified concept. The European Parliament defines digital competence as ‘the ability to use information society technologies confidently and critically for work, leisure and communication’ (European Parliament & Council of the European Union, 2006, p. 16). Given the diverse application areas of digital competence, measures tailored to these various domains are also required. Hwang et al. (2022) refined digital competence within the Korean context, delineating seven dimensions: basic technical skills, everyday use, critical understanding, sharing and production, social participation, rights protection, and security competence.

This study positions digital competence as a moderator rather than an antecedent of algorithm perception, for two reasons. First, digital competence is a relatively stable individual characteristic that exists prior to and shapes how users interpret algorithmic experience; it is not caused by those experiences. Second, the main question of this study is not whether digital competence affects regulation support, but for whom and under what conditions the relationship between algorithm perception and regulation support operates. This is a boundary condition question for which moderation is the theoretically appropriate specification (Valkenburg et al., 2016). Each dimension of digital competence is linked to a specific moderating principle—coping resource availability (rights protection), sensitivity to algorithmic harm cues (general competence), and capacity to interpret platform-level evidence (critical understanding). These principles predict not whether but how strongly, and in which direction, the perception-attitude relationship manifests across competence levels.

Moderating Effect of General Digital Competence

General digital competence encompasses the everyday use and basic understanding of technologies such as smartphones and the Internet, rather than specialized expertise in a particular field (Hwang et al., 2022). This general digital competence can help people understand and respond to issues with recommendation algorithms and engage with regulatory discourse. For forward effects, high digital competence enables users to recognize subtle algorithmic issues that low digital competence users might miss (Sundar & Marathe, 2010). Technical experience can also trigger negative machine heuristics when algorithms malfunction (Molina & Sundar, 2024), reinforcing the pathway from negative perceptions to regulation demands.

Correspondingly, the relationship between regulation and perception restructuring may manifest divergent patterns depending on the digital competence gap, not only in the forward direction but also in the reverse direction. Users with high general digital competence can alter their perception of regulation, either positively or negatively, based on their ability to monitor recommendation algorithms. Prior studies (Jung & Park, 2024) have shown that digital competence is associated with positive attitudes towards new technologies. This suggests that competent users can engage with regulation discussions using an interpretive framework that considers how regulation could serve to improve recommendation algorithms. When users with high digital competence back transparency or accountability regulations, they might link these regulations with improved recommendation algorithms and, in turn, reappraise the algorithms positively based on these expectations. The following research question and hypotheses were established from this discussion.

  • RQ3a. Does general digital competence moderate (a) forward effects (algorithm perception → regulation support) and (b) reverse effects (regulation support → algorithm perception)?
  • H3a-1. General digital competence will moderate forward effects, such that negative perceptions of the algorithm will more strongly predict regulation support (transparency and accountability) among high-competence users than among low-competence users.
  • H3a-2. General digital competence will moderate reverse effects, such that regulation support will more strongly predict improved algorithm perceptions among high-competence users than among low-competence users.

Moderating Effect of Rights Protection Competence

The concept of rights protection competence is similar to that of external political efficacy, defined as the belief that the government will respond to citizens’ political demands (Gil de Zúñiga et al., 2017). An individual’s experience of expressing political demands through institutional channels like elections or petitions, and seeing outcomes reflected in reality, plays a key role in shaping external political efficacy.

In this study, rights protection competence is tied to users’ trust in recommendation algorithm regulation and their belief that their demands can prompt changes to these algorithms, much like external political efficacy. Rights protection competence encompasses knowledge of regulatory complaint systems (e.g., media review committees), the ability to report phishing, fraud, and copyright infringement, and an understanding of rights protection processes (Hwang et al., 2022; Shim & Park, 2024). Users with higher levels of this competence can, through regulatory discussion, hold platforms to account for algorithmic issues and their consequences more effectively than users with lower competence.

The capacity for rights protection is expected to have different moderating effects on forward and reverse effects. For forward effects, when users have negative algorithmic experiences, their perceived vulnerability to recommendation algorithms may motivate regulatory demands, independent of institutional knowledge. Protection Motivation Theory suggests that coping appraisal (response efficacy and self-efficacy) predicts protective behavior more strongly than threat appraisal (Floyd et al., 2000; Tsai et al., 2016). Users with limited rights protection competence lack alternative coping strategies beyond systemic regulatory demands. When coping resources are scarce, protection motivation intensifies (Rippetoe & Rogers, 1987). In contrast, users with high rights protection competence can access alternative coping strategies, which could reduce their need for systemic regulatory change. Consequently, the psychological pathway from negative perceptions of recommendation algorithms to support for regulation may be weakened. Importantly, this coping-based mechanism operates regardless of regulation type. Unlike critical understanding competence, which differentially amplifies forward effects depending on the observability of transparency versus accountability violations, rights protection competence shapes the intensity of regulatory demand through coping resource availability rather than through the interpretive capacity required to diagnose specific regulatory failures. Users with low rights protection competence lack individualized coping strategies across both regulatory domains, making systemic regulation support their primary available response to perceived algorithmic harm in both transparency and accountability contexts. Accordingly, the amplifying effect of low rights protection competence on forward effects is expected to be consistent across regulation types.

In the reverse effect, rights protection competence plays a key moderating role in the link between problem recognition and regulation support. Studies on external political efficacy suggest that individuals’ political participation and evaluations of institutions improve as their trust in institutions and belief in their responsiveness increase (Balch, 1974; Pollock, 1983). In a similar vein, users with high rights protection competence expect their regulatory demands to compel platform changes. This expectation could generate an expectancy effect, improving perceptions of current algorithms even before actual institutional improvements occur. Based on the above discussion, this study established the following research question and hypotheses.

  • RQ3b. Does rights protection competence moderate (a) forward effects and (b) reverse effects differently?
  • H3b-1. Forward effects will be stronger among low rights protection competence users, who lack alternative coping strategies beyond demanding systemic regulation change.
  • H3b-2. Rights protection competence will positively moderate reverse effects, such that regulation support (transparency and accountability) predicts improved algorithm perceptions among high rights protection competence users but not among low rights protection competence users.

Moderating Effect of Critical Understanding Competence

Critical understanding competence is the ability to evaluate information, identify reliable sources, and spot deliberate manipulation (Hwang et al., 2022). It involves assessing information for reliability and discerning misinformation from truthful content (Koh, 2024). Recent algorithm literacy studies suggest that critical understanding enables users to mitigate risks and maximize opportunities when using media. This involves cognitive, emotional, and behavioral skills (Gagrčin et al., 2024).

Critical understanding competence can moderate forward and reverse effects, like rights protection competence, but the moderation patterns may differ because of differences in observability. Regarding the forward effect, the moderating effect of critical understanding competence should differ across transparency and accountability regulation. The asymmetric moderating pattern predicted in H3c-1 follows from differences in the observability of transparency versus accountability violations. Transparency violations, such as privacy issues, manifest as concrete, interface-level anomalies that users can encounter and recognize without specialized interpretive capacity (Eslami et al., 2019; Lin, 2025). Because even users with relatively low critical understanding competence can identify obvious transparency failures, competence offers limited additional leverage on the relationship between negative algorithm perception and transparency regulation support. The forward effect for transparency is therefore expected to operate uniformly across competence levels.

Accountability violations present a fundamentally different interpretive challenge. Accountability principles operate through institutional channels that are often invisible to ordinary users, making failures difficult to detect (Cheong, 2024; Goodman & Trehu, 2022). Recognizing such failures requires metacognitive awareness: the ability to perceive not only the occurrence of harm but also the breakdown of institutional oversight (Dogruel et al., 2022). Consistent with Molina and Sundar’s (2024) finding that sophisticated users are more likely to attribute algorithmic harm to platform oversight failure rather than mere technical error, we argue that critical understanding competence meaningfully amplifies the forward effect specifically for accountability regulation support. Users with higher critical understanding competence are better equipped to make such attributions and consequently more likely to translate negative algorithm perception into accountability regulation support.

For the reverse effect, users must detect changes on the platform to form attitudes toward transparency regulations (Reynolds & Hallinan, 2024). These include explanations for recommendations, clear privacy disclosures, and content labelling. Users with high critical understanding competence can monitor these changes and verify improvements. Previous studies show that users with high algorithm literacy discover differences in, or the absence of, content in their news feeds through interaction with others (Rader & Gray, 2015; Velkova & Kaun, 2021), deepening their critical reflection through participation in communities of practice (Cotter, 2024; Morris, 2023).

In the context of accountability, the reverse effect may produce even greater competence-based disparity because of constraints on observability. Users have limited means of directly checking whether complaint handling systems are functioning or whether audits are being conducted. However, those with strong critical understanding can gather clues about accountability improvements from sources such as corporate blogs and traditional media reports (Bishop, 2020).

On the other hand, users with low critical understanding competence may miss these indirect signals and lack the cognitive pathways to connect possible improvements in accountability with their attitudes toward recommendation algorithms. Based on this discussion, this study established the following research question and hypotheses.

  • RQ3c. Does critical understanding competence moderate (a) forward effects and (b) reverse effects differently, and do these patterns vary between regulation types?
  • H3c-1. Critical understanding competence will not significantly moderate transparency-related forward effects but will positively moderate accountability-related forward effects.
  • H3c-2. Critical understanding competence will positively moderate reverse effects (both transparency and accountability), such that regulation support predicts improved algorithm perceptions among high critical understanding users but not among low critical understanding users.

METHOD

Participants and Procedure

This study used three-wave national panel data from the Intelligent Information Society User Panel Survey (Korea Communications Commission (KCC) and Korea Information Society Development Institute (KISDI), 2022 to 2024).1 The survey employed stratified multi-stage sampling of Korean internet and smartphone users aged over 20. The baseline survey was conducted in August 2022 (Wave 1, N = 5,378), with follow-up surveys in September 2023 (Wave 2, N = 4,581) and August 2024 (Wave 3, N = 4,437). Analyses retained participants who completed all waves with complete responses (N = 2,269, retention rate = 42.2%).

The sample was 44.5% male and 55.5% female, with a median age category of 40-49 years. Education varied: high school or below (37.4%), college degree (61.7%), and graduate degree (0.9%). Monthly income also varied, from none (2.0%) to 9 million KRW or more (15.6%), with a median of 3-4 million KRW. Sample characteristics mirror national statistics. The broad representativeness of the panel is supported by near-universal smartphone penetration among Korean adults. According to a nationally representative survey (Korea Gallup, 2022), smartphone usage reached 97% overall (male: 99%, female: 95%) and 100% among those aged 18–59, suggesting that access to internet-based surveys is not systematically stratified by gender or age in this population.

Attrition Analysis

The analytic sample (N = 2,269; retention rate = 42.2%) closely mirrors the original Wave 1 sample (N = 5,378). Gender distributions were similar (analytic: 44.5% male; original: 44.1%; χ²(1) = .18, p = .670) and education distributions were comparable. Two small but significant differences emerged. The analytic sample slightly underrepresents the youngest and oldest age groups (χ²(5) = 17.98, p = .003) and reports marginally lower income (analytic: M = 5.50; original: M = 5.64; t = -2.06, p = .039). Given the small magnitude of these differences and the inclusion of demographic controls in all analyses, attrition is unlikely to substantially bias the findings.
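For readers who wish to reproduce checks of this kind, they amount to standard chi-square and t-tests. The following is a minimal sketch, not the authors' analysis script; the DataFrame `wave1` and its columns (`retained`, `gender`, `income`) are hypothetical names.

```python
# Illustrative attrition checks on hypothetical Wave 1 data: a chi-square
# test on the gender distribution and a t-test on income, comparing
# retained (analytic) respondents with non-retained respondents.
import pandas as pd
from scipy import stats

def attrition_checks(wave1: pd.DataFrame) -> None:
    retained = wave1["retained"].astype(bool)

    # Chi-square test of independence for gender by retention status.
    gender_table = pd.crosstab(retained, wave1["gender"])
    chi2, p, dof, _ = stats.chi2_contingency(gender_table)
    print(f"gender: chi2({dof}) = {chi2:.2f}, p = {p:.3f}")

    # Independent-samples t-test on monthly income (1-11 ordinal scale).
    t, p = stats.ttest_ind(wave1.loc[retained, "income"],
                           wave1.loc[~retained, "income"])
    print(f"income: t = {t:.2f}, p = {p:.3f}")
```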

Measures

All measures used 5-point Likert scales (1 = strongly disagree to 5 = strongly agree for attitude items; 1 = very poor to 5 = very good for competence items) unless otherwise noted.

YouTube Algorithm Perception

The respondents’ perceptions of YouTube’s recommendation algorithm were measured using 10 items (Cronbach’s α = .78-.82 across waves). The scale comprised six positively worded items and four negatively worded items (reverse-coded).

Positive items. The six positive items tapped satisfaction and perceived usefulness. Sample items included: “It is evident that YouTube recommendations are meticulously customized to align with the individual’s preferences and the intended purpose of video consumption.”, “It is asserted that YouTube recommendations are of use to the users.”, and “It is evident that YouTube offers a plethora of information that is multifaceted and varied, as opposed to presenting a uniform and monolithic perspective.”

Negative items. The four negative items (reverse-coded) tapped perceived risks and concerns. Sample items included: “Frequent use of YouTube recommendations will lead to biased values.”, “I worry that my personal information may be leaked when using YouTube.”, and “I worry that I might unknowingly receive illegal content.”

Responses to all 10 items were averaged to create a composite score, with higher values indicating more positive perceptions of YouTube’s algorithm. The scale demonstrated adequate internal consistency across all three waves (2022: Cronbach’s α = .78; 2023: Cronbach’s α = .81; 2024: Cronbach’s α = .82). Each item was rated on a 5-point Likert scale (1 = strongly disagree, 5 = strongly agree).
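As an illustration of this scoring procedure, the sketch below reverse-codes the four negative items on a 5-point scale (6 - x), averages all ten items, and computes Cronbach's α with the standard formula. Column names are hypothetical placeholders, not the survey's variable names.

```python
# Composite scoring for the 10-item YouTube algorithm perception scale.
# Column names (pos1..pos6, neg1..neg4) are hypothetical placeholders.
import pandas as pd

POS = [f"pos{i}" for i in range(1, 7)]  # six positively worded items
NEG = [f"neg{i}" for i in range(1, 5)]  # four negatively worded items

def perception_items(df: pd.DataFrame) -> pd.DataFrame:
    # Reverse-code negative 5-point items: 1 <-> 5, 2 <-> 4, i.e., 6 - x.
    return df[POS].join(6 - df[NEG])

def perception_score(df: pd.DataFrame) -> pd.Series:
    # Higher composite values indicate more positive algorithm perception.
    return perception_items(df).mean(axis=1)

def cronbach_alpha(items: pd.DataFrame) -> float:
    # alpha = k / (k - 1) * (1 - sum of item variances / variance of total)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)
```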

Meanwhile, the survey items cover evaluative judgements (e.g., perceived usefulness) and harm appraisals (e.g., privacy concern), which may appear to conflate distinct psychological states. We treat these as constituents of a unified perceptual appraisal construct—a summary judgement of the algorithm formed through cumulative experiential processing—rather than as synonymous concepts. This integration is theoretically grounded in two lines of work. Knijnenburg et al. (2012) demonstrate that in recommender system contexts, subjective perceptions and evaluative experience form an integrated causal chain in which perceptions of system quality mediate the effects of objective features on user outcomes; in panel survey settings that capture standing judgements rather than in-session states, this perceptual-evaluative integration is the theoretically appropriate unit of analysis. Similarly, Shin (2020) proposes an algorithm experience model in which perception and experience function as a unified heuristic-cognitive appraisal process, wherein users’ accumulated interactions with an algorithm coalesce into an integrated cognitive-affective judgement. The empirical coherence of this appraisal construct is supported by our CFA results (CFI = .95; TLI = .93; RMSEA = .06; SRMR = .04), which confirm that positive and negative items load on a single factor rather than fragmenting into distinct experiential and evaluative components.
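A single-factor CFA of this kind can be checked with any SEM package; below is a sketch using the open-source semopy library, whose model syntax mirrors lavaan. Item names (y1-y10) are hypothetical, and reverse-coded items are assumed to be recoded before fitting.

```python
# Single-factor CFA: all ten perception items load on one latent factor.
# A sketch under assumed column names y1..y10, not the authors' script.
import pandas as pd
import semopy

DESC = "perception =~ " + " + ".join(f"y{i}" for i in range(1, 11))

def fit_single_factor_cfa(df: pd.DataFrame) -> pd.DataFrame:
    model = semopy.Model(DESC)
    model.fit(df)
    # calc_stats reports fit indices such as CFI, TLI, and RMSEA.
    return semopy.calc_stats(model)
```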

AI Regulation Support

Transparency regulation support. Participants’ support for transparency-focused AI regulations was measured using three items (Cronbach’s α = .88-.90 across waves). Sample items included: “AI service providers should disclose to users which content has been selected by AI algorithms.” and “AI service providers should inform users whether content creators are human or AI-generated.”

Responses were averaged to create a transparency regulation support score, with higher values indicating stronger support for transparency mandates. The scale showed internal consistency (2022: Cronbach’s α = .88; 2023: Cronbach’s α = .89; 2024: Cronbach’s α = .90). Every item was rated on a 5-point Likert scale (1 = strongly disagree, 5 = strongly agree).

Accountability regulation support. Support for accountability-focused AI regulations was assessed using five items (Cronbach’s α = .90-.92) concerning enforcement and liability. Sample items included: “AI service providers should provide users with options to control AI recommendation service.” and “AI service providers should offer clear remedies when users suffer damages from AI recommendations.”

Items were averaged to form an accountability regulation support score. The scale demonstrated reliability (2022: Cronbach’s α = .90; 2023: Cronbach’s α = .91; 2024: Cronbach’s α = .92) and high convergent validity with transparency attitudes (r = .68-.72), supporting their relatedness while maintaining empirical distinctiveness. All items were rated on a 5-point Likert scale (1 = strongly disagree, 5 = strongly agree).

Although the original survey items are framed as normative agreement statements, we operationalize them as regulation support – users’ normative beliefs that platform providers ought to implement transparency and accountability measures – consistent with algorithmic governance literature (Ferrari et al., 2025; Mensah, 2023; Reynolds & Hallinan, 2024).

To further assess subscale distinctiveness, we compared within- and cross-subscale item correlations: within-subscale average correlations exceeded cross-subscale correlations (transparency within: r = .33-.43; accountability within: r = .40-.51; cross: r = .36-.46), and inter-subscale correlations ranged from r = .66 to .74, indicating moderate empirical distinctiveness.

Digital Competence

General digital competence. General digital competence was measured using 12 items2 common to all three waves. These items assessed various types of digital competencies, including functional skills (e.g., making online payments and using kiosks), civic engagement (e.g., using e-petitions), rights protection (e.g., reporting cybercrimes), and critical evaluation (e.g., identifying credible sources). The items were adapted for the AI environment.

Respondents rated their competence in each skill on a scale from 1 (very poor) to 5 (very good). Responses were averaged to create a general digital competence score (2022: Cronbach’s α = .93; 2023: Cronbach’s α = .94; 2024: Cronbach’s α = .93).

Digital competence subdimensions. To investigate differentiated moderating principles, we decomposed digital competence into two theoretically distinct subdimensions. Respondents rated their competence in each skill on a scale from 1 (very poor) to 5 (very good).

Rights protection competence (5 items). This subdimension assesses procedural knowledge for protecting digital rights and civic engagement skills. It captures users’ institutional efficacy, that is, the belief that they can leverage regulatory and legal systems to address algorithmic harms. Sample items included “I know what the Korea Media and Communications Commission does.”, “I know how to report phishing scams and seek remedies.”, and “I know how to report copyright infringement (videos, texts, etc.) and seek remedies.” Responses were averaged to create a rights protection competence score (2022: Cronbach’s α = .89; 2023: Cronbach’s α = .88; 2024: Cronbach’s α = .86).

Critical understanding competence (2 items). This subdimension measures information evaluation skills and reflects the capacity to track, assess, and detect changes in algorithm outputs. The items were “I can distinguish credible information from unreliable information by comparing search results with other sources.” and “I can identify sponsored or advertising content.” The two items showed strong inter-item correlations (2022: Pearson r = .80; 2023: Pearson r = .79; 2024: Pearson r = .75; all ps < .001).

To assess statistical distinguishability, we conducted a supplementary EFA on all items used across the three subscales—the 12-item GDC scale (which embeds the five rights protection items) plus the two critical understanding items not included in the GDC, totaling 14 items. Across all three waves, three factors emerged cleanly, corresponding to rights protection, functional skills, and critical understanding. Internal consistency was strong for GDC (α = .90-.93) and rights protection (α = .87-.91), and moderate for critical understanding (inter-item r = .55-.63). Subscale inter-correlations supported empirical distinctiveness (GDC-CU: r = .67-.68; RP-CU: r = .62-.68). The high GDC-RP correlation (r = .86-.89) reflects their nested structure rather than a measurement flaw.
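A supplementary EFA of this kind can be sketched with the factor_analyzer package, as below. The three-factor solution matches the description above; the oblique (oblimin) rotation and the item selection are illustrative assumptions rather than the authors' documented choices.

```python
# Supplementary EFA on the 14 competence items (the 12 GDC items, which
# embed the five rights protection items, plus the two critical
# understanding items). Column selection is hypothetical.
import pandas as pd
from factor_analyzer import FactorAnalyzer

def run_efa(items: pd.DataFrame, n_factors: int = 3) -> pd.DataFrame:
    fa = FactorAnalyzer(n_factors=n_factors, rotation="oblimin")
    fa.fit(items)
    # Return the rotated loading matrix with factor labels F1..Fk.
    return pd.DataFrame(fa.loadings_, index=items.columns,
                        columns=[f"F{i + 1}" for i in range(n_factors)])
```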

Control Variables

All demographic variables were measured at baseline (Wave 1, 2022) and treated as time-invariant predictors in longitudinal analyses.

Gender. Gender was coded as 1 for male and 2 for female. For the regression analyses, gender was dummy-coded, with “female” as the reference category and a value of 1, and “male” as the other category and a value of 0.

Age. Age was measured in six categories: 17-19, 20–29, 30–39, 40–49, 50–59, and 60 or over. In regression models, age group was treated as a quasi-continuous variable (coded 1-6).

Education. Education was assessed on an eight-point scale ranging from elementary school (1) to a doctoral degree (8). For descriptive purposes, education was recoded into three categories: high school or below (37.4%), college degree (2-4 years; 61.7%), and graduate degree or higher (0.9%).

Monthly income. Income was measured on an 11-point scale ranging from no income (1) to 9 million KRW or more (11). Income was treated as continuous in the analyses, consistent with convention in socioeconomic research.

Analysis

Cross-lagged panel models examined reciprocal relationships between YouTube algorithm perception and regulation support. Separate models for Period 1 (2022-2023) and Period 2 (2023-2024) estimated: (a) forward effects (perception → attitudes; RQ1), (b) reverse effects (attitudes → perception; RQ2), and (c) autoregressive paths. Hierarchical regression models included demographic controls (age, gender, education, and income). Continuous variables were standardized (M = 0, SD = 1).
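To make the specification concrete, the sketch below estimates one forward path as an OLS regression: regulation support at T+1 on algorithm perception at T, with the autoregressive term and demographic controls. All variable names are hypothetical, and continuous variables are assumed to be pre-standardized.

```python
# One cross-lagged forward path, e.g., Period 1: support at Wave 2
# regressed on perception at Wave 1, the Wave 1 autoregressive term,
# and demographic controls. Variable names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

def forward_effect(df: pd.DataFrame):
    model = smf.ols(
        "support_t2 ~ perception_t1 + support_t1 + age + male + edu + income",
        data=df,
    ).fit()
    # Return the cross-lagged coefficient and its standard error.
    return model.params["perception_t1"], model.bse["perception_t1"]
```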

For moderation (RQ3), median-split comparisons divided respondents into high and low groups for general digital competence, rights protection, and critical understanding. Z-tests compared coefficients between groups (see the sketch below). Moderator values were time-matched to the predictor. That is, digital competence measured at Wave 1 (T1) was used for Period 1 analyses (T1 → T2), and digital competence measured at Wave 2 (T2) was used for Period 2 analyses (T2 → T3). This approach ensures temporal ordering consistency and reduces potential confounding from changes in moderator values over time.
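The subgroup comparison rests on the standard z-test for the difference between two independent regression coefficients, z = (b1 - b2) / sqrt(SE1² + SE2²). A minimal sketch, reusing the hypothetical forward_effect() helper above:

```python
# Median-split moderation: fit the same cross-lagged path within high- and
# low-competence subgroups, then z-test the coefficient difference.
import math

import pandas as pd

def compare_groups(df: pd.DataFrame, moderator: str) -> dict:
    median = df[moderator].median()
    b_hi, se_hi = forward_effect(df[df[moderator] > median])
    b_lo, se_lo = forward_effect(df[df[moderator] <= median])
    # z = (b1 - b2) / sqrt(SE1^2 + SE2^2)
    z = (b_hi - b_lo) / math.sqrt(se_hi**2 + se_lo**2)
    return {"beta_high": b_hi, "beta_low": b_lo, "z": z}
```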


RESULTS

Descriptive Statistics and Correlations

Table 1 shows the means, standard deviations, and correlations. YouTube perception increased from 2022 (M = 3.29, SD = .37) to 2023 (M = 3.38, SD = .39), then declined in 2024 (M = 3.32, SD = .33). Transparency and accountability attitudes increased linearly, though growth decelerated (Period 1: +2.5% and +2.2%; Period 2: +0.8% and +1.4%).

Table 1. Descriptive Statistics and Correlations (N = 2,269)

Within-wave correlations remained weak to moderate (transparency: r = .23-.37, average r = .28; accountability: r = .15-.35, average r = .25), peaking in 2023 then declining by 2024.

Meanwhile, across-wave correlations revealed moderate stability. Specifically, YouTube perception (r = .20 in 2022-2023 and r = .33 in 2023-2024), transparency attitudes (r = .19 in 2022-2023 and r = .17 in 2023-2024), and accountability attitudes (r = .13 in 2022-2023 and r = .22 in 2023-2024) all showed moderate stability. These patterns indicate meaningful stability within individuals alongside substantial change over time, which are ideal conditions for cross-lagged panel analysis (Hamaker et al., 2015).

Algorithm Perception Predicting Regulation Support (RQ1)

To test if negative algorithmic experience drives regulatory support, we examined forward effects from YouTube perception to regulation support across two periods.

The transparency pathway (H1a) operated immediately in Period 1. Contrary to H1a, individuals who viewed YouTube’s algorithm more positively in 2022 expressed stronger support for transparency regulation the following year (β = .056, SE = .021, p < .001). While the hypothesis predicted that negative perceptions would drive regulation demand, the significant positive coefficient indicates the opposite direction. This effect nearly doubled by Period 2, with perception in 2023 predicting attitudes in 2024 at β = .107 (SE = .022, p < .001), a 91% increase from Period 1. Thus, H1a was not supported in the predicted direction.

The accountability pathway (H1b) showed different timing. During Period 1, algorithmic experience had no impact on accountability attitudes (β = .017, SE = .022, n.s.), rejecting the hypothesis for this early phase. The pathway emerged only in Period 2, where more positive perception in 2023 significantly predicted stronger accountability regulation support in 2024 (β = .080, SE = .022, p < .001), a 371% increase from Period 1. As with H1a, the direction was positive rather than negative, so H1b was not supported in the predicted direction.

Regulation Support Predicting Algorithm Perception (RQ2)

We next tested whether regulation support shapes how people perceive algorithmic behavior, examining reverse effects that would indicate top-down cognitive processing. For transparency (H2a), this pathway was absent during Period 1: attitudes in 2022 showed no relationship with subsequent perception in 2023 (β = .021, SE = .021, n.s.). However, Period 2 revealed a striking emergence: transparency attitudes in 2023 predicted improved YouTube perception in 2024 (β = .074, SE = .021, p < .001), a 252% increase from Period 1.

The accountability pathway (H2b) traced the opposite trajectory. During Period 1, individuals with stronger accountability attitudes perceived YouTube more favorably the following year (β = .046, SE = .021, p < .05), supporting H2b for this period. By Period 2, this effect vanished (β = .005, SE = .021, n.s.), a 90% decline from Period 1.

How Digital Competence Shapes These Relationships (RQ3)

General Digital Competence (RQ3a)

We also tested whether digital competence moderates both causal directions, comparing high versus low competence groups formed by median splits.

Period 1 rejected H3a-1. Low-competence users showed stronger forward effects (transparency β = .124, SE = .042, p < .01; accountability β = .139, SE = .043, p < .01) than high-competence users (transparency β = .024, SE = .025, n.s.; accountability β = -.040, SE = .024, n.s.; z = -2.06, p < .05 and z = -3.60, p < .001). This asymmetry disappeared entirely by Period 2. Both groups now showed significant positive forward effects (high competence: transparency β = .122, SE = .030, p < .001, accountability β = .099, SE = .031, p < .01; low competence: transparency β = .095, SE = .031, p < .01, accountability β = .062, SE = .032, n.s.), with no meaningful difference between them (z = .62 and z = .84, both n.s.).

Cross-Lagged Effects between YouTube Algorithm Perception and AI Regulation Support Across Two Time Periods

H3a-2 was rejected due to non-significant moderation across both periods. In Period 1, for transparency, high-competence users showed β = .067 (SE = .030, p < .05) compared to low-competence users’ β = -.002 (SE = .031, n.s.), but the group difference was not significant (transparency reverse: z = 1.60, n.s.). For accountability, high-competence users showed β = .072 (SE = .030, p < .05) versus low-competence users’ β = .046 (SE = .032, n.s.), again with non-significant moderation (accountability reverse: z = .61, n.s.).

Period 2 continued this pattern of non-significant moderation, providing only marginal evidence for H3a-2. First, for transparency reverse effects, high-competence users showed β = .094 (SE = .030, p < .01) compared to low-competence users’ β = .035 (SE = .031, n.s.), with a non-significant group difference (z = 1.38, n.s.). Second, for accountability reverse effects, high-competence users showed β = .031 (SE = .030, n.s.) versus low-competence users’ β = -.033 (SE = .031, n.s.), with z = 1.48, n.s.

Digital Competence Moderation Effects (Period 1: 2022-2023)

Rights Protection Competence (RQ3b)

Testing whether awareness of digital rights operates differently than general digital competence produced distinct temporal patterns.

Period 1 strongly supported H3b-1. Low-competence users showed stronger effects (transparency: β = .133, SE = .032, p < .001; accountability: β = .094, SE = .032, p < .01) while high-competence users showed negative effects (transparency: β = -.018, SE = .028, n.s.; accountability: β = -.060, SE = .029, p < .05), creating highly significant group differences (z = -3.56, p < .001 for both). Period 2 showed complete convergence. Both groups showed positive effects (high: transparency β = .120, SE = .029, p < .001, accountability β = .085, SE = .029, p < .01; low: transparency β = .101, SE = .031, p < .01, accountability β = .078, SE = .031, p < .05) with moderation disappearing entirely (z = .43 and z = .16, both n.s.), now supporting the original prediction.

The reverse direction showed temporal evolution: H3b-2 was rejected in Period 1 but supported in Period 2. Period 1 showed no meaningful moderation. For transparency, high-competence users showed β = .064 (SE = .029, p < .05) compared to low-competence users’ β = .027 (SE = .030, n.s.), but the group difference was not significant (transparency: z = .88, n.s.). For accountability, high-competence users showed β = .052 (SE = .029, n.s.), with no significant moderation (z = -.55, n.s.). By Period 2, however, clear asymmetric patterns emerged. High-competence users demonstrated substantial reverse effects (transparency: β = .137, SE = .030, p < .001; accountability: β = .077, SE = .030, p < .05) while low-competence users showed weak or negative effects (transparency: β = .035, SE = .031, n.s.; accountability: β = -.045, SE = .031, n.s.), creating significant moderation (z = 2.36, p < .05 and z = 2.85, p < .01).

Digital Competence Moderation Effects (Period 2: 2023-2024)

Critical Understanding Competence (RQ3c)

Critical understanding showed distinct patterns. Forward effects showed consistent non-moderation, partially supporting H3c-1 (transparency confirmed, accountability rejected). Period 1 showed similarly weak effects across groups (high: transparency β = .037, SE = .026, n.s., accountability β = -.005, SE = .026, n.s.; low: transparency β = .068, SE = .037, n.s., accountability β = .029, SE = .038, n.s.; z = -.68 and z = -.73, both n.s.). Period 2 repeated this pattern (high: transparency β = .115, SE = .029, p < .001, accountability β = .100, SE = .032, p < .01; low: transparency β = .104, SE = .031, p < .01, accountability β = .042, SE = .034, n.s.; z = .25 and z = 1.24, both n.s.).

Reverse effects supported H3c-2. Period 1 established critical understanding as the dominant reverse moderator. High-competence users showed substantial effects (transparency: β = .100, SE = .029, p < .001; accountability: β = .110, SE = .029, p < .001) while low-competence users showed negative effects (transparency: β = -.048, SE = .032, n.s.; accountability: β = -.014, SE = .032, n.s.; z = 3.43, p < .001 and z = 2.88, p < .01). Period 2 repeated and intensified this pattern. For transparency, the gap persisted (high competence: β = .119, SE = .032, p < .001; low competence: β = -.015, SE = .033, n.s.; z = 2.92, p < .01). For accountability, the divergence reached its maximum: high-competence users maintained positive effects (β = .069, SE = .032, p < .05) while low-competence users developed significant negative effects (β = -.113, SE = .033, p < .01), producing the strongest moderation in the entire analysis (z = 3.97, p < .001).


DISCUSSION

This study used three-year panel data (2022-2024) to examine the bidirectional causal relationship between user perceptions of YouTube’s recommendation algorithm and their regulation support. It also sought to determine how an individual’s digital competence moderates this relationship.

Forward effects exhibited different temporal patterns, though in a direction contrary to our initial predictions. The results revealed the opposite: more positive algorithm perceptions significantly predicted greater support for both transparency and accountability regulation. While this contradicts the fear-based logic underlying H1, users who perceive the algorithm as effective may become more attuned to its influence over their information environment, generating demand for governance precisely because the system demonstrably shapes their behavior. Zarouali et al. (2021) and Bucher (2018) argued that algorithm awareness can motivate regulatory demand regardless of valence, consistent with this study’s findings. The temporal pattern further reflects observability differences: transparency forward effects operated from the outset and strengthened over time, while accountability effects emerged only later, as accountability requires more time for recognition (Ferrari et al., 2025).

The reverse effect exhibited a pattern diametrically opposed to the forward effect. Users who supported transparency regulations began to evaluate algorithms more positively only over time, suggesting that the reverse effect is a gradual process requiring learning and experience with regulation, unlike the immediate expectation effect assumed by top-down processing (Clark, 2013; Gregory, 1974). Notably, both forward and reverse transparency effects were positive in direction, indicating a mutually reinforcing dynamic wherein favorable algorithm appraisals predict stronger governance demand, which in turn sustains more favorable appraisals. For accountability regulations, users who initially supported the regulations evaluated algorithms positively, but this effect faded over time. This early reverse effect may reflect regulatory optimism: users anticipated platform improvement and interpreted algorithmic behavior more charitably as a result. Its subsequent disappearance suggests that when regulatory effects are difficult to observe, initial optimism gives way as users find no confirming evidence of tangible change (Goodman & Trehu, 2022), underscoring the need for verification and public disclosure of regulatory effectiveness.

Finally, digital competence effects varied by type and time. General digital competence showed unexpected patterns: low-competence users’ initial advantages disappeared, with no reverse moderation. Regarding rights protection competence, low-competence users initially demanded stronger regulations when recognizing harm, but this gap disappeared as high-competence users converged to similar levels over time. This supports Protection Motivation Theory’s coping appraisal concept (Rippetoe & Rogers, 1987): lacking personal remedies, low-competence users demanded systemic change as their sole viable response. For reverse effects, only high-competence users showed positive effects in later periods. Their regulation efficacy may act as a “placebo,” improving algorithmic evaluations through positive expectations regardless of actual effectiveness (Petrovic et al., 2005). Critically, low critical understanding competence users supporting accountability regulations exhibited decreased harm perception. This suggests they form a mistaken belief that “regulation exists, therefore problems are solved,” dulling their problem perception. Alternatively, exposure to regulation support norms may inappropriately activate positive machine heuristics (Molina & Sundar, 2024), leading vulnerable users to underestimate algorithmic risks.

Based on these findings, the study derived a theoretical implication. This study revealed a dual trajectory of the digital divide: convergence in the forward effect and divergence in the reverse effect over time. Focusing on rights protection competence, in the forward effect the low-competence group initially exhibited a strong reaction; over time, however, the high-competence group increased to a similar level, eliminating the difference between the two groups. That is, the gap converged. This may be because, as regulation support becomes more prevalent across the population, high-competence users also recognize the limitations of individual effort and come to agree on the necessity of systemic regulation, like low-competence users. This convergence suggests that the digital “second divide” noted by van Dijk (2020) can be mitigated under specific conditions.

However, the opposite phenomenon emerged in the reverse effect. Over time, high-competence users developed the expectation that regulation was strengthening and consequently evaluated the algorithm more positively, whereas low-competence users showed no such effect and even experienced a counterproductive outcome. Holding similar levels of regulation support, high-competence users drew positive inferences, such as “since regulations are improving, the platform must be improving too,” while low-competence users failed to draw this connection or drew erroneous inferences. This implies that digital inequality can extend beyond disparities in technological benefits: it can produce cognitive distortions when users assess increasingly generalized and sophisticated algorithms. It also suggests that regulation policies may have unintended consequences for vulnerable groups.

These findings hold practical implications for regulation authorities and platforms. First, because transparency forward effects operate from the outset and strengthen continuously, linking user harm experiences to transparency improvements can build long-term trust through ongoing initiatives such as transparency reports and regular updates. Second, accountability regulations require more complex strategies: since their reverse effects fade over time, authorities must regularly disclose platform penalty records, audit results, and complaint statistics while translating complex liability structures into user-friendly formats. Lastly, interventions should be competence-tailored. Users low in rights protection competence need accessible remedy systems with simplified procedures, while users high in critical understanding require detailed audit reports and governance records.

This study has five limitations. First, focusing solely on YouTube may limit generalizability, as regulations may operate differently across platforms. Future research should compare platforms varying in observability. Second, we did not control for Korea’s evolving regulation environment (2022-2024), including the AI Basic Act discussions and the emergence of ChatGPT. Cross-national comparisons could clarify policy impacts. Third, we assumed algorithmic stability. YouTube routinely updates its recommendation systems and disclosure practices, and such platform-level changes could independently shift user perceptions, partially confounding the causal pathways we estimate. Future research incorporating natural experiment designs would better disentangle user-driven perceptual change from platform-driven change. Fourth, the moderation analysis relied on median-split comparisons, which carry known limitations including loss of statistical power and artificial dichotomization (MacCallum et al., 2002). Future research should employ continuous interaction terms or multi-group SEM for more rigorous moderation tests; a minimal sketch of the continuous-interaction alternative follows below. Finally, this study did not incorporate algorithm usage patterns. Future research should examine whether usage patterns moderate the bidirectional relationships identified here.
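To illustrate the continuous-interaction alternative, the following Python sketch estimates a Period 1 forward path with rights protection competence entered as a continuous moderator rather than a median split. The variable names (trans_t2, yt_t1, rights_t1) and the file panel_waves.csv are hypothetical placeholders; the sketch shows the modeling strategy, not the analysis reported above.

# Sketch: moderation via a continuous interaction term (statsmodels),
# avoiding the power loss of dichotomization (MacCallum et al., 2002).
# Variable and file names are hypothetical placeholders.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("panel_waves.csv")

# Forward path: YouTube perception (T1) -> transparency support (T2),
# controlling for the autoregressive term and baseline demographics.
model = smf.ols(
    "trans_t2 ~ trans_t1 + yt_t1 * rights_t1 + age + gender + edu + income",
    data=df,
).fit()
print(model.summary())  # yt_t1:rights_t1 tests moderation directly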

Disclosure Statement

No potential conflict of interest was reported by the authors.

Notes

1 This study used secondary data from a publicly available database provided by the KCC and KISDI and was therefore exempt from Institutional Review Board (IRB) review.
2 The 2022 survey included 24 digital competence items, whereas the 2023 and 2024 surveys included 21 items modified from the original 24. To ensure longitudinal comparability, this study identified 12 identically worded items across all three waves and used these for the primary analyses.

References

  • Balch, G. I. (1974). Multiple indicators in survey research: The concept “sense of political efficacy”. Political Methodology, 1(2), 1–43. http://www.jstor.org/stable/25791375
  • Bishop, S. (2020). Algorithmic experts: Selling algorithmic lore on YouTube. Social Media + Society, 6(1), 1–13. [https://doi.org/10.1177/2056305119897323]
  • Bucher, T. (2018). If... then: Algorithmic power and politics. Oxford University Press. [https://doi.org/10.1093/oso/9780190493028.001.0001]
  • Cheong, B. C. (2024). Transparency and accountability in AI systems: Safeguarding wellbeing in the age of algorithmic decision-making. Frontiers in Human Dynamics, 6, Article 1421273. [https://doi.org/10.3389/fhumd.2024.1421273]
  • Clark, A. (2013). Whatever next? Predictive brains, situated agents, and the future of cognitive science. Behavioral and Brain Sciences, 36(3), 181–204. [https://doi.org/10.1017/S0140525X12000477]
  • Cotter, K. (2024). Practical knowledge of algorithms: The case of BreadTube. New Media & Society, 26(4), 2131–2150. [https://doi.org/10.1177/14614448221081802]
  • Dogruel, L., Masur, P., & Joeckel, S. (2022). Development and validation of an algorithm literacy scale for internet users. Communication Methods and Measures, 16(2), 115–133. [https://doi.org/10.1080/19312458.2021.1968361]
  • Eslami, M., Vaccaro, K., Lee, M. K., Elazari Bar On, A., Gilbert, E., & Karahalios, K. (2019, May). User attitudes towards algorithmic opacity and transparency in online reviewing platforms. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (pp. 1–14). [https://doi.org/10.1145/3290605.3300724]
  • European Parliament & Council of the European Union. (2006). Recommendation of the European Parliament and of the Council of 18 December 2006 on key competences for lifelong learning. Official Journal of the European Union, L394, 10–18. https://eur-lex.europa.eu/LexUriServ/LexUriServ.do?uri=OJ:L:2006:394:0010:0018:en:PDF
  • Ferrari, F., Van Dijck, J., & Van den Bosch, A. (2025). Observe, inspect, modify: Three conditions for generative AI governance. New Media & Society, 27(5), 2788–2806. [https://doi.org/10.1177/14614448231214811]
  • Floyd, D. L., Prentice-Dunn, S., & Rogers, R. W. (2000). A meta-analysis of research on protection motivation theory. Journal of Applied Social Psychology, 30(2), 407–429. [https://doi.org/10.1111/j.1559-1816.2000.tb02323.x]
  • Gagrčin, E., Naab, T. K., & Grub, M. F. (2024). Algorithmic media use and algorithm literacy: An integrative literature review. New Media & Society, 28(1), Article 14614448241291137. [https://doi.org/10.1177/14614448241291137]
  • Gallup Korea. (2022, July 6). 2012–2022 smartphone usage rates & brands, smartwatches, and wireless earbuds survey. https://www.gallup.co.kr/gallupdb/reportContent.asp?seqNo=1309
  • Gil de Zúñiga, H., Diehl, T., & Ardévol-Abreu, A. (2017). Internal, external, and government political efficacy: Effects on news use, discussion, and political participation. Journal of Broadcasting & Electronic Media, 61(3), 574–596. [https://doi.org/10.1080/08838151.2017.1344672]
  • Goodman, E. P., & Trehu, J. (2022). Algorithmic auditing: Chasing AI accountability. Santa Clara High Technology Law Journal, 39, Article 289. [https://doi.org/10.2139/ssrn.4227350]
  • Gregory, R. L. (1974). Choosing a paradigm for perception. In E. C. Carterette & M. P. Friedman (Eds.), Handbook of perception: Vol. 1. Historical and philosophical roots of perception (pp. 255–283). Academic Press. [https://doi.org/10.1016/B978-0-12-161901-5.50020-0]
  • Hamaker, E. L., Kuiper, R. M., & Grasman, R. P. (2015). A critique of the cross-lagged panel model. Psychological Methods, 20(1), 102–116. [https://doi.org/10.1037/a0038889]
  • Hu, P., Zeng, Y., Wang, D., & Teng, H. (2024). Too much light blinds: The transparency-resistance paradox in algorithmic management. Computers in Human Behavior, 161, Article 108403. [https://doi.org/10.1016/j.chb.2024.108403]
  • Hwang, Y., Lee, S., Kim, Y., & Hwang, H. (2022). Digital competence: Conceptualization, scale development. Journal of Communication Research, 59(2), 5–48.
  • Jung, S., & Park, J. (2024). Determinants of AI literacy: Focusing on AI usage experience and innovativeness. Journal of Broadcasting and Telecommunications Research, 128, 137–168. [https://doi.org/10.22876/kjbtr.2024..128.005]
  • Knijnenburg, B. P., Willemsen, M. C., Gantner, Z., Soncu, H., & Newell, C. (2012). Explaining the user experience of recommender systems. User Modeling and User-Adapted Interaction, 22(4), 441–504. [https://doi.org/10.1007/s11257-011-9118-4]
  • Koh, H. (2024). An exploratory study on the impact of SNS use on critical understanding competence. Journal of Social Science, 63(3), 209–230. [https://doi.org/10.22418/JSS.2024.12.63.3.209]
  • Korea Information Society Development Institute. (2025). 2024 intelligent information society user panel survey (Policy Research 24-15-02). https://www.kisdi.re.kr/report/fileView.do?key=m2101113024770&arrMasterId=3934580&id=1833198
  • Larsson, S., & Heintz, F. (2020). Transparency in artificial intelligence. Internet Policy Review, 9(2), 1–16. [https://doi.org/10.14763/2020.2.1469]
  • Lin, H. (2025). Oscillation between resist and to not? Users’ folk theories and resistance to algorithmic curation on Douyin. Social Media + Society, 11(1), Article 20563051251313610. [https://doi.org/10.1177/20563051251313610]
  • Lunt, P., & Livingstone, S. (2012). Media regulation: Governance and the interests of citizens and consumers. Sage. [https://doi.org/10.4135/9781446250884]
  • MacCallum, R. C., Zhang, S., Preacher, K. J., & Rucker, D. D. (2002). On the practice of dichotomization of quantitative variables. Psychological Methods, 7(1), 19–40. [https://doi.org/10.1037/1082-989X.7.1.19]
  • Macready, H., & Stanton, L. (2025, September 10). How the YouTube algorithm works in 2025. Hootsuite. https://blog.hootsuite.com/youtube-algorithm/
  • Mensah, G. B. (2023). Artificial intelligence and ethics: A comprehensive review of bias mitigation, transparency, and accountability in AI Systems. Preprint, November, 10(1), Article 1.
  • Molina, M. D., & Sundar, S. S. (2024). Does distrust in humans predict greater trust in AI? Role of individual differences in user responses to content moderation. New Media & Society, 26(6), 3638–3656. [https://doi.org/10.1177/14614448221103534]
  • Morris, C. E. (2023). Development of professional community of practice in higher education staff: Identity, meaning, and community in academic operations [Doctoral dissertation, Arizona State University].
  • Oeldorf-Hirsch, A., & Neubaum, G. (2025). What do we know about algorithmic literacy? The status quo and a research agenda for a growing field. New Media & Society, 27(2), 681–701. [https://doi.org/10.1177/14614448231182662]
  • Organization for Economic Co-operation and Development. (2024, December 3). 14 – Algorithmic transparency in the public sector: A state of the art report of algorithmic transparency instruments. OECD.AI Wonk. https://oecd.ai/en/wonk/documents/14-algorithmic-transparency-in-the-public-sector-a-state-of-the-art-report-of-algorithmic-transparency-instruments
  • Petrovic, P., Dietrich, T., Fransson, P., Andersson, J., Carlsson, K., & Ingvar, M. (2005). Placebo in emotional processing—induced expectations of anxiety relief activate a generalized modulatory network. Neuron, 46(6), 957–969. [https://doi.org/10.1016/j.neuron.2005.05.023]
  • Pollock, P. H., III. (1983). The participatory consequences of internal and external political efficacy: A research note. Western Political Quarterly, 36(3), 400–409. [https://doi.org/10.1177/106591298303600306]
  • Rader, E., & Gray, R. (2015, April). Understanding user beliefs about algorithmic curation in the Facebook news feed. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (pp. 173–182). [https://doi.org/10.1145/2702123.2702174]
  • Raftopoulos, A. (2001). Is perception informationally encapsulated? The issue of the theory-ladenness of perception. Cognitive Science, 25(3), 423–451. [https://doi.org/10.1207/s15516709cog2503_4]
  • Reynolds, C. J., & Hallinan, B. (2024). User-generated accountability: Public participation in algorithmic governance on YouTube. New Media & Society, 26(9), 5107–5129. [https://doi.org/10.1177/14614448241251791]
  • Rippetoe, P. A., & Rogers, R. W. (1987). Effects of components of protection-motivation theory on adaptive and maladaptive coping with a health threat. Journal of Personality and Social Psychology, 52(3), 596–604. [https://doi.org/10.1037/0022-3514.52.3.596]
  • Shim, H., & Park, J. (2024). Relationship between YouTube’s recommendation algorithm and continuous usage intention: The mediating effect of regulation attitudes toward algorithmic transparency and accountability principles. Korean Journal of Journalism & Communication, 68(5), 165–195. [https://doi.org/10.20879/kjjcs.2024.68.5.005]
  • Shin, D., Zhong, B., & Biocca, F. A. (2020). Beyond user experience: What constitutes algorithmic experiences? International Journal of Information Management, 52, Article 102061. [https://doi.org/10.1016/j.ijinfomgt.2019.102061]
  • Shin, J. B. (2025, December 28). YouTube named most-used app by Koreans in 2025. KM Journal. https://www.kmjournal.net/news/articleView.html?idxno=6773
  • Sundar, S. S., & Marathe, S. S. (2010). Personalization versus customization: The importance of agency, privacy, and power usage. Human Communication Research, 36(3), 298–322. [https://doi.org/10.1111/j.1468-2958.2010.01377.x]
  • Taylor, S. H., & Choi, M. (2024). Lonely algorithms: A longitudinal investigation into the bidirectional relationship between algorithm responsiveness and loneliness. Journal of Social and Personal Relationships, 41(5), Article 02654075231156623. [https://doi.org/10.1177/02654075231156623]
  • Tsai, H. Y. S., Jiang, M., Alhabash, S., LaRose, R., Rifon, N. J., & Cotten, S. R. (2016). Understanding online safety behaviors: A protection motivation theory perspective. Computers & Security, 59, 138–150. [https://doi.org/10.1016/j.cose.2016.02.009]
  • Valkenburg, P. M., Peter, J., & Walther, J. B. (2016). Media effects: Theory and research. Annual Review of Psychology, 67, 315–338. [https://doi.org/10.1146/annurev-psych-122414-033608]
  • Van Dijk, J. (2020). The digital divide. John Wiley & Sons.
  • Velkova, J., & Kaun, A. (2021). Algorithmic resistance: Media practices and the politics of repair. Information, Communication & Society, 24(4), 523–540. [https://doi.org/10.1080/1369118X.2019.1657162]
  • Wu, W., Huang, Y., & Qian, L. (2024). Social trust and algorithmic equity: The societal perspectives of users' intention to interact with algorithm recommendation systems. Decision Support Systems, 178, Article 114115. [https://doi.org/10.1016/j.dss.2023.114115]
  • Zarouali, B., Helberger, N., & De Vreese, C. H. (2021). Investigating algorithmic misconceptions in a media context: Source of a new digital divide? Media and Communication, 9(4), 134–144. [https://doi.org/10.17645/mac.v9i4.4090]

Appendix

Table 1.

Descriptive Statistics and Correlations (N = 2,269)

Variable M SD 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18
Note. T1 = Wave 1 (2022); T2 = Wave 2 (2023); T3 = Wave 3 (2024). Reg: Regulation. *p < .05. **p < .01. ***p < .001.
1. YouTube Perception (T1, 2022) 3.29 0.37
2. YouTube Perception (T2, 2023) 3.38 0.39 0.20***
3. YouTube Perception (T3, 2024) 3.32 0.33 0.17*** 0.33***
4. Transparency Reg. Support (T1) 3.62 0.56 0.23*** 0.06** 0.13***
5. Transparency Reg. Support (T2) 3.71 0.54 0.09*** 0.37*** 0.19*** 0.19***
6. Transparency Reg. Support (T3) 3.74 0.53 0.06** 0.16*** 0.23*** 0.17*** 0.17***
7. Accountability Reg. Support (T1) 3.62 0.59 0.25*** 0.09*** 0.19*** 0.75*** 0.18*** 0.17***
8. Accountability Reg. Support (T2) 3.70 0.54 0.05* 0.35*** 0.12*** 0.16*** 0.69*** 0.15*** 0.14***
9. Accountability Reg. Support (T3) 3.75 0.59 0.05* 0.12*** 0.15*** 0.18*** 0.11*** 0.65*** 0.17*** 0.13***
10. General Digital Competence (T1) 3.48 0.66 0.22*** -0.03 0.02 0.26*** 0.04* 0.02 0.24*** 0.01 -0.01
11. General Digital Competence (T2) 3.53 0.66 -0.06** 0.06** 0.00 0.05** 0.14*** 0.02 0.02 0.16*** 0.01 0.42***
12. General Digital Competence (T3) 3.48 0.68 0.05* 0.06** 0.12*** 0.05* 0.08*** 0.09*** 0.05* 0.05* 0.04* 0.46*** 0.47***
13. Rights Protection Competence (T1) 3.23 0.78 0.09*** -0.08*** -0.06** 0.17*** 0.03 -0.01 0.13*** 0.00 -0.06** 0.88*** 0.40*** 0.41***
14. Rights Protection Competence (T2) 3.29 0.80 -0.07*** -0.07*** -0.07** 0.04 0.07** 0.00 0.00 0.07** -0.04 0.37*** 0.86*** 0.40*** 0.40***
15. Rights Protection Competence (T3) 3.23 0.83 0.04 -0.01 0.08*** 0.03 0.06** 0.03 0.02 0.03 -0.05* 0.38*** 0.40*** 0.88*** 0.40*** 0.41***
16. Critical Understanding (T1) 3.45 0.79 0.11*** -0.04* 0.02 0.15*** 0.05* 0.06** 0.11*** 0.02 0.00 0.76*** 0.37*** 0.40*** 0.62*** 0.35*** 0.37***
17. Critical Understanding (T2) 3.44 0.85 -0.11*** 0.04* 0.00 -0.02 0.07*** 0.03 -0.04 0.08*** -0.01 0.27*** 0.77*** 0.36*** 0.28*** 0.62*** 0.33*** 0.25***
18. Critical Understanding (T3) 3.37 0.83 -0.01 0.06** 0.11*** 0.05* 0.06** 0.06** 0.03 0.03 0.01 0.32*** 0.34*** 0.78*** 0.32*** 0.29*** 0.68*** 0.31*** 0.34***
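As a minimal sketch, the entries of Table 1 can be reproduced from a flat panel file with a few lines of Python; the file name and the 18 column names are hypothetical placeholders for the wave measures above.

# Sketch: descriptive statistics and pairwise correlations as in Table 1
# (file and column names are hypothetical placeholders).
import pandas as pd
from scipy import stats

df = pd.read_csv("panel_measures.csv")
print(df.agg(["mean", "std"]).T)  # M and SD columns

# Pairwise Pearson r with conventional significance stars
cols = list(df.columns)
for i, a in enumerate(cols):
    for b in cols[:i]:
        r, p = stats.pearsonr(df[a], df[b])
        stars = "***" if p < .001 else "**" if p < .01 else "*" if p < .05 else ""
        print(f"{a} x {b}: r = {r:.2f}{stars}")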

Table 2.

Cross-Lagged Effects between YouTube Algorithm Perception and AI Regulation Support Across Two Time Periods

Path Period 1 (2022 → 2023) Period 2 (2023 → 2024) HO / Change ratio
Note. N = 2,269. All models control for autoregressive effects and demographics (age, gender, education, and income) measured at baseline (2022). Control variables showed minimal effects (all |β| < .05). Coefficients are standardized. YT: YouTube perception; Trans: transparency regulation support; Acc: accountability regulation support; HO: hypothesis outcome. *p < .05. **p < .01. ***p < .001.
Trans
Forward (YT → Trans): P1 β = .056*** (SE = .021); P2 β = .107*** (SE = .022); Not supported (opposite direction) / +91%; P1: 2.73:1
P1 R² = .039, F(6, 2262) = 15.33***; P2 R² = .042, F(6, 2262) = 16.66***
Reverse (Trans → YT): P1 β = .021 (SE = .021, n.s.); P2 β = .074*** (SE = .021); Partially supported (P2 only) / +252%; P2: 1.44:1
P1 R² = .046, F(6, 2262) = 18.12***; P2 R² = .088, F(6, 2262) = 36.49***
Acc
Forward (YT → Acc): P1 β = .017 (SE = .022, n.s.); P2 β = .080*** (SE = .022); Not supported (opposite direction) / +384%; P1: .36:1
P1 R² = .027, F(6, 2262) = 10.61***; P2 R² = .031, F(6, 2262) = 11.87***
Reverse (Acc → YT): P1 β = .046* (SE = .021); P2 β = .005 (SE = .021, n.s.); Partially supported (P1 only) / -90%; P2: 17.16:1
P1 R² = .046, F(6, 2262) = 18.26***; P2 R² = .087, F(6, 2262) = 35.73***
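For readers who prefer the equation form, the cross-lagged paths in Table 2 can be written out as below. This is a reconstruction from the table note (autoregressive term plus baseline demographic controls C: age, gender, education, income), not the authors' published specification; the same structure applies with Acc in place of Trans.

\begin{aligned}
\text{Forward:} \quad \text{Trans}_{t+1} &= \beta_0 + \beta_1\,\text{YT}_t + \beta_2\,\text{Trans}_t + \boldsymbol{\gamma}^{\top}\mathbf{C} + \varepsilon_{t+1} \\
\text{Reverse:} \quad \text{YT}_{t+1} &= \delta_0 + \delta_1\,\text{Trans}_t + \delta_2\,\text{YT}_t + \boldsymbol{\lambda}^{\top}\mathbf{C} + \upsilon_{t+1}
\end{aligned}

Here β₁ and δ₁ correspond to the forward and reverse cross-lagged coefficients reported in the table.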

Table 3.

Digital Competence Moderation Effects (Period 1: 2022-2023)

Moderator Trans Acc
Forward (YT → Trans) Reverse (Trans → YT) Forward (YT → Acc) Reverse (Acc → YT)
Note. N = 2,269. All models control for autoregressive effects and demographics (age, gender, education, and income) measured at baseline (2022). Control variables showed minimal effects (all |β| < .05). Coefficients are standardized. YT: YouTube perception; Trans: transparency regulation support; Acc: accountability regulation support; HO: hypothesis outcome. *p < .05. **p < .01. ***p < .001.
RQ3a/General DC
High β = .024 (SE = .025) β = .067* (SE = .030) β = -.040 (SE = .024) β = .072* (SE = .030)
Low β = .124** (SE = .042) β = -.002 (SE = .031) β = .139** (SE = .043) β = .046 (SE = .032)
z-test z = -2.06, p < .05 z = 1.60, p = .110† z = -3.60, p < .001 z = .61, n.s.
HO. H3a-1: Not supported; H3a-2: Not supported
RQ3b/Rights protection
High β = -.018 (SE = .028) β = .064* (SE = .029) β = -.060* (SE = .029) β = .052 (SE = .029)
Low β = .133*** (SE = .032) β = .027 (SE = .030) β = .094** (SE = .032) β = .075* (SE = .030)
z-test z = -3.56, p < .001 z = .88, n.s. z = -3.56, p < .001 z = -.55, n.s.
HO. H3b-1: Not supported; H3b-2: Not supported
RQ3c/Critical understanding
High β = .037 (SE = .026) β = .100*** (SE = .029) β = -.005 (SE = .026) β = .110*** (SE = .029)
Low β = .068 (SE = .037) β = -.048 (SE = .032) β = .029 (SE = .038) β = -.014 (SE = .032)
z-test z = -.68, n.s. z = 3.43, p < .001 z = -.73, n.s. z = 2.88, p < .01
HO. H3c-1: Supported; H3c-2: Supported
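The high/low comparisons in Tables 3 and 4 are consistent with the standard z-test for the equality of two regression coefficients estimated in independent subgroups. Treating the formula as our reconstruction rather than a procedure stated in the text, the first row of Table 3 checks out:

z = \frac{\beta_{\text{high}} - \beta_{\text{low}}}{\sqrt{SE_{\text{high}}^{2} + SE_{\text{low}}^{2}}} = \frac{.024 - .124}{\sqrt{.025^{2} + .042^{2}}} \approx -2.05,

matching the reported z = -2.06 up to rounding.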

Table 4.

Digital Competence Moderation Effects (Period 2: 2023-2024)

Moderator Trans Acc
Forward (YT → Trans) Reverse (Trans → YT) Forward (YT → Acc) Reverse (Acc → YT)
Note. N = 2,269. All models control for autoregressive effects and demographics (age, gender, education, and income) measured at baseline (2022). Control variables showed minimal effects (all |β| < .05). Coefficients are standardized. YT: YouTube perception; Trans: transparency regulation support; Acc: accountability regulation support; HO: hypothesis outcome. *p < .05. **p < .01. ***p < .001.
RQ3a/General DC
High β = .122*** (SE = .030) β = .094** (SE = .030) β = .099** (SE = .031) β = .031 (SE = .030, n.s.)
Low β = .095** (SE = .031) β = .035 (SE = .031, n.s.) β = .062 (SE = .032, n.s.) β = -.033 (SE = .031, n.s.)
z-test z = .62, n.s. z = 1.38, n.s. z = .84, n.s. z = 1.48, n.s.
HO. H3a-1: Not supported; H3a-2: Not supported
RQ3b/Rights protection
High β = .120*** (SE = .029) β = .137*** (SE = .030) β = .085** (SE = .029) β = .077* (SE = .030)
Low β = .101** (SE = .031) β = .035 (SE = .031, n.s.) β = .078* (SE = .031) β = -.045 (SE = .031, n.s.)
z-test z = .43, n.s. z = 2.36, p < .05 z = .16, n.s. z = 2.85, p < .01
HO. H3b-1: Supported; H3b-2: Supported
RQ3c/Critical understanding
High β = .115*** (SE = .029) β = .119*** (SE = .032) β = .100*** (SE = .032) β = .069** (SE = .032)
Low β = .104** (SE = .031) β = -.015 (SE = .033, n.s.) β = .042 (SE = .034, n.s.) β = -.113 (SE = .033, n.s.)
z-test z = .25, n.s. z = 2.92, p < .01 z = 1.24, n.s. z = 3.97, p < .001
HO. H3c-1: Supported; H3c-2: Supported

Full Survey Measurements

YouTube algorithm
(10 items; α = .78-.82; 1 = Strongly disagree, 5 = Strongly agree) 
"How do you evaluate YouTube's automatic recommendation service?
1 YouTube recommendations are well-tailored to my tastes and viewing purposes.
2 YouTube recommendations are useful to me.
3 YouTube provides diverse, non-uniform information rather than a single perspective.
4 YouTube recommendations are unbiased and objective.
5 I am generally satisfied with YouTube recommendations.
6 I will continue to use YouTube recommendations in the future.
7 Frequent use of YouTube recommendations will lead to biased values.
8 I worry that my personal information may be leaked when using YouTube.
9 I worry that I might unknowingly be exposed to illegal content through YouTube.
10 I worry that YouTube recommendations may harm me by failing to recommend optimal content.
Note. Items 1-6 are positively worded; items 7-10 are negatively worded and were reverse-coded prior to averaging.
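A minimal scoring sketch for this scale, assuming responses are stored in hypothetical columns yt1 through yt10 on the 1-5 scale described above:

# Sketch: reverse-code the negatively worded items (6 - x on a 1-5 scale),
# then average all ten items into a single perception score.
# Column and file names are hypothetical placeholders.
import pandas as pd

df = pd.read_csv("wave1_items.csv")
for col in ["yt7", "yt8", "yt9", "yt10"]:
    df[col] = 6 - df[col]  # reverse-code items 7-10
df["yt_perception"] = df[[f"yt{i}" for i in range(1, 11)]].mean(axis=1)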
Transparency regulation support
(3 items; α = .88-.90; 1 = Strongly disagree, 5 = Strongly agree)
"When using AI recommendation services, to what extent do you agree with the following?
1 AI recommendation service providers should disclose to users the selection criteria used by AI algorithms to curate content.
2 AI recommendation service providers should inform users whether content creators are human or AI-generated.
3 AI recommendation service providers should explain to users how their personal data is collected and used during AI service operation.
Accountability regulation support
(5 items; α = .90-.92; 1 = Strongly disagree, 5 = Strongly agree)
1 AI recommendation service providers should ensure users have the right to make reasonable choices about content provided by AI recommendation services.
2 AI recommendation service providers should allow users to select or adjust the level of exposure to unwanted recommended content.
3 AI recommendation service providers should explain to users any harms or disadvantages caused by incidents arising from AI recommendation services.
4 AI recommendation service providers should pre-verify content risks by considering users' characteristics.
5 AI recommendation service providers have an obligation to address functional errors, malfunctions, and violations of current laws during system operation.
General digital competence
(12 items; α = .90-.93; 1 = Very poor, 5 = Very good)
"How would you rate your digital competence in the following areas?"
1 I can make purchases using online easy payment systems.
2 I can find cheaper products through online price comparisons.
3 I can use e-government services for administrative tasks.
4 I can book or call transportation using Internet or apps.
5 I can subscribe to insurance products online.
6 I can use kiosks for ordering food, purchasing tickets, and making payments.
7 I can verify the source of information found online.
8 I know what the Korea Communications Commission does.
9 I know how to report phishing scams and seek remedies.
10 I know how to report copyright infringement and seek remedies.
11 I know how to report infringement of rights and seek remedies.
12 I know how to report cybercrime and illegal websites.
Note. Items 1-7 measure functional skills; items 8-12 also constitute the rights protection competence subdimension.
Rights protection competence
(5 items; α = .86-.89; 1 = Very poor, 5 = Very good)
Items 8-12 of the General digital competence scale (see above)
Critical understanding competence
(2 items; r = .55-.63; 1 = Very poor, 5 = Very good)
1 I can distinguish credible information from unreliable information by comparing search results with other sources.
2 I can identify sponsored and advertising content.
Note. Inter-item Pearson r is reported (2022: r = .80; 2023: r = .79; 2024: r = .75; all p < .001).