Implicit Acquisition
How Platform Literacy Develops Without Instruction
Nobody taught you how to use TikTok. No course explained that the For You Page responds to watch time more than likes, that looping a video signals engagement differently than pausing it, and that the first three seconds determine whether the algorithm distributes your content or buries it. You learned these things the way you learned to walk: through repeated attempts, failed experiments, and gradual calibration against feedback you could feel but not fully articulate. The absence of a teacher was not an oversight. It was the condition.
This is the fourth property of Application Layer Communication. The previous three established that users must specify intent in formats algorithms can parse, that algorithms interpret those inputs unilaterally rather than through negotiation, and that algorithms orchestrate coordination among parties who never directly interact. Implicit Acquisition addresses the question that those properties raise but do not answer: how does anyone learn to operate within this system?
The answer is deceptively simple. They figure it out. Or they do not.
The Instruction That Does Not Exist
Compare three communication systems. Writing has grammar instruction, composition courses, and centuries of pedagogy devoted to developing competence. Programming has documentation, tutorials, and formal education. Even organizational communication receives explicit onboarding: here is how we write memos, here is how reports reach executives, here is how to format a proposal.
Platform communication has none of this. Uber does not teach drivers how its matching algorithm selects which ride offers reach which drivers. Spotify does not explain to musicians how metadata entries affect playlist placement. LinkedIn does not instruct job seekers on how its screening algorithms parse applications. Help documentation explains interface mechanics: how to click buttons, how to navigate menus. It does not explain communication strategy: how to structure inputs so the algorithm processes them effectively. The gap between knowing what a platform’s interface does and knowing how to communicate through it successfully is the gap between reading a dictionary and writing a novel.
This absence is not accidental. Power asymmetry between platforms and users is intrinsic to platform economics and architecture; conditions of engagement are endogenous, self-interested, and opaque (Cutolo & Kenney, 2021). Platforms do not provide clear rules or training because transparency invites gaming, and gaming undermines the algorithmic optimization that generates platform value. A systematic review of algorithmic management in the gig economy documents three types of deliberately created asymmetry: information asymmetry, power asymmetry, and calculative asymmetry, defined as the differential capacity to capitalize on available information (Kadolkar et al., 2025). Workers develop folk theories to compensate for information they cannot obtain through official channels.
The structural training gap creates a paradox. Platform coordination requires communication competencies that platforms refuse to teach. Users must acquire these competencies anyway, because coordination through platforms is increasingly non-optional. The acquisition mechanism cannot be instructed. It must be something else.
Learning Algorithms Without Being Told the Rules
An immediate objection: what you are describing is tacit knowledge. Organizations have always involved learning that resists articulation. Workers have always developed know-how through practice rather than manuals. Polanyi (1966) established this sixty years ago. Weick (1995) theorized sensemaking in ambiguous situations three decades ago. What does “implicit acquisition” add?
The objection deserves a serious answer because it reveals what makes platform communication structurally different from prior organizational learning contexts.
Tacit knowledge in traditional organizations develops against a stable, if complex, backdrop. A machinist develops a feel for a lathe through repetition, but the lathe does not update its parameters quarterly. A trader develops intuition for market dynamics, but markets follow discoverable rules even when those rules produce unpredictable outcomes. Weick’s sensemaking operates in ambiguous environments, but the ambiguity arises from complexity and equivocality in environments that hold still long enough to be made sense of. The sensemaker extracts cues, constructs plausible narratives, and acts on provisional understanding that stabilizes over time.
Platform communication presents a categorically different learning problem along three dimensions.
First, the system being learned is opaque by design and subject to change by policy. Algorithmic parameters shift continuously without notification. The sensemaker cannot stabilize a narrative about a system that was different last month and will be different next month, because the object of sensemaking is not merely ambiguous but actively nonstationary. Algorithmic sensemaking goes beyond isolated perceptions to become institutionalized as users develop routines shaped by accumulated understanding of algorithmic functions, but those functions shift under them (Obreja, 2024). Tacit knowledge of a lathe accumulates toward mastery. Tacit knowledge of an algorithm accumulates toward a perpetual state of partial, revisable understanding.
Second, feedback is ambiguous, probabilistic, and confounded. A musician practicing scales hears immediately whether the note is correct. A platform user submitting content receives feedback (engagement metrics, ranking position, algorithmic distribution) that reflects not only their own communication quality but also the behavior of millions of other users, current algorithmic parameters, trending topics, time of day, and stochastic variation. The signal-to-noise ratio is structurally worse than in any traditional organizational learning context. Isolating what you did from what the system did from what everyone else did is a problem that Polanyi’s machinist never faces.
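A toy simulation makes the attribution problem concrete. Everything here is an illustrative assumption rather than a measurement of any real platform: the variable names and noise scales are invented. The point is only that when unobserved factors carry more variance than the user’s own input, identical inputs yield wildly different feedback.

```python
import random

def observed_engagement(own_quality, rng):
    """Toy model of platform feedback: the metric a user actually sees
    mixes their own input quality with factors they cannot observe.
    All scales below are illustrative assumptions."""
    algorithm_shift = rng.gauss(0, 2.0)  # unannounced parameter drift
    crowd_effect = rng.gauss(0, 2.0)     # behavior of other users
    noise = rng.gauss(0, 1.0)            # stochastic distribution
    return own_quality + algorithm_shift + crowd_effect + noise

rng = random.Random(42)
# The same quality of input, submitted 1,000 times: the user's own
# contribution (1.0) is dwarfed by the spread of what comes back.
samples = [observed_engagement(1.0, rng) for _ in range(1000)]
spread = max(samples) - min(samples)
```

Under these assumed scales, the unobserved factors have a combined standard deviation of 3, three times the user’s entire contribution, which is why inferring “what you did” from the feedback alone resists direct solution.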
Third, no authoritative reference exists for self-correction, and the absence is enforced. A programmer debugging code can consult documentation. A trader can read market data. A bureaucrat can check the organizational manual. A platform user trying to understand why their content failed to reach its audience has no authoritative source to consult, because the platform withholds that information as part of its competitive strategy. The user must generate hypotheses, test them through behavioral experiments, and evaluate results against the ambiguous, confounded feedback described above. This is not sensemaking under ambiguity. It is reverse-engineering a system that refuses to explain itself, using tools it provides but does not describe.
These three conditions (nonstationarity, confounded feedback, and the enforced absence of reference) do not differ from traditional tacit knowledge in degree. They differ in kind. They produce a learning dynamic that existing frameworks cannot adequately specify.
Cognitive science helps explain why this learning dynamic works at all. Humans extract complex patterns from their environments without conscious effort, formal instruction, or even awareness that they are learning. Statistical learning, the ability to detect regularities in patterned input, operates incidentally through passive exposure and produces knowledge that is difficult to verbalize (Frost et al., 2019). The brain tunes itself to environmental patterns through a plasticity-based mechanism: repeated exposure to a particular type of patterned input makes the neural networks processing that input more efficient through cortical tuning (Conway, 2020). Immediate feedback engages implicit and procedural learning via the striatum, while delayed feedback engages explicit and declarative systems (Williams, 2020). Platform feedback operates on variable reinforcement schedules that mirror conditions conducive to implicit learning.
But, and this qualification matters, cognitive science describes a capacity, not a complete explanation. Statistical learning provides the neural machinery for extracting patterns from platform environments. It does not, by itself, explain how platform users move from unconscious pattern recognition to deliberate strategic behavior. The neural mechanism is a necessary, but not a sufficient, condition for implicit acquisition. The sufficient account requires understanding what users do with the patterns they extract, which is where the empirical evidence becomes essential.
From Pattern Recognition to Strategic Action
The folk theories literature provides the most direct evidence for implicit acquisition, and it resolves the question of whether users are pattern-extracting organisms or strategic agents. The answer is that they are both, sequentially.
LGBTQ+ social media users develop algorithmic understanding through an iterative cycle: observe platform behavior, form implicit theories, adjust behavior, notice outcomes, and refine theories (DeVito, 2021). Users speculate about how platforms work to decide how to behave, and this speculation constitutes the acquisition process. The cycle begins with implicit pattern recognition, noticing that certain behaviors produce specific outcomes, and progresses toward explicit strategic reasoning as accumulated experience consolidates into articulable theories. DeVito identifies “theorization complexity levels” ranging from basic functional awareness to detailed structural understanding, empirically demonstrating that implicit acquisition follows a developmental trajectory rather than a fixed state.
The developmental trajectory matters because it resolves an apparent contradiction. Workers can simultaneously be implicit learners and strategic gamers, becoming strategic gamers through the accumulation of implicitly acquired knowledge. Ride-hailing workers develop two types of “workplace games” through experience: relational games (crafting customer interactions for ratings) and efficiency games (maximizing earnings per unit of time) (Cameron, 2022). These games are highly calculated, involving conscious manipulation of algorithmic systems. But the knowledge enabling the calculation was not itself acquired through calculation. Workers independently developed strategies through experience and self-created tracking tools outside the app. No organizational socialization taught them. The strategic sophistication emerged from experiential accumulation, not from instruction or deliberate study.
The transition from implicit pattern recognition to conscious strategy operates through a specific mechanism: folk theorization. Users construct informal yet functional theories of algorithmic behavior through observation, experimentation, and error correction, without formal instruction, and these theories serve as the basis for deliberate action. TikTok users co-produce knowledge of the For You Page algorithm through observation of recommended content, develop theories about how the algorithm constructs and reflects their identities, then engage in “algorithmic resistance,” changing behaviors to shape their algorithmic identities through individual and collective actions (Karizat et al., 2021). The resistance is conscious and strategic. The knowledge enabling the resistance was acquired implicitly through use.
Transfeminine TikTok creators navigate what researchers call the “algorithmic trap of visibility,” in which amplification invites harassment, through learned strategies about which content triggers algorithmic amplification and which leads to shadowbanning (DeVito, 2022). The strategies are sophisticated, involving deliberate modulation of content types, hashtags, and self-presentation approaches. The sophistication developed through iterative experimentation, not through training. The sequence is always the same: unconscious pattern recognition consolidates into folk theories, folk theories enable strategic reasoning, and strategic reasoning produces deliberate behavioral adaptation. Implicit acquisition does not end at compliance. It begins with compliance and, for users who persist, develops toward agency.
This developmental trajectory distinguishes implicit acquisition from behavioral conditioning. Conditioning produces fixed behavioral responses to environmental stimuli. Implicit acquisition produces an evolving capacity for strategic action within algorithmic systems. The difference is not semantic. Conditioned subjects repeat behaviors that produce rewards. Implicitly acquiring users develop a transferable understanding that enables novel strategic responses to changed conditions. When Spotify users in Costa Rica describe actively giving feedback (tapping the heart icon, following artists, repeating listening) to “train” the algorithm, they demonstrate tacit know-how about communicating preferences to the algorithm, but they also demonstrate a mental model of the algorithm as a trainable system (Siles et al., 2020). That mental model is not a conditioned response. It is an implicit theory enabling flexible action.
The community dimension extends the developmental trajectory beyond individual cognition. TikTok users share “stories about algorithms,” narratives of lived experience that function as mechanisms through which algorithmic understanding forms collectively (Schellewald, 2022). Users adjust behavior based on observations of algorithmic responses, then share these learnings as stories, creating a collective knowledge base. YouTube creators develop expertise through experimentation and personal research, then translate the knowledge they implicitly acquire into instructional content for others (Bishop, 2020). A secondary economy of expertise emerges as intermediaries codify implicit knowledge into “algorithmic lore,” speculative and experientially derived understanding that functions as a market device shaping visibility strategies within creator communities (MacDonald, 2023). Folk theorization is simultaneously individual and social, implicit and (as it matures) increasingly explicit, automatic and (as it consolidates) increasingly strategic. Practical knowledge of algorithms (knowing how to accomplish goals within algorithmic systems) develops at the intersection of practice and discourse: Bourdieu’s practical sense applied to algorithmic environments (Cotter, 2024).
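The observe, theorize, act, refine cycle that recurs across these studies can be sketched as a toy learning loop. The action labels and payoff structure below are hypothetical inventions for illustration: the point is that an agent that never sees the rules, only noisy outcomes, still converges on a workable theory of which behavior the system rewards.

```python
import random

def folk_theorize(actions, true_best, rounds, rng):
    """Toy version of the observe -> theorize -> act -> refine cycle.
    The agent never sees the rule, only noisy outcomes, yet its
    'theory' (running payoff estimates) converges on useful knowledge.
    Action labels and payoffs are hypothetical."""
    theory = {a: 0.0 for a in actions}  # implicit theory: estimated payoff
    counts = {a: 0 for a in actions}
    for _ in range(rounds):
        if rng.random() < 0.2:            # occasional experimentation
            action = rng.choice(actions)
        else:                             # act on the current theory
            action = max(theory, key=theory.get)
        # noisy, confounded feedback from the opaque system
        reward = (1.0 if action == true_best else 0.0) + rng.gauss(0, 0.5)
        counts[action] += 1
        theory[action] += (reward - theory[action]) / counts[action]
    return theory

rng = random.Random(7)
theory = folk_theorize(
    ["hook_first_3s", "long_intro", "no_hashtags"], "hook_first_3s", 500, rng
)
```

After enough rounds, the agent’s highest estimate settles on the rewarded behavior even though no round ever reveals the rule itself: a crude analogue of folk theorization consolidating out of confounded feedback.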
The Empirical Pattern Across Platform Domains
The developmental sequence from implicit pattern recognition through folk theorization to strategic action appears consistently across platform contexts.
eBay sellers develop strategies to manage algorithmic evaluation systems through experience, navigating power asymmetries created through visibility gaps, implicit coalitions, and unilateral rule changes (Curchod et al., 2020). Freelancers develop “anticipatory compliance practices,” learned behaviors to avoid algorithmic punishment, through community discussion and personal experimentation: avoiding trigger words, undervaluing work for ratings, curtailing outreach, managing emotional expression (Bucher et al., 2021). A comparative ethnography of Uber/Lyft and Upwork reveals that workers develop sophisticated resistance tactics through trial and error: gaming rating algorithms, manipulating customer perceptions, and exploiting customers’ lack of knowledge about how the platforms organize work (Cameron & Rahman, 2022). Workers learn to navigate constrained choice architecture through lived experience, developing implicit knowledge about how to work within algorithmic systems (Cameron, 2024). In every case, the starting point is experiential pattern recognition, and the endpoint is strategic behavioral adaptation.
The most direct empirical test of implicit acquisition versus formal training comes from research on food delivery riders in China. Platform training correlates negatively with rider incomes, while learning by doing correlates positively (Zheng et al., 2024). Formal platform training does not merely fail to help; it actively harms. It predicts worse outcomes. A large-scale quantitative study of over one million delivery orders confirms the pattern: newcomers heavily rely on algorithmic recommendations, but with experience, workers develop personalized strategies and increasingly deviate from platform suggestions (Jiang & Sinchaisri, 2025). Implicit acquisition produces knowledge that eventually supersedes the platform’s own guidance.
Musicians demonstrate the same developmental sequence. They learn to optimize music for Spotify through three strategy types acquired through practice: sonic optimization (creating “playlist-friendly” sounds), data optimization (manipulating metadata), and infrastructural optimization (gaming platform systems) (Morris, 2020). Musicians discover surprising algorithmic categorizations, such as classification by nationality rather than genre, through exploration, and they change professional strategies based on experientially acquired algorithmic insights (Prey & Esteve Del Valle, 2024). No Spotify training manual teaches these strategies. Musicians develop them through the accumulation of platform experience, shared informally through community networks.
Prompt engineering for AI systems represents the newest domain of implicit acquisition. Users learn to calibrate communication with large language models through trial and error, adjusting wording based on AI responses, because a slight alteration in phrasing can determine whether the system misinterprets an instruction or exceeds expectations (Federiakin et al., 2024). Students develop intuitive prompting strategies through interaction with ChatGPT before any formal instruction, and higher-quality prompt engineering skills predict better output quality (Knoth et al., 2024). The developmental trajectory from unconscious calibration to deliberate strategic prompting follows the same pattern observed across other platform domains.
Dating app users develop concrete behavioral strategies on Tinder, where no information exists about how the algorithm works, including creating new profiles for anticipated ranking boosts and strategic swiping patterns based on implicit theories formed through trial and error (Nader & Lee, 2022). Facebook users develop an implicit understanding through years of unexamined platform use that becomes visible only when explicitly confronted with the system’s outputs (Büchi et al., 2023). Users develop mental models of algorithmic news recommenders in which knowledge, emotions, and everyday interactions interconnect; users with greater understanding acknowledge the roles of companies and developers, suggesting that broader use leads to more sophisticated implicit learning (Martens et al., 2023).
A transparency-creepiness trade-off operates in algorithmic understanding, complicating simple prescriptions for explicit instruction (Ngo & Krämer, 2022). Users desire transparency but find detailed algorithmic explanations unsettling, preferring hidden explanations over explicit ones. They have developed implicit, practice-based understandings they are comfortable with. Explicit instruction can create discomfort rather than empowerment. The implicitness of acquisition is not merely a structural consequence of platform opacity. It is, for many users, the preferred mode of engagement.
The Question of Stratification
The implications of implicit acquisition are not neutral. The developmental trajectory from pattern recognition through folk theorization to strategic action is available to all platform users in principle. In practice, differential access to the conditions enabling that trajectory produces systematic variance in where users end up along it.
An immediate challenge to this claim: if a PhD student learns TikTok faster than a high school dropout, is that “implicit acquisition” producing the stratification, or is it general cognitive ability? If wealthy users develop better platform strategies than poor users, is the mechanism implicit acquisition or simply the resource advantages that correlate with wealth?
The challenge is fair but answerable. Approximately sixty percent of Norwegian internet users demonstrate low or no algorithmic awareness (Gran et al., 2021). Algorithmic awareness functions as a meta-skill unevenly distributed across demographic groups, with those engaging in more intensive and diverse platform use developing greater awareness. Socioeconomic background shapes both the quantity and quality of platform use, creating uneven knowledge distribution (Cotter & Reisdorf, 2020). Education alone does not predict algorithmic knowledge; practice, measured by breadth and frequency of platform use, matters independently. Algorithms are “experience technologies,” more easily understood through use than through explanation.
The independent contribution of practice to platform competency is the key empirical finding. If stratification were purely a function of general intelligence or socioeconomic capital, then education level and income would fully predict algorithmic literacy. They do not. Practice makes an independent contribution because implicit acquisition is a practice-dependent process, not merely a cognition-dependent one. Socioeconomic capital matters, but it matters primarily because it creates differential access to the sustained, diverse platform engagement through which implicit acquisition operates. The mechanism producing stratification is the acquisition process itself, operating on unequal inputs.
Students reveal sophisticated implicit knowledge of algorithmic operations developed through everyday use rather than instruction (Koenig, 2020). Their awareness remains unarticulated until they are prompted to express it, at which point they demonstrate substantial algorithmic literacy that predates any educational intervention. Users demonstrate functional mastery through habitual practice but lack declarative knowledge of the mechanisms producing their results (Bakke, 2020). Digital literacy practices are always situated within specific discourse communities, and skills-based approaches alone are insufficient (Bacalja et al., 2022). Users interact with algorithmically curated content daily without awareness that they are developing platform communication competencies (Hobbs, 2020). The most comprehensive review of algorithmic literacy research confirms that most research focuses on cognitive dimensions, awareness, and knowledge, while neglecting how literacy actually develops (Oeldorf-Hirsch & Neubaum, 2023). Existing measurement instruments capture declarative knowledge but not practical, embodied competence (Dogruel et al., 2022).
Three categories of folk pedagogies address data and algorithmic literacy: formal school-based approaches, personal self-regulation strategies, and grassroots folk learning (Pangrazio & Sefton-Green, 2020). The folk category maps directly onto implicit acquisition. But folk pedagogies are unevenly distributed. Communities with existing technical knowledge develop richer folk pedagogies. Communities without that knowledge develop thinner ones. The implicit acquisition process does not merely reflect existing inequality. It amplifies it because the very mechanism of acquisition, practice-based learning in communities of use, advantages communities that already possess the resources enabling sustained, reflective practice.
But the stratification implicit acquisition produces is not identical to the standard digital divide, and conflating the two misses what ALC specifies. The digital divide describes differential access to technology. Implicit acquisition describes the differential development of communication competency among users with equivalent access. Two users with the same device, the same internet connection, the same platform account, and the same number of hours spent on that platform can develop radically different levels of ALC fluency because implicit acquisition does not guarantee uniform outcomes from uniform exposure. The variance arises from the acquisition process itself: from differences in the quality of attention during platform use, from differences in exposure to community folk pedagogies, from differences in the cognitive resources available for pattern extraction from confounded feedback, and from differences in the motivation to persist through the frustration of learning a system that refuses to explain itself. These are not access differences. They are acquisition differences. And they produce the systematic coordination variance that the ALC framework exists to explain.
The anticipatory compliance documented in gig work illustrates a further dimension of the stratification dynamic. Workers who implicitly acquire platform communication competency do not simply become more effective coordinators. The strategies workers develop through experience (avoiding trigger words, undervaluing work for ratings, managing emotional expression) reinforce algorithmic power rather than challenging it (Bucher et al., 2021). Implicit acquisition produces competency within the system’s terms. Users at lower developmental stages learn to comply more effectively. Users at higher developmental stages learn to strategize, resist, and sometimes exploit. The stratification is not merely a matter of more or less competency. It is a matter of qualitatively different relationships to the algorithmic system, ranging from reactive compliance to proactive manipulation, with most users clustered toward the compliance end because the developmental trajectory from pattern recognition to strategic agency takes sustained practice that most users’ circumstances do not support.
What Implicit Acquisition Means for ALC
Implicit acquisition is not an incidental feature of platform communication. It is a defining property. The absence of formal instruction is not a gap to be filled by better onboarding or more transparent documentation. It is a structural characteristic of a communication system where the rules are opaque, shifting, probabilistically applied, and deliberately withheld.
The first three ALC properties describe what platform communication is: intent must be specified in machine-readable formats, meaning is determined unilaterally by algorithms, and algorithms orchestrate coordination among distributed parties. Implicit acquisition describes how users enter this system. They learn by doing. They learn through a developmental trajectory that begins with unconscious pattern extraction, consolidates through folk theorization, and, for those with sufficient practice and community resources, matures into strategic agency. They learn unevenly, not because some users are brighter than others, but because the conditions enabling the developmental trajectory are unequally distributed.
Every historical literacy transition produced populations divided between those who acquired the new competency and those who did not. Writing created literate and illiterate classes. Print created information-rich and information-poor populations. Digital technology created connected and disconnected communities. Platform communication is producing its own division. But unlike prior transitions, this division does not fall along the line of access. It falls along the line of acquisition, between users who develop the implicit competencies for algorithmic coordination and users who, despite equivalent access, do not. The division compounds over time because higher fluency produces better coordination outcomes, which produce more platform engagement, which provides more practice, which drives further fluency development. The compounding dynamic produces the systematic variance in coordination outcomes that the final property of ALC, stratified fluency, explains.
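The compounding claim can be made concrete with a deliberately minimal model. The growth rate and linear feedback below are illustrative assumptions, not estimates of any real platform: the point is that under any positive feedback from fluency to practice, a small initial difference between two users with identical access widens over time rather than washing out.

```python
def fluency_trajectory(initial_fluency, periods, gain=0.1):
    """Toy feedback loop: fluency -> coordination outcomes -> engagement
    -> practice -> back into fluency. The gain and the linear form of
    each link are assumptions chosen for transparency, not realism."""
    f = initial_fluency
    history = [f]
    for _ in range(periods):
        outcomes = f          # coordination outcomes track current fluency
        practice = outcomes   # better outcomes invite more engagement
        f += gain * practice  # practice feeds back into fluency
        history.append(f)
    return history

# Two users with identical access and a modest head start for one of them.
user_a = fluency_trajectory(0.50, 20)
user_b = fluency_trajectory(0.55, 20)
initial_gap = user_b[0] - user_a[0]
final_gap = user_b[-1] - user_a[-1]
```

Under these assumptions the absolute gap grows more than sixfold over twenty periods. The ratio between the two users is preserved, but the distance between their outcomes compounds, which is the stratification dynamic the final property addresses.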
That is the subject of the final essay in this series.
References
Bacalja, A., Beavis, C., & O’Brien, A. (2022). Shifting landscapes of digital literacy. The Australian Journal of Language and Literacy, 45, 253–271.
Bakke, A. (2020). Everyday Googling: Results of an observational study and applications for teaching algorithmic literacy. Computers and Composition, 57, Article 102577.
Bishop, S. (2020). Algorithmic experts: Selling algorithmic lore on YouTube. Social Media + Society, 6(1), 1–11.
Büchi, M., Fosch-Villaronga, E., Lutz, C., Tamò-Larrieux, A., & Velidi, S. (2023). Making sense of algorithmic profiling: User perceptions on Facebook. Information, Communication & Society, 26(4), 809–825.
Bucher, E. L., Schou, P. K., & Waldkirch, M. (2021). Pacifying the algorithm: Anticipatory compliance in the face of algorithmic management in the gig economy. Organization, 28(1), 44–67.
Cameron, L. D. (2022). “Making out” while driving: Relational and efficiency games in the gig economy. Organization Science, 33(1), 231–252.
Cameron, L. D. (2024). The making of the “good bad” job: How algorithmic management manufactures consent through constant and confined choices. Administrative Science Quarterly, 69(2), 458–514.
Cameron, L. D., & Rahman, H. (2022). Expanding the locus of resistance: Understanding the co-constitution of control and resistance in the gig economy. Organization Science, 33(1), 38–58.
Christiansen, M. H. (2019). Implicit statistical learning: A tale of two literatures. Topics in Cognitive Science, 11(3), 468–481.
Conway, C. M. (2020). How does the brain learn environmental structure? Ten core principles for understanding the neurocognitive mechanisms of statistical learning. Neuroscience & Biobehavioral Reviews, 112, 279–299.
Cotter, K. (2024). Practical knowledge of algorithms: The case of BreadTube. New Media & Society, 26(5), 2668–2689.
Cotter, K., & Reisdorf, B. C. (2020). Algorithmic knowledge gaps: A new dimension of (digital) inequality. International Journal of Communication, 14, 745–765.
Curchod, C., Patriotta, G., Cohen, L., & Neysen, N. (2020). Working for an algorithm: Power asymmetries and agency in online work settings. Administrative Science Quarterly, 65(3), 644–676.
Cutolo, D., & Kenney, M. (2021). Platform-dependent entrepreneurs: Power asymmetries, risks, and strategies in the platform economy. Academy of Management Perspectives, 35(4), 584–605.
DeVito, M. A. (2021). Adaptive folk theorization as a path to algorithmic literacy on changing platforms. Proceedings of the ACM on Human-Computer Interaction, 5(CSCW2), Article 339, 1–38.
DeVito, M. A. (2022). How transfeminine TikTok creators navigate the algorithmic trap of visibility via folk theorization. Proceedings of the ACM on Human-Computer Interaction, 6(CSCW2), Article 380, 1–31.
Dogruel, L., Masur, P., & Joeckel, S. (2022). Development and validation of an algorithm literacy scale for internet users. Communication Methods and Measures, 16(2), 115–133.
Federiakin, D., Molerov, D., Zlatkin-Troitschanskaia, O., & Maur, A. (2024). Prompt engineering as a new 21st-century skill. Frontiers in Education, 9, Article 1366434.
Frost, R., Armstrong, B. C., & Christiansen, M. H. (2019). Statistical learning research: A critical review and possible new directions. Psychological Bulletin, 145(12), 1128–1153.
Gran, A.-B., Booth, P., & Bucher, T. (2021). To be or not to be algorithm aware: A question of a new digital divide? Information, Communication & Society, 24(12), 1779–1796.
Hobbs, R. (2020). Propaganda in an age of algorithmic personalization: Expanding literacy research and practice. Reading Research Quarterly, 55(3), 521–533.
Jiang, S., & Sinchaisri, W. P. (2025). Learning on the go: Understanding how gig economy workers learn with recommendation algorithms. SSRN Working Paper No. 5376462.
Kadolkar, P., Keegan, A., Meijerink, J., & Bondarouk, T. (2025). Algorithmic management in the gig economy: A systematic review and research integration. Journal of Organizational Behavior, 46, 1–30.
Karizat, N., Delmonaco, D., Eslami, M., & Andalibi, N. (2021). Algorithmic folk theories and identity: How TikTok users co-produce knowledge of identity and engage in algorithmic resistance. Proceedings of the ACM on Human-Computer Interaction, 5(CSCW2), Article 305, 1–44.
Knoth, N., Tolzin, A., Janson, A., & Leimeister, J. M. (2024). AI literacy and its implications for prompt engineering strategies. Computers and Education Open, 6, Article 100170.
Koenig, A. (2020). The algorithms know me, and I know them: Using student journals to uncover algorithmic literacy awareness. Computers and Composition, 58, Article 102611.
MacDonald, T. W. L. (2023). “How it actually works”: Algorithmic lore videos as market devices. New Media & Society, 25(7), 1601–1619.
Martens, M., De Wolf, R., Berendt, B., & De Marez, L. (2023). Decoding algorithms: Exploring end-users’ mental models of the inner workings of algorithmic news recommenders. Digital Journalism, 11(1), 203–225.
Morris, J. W. (2020). Music platforms and the optimization of culture. Social Media + Society, 6(3), 1–12.
Nader, K., & Lee, M. K. (2022). Folk theories and user strategies on dating apps. In iConference 2022: Information for a Better World (LNCS 13192, pp. 445–458). Springer.
Ngo, T., & Krämer, N. (2022). Exploring folk theories of algorithmic news curation for explainable design. Behaviour & Information Technology, 41(15), 3346–3359.
Obreja, D. M. (2024). When stories turn institutional: How TikTok users legitimate the algorithmic sensemaking. Social Media + Society, 10(1).
Oeldorf-Hirsch, A., & Neubaum, G. (2023). What do we know about algorithmic literacy? The status quo and a research agenda for a growing field. New Media & Society, 27(2), 681–701.
Pangrazio, L., & Sefton-Green, J. (2020). The social utility of ‘data literacy.’ Learning, Media and Technology, 45(2), 208–220.
Polanyi, M. (1966). The tacit dimension. University of Chicago Press.
Prey, R., & Esteve Del Valle, M. (2024). The algorithmic network imaginary: How music artists understand and experience their algorithmically constructed networks. The Information Society, 40(1), 1–15.
Schellewald, A. (2022). Theorizing “stories about algorithms” as a mechanism in the formation and maintenance of algorithmic imaginaries. Social Media + Society, 8(1), 1–10.
Siles, I., Segura-Castillo, A., Solís, R., & Sancho, M. (2020). Folk theories of algorithmic recommendations on Spotify: Enacting data assemblages in the global South. Big Data & Society, 7(1), 1–15.
Swart, J., & Broersma, M. (2023). Learning in and about a filtered universe: Young people’s awareness and control of algorithms in social media. Learning, Media and Technology, 49(3), 309–325.
Vetter, P., Bola, Ł., Reich, L., Bennett, M., Muckli, L., & Amedi, A. (2022). Unlocking adults’ implicit statistical learning by cognitive depletion. Proceedings of the National Academy of Sciences, 119(2), e2026011119.
Weick, K. E. (1995). Sensemaking in organizations. Sage.
Williams, J. N. (2020). The neuroscience of implicit learning. Language Learning, 70(S2), 248–307.
Zheng, Q., Zhan, J., & Xu, X. (2024). Platform training and learning by doing and gig workers’ incomes: Empirical evidence from China’s food delivery riders. SAGE Open, 14(4).

Hey Roger - your whole viewpoint & perspective intrigue me…
I will need to read and re-read this post and some of your previous posts - maybe more than once!
The cognitive and learning processes you describe seem quite familiar to me (from a VERY different perspective!).
Except I view these differently and use different terms to describe what I think are the same or similar processes…
Maybe this won't make sense to you yet - as your way of describing and labelling these cognitive and learning processes is TOTALLY new to me!
My doctoral research started with understanding "matching" processes for teaching complex concepts (more than 20 years ago) - THAT seems to be the ONLY label we have in common!
I think (from this first reading) I would probably use terms/labels like "transfer" and "generalisation" to describe processes similar to the ones you outline here.
My doctoral research had the goal of teaching students to "learn how to learn" - and I was able to make visible the cognitive processes for answering questions about a text that students read.
The doctoral research succeeded in Grade 5 in Australia 🇦🇺, and teachers who used these materials reported anecdotally that they "taught them how to teach more effectively"! My statistically significant results were then replicated in schools - without me there - when teachers taught the materials in ways similar (now called "dosage") to what happened in my research…
And I generalised these materials, extending the strategy instruction and teaching examples from Grade 2 to Grade 6 in Australia 🇦🇺… My postgrad students in Special Education taught them successfully with students with special needs in Grades 7-10!
So - from a VERY different background and perspective - I am NOW reading your work and trying to make it all fit with, and extend, my own knowledge.
Sorry, in advance, for this long reply - and I hope this makes some sense to you!
Best always Gail 👍👍😊