Welcome to Implicit Acquisition
The missing structural question about algorithms, and why it matters more than the ones everybody is asking
The variance puzzle
An analysis of administrative data on over one million Uber drivers found that experienced drivers earn a 14% hourly premium over newcomers, a premium attributable to learning where to drive, when to drive, and how to strategically accept and cancel trips (Cook, Diamond, Hall, List, & Oyer, 2021). The premium does not come from better cars, longer hours, or algorithmic favoritism. It comes from something the drivers learned to do inside the system. The same study found a roughly 7% gender earnings gap, entirely explained by three behavioral differences: experience on the platform, preferences for where to drive, and driving speed. Demographic discrimination explained none of it.
The pattern repeats across every platform economy researchers have studied. Delivery workers who strategically deviate from algorithmic recommendations outperform those who passively follow them. These workers develop personalized strategies that diverge more from algorithmic defaults as they gain experience (Jiang & Sinchaisri, 2025). Content creators on TikTok, YouTube, and Instagram form theories of varying sophistication about how distribution algorithms operate, ranging from simple awareness to causal fragments to integrated structural models, and the sophistication of a creator's theory predicts the effectiveness of their strategic behavior (DeVito, 2021). Freelancers on opaque evaluation platforms diverge along different paths: some experiment with improvement strategies, others restrict their activity, and still others leave entirely, depending on performance level, platform dependence, and whether the worker has faced algorithmic setbacks (Rahman, 2021). A seven-year ethnography of ride-hailing drivers documents workers developing distinctly different tactical repertoires to navigate identical algorithmic systems. Some use engagement tactics and follow algorithmic nudges, while others manipulate algorithmic inputs through deviance tactics. Both types produce a sense of skillful agency within structurally confined choices (Cameron, 2024).
The convergent finding is a variance puzzle. Identical systems produce non-identical outcomes, and the divergence follows patterns that look less like random noise and more like compounding advantage. Most public conversation about algorithms asks whether they are fair, biased, or will replace human workers. Those are real questions, but they skip past a prior one: why do people facing the same algorithm end up in such different places?
That prior question is what this Substack is about.
The wrong question about algorithms
The dominant conversation about AI and algorithms asks a tool question: how do we use these tools well, how do we prevent them from causing harm, how do we regulate them? The framing treats algorithms as instruments humans wield, and the policy challenge as ensuring that those instruments serve human purposes.
The tool framing is not wrong so much as incomplete, and its incompleteness produces a specific blind spot.
Consider an analogy. Before widespread literacy, the dominant social technology for coordinating collective action among strangers was speech. Oral societies developed sophisticated coordination mechanisms: oral contracts enforced by reputation, mnemonic devices for preserving collective knowledge, and ritualized speech acts that bound communities together. The transition to literacy did not simply give people a new instrument for doing what they already did with speech. It restructured the institutional foundations of coordination. People who learned to read and write gained access to contracts, bureaucracies, legal systems, and financial instruments that enabled coordination among strangers who never met, across distances that oral communication could not span. Walter Ong, the foundational scholar of orality and literacy, documented this transformation in detail: literacy restructured thought itself, enabling abstract, sequential, context-independent reasoning that oral cultures organized differently (Ong, 1982).
I want to be precise about the nature of this claim, because the oracy-to-literacy transition has been the subject of sustained scholarly debate. The strongest version of the argument, sometimes called the “Great Divide” thesis, holds that literacy directly causes cognitive transformation. That version has been credibly challenged. Scribner and Cole (1981), studying Vai literacy in Liberia, found little evidence that literacy per se produces major cognitive changes independent of schooling and urbanization. Street (1984) argued that treating literacy as a neutral cognitive technology ignores the social practices and power relations within which reading and writing are embedded. These critiques correctly identify the limits of a purely cognitive account.
The version of the argument that survives the critique is the institutional and structural one. Writing did not emerge as a literary technology but as a coordination technology. The classic archaeological account locates its origin in Near Eastern clay tokens, abstract representations of agricultural commodities that preceded written language by millennia and enabled accounting and exchange among parties who did not share a physical space (Schmandt-Besserat, 1992). Writing systems developed from these tokens into cuneiform, and cuneiform enabled the legal codes, administrative records, and commercial contracts that organized Mesopotamian city-states. The institutional expression of this restructuring runs from the earliest written legal codes through double-entry bookkeeping, joint-stock companies, and the Bretton Woods system of international monetary coordination. Each innovation extended the radius of coordination among strangers by leveraging the properties of written communication, whether or not it also transformed individual cognition. Kalantzis and Cope (2025), writing as co-founders of the New London Group’s multiliteracies framework, recently extended this trajectory into digital and algorithmic contexts, noting that social media environments operate increasingly through algorithmic curation rather than user navigation. The transition from oral to literate coordination is not an artifact of Ong’s strongest claims. It is a documented structural transformation in how strangers coordinate, and it provides the template for understanding what algorithms do to coordination now.
The third party in the room
Every framework currently used to study how people communicate through digital systems assumes a two-party structure. Computer-Mediated Communication examines how humans communicate with one another through technology (Walther, 1996). Human-Machine Communication examines how humans talk to machines (Guzman & Lewis, 2020). AI-Mediated Communication examines how AI modifies messages between humans (Hancock, Naaman, & Levy, 2020). Each tradition captures something real about a genuinely important phenomenon. Each assumes the fundamental unit of analysis is a dyad: two parties, exchanging messages, through a channel.
The assumption shapes what these frameworks can see. Hancock, Naaman, and Levy (2020) explicitly excluded coordination algorithms, recommendation engines, and newsfeeds from the framework’s scope in their foundational definition of AI-Mediated Communication. AI-MC concerns cases where AI operates “on behalf of” one communicator to modify messages sent to another. Platform coordination, where an algorithm mediates between two parties while pursuing its own objectives, falls outside the definition by design. The exclusion is theoretically principled within a dyadic ontology and marks the exact gap that needs to be filled.
Platform coordination is triadic by structure. When a driver accepts a ride on Uber, the driver interacts with an algorithm that monitors behavior, measures it against hidden models, and makes decisions that affect the passenger’s experience. The passenger also interacts with an algorithm, which uses ratings, location, and history to select a driver. The driver and the passenger coordinate. However, they do so through an intermediary that has its own goals, interprets their behavior through its own models, and shapes their interaction according to its own logic. This intermediary does not simply pass along messages. It interprets, transforms, and allocates. It has preferences and optimizes for outcomes that may differ from those of the parties involved.
Georg Simmel recognized over a century ago that adding a third party to a dyad transforms the structure of the relationship itself, creating possibilities for mediation, coalition, and strategic manipulation that two-party relationships cannot generate (Simmel, 1950). The philosopher C. S. Peirce made a parallel argument in formal logic: triadic relations are irreducible to combinations of dyadic ones (Peirce, 1931). You cannot build a triangle from lines. You need a relationship among all three simultaneously.
These are not merely philosophical observations. Operations management scholars have applied Simmel’s triadic logic to service contexts in which a buyer contracts with a supplier to deliver services to the buyer’s customer, thereby creating the same structural dynamics of coalition, opportunism, and mediated quality control as platform coordination (Wynstra, Spring, & Schoenherr, 2015). Recent organizational research confirms the pattern empirically: studies of the Swedish gig economy document all three actors in the worker-platform-client triangle forming bilateral coalitions against excluded third parties, with what the authors call “opportunistic agency” emerging from the triadic structure itself (Öborn, MacKenzie, Örnebring, & Van Couvering, 2024). The triadic structure is observable in the field.
If platform coordination is structurally triadic, then the competency required to navigate it must address all three parties. A skill for using a tool addresses a dyad of user and instrument. A media literacy for interpreting algorithmic outputs addresses a dyad of reader and system. Neither captures the challenge of coordinating effectively with a stranger through an intermediary that pursues its own objectives and mediates every interaction.
Algorithmacy
I call the missing competency algorithmacy: the communication competency required to navigate triadic algorithmic coordination structures.
The word follows the pattern of literacy and numeracy. Literacy names the competency that the communication system of reading and writing both makes possible and demands. Algorithmacy names the competency that algorithmic coordination systems both make possible and demand. The parallel is structural. Literacy did not merely add a skill to an oral society; it reorganized the institutional foundations of coordination. Algorithmacy does not merely add a skill to a literate society; it reorganizes the terms on which coordination among strangers occurs.
The sequence is historical: oracy, literacy, algorithmacy. Each names a communication competency that emerged with a new coordination medium and restructured how strangers coordinate collective action.
Oracy refers to the competency of coordinating through speech. Oral societies coordinated through face-to-face interaction, reputation, ritual, and mnemonic devices. The coordination radius was limited by the reach of the human voice and the reliability of human memory.
Literacy refers to the competency of coordinating through written language. Literate societies developed contracts, bureaucracies, legal systems, and financial instruments that enabled coordination among strangers who never met, across distances that oral communication could not span. The very concept of a binding agreement between parties who could not see each other required a literate infrastructure of witnesses, seals, and archives.
Algorithmacy names the competency for coordinating through algorithmic intermediation. Algorithmic systems coordinate action among strangers who never communicate directly, at speeds and scales that written communication cannot match. The platform economy is the most visible expression of this coordination mode, but it extends well beyond gig work. Every recommendation engine, every matching algorithm, every content distribution system, every dynamic pricing model operates as a triadic coordination structure that interprets, transforms, and distributes human behavior through computational models that participants do not fully control.
Organizational theorists have recently proposed co-optation as a fourth coordination mechanism alongside markets, hierarchies, and networks (Stark & Vanden Broeck, 2024). The concept extends Selznick’s (1949) classic analysis of organizational absorption: platforms co-opt autonomous actors, enrolling providers and users in the practices of algorithmic management without delegating managerial authority. Workers in hierarchies follow commands. Traders in markets honor contracts. Partners in networks reciprocate trust. Workers on platforms are co-opted, enrolled in coordination structures they did not design, governed by rules they cannot fully observe, producing value that accrues to an intermediary they cannot negotiate with as equals. Co-optation explains how workers get enrolled into algorithmic coordination. Algorithmacy explains what develops after enrollment, and why that development varies so dramatically across workers facing identical systems.
Algorithmacy is not algorithmic literacy.
The distinction from algorithmic literacy requires precision, because the neighboring concept is well-established and growing rapidly. A recent integrative review identifies 169 publications addressing algorithmic literacy across communication, information science, and media studies (Gagrčin, Naab, & Grub, 2026). Validated measurement scales exist (Dogruel, Masur, & Joeckel, 2022). Research agendas have been published (Oeldorf-Hirsch & Neubaum, 2025). The field is productive and serious, and algorithmacy does not seek to replace it.
Algorithmic literacy, as typically defined, refers to awareness that algorithms exist and shape one’s information environment, knowledge of how algorithms work, the ability to evaluate algorithmic decisions, and the skills to cope with or influence algorithmic operations (Dogruel, Masur, & Joeckel, 2022). The construct is rooted in media literacy traditions. It concerns the individual’s relationship to an algorithmic system, which makes it, in the terms of this essay, a dyadic construct: one person, one system.
Algorithmacy differs on three axes. First, it is a communication competency for coordination, not a media literacy for consumption. The relevant question shifts from “do you understand how this algorithm works?” to “can you coordinate effectively with a stranger through an algorithmic intermediary that neither of you fully controls?” Second, algorithmacy theorizes a triadic structure in which the coordinating system has its own objectives, rather than a dyadic relationship between user and platform. Third, algorithmacy is situated in organizational and platform coordination contexts where the stakes are economic, not solely informational.
A recent finding sharpens the distinction. Chung (2025), studying young adults’ algorithmic knowledge, documented an awareness-action gap: higher algorithmic knowledge correlated with greater concern about misinformation but paradoxically with less corrective action. Knowing more about algorithms produced cynicism, not competence. The gap between awareness and effective action is precisely what algorithmacy is designed to address. A person can score perfectly on an algorithmic literacy scale and still fail to coordinate effectively through algorithmic intermediation, just as a person can define every term in a foreign language dictionary and still fail to hold a conversation.
Tarafdar, Page, and Marabelli (2023) offer the strongest alternative framing within organizational theory. They treat algorithms as “role senders” in organizational role theory, modeling a dyadic relationship in which algorithms send role expectations and humans receive them. The analysis is insightful, and their own findings reveal a dyadic limitation: they identify what they call “broken loop learning,” in which the algorithm records all human task actions but remains ignorant of the human’s cognitive reactions, while the human observes algorithmic outputs but cannot access the models generating them. Each party possesses information that the other cannot see. This information asymmetry is a structural property of a three-party arrangement, not an anomaly within a two-party one. In a dyad, both parties can, in principle, access the same information. In a triad where the intermediary controls information flow, asymmetry becomes a feature of the architecture.
Why the gap compounds
Literacy scholars have documented a consistent pattern across communication transitions: when a new medium restructures coordination, competency in that medium is distributed unevenly, producing compounding advantages. Stanovich (1986) named this the Matthew Effect in reading: children who acquire reading skills early read more, gain vocabulary and comprehension advantages, and compound those gains over time. Children who struggle with early acquisition avoid reading, fall further behind, and develop generalized deficits that extend well beyond reading itself. The mechanism is a positive feedback loop between competency and the information environment it opens.
The variance puzzle in platform work appears to be the same dynamic, relocated to a new medium. Small initial differences in navigating an algorithmic system yield better feedback, which accelerates learning and produces even better feedback. The process is multiplicative rather than additive, and multiplicative processes generate power-law distributions rather than normal ones. A few participants accumulate large advantages, while the majority cluster near the median or below it. Platforms with stronger recommendation algorithms exhibit stronger power-law earnings distributions and lower median earnings, directly linking the strength of algorithmic mediation to the magnitude of inequality (Strauss, Yang, & Mazzucato, 2025).
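The additive-versus-multiplicative distinction can be made concrete with a toy simulation. Everything below is an illustrative assumption, not an estimate from the studies cited above: each worker receives the same stream of small random per-period shocks, and the only difference is whether those shocks are summed or compounded.

```python
import random
import statistics

random.seed(0)

def simulate(n_workers=10_000, n_periods=100):
    """Toy model: per-period shocks with a slight positive drift.
    Additive accumulation sums the shocks; multiplicative accumulation
    compounds them, so early luck amplifies every later gain."""
    additive, multiplicative = [], []
    for _ in range(n_workers):
        add_total, mult_total = 0.0, 1.0
        for _ in range(n_periods):
            shock = random.gauss(0.01, 0.05)  # hypothetical per-period edge
            add_total += shock
            mult_total *= 1 + shock
        additive.append(add_total)
        multiplicative.append(mult_total)
    return additive, multiplicative

add, mult = simulate()

# Additive outcomes are roughly symmetric: mean and median nearly coincide.
# Multiplicative outcomes are right-skewed: the mean sits well above the
# median, pulled up by a few large winners.
print(statistics.mean(add) / statistics.median(add))
print(statistics.mean(mult) / statistics.median(mult))
```

The second ratio comes out noticeably above one while the first stays near one, which is the qualitative signature of a heavy-tailed distribution: most workers cluster at or below the median while a minority compound early advantages into large ones.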
The cognitive mechanism through which workers interpret algorithmic systems is well established. Möhlmann, Salge, and Marabelli (2023) adapted Weick’s sensemaking framework to algorithmic contexts, identifying how workers attend to algorithmic cues through focused enactment, interpret those cues through pattern discovery, and retain successful interpretations in personal memory and peer networks. This framework, algorithm sensemaking, explains the cognitive process through which workers build working models of opaque systems and documents that the process varies across workers. It does not explain why.
Formal instruction does not close the gap. Platform training correlates negatively with gig worker income, while learning by doing correlates positively (Zheng, Zhan, & Xu, 2024). The finding is counterintuitive but consistent with a broader principle in learning science: competency in opaque, dynamic environments develops through structured struggle with the environment itself, not through instruction about it. Algorithmic environments are what cognitive scientists call “wicked” learning environments, where feedback is delayed, rules change without notification, and the relationship between actions and outcomes is opaque (Hogarth, Lejarraga, & Soyer, 2015). Formal training calibrated to simplified models fails in environments where the simplification is the problem.
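A toy model can illustrate why learning by doing beats fixed instruction in such environments. This sketch is a stylized assumption, not a result from any of the cited papers: a two-option environment whose payoffs silently swap at intervals, navigated by one agent that follows its initial training forever and one that keeps updating from feedback.

```python
import random

random.seed(1)

def wicked_bandit(n_rounds=5000, switch_every=500, epsilon=0.1, step=0.1):
    """Toy 'wicked' environment: the better of two options silently
    swaps every `switch_every` rounds, mimicking a platform whose
    rules change without notification."""
    payoffs = [0.7, 0.3]   # success probabilities, swapped at each switch

    trained_reward = 0     # follows the option instruction named best at t=0
    est = [0.5, 0.5]       # adaptive agent's recency-weighted estimates
    adaptive_reward = 0

    for t in range(n_rounds):
        if t > 0 and t % switch_every == 0:
            payoffs.reverse()  # the rules change, unannounced

        # 'Formally trained' agent never revises its model.
        trained_reward += random.random() < payoffs[0]

        # 'Learning by doing': mostly exploit current estimates,
        # occasionally explore, always update from the outcome.
        arm = random.randrange(2) if random.random() < epsilon else est.index(max(est))
        win = random.random() < payoffs[arm]
        adaptive_reward += win
        est[arm] += step * (win - est[arm])

    return trained_reward, adaptive_reward

trained, adaptive = wicked_bandit()
```

Over the full run the fixed-policy agent averages the environment's baseline, because its training is right exactly half the time, while the adapting agent recovers within a few dozen rounds of each unannounced switch. The point of the sketch is structural: when the simplification behind the training is the problem, only structured struggle with the environment itself tracks the moving target.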
Workers develop algorithmacy not from manuals but from the system itself, through a process of experimentation, feedback interpretation, and strategic adjustment that resembles the implicit acquisition of language far more than the explicit instruction of a technical skill. That process of implicit acquisition is the namesake of this Substack.
The structural objection and the structural response
A serious critique of any competency-based explanation of platform outcomes argues that structural factors determine divergence, not individual competency. Capital, demographics, geography, race, and market position shape outcomes in ways that no amount of individual skill can overcome. Schor, Attwood-Charles, Cansoy, Ladegaard, and Wengronowitz (2020), in an interview study of 112 workers across seven platforms, found that economic dependence on platform income explains more variance in satisfaction and outcomes than any individual-level factor. Workers who rely on platform income for basic expenses experience lower satisfaction, less autonomy, and lower effective earnings than supplemental earners on the same platforms. Critical scholars have documented that race, geography, and labor market informality structure platform outcomes in ways that individual competency alone cannot remedy (van Doorn & Badger, 2020).
Algorithmacy does not deny structural determination. It identifies the specific competency through which individuals navigate structural constraints, or fail to. The relationship parallels the historical case. Literacy did not eliminate class inequality. Literate societies remained stratified by wealth, geography, race, and political power. What literacy did was create a new axis along which advantage and disadvantage compounded. Harvey Graff (1979) called the belief that literacy acquisition invariably produces economic uplift “the literacy myth,” and the label is well earned. The claim here is more modest: algorithmacy names a competency that a structural transformation in coordination requires, not a pedagogical intervention that guarantees improved outcomes.
Whether algorithmacy will follow the historical pattern toward broader universalization or produce permanent stratification is an open question. The compressed timescale of algorithmic change, the opacity of algorithmic systems, and the continuous evolution of platform architectures all work against easy universalization. The question is among the most consequential of the current transition, and this essay does not resolve it.
What you will find here
This Substack is the public-facing home of a research program. I am a PhD candidate in Organizational Theory at Bentley University, and my dissertation develops the theoretical foundations of algorithmacy. The first paper establishes the variance puzzle and diagnoses the structural limitation that prevents existing frameworks from explaining it. The second specifies the communication system through which algorithmacy operates. The third tests a key prediction experimentally.
The essays here are not dissertation excerpts. They are explorations of the ideas that feed the dissertation, written for readers who find these questions interesting, regardless of whether they have read a single organizational theory paper. Some essays trace the historical sequence from oracy through literacy to algorithmacy. Some examine specific coordination problems, from ride-hailing to content creation to international monetary systems, through a triadic lens. Some engage the philosophy that informs the theoretical framework: Simmel on triads, Peirce on irreducibility, Stiegler on the relationship between technology and human cognition. Some address practical questions about how people learn to navigate algorithms, what organizations can do to support that learning, and what it means for education, labor policy, and the design of AI systems.
The common thread is a conviction that the most important question about algorithms concerns the kind of competency they demand, how that competency develops, and what happens to the people and institutions that fail to develop it. Answering that question requires thinking about algorithms not as tools humans use but as coordination structures humans inhabit, and taking seriously the possibility that we are living through a transition as consequential as the transition from orality to literacy, one that reorganizes the institutional terms on which strangers coordinate and creates new axes of advantage and disadvantage.
The evidence suggests this transition is already producing its distributional consequences. The variance puzzle is not a research curiosity. It is a signal. The question is whether we can name what it signals clearly enough to study, measure, and respond to it before the compounding dynamics produce inequalities that become structural facts rather than contingent outcomes.
That is the work ahead. Welcome to Implicit Acquisition.
Roger Hunt is a PhD candidate in Organizational Theory at Bentley University and an AI Engineer specializing in Application Layer Communication. His research examines how algorithmic coordination systems restructure the competencies required for economic participation. Find him at rogerhuntphdcand.substack.com and ideatrek.io.
References
Cameron, L. D. (2024). The making of the “good bad” job: How algorithmic management manufactures consent through constant and confined choices. Administrative Science Quarterly, 69(2), 458–514. https://doi.org/10.1177/00018392241236163
Chung, M. (2025). When knowing more means doing less: Algorithmic knowledge and digital (dis)engagement among young adults. Harvard Kennedy School Misinformation Review, 6(5). https://doi.org/10.37016/mr-2020-186
Cook, C., Diamond, R., Hall, J. V., List, J. A., & Oyer, P. (2021). The gender earnings gap in the gig economy: Evidence from over a million rideshare drivers. Review of Economic Studies, 88(5), 2210–2238. https://doi.org/10.1093/restud/rdaa081
DeVito, M. A. (2021). Adaptive folk theorization as a path to algorithmic literacy on changing platforms. Proceedings of the ACM on Human-Computer Interaction, 5(CSCW2), Article 339. https://doi.org/10.1145/3476080
Dogruel, L., Masur, P., & Joeckel, S. (2022). Development and validation of an algorithm literacy scale for internet users. Communication Methods and Measures, 16(2), 115–133. https://doi.org/10.1080/19312458.2021.1968361
Gagrčin, E., Naab, T. K., & Grub, M. F. (2026). Algorithmic media use and algorithm literacy: An integrative literature review. New Media & Society, 28(2), 575–597. https://doi.org/10.1177/14614448241291137
Graff, H. J. (1979). The literacy myth: Literacy and social structure in the 19th-century city. Academic Press.
Guzman, A. L., & Lewis, S. C. (2020). Artificial intelligence and communication: A Human-Machine Communication research agenda. New Media & Society, 22(1), 70–86. https://doi.org/10.1177/1461444819858691
Hancock, J. T., Naaman, M., & Levy, K. (2020). AI-mediated communication: Definition, research agenda, and ethical considerations. Journal of Computer-Mediated Communication, 25(1), 89–100. https://doi.org/10.1093/jcmc/zmz022
Hogarth, R. M., Lejarraga, T., & Soyer, E. (2015). The two settings of kind and wicked learning environments. Current Directions in Psychological Science, 24(5), 379–385. https://doi.org/10.1177/0963721415591878
Jiang, S., & Sinchaisri, W. P. (2025). Learning on the go: Understanding how gig economy workers learn with recommendation algorithms. Proceedings of the ACM on Human-Computer Interaction, 9(CSCW1), Article 105. https://doi.org/10.1145/3757621
Kalantzis, M., & Cope, B. (2025). Literacy in the time of artificial intelligence. Reading Research Quarterly, 60(2), 247–264. https://doi.org/10.1002/rrq.591
Möhlmann, M., Salge, C. A. de L., & Marabelli, M. (2023). Algorithm sensemaking: How platform workers make sense of algorithmic management. Journal of the Association for Information Systems, 24(1), 150–181. https://doi.org/10.17705/1jais.00791
Öborn, E., MacKenzie, R., Örnebring, H., & Van Couvering, E. (2024). Understanding the triangle: Platform, gig worker, and client agency in the gig economy. New Technology, Work and Employment, 39(3), 450–471. https://doi.org/10.1111/ntwe.12291
Oeldorf-Hirsch, A., & Neubaum, G. (2025). What do we know about algorithmic literacy? The status quo and a research agenda for a growing field. New Media & Society, 27(2), 681–701. https://doi.org/10.1177/14614448231182662
Ong, W. J. (1982). Orality and literacy: The technologizing of the word. Methuen.
Peirce, C. S. (1931). Collected papers of Charles Sanders Peirce (Vols. 1–6, C. Hartshorne & P. Weiss, Eds.). Harvard University Press.
Rahman, H. A. (2021). The invisible cage: Workers’ reactivity to opaque algorithmic evaluations. Administrative Science Quarterly, 66(4), 945–988. https://doi.org/10.1177/00018392211010118
Schmandt-Besserat, D. (1992). Before writing, Volume I: From counting to cuneiform. University of Texas Press.
Schor, J. B., Attwood-Charles, W., Cansoy, M., Ladegaard, I., & Wengronowitz, R. (2020). Dependence and precarity in the platform economy. Theory and Society, 49(5–6), 833–861. https://doi.org/10.1007/s11186-020-09408-y
Scribner, S., & Cole, M. (1981). The psychology of literacy. Harvard University Press.
Selznick, P. (1949). TVA and the grass roots: A study in the sociology of formal organization. University of California Press.
Simmel, G. (1950). The triad. In K. H. Wolff (Ed. & Trans.), The sociology of Georg Simmel (pp. 145–169). Free Press. (Original work published 1908)
Stanovich, K. E. (1986). Matthew effects in reading: Some consequences of individual differences in the acquisition of literacy. Reading Research Quarterly, 21(4), 360–407. https://doi.org/10.1598/RRQ.21.4.1
Stark, D., & Vanden Broeck, P. (2024). Principles of algorithmic management. Organization Theory, 5(2). https://doi.org/10.1177/26317877241257213
Strauss, I., Yang, L., & Mazzucato, M. (2025). The distributive effects of platform algorithm intensity. Cambridge Journal of Economics, 49(1), 91–117. https://doi.org/10.1093/cje/beae040
Street, B. V. (1984). Literacy in theory and practice. Cambridge University Press.
Tarafdar, M., Page, X., & Marabelli, M. (2023). Algorithms as co-workers: Human algorithm role interactions in algorithmic work. Information Systems Journal, 33(2), 232–267. https://doi.org/10.1111/isj.12389
van Doorn, N., & Badger, A. (2020). Platform capitalism’s hidden abode: Producing data assets in the gig economy. Antipode, 52(5), 1475–1495. https://doi.org/10.1111/anti.12641
Walther, J. B. (1996). Computer-mediated communication: Impersonal, interpersonal, and hyperpersonal interaction. Communication Research, 23(1), 3–43. https://doi.org/10.1177/009365096023001001
Wynstra, F., Spring, M., & Schoenherr, T. (2015). Service triads: A research agenda for buyer-supplier-customer triads in business services. Journal of Operations Management, 35(1), 1–20. https://doi.org/10.1016/j.jom.2014.10.002
Zheng, Q., Zhan, J., & Xu, X. (2024). Platform training and learning by doing and gig workers’ incomes: Empirical evidence from China’s food delivery riders. SAGE Open, 14(3). https://doi.org/10.1177/21582440241284555
