Algorithmacy and the Co-optation of the Subject
There was no literacy until there were letters. Cognitive competency and communication technology emerged together. Nobody was “literate” before writing existed. Nobody needed to be. Writing created the problem; it also supplied the means to solve it. The same structural observation applies to rhetoric: nobody cultivated the arts of memory, persuasion, and formulaic repetition until oral coordination in shared physical space demanded them. The competency and the technology co-emerged.
A third term belongs in this sequence. Rhetoric. Literacy. Algorithmacy. Each names a cognitive competency tied to a communication technology that did not exist before and that restructured cognition once it arrived. Algorithmacy is the cognitive competency developed through navigating algorithmically mediated coordination systems. It is not “algorithmic literacy” in the existing sense, which typically means understanding what algorithms are and how they work. Algorithmacy names what happens to cognition when the primary mode of coordination operates through algorithmic intermediaries. The word should feel unfamiliar. It resists easy pronunciation, sits awkwardly alongside its predecessors, and demands that the reader slow down. That discomfort is appropriate. The cognitive restructuring it describes is equally uncomfortable, equally resistant to assimilation into existing categories. The term has appeared exactly once before, in an unpublished conference talk on algorithmic threat reasoning in security studies, bearing no relation to the present usage (Leese & Matzner, 2016). It has never appeared in a published academic text. It is available.
The Ong Correction
Walter Ong’s Orality and Literacy (1982) remains the definitive account of how writing transformed human cognitive capacity. Oral cultures organize knowledge through additive and aggregative structures, situational rather than abstract reasoning, redundant and participatory expression, and homeostatic preservation of the present (Ong, 1982). Literate cultures subordinate rather than aggregate, abstract rather than situate, analyze rather than participate. Ong called this a restructuring of “consciousness.” That overstates the claim.
The changes Ong documented are cognitive operations: abstraction, subordination, analytical reasoning, cumulative knowledge construction. These are specific capacities of information processing, memory organization, and inferential structure. They are not “consciousness” in any phenomenological sense. Ong worked within a Catholic humanist framework that assumed irreducible interiority, and his language reflects that assumption. The empirical evidence does not require it.
Sylvia Scribner and Michael Cole demonstrated the point decisively. Studying the Vai people in Liberia, who developed an indigenous script independent of formal schooling, they found that literacy produced only specific cognitive skills tied to specific literacy practices, not general transformations of consciousness (Scribner & Cole, 1981). Vai script literacy improved task performance related to its particular uses but did not generate the broad cognitive shifts Ong predicted. Formal schooling, not writing per se, produced increased metacognitive ability. Scribner and Cole redefined literacy as “a set of socially organized practices which make use of a symbol system” (Scribner & Cole, 1981, p. 236). David Olson refined the point further: writing brings aspects of spoken language into conscious awareness, enabling metacognitive operations on linguistic structures, not transforming consciousness as such (Olson, 1994).
Bernard Stiegler was more precise than Ong on the underlying mechanism. Technologies externalize human capacities and in doing so reconstitute the cognitive operations available to those who use them (Stiegler, 1998). The crucial formulation: “the interior is constituted in exteriorisation” (Stiegler, 1998, p. 152). On this account, which grounds what follows, there is no pre-technical interiority that technology subsequently modifies. Cognitive operations co-emerge with their technical supports. The aporia of origin dissolves the question of prior consciousness entirely: the human and the technical are co-constituted from the start, and asking what cognition looked like “before” technology is like asking what water looks like before hydrogen.
Stiegler’s concept of grammatization tracks this co-constitution across historical epochs. Each new technology of inscription, from alphabetic writing through the printing press to analog recording and digital computation, discretizes and externalizes a further domain of human experience (Tinnell, 2015). Alphabetic writing grammatized the continuous flow of speech into discrete, repeatable units. Analog recording grammatized gesture, tone, and temporal sequence. Digital computation grammatizes behavioral patterns, preferences, attentional allocation, and social relations into processable data (Wu, 2023). Each stage simultaneously enables new cognitive capacities and produces new forms of what Stiegler called “proletarianization”: the loss of knowledge through its externalization into technical systems. The nineteenth century proletarianized manual savoir-faire; the twentieth, savoir-vivre through consumer capitalism; the twenty-first, theoretical knowledge through algorithmic automation (Stiegler, 2016). In the epoch of algorithmic implementation, Stiegler argued, “there is no longer any need to think: thinking is concretised in the form of algorithmic automatons” (Stiegler, 2016). Algorithmacy occupies the pharmacological position within this framework: it is both the competency that algorithmic grammatization destroys (by automating cognitive operations) and the competency that develops through navigating the automated environment. The pharmakon gives and takes in the same gesture.
The correction matters for what follows. If Ong documented cognitive operations rather than phenomenological consciousness, then the claim about algorithmacy becomes tractable. Algorithmacy names the specific cognitive operations that co-emerge with algorithmic coordination, just as literacy named the cognitive operations that co-emerged with writing. The claim is about reasoning patterns, memory strategies, analytical capacities, and knowledge organization. It is not about something as philosophically burdened as “consciousness.”
Co-optation as Coordination Mechanism
Coordination theory in organizational studies has operated through a stable typology for decades. Hierarchies coordinate through command. Markets coordinate through contract. Networks coordinate through collaboration. In each case, actors arrive with the competencies required for coordination. Managers know how to manage before entering hierarchies. Buyers and sellers know how to transact before entering markets. Collaborators bring relational skills to networks. Competence precedes participation (Thompson, 1967; Williamson, 1975; Powell, 1990).
David Stark and Pieter Vanden Broeck identified a fourth mechanism: co-optation, extending Philip Selznick’s 1949 concept from his study of the Tennessee Valley Authority (Stark & Vanden Broeck, 2024). Their formulation is precise: “Whereas actors in hierarchies command, in markets they contract, and in networks collaborate, on platforms they are co-opted” (Stark & Vanden Broeck, 2024, p. 8). Co-optation enrolls legally autonomous actors into coordination systems through participation itself. The Uber driver does not arrive knowing how to coordinate through an algorithmic system. The driver learns through driving. Competence does not precede participation. Participation produces competence. Stark and Vanden Broeck oppose “digital Taylorism” readings of algorithmic management, arguing that whereas scientific management conceived of humans as programmable machines, algorithmic management conceives of machines as capable of learning (Stark & Vanden Broeck, 2024). The distinction is consequential. Co-optation does not merely control workers. It constitutes them as particular kinds of workers through the architecture of participation.
Stark and Vanden Broeck develop co-optation as an organizational enrollment strategy, a way of explaining how platforms achieve coordination without the command structures of hierarchies or the contractual commitments of markets. I extend their concept beyond organizational analysis into the domain of subject formation. If co-optation enrolls through participation, and if that participation restructures cognition, then co-optation does not merely enroll actors into coordination systems. It produces new kinds of actors. The organizational mechanism becomes ontological: the platform does not just coordinate what you do; it reshapes what you are capable of thinking. This is a stronger claim than Stark and Vanden Broeck make. It requires its own justification, which the empirical evidence in the sections that follow is meant to supply.
Co-optation, extended in this way, produces a specific empirical puzzle. If participation constitutes the coordinating subject, why does identical participation produce different subjects? Both the high-earning and low-earning Uber driver are co-opted. Both are enrolled in the same algorithmic coordination system. They develop different capacities, different relationships to the system, different strategic orientations. Co-optation enrolls. Something else differentiates.
That something else is algorithmacy: the differential development of the cognitive competency required to navigate co-optation deliberately rather than passively absorb its structuring effects.
Algorithmacy Is Not Algorithmic Literacy
The distinction from “algorithmic literacy” is fundamental and must be stated precisely.
Algorithmic literacy, as the dominant paradigm defines it, means awareness of algorithms, understanding of how they function, and ability to evaluate algorithmic decisions critically (Dogruel, Masur, & Joeckel, 2022). The most widely validated instrument measures two dimensions: awareness of algorithm use and knowledge about algorithms, assessed through 22 items across self-report scales (Dogruel et al., 2022). Related instruments measure algorithmic media content awareness (Zarouali, Boerman, & de Vreese, 2021) and algorithmic knowledge across six user types from “unaware” to “critical” (Gran, Booth, & Bucher, 2021). A comprehensive review synthesizing 169 contributions proposes an experiential learning cycle framework incorporating cognitive, affective, and behavioral dimensions (Gagrčin, Naab, & Grub, 2024). A second major review identifies four research directions including the need for affective and behavioral specification (Oeldorf-Hirsch & Neubaum, 2023).
All of this work treats the subject as given and asks what that subject knows about algorithms. It operates within a knowledge-deficit model: people lack information about algorithmic systems, and providing that information produces empowerment. The assumption collapses under empirical pressure. A study of 348 U.S. young adults found that higher algorithmic awareness and knowledge were associated with greater concerns about misinformation but with a lower likelihood of correcting misinformation or engaging with opposing viewpoints (Chung, 2025). The author proposes “algorithmic cynicism” as the operative mechanism: knowing more about algorithms produces feelings of powerlessness that motivate withdrawal rather than engagement (Chung, 2025). More knowledge, less action. The literacy paradigm cannot account for this.
Algorithmacy operates at a different level of analysis entirely. It does not ask what the subject knows about algorithms. It asks what navigating algorithmic coordination does to cognition. The parallel to Ong is exact. “Literacy” in the Ong/Street/Scribner tradition is not “knowing about writing.” It is the cognitive transformation that sustained writing practice produces. Algorithmacy is the cognitive transformation that sustained algorithmic navigation produces. Algorithmic literacy can be taught in a workshop. Algorithmacy develops through practice within algorithmic systems, mostly below the threshold of conscious instruction.
Brian Street’s distinction between autonomous and ideological models of literacy grounds the point (Street, 1984). The autonomous model treats literacy as a neutral technical skill transferable across contexts. The ideological model recognizes that literacy always develops within specific power relations that shape what counts as competent performance. Algorithmic literacy research operates within the autonomous model. Algorithmacy, like Street’s ideological literacy, develops within power relations the user did not design and cannot fully perceive.
How Algorithmacy Develops
Algorithmacy develops through co-optation, not through instruction. Users do not learn about algorithms and then navigate them. They navigate algorithms and develop cognitive competencies through the navigation. The process is implicit, iterative, and situated within asymmetric power relations.
The mechanism can be specified with more precision than “trial and error” suggests. Predictive processing frameworks in cognitive science model the brain as a prediction engine that continually generates expectations about incoming stimuli and updates its internal models when those expectations fail (Clark, 2013). Navigating an opaque algorithmic system is, at the computational level, an exercise in minimizing prediction error against a system whose behavior is only partially observable. The user generates a model of what the algorithm will do, observes the result, registers the discrepancy between prediction and outcome, and adjusts the internal model accordingly. This is not metaphorical. It is the same Bayesian updating process that predictive processing identifies across perception, motor control, and social cognition (Friston, 2010). The difference is that the environment being modeled is itself a learning system, continuously adapting its own behavior in response to the aggregate of user behaviors. Algorithmacy develops as the user’s predictive model becomes increasingly calibrated to an environment that is itself in motion.
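A minimal computational sketch can make the mechanism concrete. The simulation below is illustrative only: the learner uses a delta-rule update, a standard simplification of prediction-error minimization, and all parameters (boost probabilities, learning rate, drift schedule) are hypothetical rather than drawn from any cited study. The point is structural: the user’s model converges on the platform’s behavior even as that behavior drifts.

```python
import random

# Illustrative simulation of prediction-error learning against a drifting
# platform. All names and parameters are hypothetical; this is a sketch of
# the structure, not a model from the cited literature.

random.seed(42)

platform_rate = 0.7   # hidden probability that the platform boosts a post
user_estimate = 0.5   # the user's internal model of that probability
learning_rate = 0.1   # how strongly each prediction error updates the model

for step in range(500):
    boosted = random.random() < platform_rate          # observed outcome
    prediction_error = boosted - user_estimate         # outcome minus prediction
    user_estimate += learning_rate * prediction_error  # delta-rule update

    # The environment is itself a learning system: its behavior drifts,
    # so the user is calibrating against a moving target.
    if step > 0 and step % 100 == 0:
        platform_rate = min(0.95, max(0.05, platform_rate + random.uniform(-0.2, 0.2)))
        print(f"step {step}: platform={platform_rate:.2f}, estimate={user_estimate:.2f}")
```

The estimate tracks the drifting rate with a lag set by the learning rate; the persistent residual error is the computational face of calibrating to an environment that is itself in motion.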
Research on folk theorization documents this mechanism in empirical detail. LGBTQ+ social media users observe platform behavior, form theories about algorithmic operation, adjust their behavior, observe results, and refine their theories (DeVito, 2021). The learning produces a hierarchy of increasingly sophisticated models, from basic functional theories establishing simple causal awareness to structural theories identifying “mechanistic fragments” and aggregating them into complex predictive models, accompanied by a holistic sense of the platform’s overall orientation that DeVito calls “perceived platform spirit” (DeVito, 2021). This hierarchy maps directly onto predictive processing’s account of hierarchical generative models: lower-level predictions about specific platform responses nested within higher-level models of overall platform logic. Subsequent work documents five distinct folk theories among transfeminine TikTok creators, distinguishing actionable theories that enable strategic behavior from demotivational theories that lead to withdrawal, and identifying perceived algorithmic paternalism as a structuring force in content-creation decisions (DeVito, 2022). TikTok users develop what one research program calls the “algorithmic crystal,” through which personalized algorithms reflect, refract, and diffract user self-concepts, producing self-understanding through algorithmic mediation (Lee, Mieczkowski, Ellison, & Hancock, 2022). YouTube beauty vloggers share “algorithmic gossip,” communally developed theories and strategies about recommendation algorithms that circulate as practical knowledge within creator communities (Bishop, 2019). Users report becoming “much more mindful” of their behaviors in response to perceiving algorithms as responsive to micro-actions (Schellewald, 2022). Taina Bucher’s concept of the “algorithmic imaginary” captures how users’ actions, informed by their models of algorithmic behavior, feed back into algorithmic training data, creating recursive loops between folk theorization and system optimization (Bucher, 2017).
Platform labor research documents the same developmental process in organizational coordination contexts. Ridehailing drivers develop “workplace games” through accumulated experience: a relational game involving customer service encounters and trust in the “Benevolent Algorithm,” and an efficiency game maximizing earnings through speed optimization and algorithmic competence (Cameron, 2022). Building on Burawoy’s (1979) concept of “making out,” Cameron demonstrates that workers’ meaning-making activities ultimately serve platform interests, channeling agency into platform-productive behavior. Gig economy workers develop what one ethnography terms “qualculation,” an affective, non-purely-calculative reasoning style that emerges in response to algorithmic management and diverges from the calculative rationalities companies project onto their workforce (Shapiro, 2018). Ethnographic research on gig workers across multiple platforms identifies distinct orientations shaped by platform participation: entrepreneurial self-understanding, precarious worker identity, and hustler disposition (Ravenelle, 2019). These are not pre-existing personality types matched by algorithm. They are cognitive orientations produced through differential algorithmacy within co-optation.
Recent empirical work extends these findings. Differential embeddedness shapes how workers develop platform competencies: embedded workers with stable primary employment and savings exhibit what one study calls “hegemonic consent” and attribute earnings to individual skill. In contrast, dis-embedded workers lacking such resources adopt collective injustice frames (Schor, Tirrell, & Vallas, 2024). Co-optation is class-differentiated. Ethnography of food delivery riders in Madrid identifies three modes of “participatory subjectivity”: seeking algorithmic recognition, acting to be ignored by the algorithm, and operating within designs that foster participation (Cañedo-Rodríguez & Allen-Perkins, 2024). In-platform grievance systems produce “mercy consent” through illusions of agency, facades of procedural fairness, and individualization of collective grievances, converting dissent into depoliticized acquiescence (Liu et al., 2025).
Platform onboarding makes the mechanism structurally visible. The platform does not teach users how to coordinate. It produces the cognitive orientation coordination requires. It solicits identity markers that become seed data for algorithmic classification. It shapes behavior through constrained choice architectures. It calibrates through deliberate opacity, because the opacity is what produces implicit learning. The user develops predictive models of system behavior precisely because the system does not explain itself. Transparency would short-circuit the mechanism. Opacity is not a design flaw; it is the condition under which algorithmacy develops.
The LLM Expansion
The scope of the claim must now be carefully stated, because algorithmacy is not about social media power users or gig-economy workers alone.
Large language models have made algorithmic coordination the default mode of knowledge work. When a lawyer uses an LLM to draft a brief, a student to research a paper, or a doctor to evaluate a differential diagnosis, each navigates an algorithmically mediated coordination system. Each learns to specify intent in structured ways the system can process. Each develops implicit models of what the system does well and badly. Each adjusts cognitive operations to the affordances and constraints of an algorithmic intermediary. Each develops algorithmacy.
The evidence supports framing this as cognitive restructuring rather than technical skill acquisition. Non-AI experts exploring prompt design proceed opportunistically, struggle with expectations transferred from human communication, and infer a lack of ability in themselves rather than trying alternative phrasings when prompts fail (Zamfirescu-Pereira, Wong, Hartmann, & Yang, 2023). The finding is telling. When a prompt fails, users do not conclude that they phrased the request poorly. They conclude that they are not good at this. The response pattern mirrors early literacy acquisition: the novice writer does not distinguish between a bad sentence and an inability to write. The cognitive framework for parsing one’s own performance has not yet developed. LLM users are acquiring that framework through practice, forming implicit models of system behavior through trial and error rather than through instruction about transformer architectures or attention mechanisms.
Analysis of over 200,000 conversations documents a measurable cognitive transition: users initially employ structured, machine-like prompts, then shift to increased politeness, natural language, and contextually nuanced interaction, indicating evolving mental models of the system (Xie, Liu, Chen, & Zhai, 2025). The trajectory recapitulates in miniature what folk theorization research documents across platform contexts: users move from mechanical interaction to what feels like communication, developing increasingly sophisticated implicit models along the way. Users unconsciously apply social cognition frameworks, including Theory of Mind reasoning, to LLM interaction, deploying social-cognitive capacities in ways that differ fundamentally from technical tool use (Wester, Barik, Subramonyam, & van Berkel, 2024). One study framing prompt engineering as a distinct higher-order metacognitive skill identifies iterative processes activating multiple cognitive components in shifting configurations, analogous to complex information-processing skills rather than technical procedures (Federiakin, Molerov, Sanzharevsky, & Kardanova, 2024).
Nobody taught the millions of people now using LLMs how to prompt. They learned the way users always learn within co-optation: through practice, through failure, through implicit calibration to a system whose internal operations remain opaque. The label “prompt engineering” misnames the phenomenon. It suggests a technical skill, like learning a programming language. What actually occurs is cognitive restructuring: learning to decompose problems into structures an algorithmic system can process, learning to evaluate outputs against implicit quality models that the user herself is still developing, learning to iterate through feedback loops with a system whose responses are probabilistic rather than deterministic. These are new cognitive operations, not new technical skills. They parallel what writing did: create new ways of thinking, not merely new ways of communicating.
Someone will object that prompt engineering is just negotiation conducted entirely on the algorithm’s terms. The user has agency, but it is the agency of a command-line operator, not a conversational partner. The objection is half right. Users can shape outputs through skillful prompting, and the development of that skill constitutes part of algorithmacy. But shaping outputs is not shaping the system. Your individual rejection teaches the model nothing. Only aggregate patterns across thousands of users, filtered through reinforcement learning from human feedback, eventually shift model behavior. The asymmetry is structural: the user adapts in real time; the system adapts across populations and training cycles. Algorithmacy includes recognizing this asymmetry, which is itself a cognitive achievement the system does not teach.
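The asymmetry can be sketched in the same illustrative register. Nothing below describes a deployed RLHF pipeline; the population size, learning rates, and feedback rule are assumptions chosen to expose the structure: each user updates her model after every interaction, while the system updates once per training cycle, on feedback pooled across the whole population.

```python
import random

# Illustrative sketch of the structural asymmetry: each user updates her
# model of the system after every interaction, while the system updates
# once per training cycle, on feedback pooled across the population.
# Hypothetical parameters; not a description of any deployed RLHF pipeline.

random.seed(7)

NUM_USERS = 1000
INTERACTIONS_PER_CYCLE = 50
CYCLES = 4

system_behavior = 0.2                                      # hidden parameter governing outputs
preferences = [random.random() for _ in range(NUM_USERS)]  # what each user wants
user_models = [0.5] * NUM_USERS                            # each user's estimate of system_behavior

for cycle in range(CYCLES):
    approvals = []
    for _ in range(INTERACTIONS_PER_CYCLE):
        for u in range(NUM_USERS):
            output = system_behavior + random.gauss(0.0, 0.05)
            # User-side update: immediate, individual, every interaction.
            user_models[u] += 0.2 * (output - user_models[u])
            # Feedback: approve if the output lands near this user's preference.
            approvals.append(abs(output - preferences[u]) < 0.3)
    # System-side update: once per cycle, on the pooled approval rate.
    # One user's rejection is statistically invisible in this signal.
    approval_rate = sum(approvals) / len(approvals)
    system_behavior += 0.1 * (approval_rate - 0.5)

    mean_model = sum(user_models) / NUM_USERS
    print(f"cycle {cycle}: approval={approval_rate:.2f}, "
          f"system={system_behavior:.3f}, mean user model={mean_model:.3f}")
```

Within a single cycle each user’s model has already converged on the system’s behavior; the system, meanwhile, has moved once, and only in response to the population average. No individual rejection registers.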
The pharmacological dimension is also visible. LLM users experience significantly lower cognitive load across all facets than traditional search users, but they demonstrate lower-quality reasoning and argumentation (Abbas, Pickard, & Atwell, 2024). The ease of obtaining algorithmic outputs does not translate to deeper learning. Stiegler’s analysis of proletarianization applies directly here: general automation produces a general loss of knowledge (Stiegler, 2016). The LLM case is not an exception to the pattern of algorithmacy development. It is its clearest contemporary instance, precisely because it renders the cognitive restructuring visible across an enormous population in compressed time.
Three years ago, most knowledge workers had no direct interaction with algorithmic coordination systems beyond search engines and social media. Now LLMs mediate legal reasoning, medical diagnosis, educational practice, scientific research, and creative production. The cognitive restructuring Ong documented across centuries unfolds across months.
The Co-optation of the Subject
The strongest version of the claim: algorithmacy is not something the subject develops. It is something that develops a subject.
Stark and Vanden Broeck describe what co-optation produces at the organizational level. They do not ask what it does to the person. If algorithmic coordination restructures cognition, and if the restructured cognition orients the user toward self-understanding in platform categories, then co-optation does not merely enroll actors into coordination systems. It produces the kinds of actors the system requires. The user-as-product recognition enters here. Part of what algorithmacy involves, at its more sophisticated levels, is understanding that one’s participation generates the data that trains the system that shapes one’s subsequent participation. Every prompt entered into an LLM becomes training data. Every ride, every search, every engagement feeds back into the coordination architecture. The algorithmically competent user understands that she simultaneously navigates the system and constitutes it. She is both user and resource.
Heidegger called this condition standing-reserve (Bestand): beings revealed not as entities with their own presence but as resources available for ordering and optimization (Heidegger, 1954/1977). The platform user does not first exist as a subject and then become a resource. She appears from the start as both. The recursive structure of the condition warrants careful attention. In an algorithmic coordination system, the user’s intent is harvested not merely as data but as training material for the system that will eventually process the next user’s intent, and then her own again. The system learns from what she wants to predict better, and eventually pre-empt, what she will want next. Her intent today shapes the options available to her tomorrow. Iain Thomson’s recent application of Heidegger to artificial intelligence sharpens this point: LLMs tend to turn thinking itself into standing-reserve through the enframing of language (Thomson, 2025). AI “enframes the mind” in the same structural sense that industrial technology enframes nature: by revealing it as always already available for further processing. But the LLM case adds a layer Heidegger could not have anticipated: the standing-reserve is recursive. The enframing of thought produces data that refines the enframing that produced it. The user who develops algorithmacy develops, in part, the capacity to recognize this recursion, to see that the system’s helpfulness is itself a mode of extraction.
The structure of constitutive practice has deep precedent. Foucault argued institutions produce the subjects they govern. Butler argued norms produce the gendered subjects they describe. Marx argued that capitalism produces the consciousness that enables its reproduction. The framework is not original. What algorithmacy adds is a name for the specific cognitive competency that co-optation develops, and a position within a historical sequence that makes the current transition legible as a transition rather than a collection of platform-specific phenomena.
The distinction from Butler’s performativity, however, requires specification. Butlerian performativity operates through citational chains that are socially diffuse and without proprietary ownership. Nobody designed heteronormativity. Its grammar belongs to no one and everyone; subversive citation remains permanently available because the norms lack a central administrator. Algorithmacy develops within engineered systems where the normative grammar is proprietary. The platform can update it without notification, deprecate features that enable resistance, or restructure the choice architecture that produces particular cognitive orientations. Subversive use of gender norms operates against a diffuse social field. Subversive use of algorithmic systems operates against a commercial entity with identifiable designers, optimization functions, and legal resources. The conditions for resistance differ categorically.
Antoinette Rouvroy pushes the challenge further. Algorithmic governmentality, in her formulation, “produces no subjectification, it circumvents and avoids reflexive human subjects,” operating through infra-individual data fragments to build supra-individual statistical profiles (Rouvroy & Berns, 2013). The moment of reflexivity necessary for subjectification “seems to become more complicated or to be postponed constantly” (Rouvroy & Berns, 2013). If algorithmic power bypasses the subject entirely, then both Butler’s framework and the concept of algorithmacy face a challenge: what develops competency within a system designed to make competency irrelevant? The answer is that the bypassing is never total. Folk theorization, workplace games, algorithmic gossip, and prompt competence all demonstrate reflexive engagement with systems that neither require nor reward reflexivity. Algorithmacy develops in the gap between the system’s operational logic and the user’s lived experience of navigating it. That gap cannot be closed by design. It is structural, because the user experiences temporal continuity and embodied consequence while the system processes statistical distributions. Algorithmacy names the cognitive operations that emerge in that irreducible gap.
Differential Constitution
Algorithmacy does not develop under uniform conditions. Algorithmic systems constitute differentially racialized, gendered, and colonial environments, and these differential conditions shape the cognitive competencies that emerge through navigation.
The evidence is structural, not incidental. Search engines actively organize discourse and knowledge in ways that reproduce social hierarchies, constituting what one scholar calls “algorithmic oppression” and “technological redlining” (Noble, 2018). Race itself functions as a kind of technology designed to stratify social life, and algorithmic systems operationalize that design through “the New Jim Code,” creating self-fulfilling prophecies that enact what they predict (Benjamin, 2019). Anti-Blackness is not a consequence of surveillance but a condition for its historical development (Browne, 2015). Facial analysis systems produce error rates of 34.7% for darker-skinned females compared to 0.8% for lighter-skinned males (Buolamwini & Gebru, 2018). Automatic gender recognition systems consistently operationalize gender in trans-exclusive ways (Keyes, 2018). Automated toxicity classifiers disproportionately flag the speech of drag queens and other LGBTQ users as harmful (Dias Oliva, Antonialli, & Gomes, 2021). Content moderation systems disproportionately censor Black speech. Hiring algorithms encode historical discrimination.
These are not bugs in otherwise neutral systems. They are the conditions under which algorithmacy develops. Algorithmic knowledge varies by socioeconomic advantage, with education, income, and social capital shaping who can understand and navigate algorithmic systems (Cotter & Reisdorf, 2020). An emerging “algorithmic divide” replicates and compounds existing digital inequality structures (Petrovčič, Rogelj, & Dolničar, 2024). The distribution of algorithmacy is not random. It follows existing structures of advantage and disadvantage, just as the distribution of literacy followed colonial and class structures for centuries.
The relationship between marginalization and algorithmacy is not a matter of simple deprivation. Du Bois described double consciousness as the condition of seeing oneself through the eyes of a hostile dominant culture while simultaneously maintaining one’s own perspective (Du Bois, 1903). Marginalized users of algorithmic systems develop an analogous doubled awareness that might be called double algorithmacy: the capacity to understand how the system sees them (as a data profile shaped by biased training data, as a set of behavioral signals legible to classifiers built on majority norms) while simultaneously maintaining their own experience of navigating that system. LGBTQ+ users who develop sophisticated folk theories about algorithmic content suppression possess a form of algorithmacy that privileged users, whose content circulates without friction, never need to develop (DeVito, 2021). Black users who learn to code-switch in content moderation environments develop predictive models of system behavior that white users, whose speech patterns align with training data defaults, never build. The most algorithmically competent users may be those navigating the most hostile systems, just as the most rhetorically sophisticated oral performances emerged from communities under the greatest pressure to persuade, to survive, to make themselves legible to power. This does not redeem the hostility. It identifies a cognitive surplus that develops under conditions of structural inequality and that the systems producing it cannot fully capture.
Street’s ideological model of literacy provides the theoretical grounding (Street, 1984). Literacy was never a neutral technical skill acquired under equal conditions. It was embedded in power relations that determined what counted as competent performance, whose literacy practices were valued, and whose were rendered invisible. The parallel to algorithmacy is exact in structure. Alphabetic literacy was imposed as a condition of legibility within colonial administrative systems. Algorithmacy is increasingly a condition of economic legibility within platform capitalism. Exclusion from algorithmic coordination means exclusion from growing domains of work, social connection, civic participation, and knowledge production. Couldry and Mejias frame the broader dynamic as data colonialism: datafication produces subjects of capital in distinctive new ways, making “us all” resources for extraction while distributing the costs of that extraction along pre-existing colonial fault lines (Couldry & Mejias, 2019).
Modulation, Dividuation, and the Specification Deleuze Did Not Provide
Gilles Deleuze diagnosed the environment. Algorithmic coordination operates through continuous modulation rather than discrete enclosure, through passwords rather than watchwords, through “dividuals” rather than individuals (Deleuze, 1992). Thirty years of scholarship have confirmed the diagnostic power of his three-page postscript. Updated analyses show how control society coerces without prohibition and through incentives that are “enjoyable, even euphoric” because they compel people to obey their own personal information (Brusseau, 2020). The dividual has been theorized through Foucault, Simondon, and digital practice (Bruno & Rodríguez, 2022). “Measurable types” and “soft biopolitics” specify how algorithmic classification constitutes identities through probabilistic rather than deterministic processes (Cheney-Lippold, 2017). Protocol functions for control societies as the panopticon functioned for disciplinary societies: as the technical architecture enabling a specific modality of power (Galloway, 2004).
Deleuze described the shift from mold to modulation. He did not specify what modulation does to cognition or how the dividual develops strategic capacity within continuous variation. Algorithmacy provides that specification. In a continuous modulation environment, users develop differential cognitive competencies for navigating the modulation itself. Some users recognize the recursive structure of their participation and develop abstract models of how their data feeds back into system behavior. Others remain bound to situational, platform-specific responses that modulation shapes without their awareness. The divide parallels the oral-to-literate cognitive transition Ong documented, but it operates within engineered systems optimized for extraction rather than across the diffuse social adoption of a communication technology. Deleuze’s dividual is split into data points that circulate independently of the person they describe. Algorithmacy is the competency of navigating that split: understanding that one exists simultaneously as an experiencing subject and as a data profile, and developing cognitive strategies for managing the gap between these two modes of existence.
Rouvroy and Stiegler frame the deepest challenge: in the digital regime of truth, “the language stock has sharply fallen; we are no longer in language” (Rouvroy & Stiegler, 2016). Algorithmic governmentality operates on patterns below the threshold of linguistic articulation. Hyper-individuation through personalization paradoxically produces disindividuation, undermining the individual agency that personalization rhetoric claims to serve (Rouvroy, 2025). Algorithmacy, at its most sophisticated, involves recognizing this paradox: that the system’s responsiveness to individual behavior serves aggregate optimization rather than individual flourishing.
Testability and the Surplus of Competency
Algorithmacy generates specific empirical predictions. Users with greater algorithmacy should demonstrate abstract rather than purely situational reasoning about algorithmic behavior. They should transfer coordination competence across platform contexts rather than developing only platform-specific skills. They should predict system responses to novel inputs with greater accuracy. They should articulate the recursive relationship between their participation and system behavior. These capacities are measurable through psychometric instruments, longitudinal studies, and cross-platform validation designs. The philosophy grounds the constructs. Empirical programs test whether they describe something real.
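One of these predictions admits a direct operationalization. The sketch below is a hypothetical measurement design, not an existing instrument; the trial structure, platform labels, and scoring rule are all assumptions. Participants predict a system’s responses to novel inputs on a familiar platform and an unfamiliar one, and a transfer score compares accuracy across the two contexts.

```python
from dataclasses import dataclass

# Hypothetical operationalization of one algorithmacy prediction: accuracy
# at predicting system responses to novel inputs, compared across a familiar
# and an unfamiliar platform. An illustrative sketch, not a validated
# instrument; all names and the scoring rule are assumptions.

@dataclass
class Trial:
    platform: str     # "familiar" or "transfer" context
    predicted: bool   # participant's prediction: will the system boost this input?
    observed: bool    # what the system actually did

def accuracy(trials: list[Trial], platform: str) -> float:
    """Proportion of correct predictions on one platform's trials."""
    subset = [t for t in trials if t.platform == platform]
    return sum(t.predicted == t.observed for t in subset) / len(subset)

def transfer_score(trials: list[Trial]) -> float:
    """Accuracy on the unfamiliar platform relative to the familiar one.
    Values near 1.0 are consistent with an abstract, transferable model;
    markedly lower values suggest platform-bound, situational skill."""
    return accuracy(trials, "transfer") / accuracy(trials, "familiar")

# Toy data for a single participant (hypothetical).
trials = [
    Trial("familiar", True, True), Trial("familiar", False, False),
    Trial("familiar", True, False), Trial("familiar", True, True),
    Trial("transfer", True, True), Trial("transfer", False, True),
    Trial("transfer", False, False), Trial("transfer", True, True),
]
print(f"familiar: {accuracy(trials, 'familiar'):.2f}, "
      f"transfer ratio: {transfer_score(trials):.2f}")
```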
The deeper question is whether algorithmacy, like literacy before it, can produce capacities that exceed the intentions of the systems that produced them. If algorithmacy were merely a tool of domination, it would not deserve the name “competency.” Competencies, by definition, generate surplus capacity. A person taught to read for the purpose of following factory instructions can also read a union pamphlet. A person taught arithmetic for bookkeeping can also calculate that the books are cooked. The relationship between the intended use of a competency and its actual range of application is never one-to-one. This is what makes competencies dangerous to the systems that produce them.
Literate populations eventually used literacy for purposes their colonizers did not intend. Formerly enslaved people used the master’s language to articulate liberation. Colonized peoples turned administrative literacy into anti-colonial organizing. The anticolonial novel was written in the colonizer’s language with the colonizer’s cognitive tools, and it was devastating precisely because it turned those tools against their originators. Algorithmacy should generate analogous surpluses. The user who develops sophisticated predictive models of algorithmic behavior acquires, as a structural byproduct, the capacity to identify where the system’s optimization function diverges from her interests. The gig worker who develops “qualculation” (Shapiro, 2018) can redirect that affective-calculative reasoning toward collective organizing. The content creator who learns to “write for the algorithm” (Bishop, 2019) possesses the same competence needed to expose algorithmic bias, to document content suppression, to build counter-publics within and against the platform’s architecture. Cameron’s ride-hailing drivers display sophisticated strategic agency constituted by the system they navigate (Cameron, 2022). You cannot game an algorithm you do not understand. Understanding requires the cognitive restructuring co-optation produces. The resisting subject is a product of the system it resists. That is not a paradox. It is how every transition in the rhetoric-literacy-algorithmacy sequence has worked.
The forms this surplus might take are already partially visible. Adversarial perturbation strategies exploit the gap between how algorithms process data and how humans experience meaning, a gap only navigable through developed algorithmacy. Data poisoning campaigns, in which users collectively feed misleading information to training systems, require a coordinated understanding of how algorithmic learning works. Algorithmic auditing, in which users systematically test systems for bias, applies the predictive models folk theorization produces to purposes the platform did not intend. None of these constitute liberation. They constitute the beginning of what using algorithmacy against its conditions of production might look like, just as the first anticolonial pamphlets did not constitute decolonization but demonstrated that literacy’s surplus capacity could exceed colonial control.
The Greek rhetorical tradition illustrates the pattern at the prior transition. Rhetoric emerged as the cognitive competency of oral coordination: the capacity to persuade, to remember without external aids, to perform knowledge in shared physical space. Rhetoric was not a deficiency awaiting correction by writing. It was a complete cognitive orientation suited to a complete communication technology. But rhetoric also produced its own forms of domination: demagogic manipulation, sophistic distortion, and the tyranny of the present that Ong identified as characteristic of oral culture’s homeostatic relationship with the past. Literacy did not resolve these tensions. It displaced them, producing new capacities (systematic analysis, cumulative knowledge, critical distance from one’s own tradition) and new dominations (colonial administration, bureaucratic control, the class divide between those who read and those who did not). Algorithmacy continues the pattern. The question is not whether it will produce both enabling and dominating effects. It will. The question is whether the enabling effects can develop faster than the dominating ones consolidate.
The current transition differs from the oral-to-literate shift in scope and speed. The oral-to-literate transition unfolded over millennia. The literate-to-algorithmic transition unfolds over years. Whether algorithmacy can develop resistant capacities faster than algorithmic systems can adapt to capture them is an open question. Platform grammars are proprietary and updatable. The resistance conditions that apply to diffuse social norms do not apply to engineered systems. An algorithm can be patched; heteronormativity cannot.
The question cannot even be asked without first recognizing algorithmacy as what it is: the latest transition in how human cognition organizes itself in relation to communication technology. Rhetoric emerged through oral coordination, restructured cognition around memory, persuasion, and formulaic expression, and produced both democratic deliberation and demagogic manipulation. Literacy emerged through written coordination, restructured cognition around abstraction, analysis, and cumulative knowledge, and produced both systematic inquiry and colonial administration. Algorithmacy emerges through algorithmic coordination, restructures cognition around folk theorization, intent specification, and recursive self-modeling, and produces both sophisticated platform navigation and algorithmic extraction. Each term names a competency that did not exist until a communication technology created the conditions for its development. Each restructured cognition, not consciousness. Each distributed its benefits and its dominations unevenly.
The distribution is not fate. But changing it requires naming what is being distributed.
References
Abbas, M., Pickard, A., & Atwell, E. (2024). Generative AI and cognitive load: Effects of LLM-assisted search on reasoning and argumentation quality. Computers in Human Behavior, 159, 108334.
Benjamin, R. (2019). Race after technology: Abolitionist tools for the New Jim Code. Polity.
Bishop, S. (2019). Managing visibility on YouTube through algorithmic gossip. New Media & Society, 21(11-12), 2589-2606.
Browne, S. (2015). Dark matters: On the surveillance of blackness. Duke University Press.
Brusseau, J. (2020). Deleuze’s “Postscript on the Societies of Control” updated for big data and predictive analytics. Theoria, 67(164), 1-25.
Bruno, F., & Rodríguez, P. M. (2022). The dividual: Digital practices and biotechnologies. Theory, Culture & Society, 39(3), 27-43.
Bucher, T. (2017). The algorithmic imaginary: Exploring the ordinary affects of Facebook algorithms. Information, Communication & Society, 20(1), 30-44.
Buolamwini, J., & Gebru, T. (2018). Gender shades: Intersectional accuracy disparities in commercial gender classification. Proceedings of Machine Learning Research, 81, 77-91.
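Burawoy, M. (1979). Manufacturing consent: Changes in the labor process under monopoly capitalism. University of Chicago Press.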
Cameron, L. D. (2022). “Making out” while driving: Relational and efficiency games in the gig economy. Organization Science, 33(1), 1-22.
Cañedo-Rodríguez, M., & Allen-Perkins, D. (2024). Weaving the algorithm: Participatory subjectivities amongst food delivery riders. Subjectivity.
Cheney-Lippold, J. (2017). We are data: Algorithms and the making of our digital selves. NYU Press.
Chung, M. (2025). When knowing more means doing less: Algorithmic knowledge and digital (dis)engagement among young adults. Harvard Kennedy School Misinformation Review, 6(5).
Clark, A. (2013). Whatever next? Predictive brains, situated agents, and the future of cognitive science. Behavioral and Brain Sciences, 36(3), 181-204.
Cotter, K., & Reisdorf, B. C. (2020). Algorithmic knowledge gaps: A new dimension of (digital) inequality. International Journal of Communication, 14, 745-765.
Couldry, N., & Mejias, U. A. (2019). Data colonialism: Rethinking big data’s relation to the contemporary subject. Television & New Media, 20(4), 336-349.
Deleuze, G. (1992). Postscript on the societies of control. October, 59, 3-7.
DeVito, M. A. (2021). Adaptive folk theorization as a path to algorithmic literacy. Proceedings of the ACM on Human-Computer Interaction, 5(CSCW2), 1-38.
DeVito, M. A. (2022). How transfeminine TikTok creators navigate the algorithmic trap of visibility. Proceedings of the ACM on Human-Computer Interaction, 6(CSCW), 1-31.
Dias Oliva, T., Antonialli, D. M., & Gomes, A. (2021). Fighting hate speech, silencing drag queens? Artificial intelligence in content moderation and risks to LGBTQ voices online. Sexuality & Culture, 25, 700-732.
Dogruel, L., Masur, P., & Joeckel, S. (2022). Development and validation of an algorithm literacy scale. Communication Methods and Measures, 16(2), 115-133.
Du Bois, W. E. B. (1903). The souls of Black folk. A. C. McClurg.
Federiakin, D., Molerov, D., Sanzharevsky, I., & Kardanova, E. (2024). Prompt engineering as a new 21st century skill. Frontiers in Education, 9, 1366434.
Friston, K. (2010). The free-energy principle: A unified brain theory? Nature Reviews Neuroscience, 11(2), 127-138.
Gagrčin, E., Naab, T. K., & Grub, M. F. (2024). Algorithmic media use and algorithm literacy: An integrative literature review. New Media & Society.
Galloway, A. R. (2004). Protocol: How control exists after decentralization. MIT Press.
Gran, A.-B., Booth, P., & Bucher, T. (2021). To be or not to be algorithm aware: A question of a new digital divide? Information, Communication & Society, 24(12), 1779-1796.
Heidegger, M. (1977). The question concerning technology. In The question concerning technology and other essays (W. Lovitt, Trans.). Harper & Row. (Original work published 1954)
Keyes, O. (2018). The misgendering machines: Trans/HCI implications of automatic gender recognition. Proceedings of the ACM on Human-Computer Interaction, 2(CSCW), Article 88.
Lee, A. Y., Mieczkowski, H., Ellison, N. B., & Hancock, J. T. (2022). The algorithmic crystal: Conceptualizing the self through algorithmic personalization on TikTok. Proceedings of the ACM on Human-Computer Interaction, 6(CSCW2), Article 543.
Leese, M., & Matzner, T. (2016, July 26). New knowledges, new problems: Algorithmacy as threat reasoning [Conference talk]. Surveillance and Security in the Age of Algorithmic Communication, University of Leicester.
Liu, Y., et al. (2025). Mercy consent and contained resistance: Grievance systems in Chinese food-delivery platforms. New Technology, Work and Employment.
Noble, S. U. (2018). Algorithms of oppression: How search engines reinforce racism. NYU Press.
Oeldorf-Hirsch, A., & Neubaum, G. (2023). What do we know about algorithmic literacy? The status quo and a research agenda for a growing field. New Media & Society, 27(2), 681-701.
Olson, D. R. (1994). The world on paper: The conceptual and cognitive implications of writing and reading. Cambridge University Press.
Ong, W. J. (1982). Orality and literacy: The technologizing of the word. Routledge.
Petrovčič, A., Rogelj, V., & Dolničar, V. (2024). Disentangling the role of algorithm awareness and knowledge in digital inequalities. Information, Communication & Society, 27(4), 557-574.
Powell, W. W. (1990). Neither market nor hierarchy: Network forms of organization. Research in Organizational Behavior, 12, 295-336.
Ravenelle, A. J. (2019). Hustle and gig: Struggling and surviving in the sharing economy. University of California Press.
Rouvroy, A. (2025). Self and others in algorithmic governmentality. AI & Society, 40, 87-102.
Rouvroy, A., & Berns, T. (2013). Algorithmic governmentality and prospects of emancipation. Réseaux, 177, 163-196.
Rouvroy, A., & Stiegler, B. (2016). The digital regime of truth: From the algorithmic governmentality to a new rule of law. La Deleuziana, 3, 6-29.
Schellewald, A. (2022). Theorizing “stories about algorithms” as a mechanism in the formation and maintenance of algorithmic imaginaries. Social Media + Society, 8(1).
Schor, J. B., Tirrell, C., & Vallas, S. P. (2024). Consent and contestation: How platform workers reckon with the risks of gig labor. Work, Employment and Society, 38(5), 1423-1444.
Scribner, S., & Cole, M. (1981). The psychology of literacy. Harvard University Press.
Selznick, P. (1949). TVA and the grass roots. University of California Press.
Shapiro, A. (2018). Between autonomy and control: Strategies of arbitrage in the “on-demand” economy. New Media & Society, 20(8), 2954-2971.
Stark, D., & Vanden Broeck, P. (2024). Principles of algorithmic management. Organization Theory, 5(2), 1-24.
Stiegler, B. (1998). Technics and time, 1: The fault of Epimetheus (R. Beardsworth & G. Collins, Trans.). Stanford University Press.
Stiegler, B. (2016). Automatic society, volume 1: The future of work (D. Ross, Trans.). Polity.
Street, B. V. (1984). Literacy in theory and practice. Cambridge University Press.
Thompson, J. D. (1967). Organizations in action. McGraw-Hill.
Thomson, I. D. (2025). Heidegger on technology’s danger and promise in the age of AI. Cambridge University Press.
Tinnell, J. C. (2015). Grammatization: Bernard Stiegler’s theory of writing and technology. Computers and Composition, 37(1), 132-146.
Wester, J., Barik, A. K., Subramonyam, H., & van Berkel, N. (2024). Theory of mind and self-presentation in human-LLM interactions. CHI ‘24 Extended Abstracts.
Williamson, O. E. (1975). Markets and hierarchies. Free Press.
Wu, A. (2023). Digitalized grammatization and critical thinking. Humanities and Social Sciences Communications, 10, Article 639.
Xie, Q., Liu, J., Chen, Y., & Zhai, C. (2025). Mental model shifts in human-LLM interactions: An analysis of 200,000+ conversations. Journal of Intelligent Information Systems, 64(1), 203-228.
Zamfirescu-Pereira, J. D., Wong, R. Y., Hartmann, B., & Yang, Q. (2023). Why Johnny can’t prompt: How non-AI experts try (and fail) to design LLM prompts. Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems.
Zarouali, B., Boerman, S. C., & de Vreese, C. H. (2021). Does an algorithm recommend this? The development and validation of the algorithmic media content awareness scale. Telematics and Informatics, 62, 101607.
