The Variance Puzzle
Justice Theory After the Algorithmic Turn
Two users signed up for Twitter on the same day in 2019. They share demographic profiles, educational backgrounds, and initial follower counts of zero. Both post regularly, engage with trending topics, and respond to other users. By 2024, one has become a recognized voice in her field, with 80,000 followers, speaking invitations, and a book contract emerging from viral threads. The other has 340 followers, mostly spam accounts. Her posts, substantively indistinguishable in quality, disappear into silence. She has stopped posting.
Same platform. Same access. Same rules. Outcomes so divergent that they constitute different realities.
The trajectory from t₀ to t₃ makes the problem precise:
At t₀, both users begin with identical access. The algorithm has no behavioral data on either. It must decide whose content to surface based on minimal signal. This is the exploration phase: the system samples broadly, gathering information about which users generate engagement.
At t₁, minor differences emerge. Perhaps User A’s third post happened to appear when an influential account was browsing. Perhaps User B posted during a server lag. The differences may be stochastic, reflecting nothing about the users themselves. But the algorithm registers them. User A shows an early engagement signal. User B does not.
At t₂, the algorithm shifts from exploration to exploitation. Having identified User A as a probable engagement source, the algorithm amplifies her content to larger audiences. User B, showing a weaker signal, receives less distribution. The algorithm is not punishing User B. It is optimizing for engagement by allocating attention to higher-probability sources. User A receives feedback that teaches her what resonates with audiences. User B receives silence that teaches nothing.
At t₃, the gap has become constitutive. User A has learned platform literacy through thousands of micro-feedback events. She knows which phrasings generate response, which posting times maximize reach, and which topics trigger amplification. User B lacks this knowledge because she never received the feedback that would have generated it. The algorithm’s assessment of their relative value has become self-fulfilling. User A is now genuinely more valuable as an engagement source because the algorithm’s earlier choices made her so.
The conventional explanations fail. Skill differentials? The skills that matter are developed through the platform itself. User B cannot acquire them because she lacks the feedback that would teach her what works. Her failures compound into ignorance while her counterpart’s successes compound into expertise. Prior social capital? Platforms promise to level that playing field. Effort and persistence? Both users posted consistently. The algorithm rewarded one and ignored the other.
These explanations presuppose what they are supposed to explain. They assume stable subjects who possess skills or lack them, command resources or want them, expend effort or withhold it. But the variance puzzle concerns the production of the subjects themselves. The divergence is not between two stable entities experiencing different outcomes. It is between two developmental trajectories producing two different kinds of platform subjects. One becomes literate, agentic, visible. The other becomes opaque to herself, unable to learn, rendered mute.
Liberal justice theory cannot address this. The dominant frameworks from Rawls through Sen to Dworkin presuppose a dyadic ontology that algorithmic mediation has superseded. These theories assume two poles: subjects and the world they confront, agents and the resources they deploy, choosers and the circumstances they navigate. Mediating terms connect these poles but leave them intact. Algorithms violate this architecture. They do not mediate between stable poles. They participate in constituting the poles themselves. The developmental trajectories that make users literate or leave them mute, that render some visible and others invisible, are not external to the subject. They are formative of it.
II. The Dialectical Inheritance: Justice as Dyadic Mediation
A. Rawls and the Veil as Instrument
The original position functions as a device of representation. Behind the veil of ignorance, parties do not know their place in society, their class position, their natural assets, their conception of the good, or the particular circumstances of their society. The veil strips away information that might bias their choice of principles. What remains are rational agents capable of deliberation about ends.
The veil assumes that parties possess “powers of judgment and deliberation in seeking ends” before donning it (Rawls, 1993, p. 50). They are already constituted as rational agents before the thought experiment begins. The veil does not produce deliberators. It receives them. It filters information from beings whose capacity for rational choice exists independently of the filtering mechanism. Rational Agent enters Mediating Device (the Veil), which yields Principles of Justice—two poles connected by an instrument that serves their relation without altering their fundamental character.
The original position models the requirements of fairness for persons conceived as free and equal (Freeman, 2019). This modeling presupposes that we already know what persons are. They are rational choosers with conceptions of the good, possessing the two moral powers: a capacity for a sense of justice and a capacity for a conception of the good. The veil hides information about particular lives but assumes the general structure of personhood.
The framework requires the “unencumbered self,” a subject whose identity exists before and independent of its ends, purposes, and social attachments (Sandel, 1982). The self must be able to stand back from any particular commitment, evaluate it, and potentially revise it. Otherwise, the original position makes no sense. You cannot reason about principles behind a veil of ignorance unless you exist as a reasoning subject independent of the particular features hidden by that veil.
Whether or not this metaphysics succeeds, it reveals the dyadic architecture. Subject and world. Agent and circumstances. The veil mediates but does not constitute.
Rawlsian ideal theory presupposes a cooperative scheme among persons who recognize each other as moral equals (Mills, 1997). It abstracts from the actual history of exploitation, domination, and exclusion that shaped existing social arrangements. The subject who enters the original position is already fully formed. The veil then operates on this completed subject, hiding information, structuring deliberation, and generating principles. It works on people. It does not work on the process by which persons come to be persons.
B. Sen’s Conversion Factors: Heterogeneity Within the Dyad
The capability approach emerged partly from dissatisfaction with Rawlsian primary goods. Equal distributions of primary goods do not guarantee equal capacity to achieve valued states and activities. A person in a wheelchair and a non-disabled person may have equal income but dramatically unequal mobility. The capability approach introduces “conversion factors,” the degree to which a person can transform a resource into a functioning (Robeyns, 2016). Personal conversion factors include physical condition, literacy, and intelligence. Social conversion factors include public policies, social norms, and discriminatory practices. Environmental conversion factors include climate, geographical location, and infrastructure.
The capability framework acknowledges human heterogeneity. Central human capabilities that any just society must secure include: life, bodily health, bodily integrity, senses and imagination, emotions, practical reason, affiliation, relation to other species, play, and control over one’s environment (Nussbaum, 2011).
The dyadic structure persists. Conversion occurs between a pre-existing agent and the resources that agent confronts. The agent is not constituted through conversion. She possesses conversion factors as features of her situation, and these factors determine how much capability she derives from resource holdings. The agent applies Conversion Factors to Resources, yielding Functionings and Capabilities. The agent enters the process intact. Her identity does not depend on how the conversion proceeds. It precedes conversion.
The framework “only considers states of affairs… in terms of how good or bad they are for an individual’s well-being” (Gore, 1997, p. 238). Some goods cannot be reduced to individual properties at all. The quality of public discourse, the vitality of democratic institutions, the richness of cultural heritage, and the trustworthiness of social cooperation are features of collectives that resist aggregation into individual capabilities.
Algorithms constitute precisely the kind of social environment that shapes what subjects can become, not just what resources they can convert. A platform that systematically rewards certain cognitive styles, specific communication patterns, and certain affective registers does not merely convert resources into functionings with greater or lesser efficiency. It participates in forming the subjects who will later convert resources. The conversion framework cannot capture this because it requires subjects to exist before conversion.
C. Dworkin’s Luck Partition: Choice Requires a Chooser
The distinction between brute luck and option luck aimed to reconcile egalitarian intuitions with respect for individual choice (Dworkin, 1981). Brute luck encompasses outcomes that befall a person independently of any gamble she deliberately undertook: being born with a disability, contracting a disease, or being struck by lightning. Option luck encompasses outcomes that flow from deliberate gambles. If I bet on a horse and lose, my diminished resources stem from my decision to gamble.
The distinction matters normatively because justice requires compensation for brute luck but not for option luck. A just society would ensure its members against brute misfortune but would not protect them from the consequences of their voluntary gambles. The framework requires a clear boundary between what subjects choose and what befalls them.
Luck egalitarianism requires people to “present evidence of their misfortune” to qualify for compensation (Anderson, 1999, p. 295). It demands that the poor prove they are not responsible for their poverty, that the sick prove they did not choose risky behaviors, and that the unemployed prove they did not simply refuse to work.
The brute/option luck distinction depends on “a metaphysically inflated conception of the significance of choice” (Scheffler, 2005, p. 7). It assumes we can identify a stable locus of authentic choice within the subject. But “an agent’s level of effort… might be inseparable from her level of talent” (Knight, 2013, p. 926). How hard someone works depends on her temperament, her upbringing, her neurochemistry, her social environment, all matters of brute luck. The distinction between what I choose and what I inherit threatens to collapse.
Algorithmic systems make this instability practically vivid. A user’s success on a platform depends on the choices she makes: what to post, when to post, and whom to engage. But it also depends on how the algorithm responds to those choices: whether it amplifies or suppresses them, whether it shows them to receptive or hostile audiences, whether it provides feedback that enables learning or withholds it. The algorithm’s responses depend on choices made by other users, on the aggregate behavior of millions, on model parameters set by engineers, on business decisions by executives, and on stochastic elements in recommendation systems.
What counts as the user’s choice versus her circumstance? The distinction presupposes a boundary that algorithmic mediation dissolves.
The user who tries to post high-quality content and receives no engagement has made a choice. Whether that choice succeeds depends on factors entirely beyond her control and largely beyond her knowledge. She did not accept a known risk of failure in exchange for a possible reward. She acted in opacity. The framework cannot render judgment because it presupposes a clarity about choice and circumstance that the algorithmic environment has destroyed.
III. The Dyadic Assumption and Its History
A. The Cartesian Inheritance
Western philosophy, since Descartes, has organized itself around the subject-object distinction. The Meditations established the problematic: a thinking subject confronts a world of extended objects, and the task of philosophy is to explain how the subject can know that world.
Kant’s critical project radicalized rather than escaped this structure. The subject does not passively receive impressions from objects but actively constitutes experience through categories of the understanding. Space, time, causality, and substance are not features of things in themselves but forms that the subject imposes on the manifold of intuition. The basic architecture remains dyadic. A constituting subject faces a world that it constitutes according to invariant principles.
Phenomenology claimed to overcome the subject-object split by attending to consciousness as intentional, yet Husserlian phenomenology still privileges the constituting acts of transcendental subjectivity. Heidegger’s existential analytic and Merleau-Ponty’s phenomenology of embodiment complicate this picture without fundamentally abandoning it. Dasein is being-in-the-world, not a subject confronting an object. But Dasein is still the locus of analysis.
Marx inverted Hegel but retained the dyad. Material conditions determine consciousness rather than consciousness determining material conditions. The analysis still moves between two poles: material and ideal, base and superstructure, social being and social consciousness.
The standard structure: two poles with mediating terms between. The mediating term differs. The dyadic architecture persists.
B. Why the Dyad Was Adequate
These frameworks addressed genuine problems of liberal modernity and addressed them well within their domain of application.
Rawls confronted a pluralistic society in which people hold incommensurable conceptions of the good. How can such people live together under principles all can accept? The framework succeeds because the problem presupposes what the framework presupposes: rational agents exist, they have conceptions of the good, and they can reason about principles independently of those conceptions.
Sen confronted the limitations of GDP and other aggregate measures as indicators of human welfare. The framework succeeds because the problem presupposes what the framework presupposes: people have capabilities, capabilities can be measured and compared, and agents convert resources into functionings with greater or lesser success.
Dworkin confronted the tension between equality and responsibility. The framework succeeds because the problem presupposes what the framework presupposes: people make choices, choices can be distinguished from circumstances, and responsibility attaches to the former but not the latter.
These frameworks do not fail on their own terms. They fail when their terms no longer describe the relevant features of the situation.
C. Technologies That Withdrew
The technologies of liberal modernity permitted dyadic analysis because they functioned as instruments. They mediated between subjects and the world without disrupting the basic structure.
Books mediate knowledge but do not interpose themselves between knower and known in ways that shape what knowing becomes. A book transmits content from the author to the reader. The reader may be changed by what she reads, but the book does not observe her reading, adjust its content based on her responses, predict what she will want to read next, or shape her future reading through recommendations derived from her past behavior. The book is static.
Markets mediate exchange but (in idealized form) do not predict behavior or shape preferences. A farmer who brings wheat to market does not find that the market has learned his selling patterns and adjusted prices specifically for him. The market treats him as interchangeable with any other seller of wheat. It coordinates without constituting.
Contracts mediate obligations but do not learn from their enforcement. A contract binds parties to specified performances. It does not observe the parties’ behavior, update its terms based on their compliance patterns, or predict their future conduct. It is inert.
These mediations could be treated as instruments because they lacked their own developmental dynamics. The subject remained stable across transactions with books, markets, and contracts. These technologies extended the subject’s reach without altering its constitution.
IV. The Triadic Rupture: Algorithms as Constitutive Participants
A. The Explore-Exploit Mechanism
The variance puzzle is not a metaphysical mystery. It is the predictable outcome of a specific engineering choice: the multi-armed bandit framework that governs content recommendation (Lattimore & Szepesvári, 2020).
The multi-armed bandit problem formalizes a fundamental tradeoff. A gambler faces multiple slot machines with unknown payout rates. She must decide when to explore (trying machines to learn their rates) and when to exploit (pulling the machine she believes pays best). Exploration generates information but foregoes immediate reward. Exploitation maximizes expected reward given current knowledge but foregoes learning.
Recommendation algorithms face an analogous problem. The platform must decide which content to surface. Showing content from proven engagement sources maximizes expected engagement. Showing content from unproven sources risks engagement loss but generates information about new potential sources. The algorithm must balance exploration against exploitation.
The critical feature: exploration budgets are finite. The algorithm cannot test every user’s content indefinitely. At some point, it must commit to the users it has identified as valuable and reduce investment in those it has not. This commitment point creates the divergence. Users who showed an early signal receive continued distribution, generating feedback loops that compound their advantage. Users who did not show an early signal receive diminishing distribution, foreclosing the feedback that might have revealed their potential.
The Thompson Sampling approach illustrates the mechanism (Russo et al., 2018). The algorithm maintains probability distributions over each user’s expected engagement value. Early observations update these distributions. Users with high early engagement see their distributions shift upward; their content receives more distribution. Users with low early engagement see their distributions remain diffuse or shift downward; their content receives less distribution. The algorithm is not biased against User B. It is rationally allocating limited attention based on the available signal. But rational allocation based on early signal produces divergent developmental trajectories.
This is not a bug to be fixed. It is a structural feature of any system that must learn under resource constraints. The algorithm cannot explore infinitely. It must commit. Commitment creates winners and losers. The losers are not merely disadvantaged. They are constitutively foreclosed from the developmental path that would have made them winners.
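A minimal sketch can make the mechanism concrete. The toy model below is illustrative only: it treats two users with identical true engagement rates as arms of a Bernoulli bandit, allocates impressions by Thompson Sampling over Beta posteriors, and adds one assumption of our own, a small “learning gain” by which received engagement raises a user’s future engagement rate, standing in for the constitutive effect of feedback. Parameters, names, and the learning rule are hypothetical rather than drawn from any actual platform.

```python
import random

random.seed(3)

# Toy Thompson Sampling over two "arms" (users) with identical starting
# engagement rates. Each round, the platform samples from each user's
# Beta posterior and gives the impression to the higher sample.
# Hypothetical addition: engagement feedback also raises the chosen
# user's true rate, modeling feedback as formative rather than merely
# measured.
users = {
    "A": {"alpha": 1.0, "beta": 1.0, "rate": 0.05, "impressions": 0},
    "B": {"alpha": 1.0, "beta": 1.0, "rate": 0.05, "impressions": 0},
}
LEARNING_GAIN = 0.002   # hypothetical "platform literacy" gained per engagement
MAX_RATE = 0.5

for t in range(20_000):                       # finite exploration budget
    samples = {u: random.betavariate(d["alpha"], d["beta"])
               for u, d in users.items()}
    chosen = max(samples, key=samples.get)    # exploit the higher sample
    d = users[chosen]
    d["impressions"] += 1
    if random.random() < d["rate"]:           # engagement observed
        d["alpha"] += 1
        d["rate"] = min(MAX_RATE, d["rate"] + LEARNING_GAIN)
    else:                                     # silence observed
        d["beta"] += 1

for u, d in users.items():
    mean = d["alpha"] / (d["alpha"] + d["beta"])
    print(f"User {u}: impressions={d['impressions']:6d}, "
          f"posterior mean={mean:.3f}, final rate={d['rate']:.3f}")
```

Without the learning term the model describes amplification of a fixed signal; with it, the algorithm’s early allocations enter into what each user becomes, which is the difference the argument turns on.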
B. Postphenomenology and the Production of Subjects
Postphenomenology provides the philosophical vocabulary for what the engineering reveals. Human-technology relations fall into four types (Ihde, 1990). Embodiment relations occur when technology becomes quasi-transparent. Hermeneutic relations occur when technology provides a text for interpretation. Alterity relations occur when technology presents itself as quasi-other. Background relations occur when technology shapes experience without being directly engaged.
Technologies have no fixed essence. A smartphone in one context is an embodiment relation, in another a hermeneutic relation, in another an alterity relation, in another a background relation. The technology becomes what it is through use (Ihde, 1993).
“The relation between subject and object always precedes subject and object themselves. They are constituted in their interrelation” (Verbeek, 2005, p. 130). The mediation does not connect pre-existing poles. It produces them. “Humans and the world they experience are the products of technical mediation, not just the poles between which mediation plays out” (Verbeek, 2011, p. 6).
“Fusion relations” occur when technology merges with the body: cochlear implants, brain-computer interfaces, retinal prostheses. The technology does not extend the body but becomes part of it. “Immersion relations” occur when smart environments perceive and act upon users. The user does not engage a discrete technology but inhabits a technologically saturated space that shapes her experience without presenting itself as an object of attention.
AI in medical diagnostics does not simply transmit information about the patient’s condition. It shapes how the condition is perceived, what treatment options appear salient, and how the physician-patient relationship unfolds (Friedrich et al., 2022). Remove the AI, and you do not have the same encounter minus a tool. You have a fundamentally different encounter.
C. The Platform Subject: Amplification versus Constitution
A predictable objection: the algorithm merely amplifies pre-existing differences. User A succeeded because she was funnier, more concise, more attractive, or more culturally attuned. The algorithm detected these pre-existing qualities and rewarded them. If this is correct, the dyadic model survives. User A possessed certain traits; the platform served as a conversion factor that translated those traits into outcomes with high efficiency.
The objection fails because it conflates two distinct entities: the Offline Self and the Platform Subject.
The Offline Self is the person who exists independently of platform engagement. She has traits, capacities, social positions, and cultural capital. These exist whether or not she ever creates a Twitter account.
The Platform Subject is the entity that emerges through platform interaction. She has an engagement history, an algorithmic classification, a follower graph, a content archive, and a learned repertoire of platform-effective behaviors. These exist only through and within platform engagement.
The critical point: the Platform Subject is not a representation of the Offline Self. It is a distinct ontological entity constituted through the triadic relation of user, platform, and algorithm. The algorithm does not measure the Offline Self and report its findings. It constitutes the Platform Subject through iterative interaction.
User A’s Offline Self may indeed be funnier than User B’s. But User A’s Platform Subject is not merely her Offline Self plus amplification. It is a new entity that emerged through the specific developmental trajectory the algorithm afforded her. Her Platform Subject knows things her Offline Self does not: what phrasings generate engagement, what topics trigger amplification, and what timing maximizes reach. These are not amplifications of pre-existing traits. They are constituted competencies that exist only because the algorithm provided the feedback environment that produced them.
User B’s Platform Subject is also distinct from her Offline Self. But where User A’s Platform Subject developed fluency, User B’s Platform Subject developed opacity. She does not know why her content fails. She cannot learn because the environment fails to provide the signals that would teach her. Her Platform Subject is constituted as incompetent regardless of her Offline Self’s capabilities.
The variance puzzle is not about differential amplification of stable traits. It is about the differential constitution of platform subjectivities. Same Offline Selves (by stipulation) produce different Platform Subjects because the algorithm affords different developmental trajectories.
D. Actor-Network Theory and Distributed Agency
Classical sociology distinguished nature from society and asked how natural facts and social facts relate to one another. Actor-network theory dissolves this distinction. Natural facts, social facts, and technological artifacts should be described in the same terms before analysis assigns them to different ontological categories. “Networks have no a priori order relation” (Latour, 2005, p. 63).
Humans and nonhumans alike function as actants: entities that make a difference in the course of events. The speed bump slows traffic. The automatic door closer shuts the door. The algorithm curates the feed. These are not mere instruments in human hands but participants in networks of action. They would not exist without humans, but humans would not act as they do without them. The action is distributed across the network.
Some entities occupy an unstable position between subject and object (Serres, 1980). The ball in a game organizes the players’ movements, structures their relations, determines who has possession, and who does not. Without the ball, the players would not be players in the relevant sense. The ball partially constitutes them as such. But the ball is not a subject in its own right.
“Technology is society made durable” (Latour, 1991, p. 103). Social relations tend to dissolve without material support. Agreements fade, memories fail, intentions shift. Technologies stabilize relations across time and space. The wall that separates neighbors, the contract that binds parties, the algorithm that enforces platform rules: each materializes social relations and makes them persist.
ChatGPT comprises “programs devised to prompt responses based on human input, creating a cycle of actions and reactions” (Gutiérrez, 2023, p. 4). People, algorithms, smartphones, and interface elements “all act together to shape the network” (Gutiérrez, 2023, p. 7). No single actant controls the network. Each shapes and is shaped by the others. The human user does not simply deploy the AI as a tool. The AI shapes what the user becomes within the network.
E. Stiegler’s Grammatization: Technology as Memory
Technics is not external to the human but constitutive of it. No human exists before technical prostheses and then invents tools—the human and its tools co-evolve (Stiegler, 1998).
Tertiary retention is external, technical memory existing before individual experience. “A newborn child arrives in a world in which tertiary retention both precedes and awaits it” (Stiegler, 2009, p. 8). This retention “constitutes this world as world” (Stiegler, 2009, p. 9). The child does not first exist and then encounter stored memories. The child becomes what it is through engagement with the already-constituted technical environment.
“The interior is constituted in exteriorisation” (Stiegler, 1998, p. 152). The human invents technologies that then reinvent the human. The process has no origin because origin presupposes a stable starting point, and the process produces rather than presupposes its terms.
Proletarianization is the loss of knowledge through its externalization in technical systems. The first stage involved loss of know-how (savoir-faire): industrial machines captured artisanal skills, and workers became appendages to processes they no longer understood. The second stage involved loss of know-how-to-live (savoir-vivre): mass media captured cultural practices, and consumers received standardized lifestyles rather than cultivating their own. The third stage involves loss of theoretical knowledge itself: AI systems capture cognitive skills, and users become operators of processes whose logic exceeds their comprehension (Stiegler, 2019).
Digital grammatization produces “reticular writing” that is simultaneously networked reading. When you write a search query, you are also reading the autocomplete suggestions that shape your query. When you post on social media, you are also reading the engagement metrics that shape your next post. Writing and reading collapse into a single operation mediated by algorithms that intervene between expression and reception.
“Artificial intelligence processes extracted data to re-modulate user behaviour according to inaccessible norms” (Nony, 2024, p. 12). The AI observes user behavior, extracts patterns, constructs models, generates predictions, and intervenes to shape future behavior based on those predictions. The user cannot access the norms that govern the intervention. She experiences the intervention but not the logic that produced it.
F. Heidegger’s Gestell and Algorithmic Revealing
Technology is a mode of revealing, a way that beings show up as what they are. Modern technology reveals beings as “standing-reserve” (Bestand): resources standing by for use, available for ordering and optimization (Heidegger, 1954/1977). The hydroelectric dam reveals the Rhine as a source of energy. The forestry industry reveals the forest as a timber stock. Nothing is allowed to rest in its own being.
Gestell (enframing) is “the gathering together of the setting-upon that sets upon man, i.e., challenges him forth, to reveal the actual, in the mode of ordering, as standing-reserve” (Heidegger, 1954/1977, p. 20). Gestell is not something humans do. It is something that claims humans, challenges them forth, sets upon them. Humans do not merely use technology. They are claimed by a mode of revealing that compels them to see everything, including themselves, as standing-reserve.
If Gestell becomes the only mode of revealing, if everything appears only as a resource for optimization, then humans lose access to other ways that beings might show themselves. “The rule of enframing threatens man with the possibility that it could be denied to him to enter into a more original revealing” (Heidegger, 1954/1977, p. 28).
Google and Facebook “have developed a technology that turns human experience (rather than labor) into raw material” (Zuboff, 2019, p. 8). Behavioral surplus becomes the standing reserve from which prediction products are manufactured. Users are not customers but resources. “Mankind looks indeed like a ‘standing reserve’: a pasture that AI masters can appropriate, ring-fence and exploit” (Dreyfus, 2004, p. 53).
The platform user does not first exist as a subject and then become a resource. She appears from the start as a resource: a source of behavioral data, an attention unit to be captured, an engagement metric to be optimized. The platform does not treat her as a resource. It reveals her as one. She learns to see herself through the platform’s categories, to measure her worth in its metrics, to optimize her behavior for its reward functions.
V. From Bias to Constitution: The Deeper Problem
A. The Inadequacy of Fairness Frameworks
The literature on algorithmic fairness has identified genuine problems but misunderstood their character. Five traps characterize fairness research (Selbst et al., 2019). The framing trap: failure to model the entire system leads to interventions that do not address the actual problem. The portability trap: abstracting away context means solutions do not transfer to new situations. The formalism trap: mathematical fairness definitions fail to capture social conceptions of fairness. The ripple effect trap: ignoring how a system’s effects ripple out through society leads to unintended consequences. The solutionism trap: assuming fairness can be solved technically obscures political dimensions.
These traps share a common structure. They all involve abstracting away from the social and relational dimensions that constitute the situation. Mathematical fairness frameworks treat subjects as given. They possess demographic attributes, receive algorithmic classifications, and experience outcomes. The question is whether the mapping from attributes through classifications to outcomes is fair. The frameworks cannot ask whether the attributes themselves are partly algorithmic productions, whether the subjects who bear them have been constituted through prior algorithmic processing, or whether the capacity to recognize unfairness and demand remedy has itself been shaped by the systems being assessed.
The plausible fairness metrics cannot all be simultaneously satisfied, except in degenerate cases. “Statistical properties of algorithms tend to have at best a nebulous relationship with real-world outcomes” (Green, 2022, p. 8). The metrics measure formal properties of systems. Fairness is not a formal property. It is a relational achievement among persons capable of recognizing and claiming it.
Algorithmic fairness may be a “category error” (Narayanan, 2022). Fairness makes sense between persons who can enter into relations of mutual accountability. The algorithm does not recognize the user as a person to be treated fairly or unfairly. It recognizes patterns in data and generates predictions.
B. Statistical Parity and Constitutive Erasure
Consider statistical parity, one of the most common fairness metrics. A system satisfies statistical parity if it produces positive outcomes at equal rates across demographic groups. If 30% of Group A and 30% of Group B receive loans, the lending algorithm is statistically fair.
The metric presupposes that Group A and Group B are pre-existing categories encountered by the algorithm. But algorithmic systems participate in constituting the groups they classify. An algorithm that systematically provides feedback to some users and withholds it from others does not merely sort a pre-existing population. It produces two populations with different developmental trajectories.
Statistical parity can mask constitutive erasure. Suppose a platform achieves equal visibility rates across demographic groups: thirty percent of Group A’s posts receive significant engagement, and thirty percent of Group B’s posts do as well. The platform appears fair. But suppose the engaged thirty percent of Group A’s posts span a wide range of content styles, while the engaged thirty percent of Group B’s posts fall within a narrow range that conforms to stereotypical expectations. Group A develops diverse platform literacies. Group B develops only those literacies that align with algorithmic expectations of what Group B content should look like.
The groups that emerge from this process are not the groups that entered it. Group A’s Platform Subjects have been constituted with diverse competencies. Group B’s Platform Subjects have been constituted with narrow competencies that reinforce the stereotypes the algorithm learned from biased training data. Statistical parity is satisfied. Constitutive injustice persists.
The fairness framework cannot detect this because it assumes stable subjects with fixed attributes. Constitutive analysis reveals that the subjects and their attributes are produced through the processes being assessed.
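A small, purely hypothetical illustration of this blindness: the records below are invented so that both groups show the same engagement rate while the feedback reaching Group B is confined to a single content style. The group labels, styles, and counts are assumptions for the sake of the example.

```python
from collections import Counter

# Hypothetical post records: (group, content_style, received_engagement)
posts = [
    # Group A: 3 of 10 posts engaged, spread across three styles
    ("A", "humor", True), ("A", "analysis", True), ("A", "personal", True),
    ("A", "news", False), ("A", "art", False), ("A", "humor", False),
    ("A", "analysis", False), ("A", "poetry", False), ("A", "news", False),
    ("A", "sports", False),
    # Group B: also 3 of 10 posts engaged, all in one stereotyped style
    ("B", "humor", True), ("B", "humor", True), ("B", "humor", True),
    ("B", "analysis", False), ("B", "personal", False), ("B", "news", False),
    ("B", "art", False), ("B", "poetry", False), ("B", "sports", False),
    ("B", "analysis", False),
]

def engagement_rate(group):
    rows = [p for p in posts if p[0] == group]
    return sum(p[2] for p in rows) / len(rows)

def engaged_style_diversity(group):
    # Number of distinct content styles among a group's engaged posts:
    # a crude proxy for the breadth of platform literacy being taught.
    return len(Counter(p[1] for p in posts if p[0] == group and p[2]))

for g in ("A", "B"):
    print(f"Group {g}: engagement rate = {engagement_rate(g):.0%}, "
          f"distinct engaged styles = {engaged_style_diversity(g)}")
# Both groups sit at 30%, so statistical parity holds, yet the feedback
# constituting Group B's platform subjects spans a single style.
```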
C. Algorithmic Identity and Soft Biopower
“Soft biopower” describes how algorithms “construct categories within populations according to users’ surveilled internet history” (Cheney-Lippold, 2011, p. 165). The algorithm does not tell you what to do. It predicts what you will do and arranges your environment accordingly. You experience the arrangement as natural. The power operates softly precisely because it does not feel like power.
Identity under soft biopower becomes “a feedback loop of mutual renegotiation between the category and individual instances” (Cheney-Lippold, 2011, p. 169). The algorithm classifies you as a particular kind of user based on your behavior. This classification shapes what content you see, which shapes your behavior, which updates the classification. You are not assigned a fixed identity. You are caught in a loop that continuously produces and modifies your algorithmic identity. Because you cannot access the classification logic, you cannot negotiate the terms.
Ad-targeting systems classify users as male or female based on behavioral signals. These classifications have no necessary relationship to biological sex or self-identified gender. They are statistical constructs derived from aggregate patterns. “Gender categories have no necessary properties and are constantly open to reinterpretation” (Cheney-Lippold, 2017, p. 45). If enough users classified as female click on specific content, that content becomes part of what algorithmic femaleness means.
The question cannot be: how do we make algorithms fair to existing demographic groups? The groups themselves are partly algorithmic constructions. The question must become: what forms of subjectivity are algorithms producing? Are these forms compatible with justice?
The divergence between our two Twitter users is not primarily a matter of fairness in the distribution of platform attention. It is a matter of the constitution of platform subjectivity. One user was shaped into a fluent, agentic, visible platform actor. The other was shaped into an opaque, disempowered, invisible one. The platform did not merely treat them differently. It produced them differently.
D. Algorithmic Governmentality
Algorithms operate on “infra-individual data,” signals that are meaningless in themselves and only become meaningful through aggregation (Rouvroy, 2013). A single click says nothing. A million clicks generate patterns. The algorithmic system builds “supra-individual models” that describe populations rather than persons. It then uses these models to generate predictions about individuals.
This process “circumvents and avoids reflexive human subjects” entirely (Rouvroy, 2013, p. 172). Classical governance operates through subjects. The law addresses you as a responsible agent capable of understanding rules and conforming your behavior to them. The market addresses you as a rational actor capable of assessing options and choosing among them. The algorithm does not address you at all. It processes data, detects patterns, and generates predictions. You appear in the process only as a data source and a target for intervention.
“Algorithmic governmentality produces no subjectification” (Rouvroy, 2013, p. 178). Classical governance, even when oppressive, produced subjects. The disciplined subject of the prison, the normalized subject of the school, and the productive subject of the factory could recognize themselves as subjected and potentially resist their subjection. The “moment of reflexivity, critique and recalcitrance necessary for subjectification to form seems to become more complicated or to be postponed constantly” (Rouvroy & Berns, 2013, p. 10).
By operating on data rather than subjects, by predicting behavior rather than commanding it, by personalizing interventions rather than applying general rules, algorithmic governance evacuates the space where reflexivity might form. You cannot resist a prediction. You can only behave in ways that update the prediction. You cannot critique a personalized intervention because you cannot know that it is personalized or how. You cannot recognize yourself as a subject because the subjection operates through environments that appear natural.
Justice between persons presupposes persons capable of claiming it. If algorithmic governmentality produces no subjectification, if it operates below and before the subject, then the conditions for justice claims do not obtain. The variance puzzle is not merely a problem of unfair treatment but a symptom of a situation in which the capacity to recognize it has been structurally undermined.
VI. The AI Veil: A New Kind of Opacity
A. Rawls’s Veil Hides Position
Behind the veil, you do not know your place in society, your class position, your natural talents, or your conception of the good. You do not know whether you are rich or poor, talented or ordinary, religious or secular. The veil hides facts about your particular situation.
But you know what you are. You are a deliberator capable of reasoning about principles. You have the two moral powers: a capacity for a sense of justice and a capacity for a conception of the good. These capacities define rational personhood. They are not hidden behind the veil. The thought experiment presupposes them.
The veil hides position while preserving identity. You do not know where you will end up in the social order, but you know what kind of being will end up there. It will be a being like you, a rational deliberator with moral capacities. You can reason about what principles such a being would want, regardless of position, because you know what wanting and reasoning are.
If the veil hid not just your position but your capacities, if you did not know whether you would be a rational deliberator or a creature incapable of rational deliberation, the thought experiment would collapse. You could not reason about what you would want because you would not know whether you would be the being that can want.
B. The AI Veil Hides Constitution
Algorithmic mediation introduces a different kind of opacity. You do not know what kind of subject you are becoming through algorithmic interaction. The process that constitutes you as a fluent or disfluent platform user, as a visible or invisible participant, as an agent capable of learning or a subject condemned to ignorance, is hidden from you. You experience its outputs but not its logic.
Even your capacities are in play. The capacity to learn what works on the platform depends on receiving feedback that teaches. If the algorithm withholds such feedback, you cannot develop the capacity to succeed. The algorithm’s current behavior shapes your future capacity. And the algorithm’s current behavior depends on patterns you cannot observe in data you cannot access, processed by models you cannot understand.
The AI veil hides not just facts about your situation but facts about your constitution as a being capable of having situations.
Rawls’s veil presupposes that behind it stands a rational deliberator. The AI veil cannot make this presupposition, because the rational deliberator is what algorithmic processes produce. Whether you become a capable deliberator about platform matters depends on how the platform treats you.
Traditional frameworks cannot simply be extended to address algorithmic conditions. They presuppose the stability of precisely what algorithmic mediation destabilizes.
C. The Recursive Temporal Dimension
Rawls’s veil operates synchronically. At a single moment, deliberators who do not know their position choose principles they will apply once the veil lifts. The choice is once-for-all.
Sen’s conversion factors are relatively stable. Your physical capabilities, social circumstances, and environmental conditions change slowly and can be measured at any given time.
The AI veil operates diachronically and recursively. The algorithm learns from your interactions. It adjusts its models. It changes its outputs. These changed outputs alter your interactions, which in turn alter its learning. The recursive loop never stabilizes. There is no moment at which you can step back and assess the situation, because your assessment changes it.
Return to the t₀ to t₃ trajectory. At which moment should we assess fairness? At t₀, everything was equal. At t₁, minor random differences emerged. At t₂, the algorithm amplified those differences. At t₃, the differences became capacities. At every moment, the assessment target is already produced by prior moments and productive of subsequent ones. The situation is constitutively temporal.
Same platform. Same algorithm. Different subjects produced. Not different treatments of the same subjects, but different constitutions of different subjects. The divergence compounds through time. The gap at t₃ exceeds the gap at t₂, which exceeds the gap at t₁, which exceeds the gap at t₀ (which was zero). Early differences become constitutive. The process that would need to be critiqued has itself produced the subjects who might claim injustice.
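The compounding claim can be stated as a toy recursion. The update rule and coefficients below are hypothetical, chosen only to exhibit the structure: when visibility feeds literacy and literacy feeds visibility, a small perturbation at t₁ produces a gap that grows at every subsequent step instead of washing out.

```python
# Deterministic toy of the recursive loop: visibility at one step builds
# literacy, and literacy raises visibility at the next step. The rule
# and the 0.1 coefficient are illustrative assumptions.
def step(visibility, literacy, boost=0.0):
    literacy = literacy + 0.1 * visibility          # feedback teaches
    visibility = visibility * (1.0 + literacy) + boost
    return visibility, literacy

a = b = (1.0, 0.0)            # t0: identical starting points
a = step(*a, boost=0.01)      # t1: a tiny, arbitrary advantage for A
b = step(*b)

gaps = []
for t in range(2, 6):         # t2 .. t5
    a = step(*a)
    b = step(*b)
    gaps.append(round(a[0] - b[0], 4))

print("visibility gap at successive steps:", gaps)
# The gap is larger at every step than at the one before: early,
# stochastic differences become constitutive rather than averaging out.
```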
VII. Triadic Alternatives: Resources for New Frameworks
A. Visualizing the Triad
The dyadic model pictures a line: User ←→ Platform. The user approaches the platform with pre-existing traits and goals. The platform provides resources and constraints. Mediation occurs along the line connecting them.
The triadic model pictures a triangle:
              Algorithm
                 /\
                /  \
               /    \
              /      \
             /        \
      User ——————————— Platform
Each vertex constitutes and is constituted by its relations to the other two. The user does not approach the platform directly. She approaches it through the algorithm’s mediation, which shapes what the platform shows her and what the platform sees of her. The platform does not present itself directly to the user. It presents itself through algorithmic filtering that determines which features become salient. The algorithm does not exist apart from the users and the platform. It emerges through their interaction and shapes the terms of that interaction.
The triangle is not static. Each interaction updates all three vertices. The user’s behavior updates the algorithm’s model of her. The algorithm’s model updates what the platform shows, which in turn changes the user’s behavior. The triangle rotates through time, each vertex transforming as the relations transform.
This visualization makes clear why dyadic frameworks fail. They treat the algorithm as a feature of the line between user and platform, either as a property of the user (her “digital literacy”) or a property of the platform (its “design”). The triadic model recognizes the algorithm as an irreducible third term that participates in constituting both the user and platform.
B. Peirce’s Irreducibly Triadic Semiotics
Saussure’s linguistics operated dyadically. The sign comprises the signifier (sound-image) and signified (concept). Two terms in a differential relation constitute meaning. Peirce developed a triadic alternative. The sign comprises representamen (sign-vehicle), object (what the sign stands for), and interpretant (the sense in which the sign stands for the object). Three terms are irreducible to any pair.
“Semiosis is an action, or influence, which is, or involves, a cooperation of three subjects… this trirelative influence not being in any way resolvable into actions between pairs” (Peirce, 1931-1958, 5.484). You cannot decompose the sign relation into representamen-object plus object-interpretant plus interpretant-representamen. The three-way relation is primitive.
“The fact that A presents B with gift C is a triple relation. As such, it cannot possibly be resolved into any combination of dual relations. The very idea of a combination involves that of thirdness, for a combination is something which is what it is owing to the parts which it brings into mutual relationship” (Peirce, 1931-1958, 1.363).
Genuinely triadic predicates characteristically express representation or mediation. Algorithmic mediation is precisely such a case. The algorithm does not merely connect the user and platform as a two-way channel. It interprets. It generates an interpretant that shapes how the user and platform appear to each other. The user sees a feed; the platform sees a data source; the algorithm mediates their mutual appearance—three irreducible terms.
In the variance puzzle, the algorithm functions as an interpretant. User A’s posts are representations. Their object is whatever User A intended to communicate. But the interpretant, the sense in which they signify, is determined by the algorithm: engagement-worthy content deserving amplification. User B’s posts are also representations with intended objects. But the algorithm generates a different interpretant: low-value content warranting suppression. The same representamen-object relation yields different interpretants depending on algorithmic processing. The interpretant is not a passive registration. It is a constitutive determination that shapes what the sign becomes.
Dyadic frameworks must treat the algorithm as either a transparent instrument (part of the user’s relation to the platform) or an external factor (part of the circumstances users face). Neither captures its constitutive role as interpretant.
C. Process-Relational Philosophy
Whitehead developed a metaphysics in which relations are ontologically primary. Reality consists not of substances that enter into relations but of “relational encounters and events” that constitute whatever substances there are (Whitehead, 1929). “Every actual occasion of experience is internally related to every other actual occasion” (Whitehead, 1929, p. 22). The relations are not external connections between independently existing things but internal constituents of what the things are.
Nothing exists independently. Every actual occasion arises from the world and perishes into it. The universe is not a collection of separate things but a process of mutual constitution.
This framework dissolves the subject-object problem. There is no subject here and object there, with the philosophical task being to explain how they connect. There are experiences, each inheriting from and contributing to others, none existing apart from its relations (Stengers, 2011).
Human-algorithm relations are not external connections between pre-existing entities but internal relations that constitute what the entities are. The user and the algorithm do not first exist and then enter into relation. They are constituted through their interrelation. What the user is depends on how the algorithm processes her. What the algorithm is depends on the patterns it detects in her behavior.
The variance puzzle’s divergent trajectories are not two pre-existing subjects having different experiences. They are two different constitution processes producing two different subjects. The sameness lies in the initial conditions. The difference is at the level of the relational processes through which subjects emerge.
VIII. What Justice Requires Now
A. The Threshold of Constitutive Agency
Not all differential production is unjust. A platform that produces a “highly literate” medical researcher and a “moderately literate” hobbyist may simply reflect different levels of engagement. The researcher invests more, learns more, and develops more sophisticated platform competence. This differential outcome does not obviously violate justice if both users received adequate feedback to understand their situation.
The threshold concerns not equality of outcome but adequacy of feedback. A platform is constitutively unjust when its recursive loops produce subjects who lack the minimal feedback required to understand the cause of their own trajectory.
User B’s injustice is not that she has fewer followers than User A. It is that she cannot understand why. The algorithm’s opacity, combined with its withdrawal of engagement, produces a subject who is opaque to herself. She cannot learn because she receives no teaching signal. She cannot adjust because she does not know what to adjust. She cannot complain because she does not know she has been wronged. The platform has constituted her as a subject incapable of recognizing her own constitution.
The threshold has three components:
Minimal Feedback Adequacy: Every user must receive sufficient feedback to form working hypotheses about platform dynamics. A user need not understand the algorithm’s exact functioning. She must receive enough signal to develop a model, however imperfect, of what generates engagement. Zero engagement teaches nothing. Occasional engagement teaches something.
Causal Transparency: Users must be able to distinguish between content-caused and system-caused outcomes. If a post fails because it is genuinely uninteresting, the user should be able to learn this. If a post fails because the algorithm chose not to distribute it, the user should be able to learn this, too. Conflating the two produces subjects who cannot distinguish their own qualities from the algorithmic treatment they receive.
Exit Intelligibility: Users who wish to leave the platform must be able to understand what they are leaving. A user who has been constituted as invisible may not realize that her invisibility is platform-produced rather than intrinsic. She may carry her Platform Subject’s learned helplessness into other contexts, having internalized an algorithmic verdict as a personal characteristic.
These thresholds define the minimum conditions for what might be called “constitutive agency”: the capacity to understand, at least partially, the processes that are producing you as a platform subject.
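As a rough sketch of how the first two components might be operationalized in an audit, consider the following. Every field name and numerical threshold is a hypothetical assumption; the argument fixes no particular numbers, only the requirement that some such floor exist and be checkable.

```python
from dataclasses import dataclass

@dataclass
class PostRecord:
    impressions: int     # how many feeds the post was actually surfaced to
    engagements: int     # likes, replies, reshares received
    suppressed: bool     # True if the system curtailed distribution

# Hypothetical floor values, for illustration only.
MIN_IMPRESSIONS = 50
MIN_ENGAGED_POSTS = 3

def feedback_adequate(history: list[PostRecord]) -> bool:
    """Minimal Feedback Adequacy: did the user receive enough signal to
    form any working hypothesis about what generates engagement?"""
    surfaced = [p for p in history if p.impressions >= MIN_IMPRESSIONS]
    engaged = [p for p in surfaced if p.engagements > 0]
    return len(engaged) >= MIN_ENGAGED_POSTS

def system_caused_share(history: list[PostRecord]) -> float:
    """Causal Transparency proxy: fraction of failed posts whose failure
    is attributable to withheld distribution rather than to the content."""
    failures = [p for p in history if p.engagements == 0]
    if not failures:
        return 0.0
    withheld = [p for p in failures
                if p.suppressed or p.impressions < MIN_IMPRESSIONS]
    return len(withheld) / len(failures)
```

Exit Intelligibility is harder to operationalize in this way, since it concerns what a user carries away from the platform rather than what can be read off her logs.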
B. Constitutive Transparency
Traditional transparency requirements focus on explaining how the algorithm works: what features it considers, how it weights them, and what its optimization target is. This is necessary but insufficient. Constitutive transparency requires explaining how the algorithm is working on you.
The distinction matters. A user might understand perfectly well that the algorithm favors engagement signals. She might even understand the explore-exploit tradeoff that governs content distribution. This knowledge does not tell her whether she is currently being explored or exploited. It does not tell her whether her invisibility reflects the algorithm’s judgment of her content or merely its decision to stop exploring her. It does not tell her what kind of platform subject she is becoming.
Constitutive transparency would require platforms to provide:
Trajectory Information: Not just “your post reached X people” but “your posts over time have been shown to Y% of your potential audience, compared to Z% for similar users.” This allows users to understand their developmental trajectory relative to others.
Classification Disclosure: Not just “we use machine learning” but “our system currently classifies you as [category] based on [signals], which affects your distribution as follows.” This allows users to understand how the algorithm sees them.
Counterfactual Indication: Not just “your post performed poorly” but “similar posts by users with different engagement histories performed as follows.” This allows users to distinguish content-caused from system-caused outcomes.
Developmental Prognosis: Not just “your current reach is X” but “based on current patterns, users with your trajectory typically see Y outcomes over the next N months.” This allows users to understand where their constitution is heading.
Such transparency would not guarantee equal outcomes. It would guarantee that users possess the information required to understand their own constitution as platform subjects. This is the minimum condition for the kind of reflexive agency that justice presupposes.
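As a hypothetical sketch, the four kinds of information could be bundled into a per-user disclosure object along the following lines. All field names, category labels, and the crude content-versus-system heuristic are illustrative assumptions, not a description of any existing platform API.

```python
from dataclasses import dataclass, field

@dataclass
class ConstitutiveTransparencyReport:
    """Illustrative per-user disclosure bundling the four elements above."""
    # Trajectory information: reach over time, relative to comparable users.
    audience_reached_pct: float           # share of potential audience reached
    peer_audience_reached_pct: float      # same figure for similar users

    # Classification disclosure: how the system currently sees this user.
    current_classification: str           # e.g. "low-signal / reduced exploration"
    classification_signals: list[str] = field(default_factory=list)

    # Counterfactual indication: how similar content fared for other users.
    similar_post_engagement_rate: float = 0.0

    # Developmental prognosis: where trajectories like this one typically lead.
    projected_reach_6_months_pct: float = 0.0

    def content_vs_system(self) -> str:
        """Rough reading of whether poor outcomes look content-caused or
        system-caused, using an arbitrary 25%-of-peers cutoff."""
        if self.audience_reached_pct < 0.25 * self.peer_audience_reached_pct:
            return "system-caused: distribution far below comparable users"
        return "content-caused or mixed: distribution comparable to peers"
```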
C. The Research Program
Justice theory faces a research program, not merely an extension of existing frameworks.
Phenomenology of the hybrid subject: careful description of what it is like to be partly constituted through algorithmic interaction. Not speculation about AI consciousness or science-fiction scenarios. Attention to the experience of users whose capacities are augmented or degraded through platform engagement. What is it like to learn to see oneself through algorithmic categories? To optimize one’s behavior for recommendation algorithms? To experience the silence when one’s posts receive no engagement?
Theory of constitutive mediation: conceptual tools for thinking about mediators that do not merely transmit but participate. The algorithm is the third term, not the instrument. Postphenomenology, actor-network theory, Stiegler, Peirce, and Whitehead provide resources. The task is integration: building a coherent framework that captures what these various traditions illuminate.
Temporal analysis of recursive dynamics: understanding how initial conditions compound through feedback loops. The variance puzzle’s trajectory structure requires conceptual tools adequate to its temporal complexity. Path dependence, lock-in, and recursive self-fulfilling dynamics are concepts scattered across various literatures. The task is to bring them together in a framework tailored explicitly to algorithmic constitution.
Normative frameworks for subject-constitution: principles for assessing not who gets what but who becomes what. Evaluating the production of subjectivity rather than merely the distribution of resources. What kinds of subjects should algorithmic systems be permitted to produce? What constitutional trajectories should they be required to support? What forms of subject-production should be prohibited as unjust?
D. The Transformed Question
Rawls asked: What principles would rational agents choose behind a veil of ignorance that hides their particular position in society?
The question now must be: What principles would govern the very constitution of rational agency when that constitution is algorithmically mediated?
The first question presupposes what the second question problematizes. Rawls assumes rational agents and asks what they would choose. If algorithmic systems constitute rational agency, if they produce or fail to produce the capacity to reason about platform matters, then we cannot simply assume rational agents as our starting point. The agents are products. It is the production process itself that requires normative assessment.
It is not enough to answer Rawls’s question better or to extend his framework to new circumstances. We must reframe the question itself. Subject and object. Agent and circumstance. Choice and luck. These categories organized an intellectual tradition. Algorithmic mediation reveals the limits of that tradition.
The reframing is not a critique from outside. It is an argument that the internal logic of justice theory, pursued rigorously, leads to recognition of its own limits. Take Rawls’s concerns seriously: fairness among persons who reasonably disagree. Take Sen’s concerns seriously: welfare beyond mere resource holdings. Take Dworkin’s concerns seriously: responsibility compatible with equal concern. Pursue these concerns into algorithmic conditions. Watch the frameworks break.
IX. Conclusion
Billions of people interact daily with algorithmic systems that shape what they see, what they can learn, what capacities they develop, and who they become. Divergent trajectories producing divergent subjects from identical starting points are happening now, at scale.
We observe differential outcomes among platform users and ask why some succeed while others fail. But the differential outcomes are really differential productions. The users who succeed and the users who fail are not the same kind of subject undergoing different experiences. They are different subjects, produced through different developmental trajectories, constituted as capable or incapable through the very processes we would assess for fairness. The surface question (is the platform fair?) gives way to the deeper question (what kinds of subjects is the platform producing, and is that production compatible with justice?).
The dominant frameworks of liberal justice theory cannot answer this question. They presuppose a dyadic ontology that algorithmic mediation has superseded. The subject-world structure that organizes Rawls, Sen, Dworkin, and their critics assumes stable subjects confronting circumstances to be assessed. Algorithmic systems violate this assumption. They do not confront stable subjects. They constitute them. The circumstances include the production of whoever will assess the circumstances.
The frameworks succeed on their own terms. They fail when their terms no longer describe the situation. They assume subjects. We must theorize the production of subjects. They assume stable identities. We must theorize the constitution of identities through recursive algorithmic processing. They assume a moment at which assessment can occur. We must theorize constitutively temporal situations in which no moment can be privileged.
Rawls’s veil hides position. The AI veil hides constitution. Behind Rawls’s veil, you do not know what life you will live, but you know what kind of being will live it. Behind the AI veil, you do not know what kind of being you are becoming: what kind of cognitive, communicative, and deliberative subject the systems are producing you as.
What principles for algorithmically mediated environments would you choose if you did not know what kind of subject those environments would make you into?
The threshold of constitutive agency provides a starting point: ensure that whatever subject emerges possesses the minimal capacity to understand its own emergence. This is not equality. It is not even fairness in traditional terms. It is the precondition for any justice claim.
References
Anderson, E. (1999). What is the point of equality? Ethics, 109(2), 287-337.
Cheney-Lippold, J. (2011). A new algorithmic identity: Soft biopolitics and the modulation of control. Theory, Culture & Society, 28(6), 164-181.
Cheney-Lippold, J. (2017). We are data: Algorithms and the making of our digital selves. NYU Press.
Dreyfus, H. L. (2004). Heidegger on gaining a free relation to technology. In R. C. Scharff & V. Dusek (Eds.), Philosophy of technology: The technological condition (pp. 41-54). Blackwell.
Dworkin, R. (1981). What is equality? Part 2: Equality of resources. Philosophy & Public Affairs, 10(4), 283-345.
Freeman, S. (2019). Original position. In E. N. Zalta (Ed.), Stanford encyclopedia of philosophy. https://plato.stanford.edu/entries/original-position/
Friedrich, O., Seifert, J., & Schleidgen, S. (2022). AI-based diagnosis: The human in the loop. American Journal of Bioethics, 22(5), 14-16.
Gore, C. (1997). Irreducibly social goods and the informational basis of Amartya Sen’s capability approach. Journal of International Development, 9(2), 235-250.
Green, B. (2022). Escaping the impossibility of fairness: From formal to substantive algorithmic fairness. Philosophy & Technology, 35(90), 1-29.
Gutiérrez, A. (2023). ChatGPT and the new AI: An analysis using actor-network theory. AI and Ethics. Advance online publication.
Heidegger, M. (1977). The question concerning technology. In The question concerning technology and other essays (W. Lovitt, Trans., pp. 3-35). Harper & Row. (Original work published 1954)
Ihde, D. (1990). Technology and the lifeworld: From garden to earth. Indiana University Press.
Ihde, D. (1993). Postphenomenology: Essays in the postmodern context. Northwestern University Press.
Knight, C. (2013). Luck egalitarianism. Philosophy Compass, 8(10), 924-934.
Latour, B. (1993). We have never been modern (C. Porter, Trans.). Harvard University Press. (Original work published 1991)
Latour, B. (2005). Reassembling the social: An introduction to actor-network-theory. Oxford University Press.
Lattimore, T., & Szepesvári, C. (2020). Bandit algorithms. Cambridge University Press.
Mills, C. (1997). The racial contract. Cornell University Press.
Narayanan, A. (2022). The limits of the quantitative approach to discrimination. In S. Barocas, M. Hardt, & A. Narayanan (Eds.), Fairness and machine learning: Limitations and opportunities. MIT Press.
Nony, A. (2024). AI and proletarianization: Between Stiegler and Marx. Trópos, 16(1), 1-24.
Nussbaum, M. C. (2011). Creating capabilities: The human development approach. Harvard University Press.
Peirce, C. S. (1931-1958). Collected papers of Charles Sanders Peirce (Vols. 1-8). Harvard University Press.
Rawls, J. (1971). A theory of justice. Harvard University Press.
Rawls, J. (1993). Political liberalism. Columbia University Press.
Robeyns, I. (2016). The capability approach. In E. N. Zalta (Ed.), Stanford encyclopedia of philosophy. https://plato.stanford.edu/entries/capability-approach/
Rouvroy, A. (2013). The end(s) of critique: Data behaviourism versus due process. In M. Hildebrandt & K. de Vries (Eds.), Privacy, due process and the computational turn (pp. 143-167). Routledge.
Rouvroy, A., & Berns, T. (2013). Gouvernementalité algorithmique et perspectives d’émancipation: Le disparate comme condition d’individuation par la relation? Réseaux, 177, 163-196.
Russo, D. J., Van Roy, B., Kazerouni, A., Osband, I., & Wen, Z. (2018). A tutorial on Thompson sampling. Foundations and Trends in Machine Learning, 11(1), 1-96.
Sandel, M. J. (1982). Liberalism and the limits of justice. Cambridge University Press.
Scheffler, S. (2005). Choice, circumstance, and the value of equality. Politics, Philosophy, and Economics, 4(1), 5-28.
Selbst, A. D., Boyd, D., Friedler, S. A., Venkatasubramanian, S., & Vertesi, J. (2019). Fairness and abstraction in sociotechnical systems. Proceedings of the Conference on Fairness, Accountability, and Transparency, 59-68.
Serres, M. (1982). The parasite (L. R. Schehr, Trans.). Johns Hopkins University Press. (Original work published 1980)
Stengers, I. (2011). Thinking with Whitehead: A free and wild creation of concepts (M. Chase, Trans.). Harvard University Press.
Stiegler, B. (1998). Technics and time 1: The fault of Epimetheus (R. Beardsworth & G. Collins, Trans.). Stanford University Press.
Stiegler, B. (2009). Technics and time 2: Disorientation (S. Barker, Trans.). Stanford University Press.
Stiegler, B. (2019). The age of disruption: Technology and madness in computational capitalism (D. Ross, Trans.). Polity.
Verbeek, P.-P. (2005). What things do: Philosophical reflections on technology, agency, and design. Penn State University Press.
Verbeek, P.-P. (2011). Moralizing technology: Understanding and designing the morality of things. University of Chicago Press.
Whitehead, A. N. (1929). Process and reality: An essay in cosmology. Free Press.
Zuboff, S. (2019). The age of surveillance capitalism: The fight for a human future at the new frontier of power. PublicAffairs.
