Asymmetric Interpretation
Why You Can’t Argue With an Algorithm
This is the second post in a series exploring Application Layer Communication (ALC), a framework for understanding how humans coordinate through algorithm-driven systems such as social platforms and recommendation engines. In this context, ‘algorithmic communication’ refers to exchanges in which messages are processed and interpreted by computational algorithms rather than by humans. The first post covered Intent Specification.
Human conversation repairs itself in real time. Schegloff, Jefferson, and Sacks (1977) documented the process: conversation has built-in mechanisms for detecting and correcting misunderstanding, handled jointly by both participants.
Algorithmic communication is trial and error. You submit input. You receive output. If the output is wrong, you cannot ask for clarification. You cannot explain what you actually meant. You can only guess at what went wrong, modify your input, and try again. The algorithm that processed your first attempt learns nothing from your frustration. It will interpret your second attempt using the same patterns it applied to your first.
This difference defines the second property of Application Layer Communication: asymmetric interpretation. In platform communication, meaning is determined unilaterally by algorithms based on training data, not negotiated. The algorithm interprets. You adapt.
Grounding and Its Absence
Clark and Brennan (1991) established the concept of “grounding”: mutual understanding requires providing and receiving evidence of comprehension through acknowledgments, relevant responses, or clarification requests. Both parties shape the outcome.
The grounding process operates through what Clark and Brennan call the “presentation-acceptance” cycle. A speaker presents information. The listener provides evidence of understanding or signals confusion. If confusion arises, repair sequences activate: the speaker reformulates, the listener requests clarification, and both parties work together until mutual understanding is achieved. The criterion is not perfect understanding but understanding “sufficient for current purposes.” Humans constantly calibrate this sufficiency, adjusting their efforts based on the stakes of the interaction.
Grounding is so automatic in human interaction that we notice it only when it breaks down. You mention “Sarah,” and your friend looks confused. You add “from college, the one who moved to Denver.” Recognition dawns. The confusion surfaced, got addressed, and resolved in seconds. Neither of you controlled the process unilaterally. You built shared understanding together.
Algorithms do not ground. There is no presentation-acceptance cycle. There is no moment where the algorithm confirms that it understood your intent. There is no mechanism for you to interrupt the interpretation process and signal a problem. The algorithm processes your input, applies statistical patterns derived from training data, and produces output. You observe the output. If it is wrong, you have learned something about how the algorithm interprets, but the algorithm has learned nothing about what you meant.
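As a minimal sketch of that one-way structure (hypothetical names throughout, not any real system’s interface), the interpreter below is fixed across attempts, so nothing the user observes, feels, or intends feeds back into how the next input is read.

```python
class StatelessInterpreter:
    """Sketch of unilateral interpretation: no grounding, no repair, no memory of failures."""

    def __init__(self, patterns):
        self.patterns = patterns  # fixed at training time; users cannot revise it

    def interpret(self, user_input):
        # Pattern matching only: there is no channel for "that's not what I meant".
        return self.patterns.get(user_input.lower(), "default output")


interp = StatelessInterpreter({"sunset over water": "beach_scene"})

first = interp.interpret("a moody sunset over the lake")  # miss: "default output"
# The user cannot explain the miss. They can only rephrase and resubmit,
# and the interpreter reads the retry with exactly the same fixed patterns.
second = interp.interpret("sunset over water")            # hit, found by trial and error
print(first, second)
```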
H.P. Grice’s “cooperative principle” captures what algorithms lack (Grice, 1975). Human conversation operates on a mutual assumption of cooperation: both parties assume the other is trying to communicate effectively and interpret each other’s utterances charitably based on that assumption. This assumption enables humans to communicate far more than the literal content of their words. We infer meaning from context, read between the lines, and understand implicature.
Algorithms do not assume cooperation. They do not try to understand what you mean. They match patterns. An image generator processing your prompt does not care about your intent. It correlates your input with training data and produces a corresponding output. Your frustration when it fails is invisible to its operation.
The Hermeneutic Problem
Gadamer (1960/1989) argued that genuine understanding requires dialogue: his “fusion of horizons” describes two perspectives blending through exchange, with neither dominating and both parties’ assumptions transformed.
Gadamer’s key insight: understanding transforms both parties. When you genuinely understand something, you are changed by the encounter. Your interpretive horizon expands or shifts. The interpreted subject talks back to you, challenges your assumptions, and forces revision.
Algorithmic interpretation involves no transformation of the algorithm. Its interpretive framework, derived from training data, cannot be revised by individual users during interaction. You cannot challenge an algorithm’s assumptions. You cannot force it to reconsider. The algorithm processes your input according to fixed patterns. You observe the output and adjust your subsequent inputs accordingly.
Habermas criticized Gadamer for overlooking power asymmetries. He argued that “systematically distorted communication” can corrupt interpretation, preventing genuine dialogue, so hermeneutics must also critique power structures.
Algorithmic interpretation represents something more radical. It is not distorted communication. It is communication that structurally lacks the conditions for dialogue in the first place. The algorithm cannot recognize its own assumptions. They cannot be questioned. It cannot revise them in response to your challenges. The interpretive relationship is not merely asymmetric in power. It is asymmetric in kind. One party interprets. The other party adapts to that interpretation.
What Trial-and-Error Looks Like
Eslami et al. (2015) found that 62.5% of participants in their study did not know Facebook’s News Feed was algorithmically curated. Participants thought friends had stopped posting, misattributing algorithmically hidden posts to changes in their relationships. The study’s title quotes one participant: “I always assumed that I wasn’t really that close to [her].”
Users drew conclusions about their social relationships from algorithmic outputs they did not understand and could not contest. The algorithm had decided unilaterally what to show them. They interpreted those decisions as reflecting social realities. They could not ask the algorithm why it made those choices. They could not argue that it had misunderstood what they wanted to see.
The study revealed a specific pattern of discovery. When researchers told participants that the News Feed was algorithmically curated, participants reacted with surprise and often anger. They had developed elaborate theories about why certain content appeared, and other content did not, theories that attributed agency to their friends rather than to the platform. Learning about algorithmic curation required them to revise not just their understanding of the platform but their understanding of their own social relationships. The misinterpretation was not merely technical. It had emotional and relational consequences.
Emilee Rader and Rebecca Gray found users developing behavioral workarounds (Rader & Gray, 2015). Unable to query algorithms directly about their operations, users conducted experiments. They would visit particular friends’ profile pages, hoping this would signal to the algorithm that they wanted to see those friends’ posts. They would like content strategically, not because they actually liked it, but because they believed liking would influence future content delivery. They manipulated their own behavior as a form of communication with a system that could not actually receive communications.
These workarounds represent significant cognitive labor. Users must develop theories about algorithmic behavior, design experiments to test those theories, run experiments on their own platform, observe the results, and revise their theories accordingly. This is the scientific method applied to everyday platform use, but without the transparency that makes actual science possible. Users cannot see the algorithm’s code. They cannot access its training data. They cannot isolate variables or control for confounds. They are doing empirical research under conditions designed to make empirical research impossible.
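A sketch of that experimental loop under stated assumptions (the ranking function below is an invented stand-in; no platform exposes anything like it): the user can vary their own behavior and observe outcomes, but the variable that actually drives the result stays hidden, so their “experiments” cannot be controlled.

```python
import random

def black_box_feed(profile_visits, strategic_likes):
    """Stand-in for an unobservable ranking function; the user never sees this code."""
    hidden_weight = random.uniform(0, 1)  # the confound the user cannot measure or control
    score = 0.3 * profile_visits + 0.2 * strategic_likes + hidden_weight
    return score > 0.8  # True means the friend's post shows up in the feed

# The user's "experiment": hold behavior fixed, observe the feed, and guess at a rule.
theory = "visiting a friend's profile makes their posts appear"
observations = [black_box_feed(profile_visits=1, strategic_likes=0) for _ in range(5)]
print(theory, observations)
# Sometimes True, sometimes False. With no access to hidden_weight, the user
# cannot tell whether the theory is wrong or the trial was simply noisy.
```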
A 2019 study of Yelp’s review filtering found that users had no way to ask why their reviews were hidden (Eslami et al., 2019). One user: “Maybe this is all like Biology; you cannot really ask why, it just is.” Users could not opt out. They could only adapt their behavior to match what they inferred the algorithm wanted, a practice they described as “writing for the algorithm.”
The phrase “writing for the algorithm” deserves attention. It reverses the expected relationship between writer and audience. Normally, writers adapt their communication to human readers. Here, writers adapt their communication to algorithmic interpreters. The algorithm becomes the audience whose preferences must be anticipated and satisfied. The actual human readers of the review are secondary to the algorithmic gatekeeper that determines whether they will ever see it.
Ewa Luger and Abigail Sellen’s research on conversational AI found that users experienced these systems as “like having a really bad PA” (Luger & Sellen, 2016). Users expected human-like understanding but encountered repeated failures. They could not determine why the system interpreted requests as it did. They developed trial-and-error strategies for phrasing requests so the system would interpret them correctly, treating communication with the algorithm as a puzzle to solve rather than a dialogue to conduct.
The “really bad PA” metaphor is instructive. A competent personal assistant learns their employer’s preferences, asks clarifying questions, and improves over time through feedback. A bad PA misunderstands repeatedly but cannot be trained because they do not recognize their failures as failures. Users of conversational AI experience this pattern: repeated misunderstanding without the normal mechanisms for correction and improvement that human relationships provide.
Folk Theories Fill the Void
Because users cannot directly access algorithmic interpretation, they develop theories about how algorithms work. These theories emerge from observation, experimentation, and social sharing rather than from documentation or direct explanation.
Taina Bucher introduced the concept of the “algorithmic imaginary” to describe how users develop frameworks for understanding systems they cannot directly examine (Bucher, 2017). Because algorithmic operations are opaque, users must imagine how they work. These imaginaries shape behavior, create anxiety, and influence the content people produce.
Michael DeVito and colleagues identified multiple distinct folk theories people hold about social media feeds (DeVito et al., 2017, 2018). Some users believe algorithms show popular content. Others believe algorithms show recent content. Others believe algorithms favor certain types of posts. These theories are often inconsistent and frequently incorrect, but they serve a practical function: they give users a framework for predicting and influencing algorithmic behavior.
Consider the complexity of folk theory formation through a specific case. When Twitter announced in 2016 that it would shift from a reverse-chronological to an algorithmic feed, users erupted with #RIPTwitter. DeVito’s research found that users had developed strong theories about what the platform was “supposed” to do based on its previous behavior. The algorithmic shift violated those theories. Users could not directly examine what the new algorithm would do. They could only predict, based on experiences with other platforms like Facebook, that algorithmic curation would mean seeing less of what they wanted and more of what the platform wanted them to see. Their resistance was resistance to losing what little interpretive predictability they had achieved through years of use.
Sophie Bishop found that YouTube creators share theories socially because they cannot directly interrogate algorithms (Bishop, 2019). She calls this “algorithmic gossip.” Creators exchange tips, warn each other about perceived algorithm changes, and collectively construct theories about what the algorithm “wants.” A creator might report that videos over ten minutes perform better. Another might claim that uploading on Tuesdays increases visibility. A third might insist that certain keywords trigger demonetization. These theories spread through creator communities, becoming shared knowledge that substitutes for the documentation the platform never provides. Some theories are accurate. Some are superstitions. Creators cannot easily distinguish because they have no direct access to ground truth.
Ignacio Siles and colleagues found users describing themselves as “training” algorithms through repeated interactions (Siles et al., 2024). They provide inputs and observe outputs to learn system behavior. The language of training is revealing. Users have reframed their relationship with platforms as a teaching process, in which they must educate the system about their preferences through behavioral signals rather than direct expression. But the training metaphor obscures the power relationship. A teacher shapes the student. These users are not shaping the algorithm. They are learning the fixed patterns and adapting their behavior to match.
The Prompt Engineering Objection
A reasonable objection: prompt engineering suggests users can negotiate with algorithms. Skilled prompters achieve dramatically better results than novices. They learn what phrasings produce what outputs. They develop sophisticated techniques for steering algorithmic behavior. Is this not a form of dialogue?
Prompt engineering is negotiation, but negotiation conducted entirely on the algorithm’s terms and in the algorithm’s language. The skilled prompter has learned to think like the training data. They have internalized the statistical patterns the model uses to interpret input. They craft prompts not to express their intent clearly in natural language, but to activate the appropriate latent-space regions in the model’s representation.
This is the agency of a command-line operator, not a conversational partner. A skilled Unix user achieves remarkable results by learning the precise syntax the system requires. But no one would call their relationship with the command line a “dialogue.” The user adapts entirely to the system’s interpretive framework. The system adapts poorly to the user.
The asymmetry remains fundamental even for expert prompters. When a prompt fails, the user cannot ask, “Why did you interpret it that way?” They can only try again with a different input. When the model’s training data creates blind spots, users cannot argue that the model is wrong. They can only work around the limitation. The interpretive authority remains entirely with the algorithm. The user’s skill lies in learning to work within that authority, not in negotiating with it.
There is a longer-term sense in which user behavior shapes algorithms. Reinforcement Learning from Human Feedback aggregates user feedback to adjust the model’s behavior over time. User rejection of outputs influences future training. But this is not dialogue in any meaningful sense. Your individual rejection teaches the model nothing. Only aggregate patterns across thousands of users, processed through training pipelines controlled by platform companies, eventually shift model behavior. The individual user, in the moment of interaction, faces the same unilateral interpretation as every other user.
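A rough sketch of that distinction, with entirely hypothetical names and thresholds: an individual rejection is logged, but the interpreter a user faces in the moment does not change; only a separate, platform-controlled aggregation step eventually deploys a different model.

```python
from collections import Counter

class DeployedModel:
    """Stands in for a served model: its weights are frozen during interaction."""

    def __init__(self, version):
        self.version = version

    def interpret(self, prompt):
        # The same fixed patterns for every user and every retry.
        return f"v{self.version} output for {prompt!r}"


feedback_log = Counter()  # platform-side store of thumbs-down signals

def user_session(model, prompt):
    output = model.interpret(prompt)
    feedback_log[prompt] += 1  # suppose the user rejects: recorded, but the model is unchanged
    return output

def scheduled_training_run(model, log, threshold=10_000):
    """Platform-controlled pipeline: only aggregate signals ever move the model."""
    if sum(log.values()) >= threshold:
        return DeployedModel(version=model.version + 1)  # a later deployment, not this session
    return model

model = DeployedModel(version=1)
user_session(model, "a watercolor fox")           # feedback recorded...
model = scheduled_training_run(model, feedback_log)
print(model.version)                              # ...but still 1: one rejection moves nothing
```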
Power Concentrates in Interpretation
In human communication, interpretive authority is distributed across participants. Both parties influence meaning. Both can contest interpretations. Both can demand clarification. Power may be unequal, but it is not unilateral.
In algorithmic communication, interpretive authority concentrates. Platforms control algorithms. Algorithms determine interpretation. Users adapt or fail. There is no mechanism for users to collectively negotiate changes to interpretive frameworks, the way language communities gradually shift word meanings through usage.
Natural language evolves through speaker behavior. When enough people use a word a certain way, its meaning shifts. “Literally” now works as an intensifier for figurative statements precisely because so many people used it that way. Language communities exercise collective control over interpretation through usage patterns. Prescriptive authorities can complain, but they cannot stop linguistic change driven by speakers.
Algorithmic interpretation does not work this way. If users collectively wanted “fire” to mean “aesthetically excellent” in image generators, they could not make it happen through usage. The algorithm’s interpretation depends on the platform’s training data. Users have no mechanism to revise the interpretive framework. Platform companies define the conditional rules that determine what inputs produce what outputs. Users live inside those rules (Bucher, 2018).
The asymmetry becomes visible in disputes. When a human misunderstands you, you can argue. You can present evidence. You can appeal to shared context. You can escalate to a third party who might adjudicate. When an algorithm misunderstands you, none of these remedies are available. The algorithm has no concept of evidence or argument. It has no shared context to appeal to. And escalating to the platform typically means interacting with another algorithmic system, or, at best, reaching a human support agent who cannot directly examine or modify the algorithm that interpreted your original input.
Tarleton Gillespie’s work on content moderation illustrates this vividly (Gillespie, 2018). Platforms make algorithmic decisions about what content is visible, what is removed, and what is recommended. These decisions constitute interpretations of user content in accordance with platform criteria. Users often cannot determine why their content was treated a certain way. They cannot argue their case in real time. They can only observe outcomes and try to infer the rules. When users appeal content moderation decisions, they typically receive form responses that offer no insight into the interpretive process that led to the original decision. The appeal itself is often processed algorithmically.
Safiya Umoja Noble demonstrated how asymmetric interpretation encodes and perpetuates bias (Noble, 2018). When someone searches for “Black girls,” the algorithm interprets that query based on training data that reflects historical biases. The results perpetuate harmful stereotypes. Users cannot contest this interpretation in the moment. They cannot argue that the algorithm has misunderstood the query’s intent. They receive the biased output and must work around it. Noble’s research reveals that asymmetric interpretation is not merely inconvenient. It has material consequences for how groups are represented and perceived.
Coordination Without Dialogue
Market coordination operates through prices. Buyers and sellers communicate through bids, offers, and transactions. When a seller sets a price and a buyer rejects it, the seller receives feedback and can adjust. Price negotiation involves genuine back-and-forth, with both parties influencing the eventual transaction. Hierarchical coordination operates through authority, but includes feedback mechanisms: subordinates can ask questions, request clarification, and raise concerns. Network coordination operates through relationships built on mutual understanding and reciprocity.
Platform coordination resembles all three but differs in its interpretive structure. Like markets, platforms mediate transactions. Like hierarchies, platforms channel behavior through defined structures. Like networks, platforms enable distributed coordination among parties who may not know each other. But platforms interpose algorithmic interpretation between human communicators in ways those communicators cannot contest or negotiate.
The Uber driver receiving a ride assignment cannot negotiate with the algorithm about its interpretation of their availability. The algorithm determines that the driver is available, assigns a ride, and expects acceptance within seconds. If the driver wanted to explain that they prefer airport runs, or that they are about to end their shift, or that traffic makes a particular route undesirable, there is no mechanism to communicate any of this. The algorithm interprets the driver’s logged-in status as availability. The driver’s preferences, context, and intentions are invisible to the interpretive process.
The Spotify artist whose song is placed in a playlist cannot contest the algorithm’s interpretation of their music’s genre or mood. The algorithm analyzes audio features and metadata, classifies the song according to its training, and places it accordingly. If the artist believes the classification is wrong, if they made a jazz album that the algorithm interprets as background music, there is no dialogue to be had. The artist can modify metadata and hope for different results. They cannot argue that the algorithm misunderstood their artistic intent.
The job applicant whose resume is filtered by an applicant tracking system cannot argue with the algorithm’s interpretation of their qualifications. The system parses the resume, extracts keywords, scores it against criteria, and passes or rejects it. The applicant’s actual capabilities are irrelevant to this process. What matters is whether the resume triggers the right interpretive patterns in the system. Applicants learn to optimize for algorithmic interpretation rather than accurate self-presentation, a phenomenon that itself demonstrates adaptation to asymmetric interpretation.
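A deliberately simplified sketch of that keyword-scoring step (the criteria, weights, and threshold are all made up here; real applicant tracking systems are proprietary and far more elaborate): the decision reflects whether the resume triggers the right patterns, not what the applicant can actually do.

```python
import re

# Hypothetical criteria and weights; no real ATS publishes its scoring rules.
CRITERIA = {"python": 3, "kubernetes": 2, "sql": 2, "leadership": 1}
PASS_THRESHOLD = 5

def score_resume(text):
    tokens = set(re.findall(r"[a-z]+", text.lower()))
    return sum(weight for keyword, weight in CRITERIA.items() if keyword in tokens)

def screen(text):
    # Pass or reject is the only signal the applicant ever receives.
    return "pass" if score_resume(text) >= PASS_THRESHOLD else "reject"

resume = "Led a team shipping Python services on Kubernetes."
print(screen(resume))  # "led" never matches "leadership"; the phrasing, not the person, is scored
```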
Karl Weick’s concept of “sensemaking” describes how people in organizations collectively interpret ambiguous situations (Weick, 1995). Sensemaking is social, ongoing, and grounded in identity. It involves narration and dialogue. People make sense of events by talking about them with others, constructing shared narratives, and iteratively revising interpretations.
Platform coordination disrupts organizational sensemaking by substituting algorithmic interpretation for human dialogue. When organizations coordinate through platforms, algorithmic systems interpret members’ actions based on training data. Members cannot engage in sensemaking with algorithms. They cannot collaboratively construct interpretations of ambiguous situations. They can only observe algorithmic outputs and infer their meanings.
The Communication Constitutes Organization perspective holds that organizations emerge and persist through ongoing communication processes (Taylor & Van Every, 2000). This perspective emphasizes coorientation: the mutual alignment of organizational members toward common objects and toward each other. Algorithmic coordination challenges this framework because coorientation requires mutual awareness and mutual adjustment. Algorithms and humans cannot coorient. Humans orient to algorithmic outputs. Algorithms process human inputs according to fixed patterns. The mutuality that constitutes organization is absent.
Programming Languages Are Different
Programming languages also involve asymmetric interpretation. When you write code, the compiler processes your input according to fixed rules. You cannot negotiate with the compiler. You must conform to syntax requirements or fail.
But programming languages come with comprehensive documentation, deterministic behavior, and explicit error messages. When you make a syntax error in Python, you get an error message specifying exactly what went wrong and where. You can look up documentation explaining the language’s syntax. Given the same code, Python behaves the same way every time. This predictability and transparency make the asymmetry manageable.
Algorithmic systems lack these features. Documentation is minimal or absent. Behavior may be probabilistic, with the same input producing different outputs. Error messages, when they exist, provide no insight into what went wrong or how to fix it. Users must infer patterns from observation rather than learn explicit rules from documentation.
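The contrast can be made concrete with a small sketch (the ranked feed below is a toy I invented, not any platform’s logic): the compiler’s failure is explicit, located, and identical on every run, while the stochastic system returns different outputs for the same input and offers no error channel at all.

```python
import random

# Deterministic and explicit: the same malformed code fails the same way every time,
# and the error names the line and the problem.
try:
    compile("def f(:\n    pass", "<example>", "exec")
except SyntaxError as err:
    print(err.lineno, err.msg)  # e.g. "1 invalid syntax", identical on every run

# A toy stand-in for a ranked feed: same input, sampled output, no explanation.
def opaque_feed(posts):
    rng = random.Random()  # unseeded: behavior varies between calls
    return sorted(posts, key=lambda _: rng.random())[:3]

posts = ["a", "b", "c", "d", "e"]
print(opaque_feed(posts))  # one ordering
print(opaque_feed(posts))  # likely a different one, and no message explains why
```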
The LinkedIn algorithm deciding which posts to show, the Spotify algorithm deciding which songs to recommend, the Uber algorithm deciding which rides to offer: none comes with documentation users can consult. None produces error messages when user input fails to achieve desired outcomes. None behaves deterministically. Users learn these systems the way children learn language, through immersion, pattern recognition, and trial and error, not through formal instruction.
Fluency Stratifies
Asymmetric interpretation creates a specific burden for users. Because they cannot negotiate meaning, they must develop accurate mental models of how algorithms interpret inputs. This requires ongoing experimentation, observation, and revision.
The process resembles implicit learning. Users acquire knowledge of algorithmic patterns without being explicitly taught them. They cannot articulate rules because no rules were ever explained. They develop intuitions through repeated exposure and feedback. The knowledge is procedural rather than declarative: users know how to do things without necessarily knowing why those approaches work.
High-fluency users develop a sophisticated understanding of algorithmic interpretation. They know what words, images, formats, and timing produce desired outputs on specific platforms. They have internalized implicit rules governing algorithmic behavior even though those rules were never explained to them. Their expertise manifests as successful prediction: they can anticipate how algorithms will interpret their inputs.
A content creator who has developed fluency with a social media algorithm knows that certain posting times generate more reach, that certain formats trigger different distribution patterns, and that certain words attract algorithmic attention in particular ways. None of this knowledge came from documentation. All of it came from experimentation, observation, community knowledge-sharing, and gradual pattern recognition. The creator cannot explain the algorithm’s rules because they were never disclosed. But they can predict its behavior with reasonable accuracy.
Low-fluency users lack this predictive capability. They experience platforms as unpredictable or arbitrary. The same actions sometimes produce desired results and sometimes do not. They cannot identify what differentiates success from failure because they have not developed accurate models of algorithmic interpretation.
The gap between high-fluency and low-fluency users is not primarily a gap in motivation or intelligence. It is a gap in opportunity to learn. Fluency develops through practice. Practice requires time and access. Users who spend more time on platforms can experiment more, participate in communities that share algorithmic knowledge, and develop fluency faster. Users with demanding jobs, caregiving responsibilities, limited internet access, or less exposure to digital culture develop fluency more slowly or not at all.
This fluency stratification explains why identical platforms produce such different outcomes for different users. The platform is the same. The features are the same. The algorithms are the same. But outcomes vary dramatically. Users with accurate mental models of algorithmic interpretation can reliably translate their intentions into effective inputs. Users without such models cannot.
The stratification compounds over time. High-fluency users get more value from platforms. They continue using platforms intensively. Their fluency increases further. Low-fluency users get less value. They may use platforms less frequently or abandon them entirely. Their fluency stagnates or decays. What began as a small difference in initial learning conditions becomes a large and persistent gap in coordination capability.
Developing accurate mental models requires extensive practice and sustained engagement. Users must try things, observe results, formulate hypotheses, test hypotheses, and revise understanding. This learning is effortful, time-consuming, and never complete because platforms continually update their algorithms. A user who becomes fluent in the 2023 version of an algorithm may find their knowledge obsolete when the 2024 version launches. Maintaining fluency requires continuous investment.
The burden falls unevenly. Users with more time, more resources, more prior digital experience, and more access to communities sharing algorithmic knowledge develop fluency faster. Users lacking these advantages fall behind. Asymmetric interpretation contributes to the broader stratification of platform benefits along existing lines of inequality. Those who already have advantages gain more from platforms. Those who already face disadvantages gain less.
The Framework Connection
Asymmetric interpretation is one of five properties defining Application Layer Communication:
Intent Specification: Communication serves instrumental coordination rather than social relationship-building or mutual understanding.
Asymmetric Interpretation: Meaning is determined unilaterally by algorithms, not negotiated between communicative partners.
Machine Orchestration: Individual communications trigger coordination among distributed actors and resources beyond the dyadic interaction.
Implicit Acquisition: Users learn ALC through practice rather than explicit instruction.
Stratified Fluency: Populations exhibit systematic variance in ALC competence.
These properties connect systematically. Because interpretation is asymmetric, users cannot learn effective communication through dialogue with the system. They must learn through trial-and-error. Because learning is trial-and-error-based and time-intensive, competence stratifies across users. Because communication is instrumental and triggers distributed coordination, the stakes of fluency are high: users who cannot communicate effectively with platforms cannot coordinate effectively through them.
Asymmetric interpretation is a structural feature that shapes how platform literacy develops, how platform benefits are distributed, and how platform coordination differs from traditional coordination mechanisms.
Markets, hierarchies, and networks all assume actors can communicate with their coordination partners. They assume that interpretation is at least somewhat negotiable: that meaning can be clarified through interaction and that parties can work toward mutual understanding, even if imperfectly.
Platform coordination makes no such assumption. Algorithms interpret. Users adapt.
Bibliography
Bishop, S. (2019). Managing visibility on YouTube through algorithmic gossip. New Media & Society, 21(11-12), 2589-2606.
Bucher, T. (2017). The algorithmic imaginary: Exploring the ordinary affects of Facebook algorithms. Information, Communication & Society, 20(1), 30-44.
Bucher, T. (2018). If…Then: Algorithmic Power and Politics. Oxford University Press.
Clark, H. H., & Brennan, S. E. (1991). Grounding in communication. In L. B. Resnick, J. M. Levine, & S. D. Teasley (Eds.), Perspectives on Socially Shared Cognition (pp. 127-149). American Psychological Association.
DeVito, M. A., Gergle, D., & Birnholtz, J. (2017). “Algorithms ruin everything”: #RIPTwitter, folk theories, and resistance to algorithmic change in social media. Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, 3163-3174.
DeVito, M. A., Birnholtz, J., Hancock, J. T., French, M., & Liu, S. (2018). How people form folk theories of social media feeds and what it means for how we study self-presentation. Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, Paper 120.
Eslami, M., Rickman, A., Vaccaro, K., Aleyasen, A., Vuong, A., Karahalios, K., Hamilton, K., & Sandvig, C. (2015). “I always assumed that I wasn’t really that close to [her]”: Reasoning about invisible algorithms in news feeds. Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, 153-162.
Eslami, M., Vaccaro, K., Lee, M. K., Elazari Bar On, A., Gilbert, E., & Karahalios, K. (2019). User attitudes towards algorithmic opacity and transparency in online reviewing platforms. Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, Paper 494.
Gadamer, H.-G. (1989). Truth and Method (2nd rev. ed.). Continuum. (Original work published 1960)
Gillespie, T. (2018). Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media. Yale University Press.
Grice, H. P. (1975). Logic and conversation. In P. Cole & J. L. Morgan (Eds.), Syntax and Semantics, Vol. 3: Speech Acts (pp. 41-58). Academic Press.
Luger, E., & Sellen, A. (2016). “Like having a really bad PA”: The gulf between user expectation and experience of conversational agents. Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, 5286-5297.
Noble, S. U. (2018). Algorithms of Oppression: How Search Engines Reinforce Racism. NYU Press.
Rader, E., & Gray, R. (2015). Understanding user beliefs about algorithmic curation in the Facebook news feed. Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, 173-182.
Schegloff, E. A., Jefferson, G., & Sacks, H. (1977). The preference for self-correction in the organization of repair in conversation. Language, 53(2), 361-382.
Siles, I., Valerio-Alfaro, L., & Meléndez-Moran, A. (2024). Learning to like TikTok…and not: Algorithm awareness as a process. New Media & Society, 26, 5702-5718.
Taylor, J. R., & Van Every, E. J. (2000). The Emergent Organization: Communication as Its Site and Surface. Lawrence Erlbaum Associates.
Weick, K. E. (1995). Sensemaking in Organizations. Sage Publications.
