The Aggregation Problem
How Millions of Individual Platform Acts Become Collective Coordination
An Uber driver in Queens accepts a ride request. A Spotify listener in São Paulo skips a track after four seconds. An Airbnb host in Barcelona adjusts a nightly rate by twelve euros. Each act is individual, private, and locally rational. No central authority designs the coordination that follows. No dispatcher distributes drivers across the city to match demand patterns. No programmer manually curates the playlist that 500 million listeners will encounter tomorrow. No pricing committee aligns short-term rental rates with seasonal demand across 220 countries. Yet coordination emerges. Drivers distribute. Playlists converge. Prices clear markets. Millions of individual acts, performed by people who will never meet, never communicate, and never learn of each other’s existence, produce collective outcomes that resemble organizational coordination without any organization producing them.
A term used this loosely needs definition. Coordination in platform contexts means the effective alignment of distributed participant actions toward system-level outcomes that no individual participant designed or directed. In ride-hailing, coordination is matching efficiency: the ratio of rider wait time and driver idle time to completed trips across a metropolitan area. In short-term rental markets, coordination is market-clearing accuracy: the degree to which available inventory meets demand at prices acceptable to both hosts and guests without persistent surplus or shortage. In content platforms, coordination is recommendation relevance: the degree to which algorithmic distribution connects content with audiences whose engagement validates the match. In each case, coordination produces a measurable collective outcome from distributed individual choices. The outcome is not planned. It is aggregated.
The puzzle deepens when coordination fails. Identical platform architectures produce tight coordination in some markets and persistent dysfunction in others. The same Uber algorithm that efficiently distributes drivers in San Francisco produces chronic shortages in smaller cities. The same Airbnb search ranking that generates competitive pricing in London produces erratic pricing in emerging markets. The algorithm did not change. The platform did not change. Something about the user population changed, and that something determined whether individual acts aggregated into coordination or dissolved into noise.
Every major coordination theory has an aggregation story. Markets aggregate through prices: individual transactions encode information into price signals that coordinate subsequent transactions without central direction (Hayek, 1945). Hierarchies aggregate through authority: individual task execution follows organizational rules that compile outputs into collective production. Networks aggregate through reputation and trust: repeated interactions build relational expectations that coordinate behavior across organizational boundaries (Powell, 1990). Platforms aggregate through algorithms. But “algorithms” name the technology, not the mechanism. Saying platforms coordinate through algorithms is like saying markets coordinate through telephones. The question is not what infrastructure carries the coordination signals, but what generates signals worth carrying.
This is the micro-macro bridging problem, and it is the third gap in the theoretical architecture supporting Application Layer Communication. The first essay in this series established that platform coordination operates through a communication system existing frameworks cannot capture: the dyadic assumptions of computer-mediated communication and human-machine communication cannot handle the triadic structure of human-algorithm-human interaction. The second essay established that competence in this communication system transfers across platforms through deep structural similarity, challenging the dominant consensus on far-transfer failure. This third essay addresses the question the first two leave open: once individuals develop differential competence in platform communication, how does that differential competence aggregate into differential coordination at the collective level?
Three Frameworks That Almost Answer the Question
The most influential micro-macro framework in social science decomposes the problem into three mechanisms. Macro structures shape individual situations (Arrow 1). Individuals form actions in those situations (Arrow 2). Individual actions aggregate to macro outcomes (Arrow 3). The framework, developed across decades of sociological theory, provides the conceptual vocabulary for linking levels of analysis and has been explicitly recommended for management research by journal editors calling for greater micro-macro specification (Coleman, 1990; Cowen et al., 2022).
The framework works cleanly for classical coordination mechanisms, and the cleanness is instructive. In markets, price signals shape individual buying and selling decisions. Individuals transact based on those signals. Transactions aggregate to market equilibria. The actor’s competence to read a price and act on it is assumed, not theorized, because markets developed alongside populations that already understood exchange.
In hierarchies, authority structures shape individual role performance. Individuals execute assigned tasks within defined parameters. Task execution aggregates to organizational output. The actor’s competence to follow instructions and produce specified work is a hiring prerequisite, screened before participation begins.
In networks, relational expectations shape individual cooperative behavior. Individuals reciprocate based on accumulated trust. Reciprocation aggregates to network-level coordination. The actor’s competence to sustain reciprocal relationships develops through socialization that predates any specific network.
Each mechanism assumes the actor is competent to participate. The mechanism channels existing competencies. It does not create them.
For platforms, Arrow 1 already fails. Platform structures do not channel pre-existing competencies. They develop competencies through participation. A new Uber driver does not know how to read surge pricing, position for airport queues, or time shift changes. A new Airbnb host does not know how to write algorithmically favorable descriptions, price dynamically, or respond to reviews in ways that boost search ranking. These competencies emerge from platform engagement. They are outputs of the system, not inputs. Arrow 1 must include competency transformation, not merely situation-shaping, and this cascades: if Arrow 1 transforms actors rather than constraining them, then Arrow 2 involves actors whose capabilities changed through structural engagement, and Arrow 3 aggregates actions from transformed actors rather than stable ones (Felin et al., 2015; Ployhart & Moliterno, 2011).
A recent meta-synthesis applying the framework to corporate social responsibility research illustrates the scale of the specification problem. Even in conventional organizational settings with clear hierarchies and established routines, specifying how individual behaviors aggregate to organizational outcomes required identifying entirely new causal mechanisms that prior macro-level research had missed (Remmer & Gilbert, 2025). A review of 313 cross-level articles in ten management journals found that 62% of papers making cross-level claims exhibited theoretical flaws in their micro-macro specifications (Lemoine et al., 2025). If the field struggles to specify aggregation within organizations, the difficulty multiplies enormously for platform contexts where organizational structure itself is absent.
The microfoundations movement in organizational theory offers a second approach. Organizational capabilities, routines, and performance ultimately rest on individual actions, interactions, and cognition (Felin et al., 2015). The movement correctly identifies the problem: macro outcomes require micro-level specification. But microfoundations scholarship assumes that organizational structure provides the aggregation technology. Routines aggregate individual actions into collective patterns. Reporting structures aggregate individual information into organizational knowledge. Incentive systems aggregate individual effort into collective production (Felin et al., 2012; Puranam, 2018).
Platforms lack this technology. No organizational routines connect an Uber driver in Queens to one in Brooklyn. No reporting structure aggregates Airbnb host decisions into market-level pricing. No shared incentive system coordinates Spotify listeners into the recommendation patterns that shape what half a billion people hear. The organizational mediation that microfoundations assumes as given is absent. Without organizational routines, the individual-to-collective pathway must operate through something else entirely.
Complexity theory offers a third framework that initially appears to solve the problem. Stigmergy describes coordination through environmental traces: an agent acts, the action leaves a trace in the environment, the trace stimulates subsequent actions by other agents, and coordination emerges without direct inter-agent communication (Heylighen, 2016a). The concept originated in entomology, where termites coordinate nest-building through pheromone deposits rather than direct communication (Grassé, 1959), and has been generalized to explain coordination in systems ranging from ant colonies to Wikipedia to open-source software development (Theraulaz & Bonabeau, 1999; Zheng et al., 2023; Bolici et al., 2016).
Platforms resemble stigmergic systems more closely than they resemble either markets or organizations. Each user action leaves a digital trace: a rating, a click, a purchase, a dwell time. Algorithms process traces into environmental modifications: updated recommendations, revised search rankings, adjusted prices. Subsequent users encounter modified environments and act accordingly. Platform architecture does not inherently require or facilitate direct user-to-user communication about coordination; users sometimes communicate through off-platform channels like Reddit forums and WhatsApp groups, but these are supplements to the system rather than features of it. The environment mediates the coordination that matters. The most rigorous quantitative test of stigmergy in digital contexts found that the degree of stigmergy in Wikipedia is positively associated with both participation and knowledge quality, providing empirical support for the mechanism at scale (Zheng et al., 2023).
The framework breaks at a specific point. Stigmergy assumes agents with stable, roughly uniform competencies for reading and producing traces. Ants do not vary in their capacity to detect pheromones. Termites do not differ in their ability to deposit building material at appropriate locations. The homogeneity of agent response is what makes stigmergic coordination reliable: any agent encountering a strong trace responds identically to any other agent encountering the same trace. Platform users vary enormously in their capacity to produce algorithmically processable traces. The quality of the trace depends on the competence of the actor producing it. Two Airbnb hosts listing identical apartments in the same neighborhood produce traces of vastly different coordination value depending on how they title, describe, photograph, price, and respond to guests. Stigmergy cannot explain why identical trace-leaving opportunities produce radically different coordination outcomes across users (Heylighen, 2016b).
Experimental evidence confirms this limitation. When human subjects engage in stigmergic coordination tasks, strategic manipulation and deceptive signaling emerge, producing behavioral profiles that differ systematically in cooperation level (Bassanetti et al., 2023). Knowledge heterogeneity among contributors favors stigmergic interaction, but experience heterogeneity increases the need for explicit communicative interaction, suggesting that stigmergic coordination degrades precisely when actors differ in the type of competency they bring (Qiu et al., 2021). Human stigmergy involves intentional communication, causal inference, and metacognition, features that introduce competency variance absent from biological stigmergy (Topf & Speekenbrink, 2021). The framework works for systems with homogeneous agents. Platforms are systems with profoundly heterogeneous agents.
The Common Failure Point
All three frameworks share a hidden assumption: actor competencies are inputs, not outputs, of the coordination process. The micro-macro bridging framework treats competencies as given properties of actors who enter structured situations. The microfoundations movement treats competencies as organizational resources that can be deployed. Stigmergy treats competencies as uniform biological endowments that agents bring to their interactions with the environment.
Platform coordination violates this assumption. The capacity to participate effectively in aggregation develops through participation itself. Classical organizational sensemaking theory established that individuals in organizations construct meaning through ongoing retrospective interpretation of cues extracted from ambiguous environments (Weick, 1995). That framework assumed organizational membership, providing shared vocabularies, inherited frames, and institutional traditions against which sensemaking occurs. Platform workers lack these organizational anchors. Yet they engage in sensemaking anyway, developing routines shaped by accumulated understanding of algorithmic functions, routines that evolve as algorithmic parameters shift (Möhlmann et al., 2023). What Weick described as organizational sensemaking reappears in platform contexts as algorithm sensemaking, but without the organizational infrastructure Weick assumed enabled it. Folk theories of algorithms emerge through ongoing encounters with platforms, not through formal instruction or pre-existing expertise (DeVito, 2021; Bucher, 2017). Gig economy workers demonstrate clear learning curves: newcomers rely heavily on algorithmic recommendations while experienced workers deviate from recommendations, developing personalized strategies through accumulated engagement (Jiang & Sinchaisri, 2025). The first validated scale of algorithmic competency identifies four dimensions (understanding, embracing, leveraging, and remediating algorithmic management), all of which develop through participation rather than arriving as pre-formed capabilities (Zhou et al., 2025).
The aggregation problem, therefore, has a prior problem: the competency problem. Before explaining how individual actions aggregate into collective coordination, a theory must explain how individuals develop the capacity to produce actions that aggregate well. And “aggregate well” in platform contexts means something specific: produce data that algorithms can process into coordination outputs.
What Gets Aggregated
Algorithms do not aggregate actions. They aggregate data. The distinction matters. An action is something a person does. Data is the machine-readable representation of something a person does. The gap between the two is where coordination succeeds or fails.
A driver who understands surge pricing generates data that the algorithm can use to distribute drivers efficiently. A driver who does not understand surge pricing generates data that the algorithm still processes, but the resulting distribution is suboptimal for that driver and for the system. Both drivers act. Both generate data. The data differs in coordination value because the competence behind it differs.
Algorithms also aggregate inaction, and inaction carries coordination value that varies with user competence in the same way. A Spotify listener who does not skip a track generates a dwell-time signal that the recommendation algorithm interprets as positive engagement. A TikTok viewer who lingers on a video without liking or sharing generates watch-completion data that weighs more heavily in distribution decisions than any explicit interaction. An Uber driver who lets a ride request expire generates a non-acceptance signal that reshapes the matching algorithm’s subsequent distribution. These are not absences of data. They are data. The algorithm reads inaction as information, and the coordination value of that information depends on whether the user understands that inaction communicates. A literate user who strategically allows a low-value ride request to expire while repositioning toward a surge zone generates meaningful coordination data. A confused user who fails to accept a request because the interface is unfamiliar generates noise. Both episodes look identical in the system’s data layer. The competence behind them determines their coordination value.
Platform coordination, therefore, aggregates not actions or inactions but translations. Each user translates intentions into platform-parsable inputs, including everything the user does and does not do within the platform’s data-collection scope. The quality of translation depends on the user’s communicative competence within the platform’s specific communication system, which this essay series has called ALC literacy. High-literacy users generate high-fidelity translations: their data accurately represents their intentions in forms the algorithm can process effectively. Low-literacy users generate lossy translations: their data misrepresents their intentions, or represents them in forms the algorithm processes poorly.
This reframing resolves a persistent puzzle in platform studies. Algorithmic management research documents extensive control mechanisms (restricting, recommending, recording, rating, replacing, and rewarding) that aggregate individual worker data into system-level outputs (Kellogg et al., 2020). But the same control mechanisms operating on the same platform produce radically different coordination quality across markets and user populations. The explanation cannot lie in the control mechanisms themselves, because those mechanisms are constant across contexts. The explanation lies in the quality of input data, which varies with user competence. Input quality determines output quality at every level of algorithmic processing (Whang et al., 2021). Moderate input control on platforms maximizes complementor quality: too little control risks degradation from low-quality participants, too much control deters participation entirely (Adam et al., 2022). The optimal balance depends not on the algorithm but on the competency distribution of the user population.
The co-optation framework identifies the coordination mode but not the aggregation mechanism. Platforms co-opt users through participation: the most valuable assets and activities reside on the platform rather than in the firm, and users contribute to organizational value creation while remaining external to the organization (Stark & Vanden Broeck, 2024). This describes what happens. It does not specify how millions of individual co-optation episodes aggregate into collective coordination. The Möbius organizational topology described by co-optation theory requires a process that connects individual participation to system-level outcomes. That process operates through the quality of data that users generate when translating their intentions into platform-parsable inputs (Stark & Pais, 2020).
Redrawing the Boat
Coleman’s boat requires three specified mechanisms. The proposed aggregation pathway does not merely modify Coleman’s arrows. It replaces the diagram with a four-step recursive architecture designed for systems where structure transforms the actor rather than constraining an already competent one.
Step one: platform structures develop user competencies through participation, replacing Coleman’s Arrow 1 (structures constrain pre-competent actors). Users develop ALC literacy through implicit acquisition: communicative competence built up through repeated practice, experimentation, and calibration against ambiguous feedback rather than through formal instruction or explicit training. Platforms do not teach their own communication requirements; users extract patterns from platform behavior, form provisional theories about algorithmic logic, test those theories through behavioral experiments, and revise them in light of confounded and probabilistic feedback. This competence varies systematically across users and develops at different rates depending on adjacent competencies, learning context, and accumulated platform experience.
Step two: users generate machine-parsable data whose coordination value depends on the competencies developed in step one, replacing Coleman’s Arrow 2 (actors choose from pre-existing repertoires). An Airbnb host with high ALC literacy produces listings where title keywords, description structure, photography quality, pricing strategy, and response patterns all align with algorithmic ranking criteria. A ride-hailing driver with high ALC literacy generates acceptance patterns, positioning data, and rating trajectories that the matching algorithm can process into efficient distribution. The data these users generate carries high coordination value because it accurately encodes the information algorithms need to produce effective matches, rankings, and recommendations.
Step three: algorithms process heterogeneous-quality data into coordination signals, replacing Coleman’s Arrow 3 (individual actions aggregate to macro outcomes through social interaction). Recommendations, matches, rankings, prices, visibility allocations, and queue positions all emerge from algorithmic processing of user-generated data. The quality of coordination signals depends on the quality of input data. The algorithm itself may be technically identical across contexts, yet produce different coordination quality because it receives different quality inputs.
Step four, which has no analog in Coleman’s framework: coordination signals reshape the environment for subsequent competency development, closing a recursive loop. Effective coordination signals create environments where users receive clear feedback on their communication effectiveness. Degraded coordination signals create environments where feedback is noisy, confusing, or misleading. Step four feeds back into step one, making the model cyclical whereas Coleman’s is linear.
Each step introduces variance. Literacy varies across users. Data quality varies with literacy. Algorithmic processing quality varies with data quality. Environmental feedback quality varies with processing quality. Variance compounds across steps and across cycles.
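The compounding across steps and cycles can be sketched as a toy simulation. Everything here is an illustrative assumption rather than an estimate from the platform literature: the 0-to-1 literacy scale, the noise level, the 0.05 feedback coefficient, and the linear update rule are all invented for the sketch.

```python
import random

def simulate(pop_size=1000, cycles=20, seed=0):
    """Toy model of the four-step recursive pathway.

    Step 1: participation develops literacy (the update at the bottom).
    Step 2: each user's data quality tracks their literacy, with noise.
    Step 3: signal quality is proxied directly by data quality.
    Step 4: above-average signals accelerate subsequent literacy growth,
            below-average signals retard it.
    All scales and coefficients are illustrative assumptions.
    """
    rng = random.Random(seed)
    # Users begin with only small literacy differences (spread <= 0.10).
    literacy = [rng.uniform(0.45, 0.55) for _ in range(pop_size)]
    for _ in range(cycles):
        # Step 2: data quality tracks literacy, with noise, clamped to [0, 1].
        data_quality = [min(1.0, max(0.0, lit + rng.gauss(0, 0.02)))
                        for lit in literacy]
        # Steps 3-4 feeding back into step 1: feedback relative to the
        # midpoint nudges literacy up or down for the next cycle.
        literacy = [min(1.0, max(0.0, lit + 0.05 * (q - 0.5)))
                    for lit, q in zip(literacy, data_quality)]
    return literacy

final = simulate()
spread = max(final) - min(final)  # initial spread was at most 0.10
```

Under these assumptions each user's deviation from the midpoint is multiplied by roughly 1.05 per cycle, so the population's literacy spread grows well beyond its initial band: small initial differences compound, which is the variance-amplification property the four-step loop describes.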
Recursive Amplification
The recursion produces what the cumulative advantage literature describes at the system level but cannot explain at the mechanism level. Small initial differences in user capability amplify through algorithmic feedback loops into large outcome disparities. Competitive amplification models demonstrate that small differences in resource positions compound over time, with amplification largest in markets featuring highly scalable and rapidly depreciating resources, precisely the characteristics of digital platform contexts (Wibbens, 2021). Platforms with stronger recommendation systems show more concentrated inequality and lower median earnings among content creators, suggesting that algorithmic architecture itself drives differential outcomes rather than merely reflecting pre-existing talent distributions (Strauss et al., 2025). Algorithms reinforce pre-existing visibility advantages so powerfully that small newspapers actually perform worse when featured on aggregator platforms than when they operate independently, because the algorithmic aggregation mechanism amplifies competency differences rather than correcting them (Meyer et al., 2024).
The mechanism is more specific than generic cumulative advantage. Cumulative advantage can be reconceptualized as involving heterogeneous individual capabilities and deliberate strategies for positioning within self-reinforcing systems, with dependence on such dynamics increasing outcome variance rather than mean outcomes (Vashevko, 2024). Platform users do not passively receive cumulative advantage. Highly literate users actively invest in the strategies that trigger algorithmic amplification: they optimize metadata, time content releases, structure interactions to generate favorable signals, and develop routines for monitoring and responding to algorithmic changes. This investment compounds because each successful cycle of literacy-driven data quality generates resources, visibility, engagement, and revenue that fund further investment. Low-literacy users lack the competence to make initial investments and therefore do not receive the returns that would fund subsequent competence development.
Recommendation algorithms create self-reinforcing feedback spirals between information provision and information consumption (Mansoury et al., 2020; Jiang et al., 2025). On e-commerce platforms, slight differences in early conversion rates yield large differences in subsequent traffic allocation because the platform’s optimal strategy is to route traffic to sellers whose data signals higher coordination value (Yu et al., 2022). The literature on algorithmic attention rents theorizes how platforms extract value through algorithmic control of user attention, with platform power enabling attention extraction that reinforces market dominance (O’Reilly et al., 2024). Each of these dynamics operates through the same pathway: differential user competence produces differential data quality, which produces differential algorithmic treatment, which produces differential learning environments, which produce further competency divergence.
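The e-commerce dynamic, where slight early conversion differences yield large differences in traffic allocation, can be illustrated with a minimal sketch. The proportional routing rule and all numbers below are assumptions invented for illustration, not the model from the cited study:

```python
def route_traffic(conversion_rates, traffic_per_round=10_000, rounds=10):
    """Toy amplification sketch: each round, the platform routes traffic
    in proportion to every seller's accumulated conversions, so a seller
    whose data signals slightly higher coordination value receives an
    ever-growing share of exposure. Returns final traffic shares."""
    conversions = [1.0] * len(conversion_rates)  # equal seed for all sellers
    for _ in range(rounds):
        total = sum(conversions)
        shares = [c / total for c in conversions]
        # Traffic is allocated by share; conversions accrue at each
        # seller's (hypothetical) conversion rate.
        for i, share in enumerate(shares):
            conversions[i] += share * traffic_per_round * conversion_rates[i]
    total = sum(conversions)
    return [c / total for c in conversions]

# Two sellers whose conversion rates differ by only 10 percent:
shares = route_traffic([0.020, 0.022])
```

Under these assumptions a 10 percent gap in conversion rates widens into a gap of roughly 30 percent in accumulated traffic share after ten rounds, and it keeps widening with more rounds: the signature of the self-reinforcing spiral described above, in which differential data quality earns differential algorithmic treatment.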
What This Explains and Where It Does Not
The aggregation pathway resolves three explanatory puzzles that no existing framework addresses.
It explains platform-level coordination quality variance. Identical algorithms yield different coordination quality across markets because coordination quality depends on the literacy distribution of the user population, not solely on algorithmic parameters. A city where ride-hailing drivers collectively possess high ALC literacy generates better coordination data, which produces better matching, which creates better feedback loops, which sustains high literacy levels. A city where drivers collectively possess low ALC literacy generates degraded coordination data, and the recursive loop drives coordination quality further downward. The algorithm is constant. The literacy distribution varies. The coordination outcome follows the distribution of literacy.
It reframes the organizational training question. If aggregation depends on literacy, and literacy develops through implicit acquisition, then organizations cannot produce coordination by implementing platforms. They must develop the communicative competencies that enable their members to generate data compatible with coordination. Platform implementation without literacy development is infrastructure without capacity. Current organizational practice inverts the priority, investing in platform selection and interface training while neglecting the communication competencies that determine whether platform data aggregates into coordination or noise. Platform owners, lacking formal power over participants, influence coordination through technology-mediated orienting rather than direct authority. Still, orienting requires participants whose communicative competence enables them to respond to coordination signals with appropriate data (Leong et al., 2023).
It exposes the equity dimension of platform coordination. Aggregation mechanisms that depend on unequally distributed competencies yield unequal coordination benefits. Platform coordination is not a neutral infrastructure that anyone can use equally. It is a communication system that rewards fluency and penalizes its absence. Those who arrive with adjacent competencies (digital fluency, analytical reasoning, and comfort with structured input) develop ALC literacy faster and generate higher-quality coordination data earlier. Algorithmic knowledge gaps represent a new dimension of digital inequality that operates independently of traditional access divides (Cotter & Reisdorf, 2020). Negative consequences cascade through chain reactions: cognitive vulnerability leads to economic divides, which produce information divides, which generate social divides (Potnis et al., 2024). Even small algorithmic biases prove more harmful than viewer biases in perpetuating rich-get-richer effects, because algorithmic amplification operates at scale while human biases operate locally (Ionescu et al., 2023). Platforms create a lemon market when low- and high-quality participation become indistinguishable, driving high-quality participants out of the ecosystem (Tavalaei et al., 2024).
The theory has boundary conditions, and specifying them sharpens its explanatory range. ALC literacy matters most where the translation gap between user intention and machine-parsable data is widest. Platforms with rich, ambiguous input requirements generate the largest competency-driven variance: Airbnb hosts must compose descriptions, select photographs, set dynamic prices, and manage review interactions, each requiring substantial communicative judgment about what the algorithm will reward. Content creators on TikTok or YouTube must make decisions about framing, pacing, metadata, and timing that depend on implicit models of algorithmic distribution. Ride-hailing drivers must interpret surge maps, anticipate demand patterns, and manage acceptance rates within opaque algorithmic parameters.
ALC literacy matters least where the translation gap is narrow. A digital tollway user taps a transponder. The translation from intention to machine-parsable data is deterministic: there is one input, one action, one data point, and no competency-driven variance in how the system processes it. Payment processing platforms, barcode scanners, and simple transactional systems impose minimal translation requirements. The theory predicts near-zero competency-driven coordination variance in these contexts because the communication system demands near-zero communicative judgment. The gradient between these poles, from tollway to TikTok, defines the theory’s explanatory domain. Where translation is complex, ambiguous, and consequential, ALC literacy predicts coordination outcomes. Where translation is simple, deterministic, and trivial, it does not. Platforms rarely remain static along this gradient. A system that begins as a simple transactional tool, closer to the tollway, can evolve into an algorithmic ecosystem, closer to TikTok, as the platform adds recommendation layers, dynamic pricing, reputational scoring, and personalized ranking. Each addition widens the translation gap and increases the ALC literacy demands on its users, often without users recognizing the shift. The framework, therefore, applies not only across platforms at a single point in time but also within a single platform across its developmental trajectory.
Coordination as a Literacy Problem
The aggregation problem in platform coordination is, at bottom, a problem about who can speak the platform’s language well enough to produce data worth aggregating. Solving it reshapes three active research programs in management scholarship.
For coordination theory, the implication is that mechanism specification must accommodate endogenous competency development. Classical theory treats competency development as a background condition across markets, hierarchies, and networks. Actors slowly develop pricing sophistication, hierarchical skills, or relational competence. Platforms compress this dynamic into observable timescales and record it in digital traces. They make the invisible visible. The platform case does not merely extend coordination theory. It reveals a blind spot that has always been present across all coordination mechanisms: the assumption that the capacity to participate in coordination exists before and independently of coordination structures themselves. Platforms force this assumption into the open because the assumption is so visibly violated.
For platform studies, the implication is that the control-versus-resistance framing that dominates the field captures only one dimension of platform dynamics. Algorithmic management research has organized itself around documenting control mechanisms and workers’ responses to them (Kellogg et al., 2020; Curchod et al., 2020; Rahman, 2021). But control and resistance are downstream consequences of a more fundamental process: the development of communicative competence. Workers who develop high ALC literacy experience algorithmic management differently than workers who do not. The control mechanisms are identical. The translations those workers generate are not. The competency distribution precedes and shapes the control-resistance dynamic, not the other way around. Studying platform inequality without measuring user communicative competence is like studying educational outcomes without measuring literacy. The independent variable is missing from the model.
For organizational design, the implication is that platform implementation is a literacy intervention, whether organizations recognize it or not. Every platform deployment creates a new communication system. Every new communication system requires users to develop communicative competence through implicit acquisition, because platforms do not teach their own communication requirements. Organizations that understand this can design for literacy development: scaffolded exposure, transparent feedback, and opportunities for low-stakes experimentation. Organizations that do not understand it deploy platforms and then blame users, algorithms, or vendors when coordination fails to materialize. The aggregation pathway explains why coordination fails: the user population’s literacy distribution determines data quality, which in turn determines algorithmic output quality, and no amount of algorithmic optimization can compensate for inputs that lack coordination-relevant information.
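The causal chain in this pathway can be sketched as a stylized simulation. Everything below is a hypothetical toy model, not the paper’s formal specification: literacy is assumed to be a number in [0, 1], each user is assumed to emit a noisy signal whose noise shrinks as literacy rises, and the platform’s aggregation rule is a simple average held fixed across populations.

```python
import random
import statistics

def aggregate_signal(mean_literacy, n_users=10_000, true_value=1.0, seed=7):
    """Toy model (assumed, not from the source): each user reports
    true_value plus noise; higher literacy means less noise, so the
    population's literacy distribution sets the quality of the data
    the platform has to aggregate."""
    rng = random.Random(seed)
    readings = []
    for _ in range(n_users):
        # Draw a user's literacy, clipped to [0, 1].
        literacy = min(1.0, max(0.0, rng.gauss(mean_literacy, 0.1)))
        noise_sd = 1.0 - literacy  # assumed literacy -> data-quality link
        readings.append(true_value + rng.gauss(0.0, noise_sd))
    # The platform sees only the readings: both its estimate and the
    # dispersion of its inputs are fixed upstream by the literacy mix.
    return statistics.fmean(readings), statistics.stdev(readings)

est_high, noise_high = aggregate_signal(mean_literacy=0.9)
est_low, noise_low = aggregate_signal(mean_literacy=0.3)
```

The aggregation rule never changes between the two runs; only the population feeding it does, yet the low-literacy population hands the same averaging step far noisier inputs. That is the shape of the argument: no optimization of the aggregation layer can recover coordination-relevant information the inputs never contained.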
The existing literature has assembled nearly all the pieces. The micro-macro bridging tradition provides the conceptual architecture. The microfoundations movement insists on individual-level specification. Stigmergy provides the model of indirect coordination through environmental traces. The algorithmic management literature documents the control mechanisms through which platforms process user data. The folk theories and algorithm sensemaking literature documents how users develop competence through participation. The cumulative advantage literature documents how small differences amplify through feedback loops. What remains missing is the mechanism connecting these pieces: the specification of what users do when they generate the data that algorithms aggregate, and why some users generate better data than others. That specification is ALC literacy, and its absence from the literature is the gap that matters.
References
Adam, M., Croitor, E., Werner, D., Benlian, A., & Wiener, M. (2022). Input control and its signalling effects for complementors’ intention to join digital platforms. Information Systems Journal, 33(3), 437–466. https://doi.org/10.1111/isj.12408
Bassanetti, T., Cezera, S., Delacroix, M., Escobedo, R., Blanchet, A., Sire, C., & Theraulaz, G. (2023). Cooperation and deception through stigmergic interactions in human groups. Proceedings of the National Academy of Sciences, 120(36), e2307804120. https://doi.org/10.1073/pnas.2307804120
Bolici, F., Howison, J., & Crowston, K. (2016). Stigmergic coordination in FLOSS development teams. Cognitive Systems Research, 38, 14–22. https://doi.org/10.1016/j.cogsys.2015.12.003
Bucher, T. (2017). The algorithmic imaginary: Exploring the ordinary effects of Facebook algorithms. Information, Communication & Society, 20(1), 30–44. https://doi.org/10.1080/1369118X.2016.1154086
Cameron, L. D. (2022). “Making out” while driving: Relational and efficiency games in the gig economy. Organization Science, 33(1), 231–252. https://doi.org/10.1287/orsc.2021.1547
Coleman, J. S. (1990). Foundations of social theory. Harvard University Press.
Cotter, K. (2022). Practical knowledge of algorithms: The case of BreadTube. New Media & Society, 26(4), 2131–2150. https://doi.org/10.1177/14614448221081802
Cotter, K., & Reisdorf, B. (2020). Algorithmic knowledge gaps: A new dimension of (digital) inequality. International Journal of Communication, 14, 745–765.
Cowen, A. P., Rink, F., Cuypers, I. R. P., Gregoire, D. A., & Weller, I. (2022). Applying Coleman’s boat in management research: Opportunities and challenges in bridging macro and micro theory. Academy of Management Journal, 65(1), 1–10. https://doi.org/10.5465/amj.2022.4001
Curchod, C., Patriotta, G., Cohen, L., & Neysen, N. (2020). Working for an algorithm: Power asymmetries and agency in online work settings. Administrative Science Quarterly, 65(3), 644–676. https://doi.org/10.1177/0001839219867024
DeVito, M. A. (2021). Adaptive folk theorization as a path to algorithmic literacy on changing platforms. Proceedings of the ACM on Human-Computer Interaction, 5(CSCW2), Article 339, 1–38.
Felin, T., Foss, N. J., Heimeriks, K. H., & Madsen, T. L. (2012). Microfoundations of routines and capabilities: Individuals, processes, and structure. Journal of Management Studies, 49(8), 1351–1374. https://doi.org/10.1111/j.1467-6486.2012.01052.x
Felin, T., Foss, N. J., & Ployhart, R. E. (2015). The microfoundations movement in strategy and organization theory. Academy of Management Annals, 9(1), 575–632. https://doi.org/10.5465/19416520.2015.1007651
Grassé, P.-P. (1959). La reconstruction du nid et les coordinations interindividuelles chez Bellicositermes natalensis et Cubitermes sp. Insectes Sociaux, 6(1), 41–80.
Hayek, F. A. (1945). The use of knowledge in society. American Economic Review, 35(4), 519–530.
Heylighen, F. (2016a). Stigmergy as a universal coordination mechanism I: Definition and components. Cognitive Systems Research, 38, 4–13. https://doi.org/10.1016/j.cogsys.2015.12.002
Heylighen, F. (2016b). Stigmergy as a universal coordination mechanism II: Varieties and evolution. Cognitive Systems Research, 38, 50–59. https://doi.org/10.1016/j.cogsys.2015.12.007
Ionescu, Ş., Hannák, A., & Pagan, N. (2023). Group fairness for content creators: The role of human and algorithmic biases under popularity-based recommendations. Proceedings of the 17th ACM Conference on Recommender Systems, 905–911.
Jiang, S., & Sinchaisri, W. P. (2025). Learning on the go: Understanding how gig economy workers learn with recommendation algorithms. Proceedings of the ACM on Human-Computer Interaction, 9, 1–35.
Jiang, T., Sun, Z., & Fu, S. (2025). Restraining the formation of filter bubbles with algorithmic affordances. Journal of the Association for Information Science and Technology, 76(7), 989–1005. https://doi.org/10.1002/asi.24988
Kellogg, K. C., Valentine, M. A., & Christin, A. (2020). Algorithms at work: The new contested terrain of control. Academy of Management Annals, 14(1), 366–410. https://doi.org/10.5465/annals.2018.0174
Lemoine, G., Ghahremani, H., & Norris, K. (2025). The curious case of cross-level effects: Refining our understanding to match our methods. Journal of Organizational Behavior. https://doi.org/10.1002/job.70010
Leong, C., Lin, S., Tan, F., & Yu, J. (2023). Coordination in a digital platform organization. Information Systems Research, 35(1), 363–393.
Mansoury, M., Abdollahpouri, H., Pechenizkiy, M., Mobasher, B., & Burke, R. (2020). Feedback loop and bias amplification in recommender systems. Proceedings of the 29th ACM International Conference on Information and Knowledge Management, 2145–2148.
Meyer, T., Kerkhof, A., Cennamo, C., & Kretschmer, T. (2024). Competing for attention on digital platforms: The case of news outlets. Strategic Management Journal, 45(9), 1731–1790. https://doi.org/10.1002/smj.3600
Möhlmann, M., Salge, C. A. L., & Marabelli, M. (2023). Algorithm sensemaking: How platform workers make sense of algorithmic management. Journal of the Association for Information Systems, 24(1), 35–64.
O’Reilly, T., Strauss, I., & Mazzucato, M. (2024). Algorithmic attention rents: A theory of digital platform market power. Data & Policy, 6, e6. https://doi.org/10.1017/dap.2024.1
Ployhart, R. E., & Moliterno, T. P. (2011). Emergence of the human capital resource: A multilevel model. Academy of Management Review, 36(1), 127–150.
Potnis, D., Tahamtan, I., & McDonald, L. (2024). Negative consequences of information gatekeeping through algorithmic technologies. Journal of the Association for Information Science and Technology, 76(1), 262–288. https://doi.org/10.1002/asi.24955
Powell, W. W. (1990). Neither market nor hierarchy: Network forms of organization. Research in Organizational Behavior, 12, 295–336.
Puranam, P. (2018). The microstructure of organizations. Oxford University Press.
Qiu, J., Zuo, M., Wang, J., & Cai, C. (2021). Knowledge order in an online knowledge community: Group heterogeneity and two paths mediated by group interaction. Journal of the Association for Information Science and Technology, 72(8), 1075–1091. https://doi.org/10.1002/asi.24475
Rahman, H. A. (2021). The invisible cage: Workers’ reactivity to opaque algorithmic evaluations. Administrative Science Quarterly, 66(4), 945–988. https://doi.org/10.1177/00018392211010118
Remmer, S., & Gilbert, D. U. (2025). Causal mechanisms in CSR management: A meta-synthesis of micro-CSR research. Journal of Management Studies. https://doi.org/10.1111/joms.13207
Stark, D., & Pais, I. (2020). Algorithmic management in the platform economy. Sociologica, 14(3), 47–72. https://doi.org/10.6092/issn.1971-8853/12107
Stark, D., & Vanden Broeck, P. (2024). Principles of algorithmic management. Organization Theory, 5(2). https://doi.org/10.1177/26317877241257213
Strauss, I., Yang, J., & Mazzucato, M. (2025). “Rich-get-richer”? Analyzing content creator earnings across large social media platforms. Working paper, UCL Institute for Innovation and Public Purpose.
Tavalaei, M. M., Santaló, J., & Gawer, A. (2024). Balancing variety and quality: Examining the impact of benefit-linked cross-subsidization on multisided platforms. Journal of Management Studies, 62(4), 1717–1746. https://doi.org/10.1111/joms.13120
Theraulaz, G., & Bonabeau, E. (1999). A brief history of stigmergy. Artificial Life, 5(2), 97–116. https://doi.org/10.1162/106454699568700
Topf, S., & Speekenbrink, M. (2021). Agent, behaviour, trace, repeat: Understanding the cognitive processes involved in human stigmergic coordination. Working paper.
Vashevko, A. (2024). The Matthew effect as skill and strategy. Working paper.
Weick, K. E. (1995). Sensemaking in organizations. Sage.
Whang, S. E., Roh, Y., Song, H., & Lee, J.-G. (2021). Data collection and quality challenges in deep learning: A data-centric AI perspective. The VLDB Journal, 32, 791–813.
Wibbens, P. D. (2021). The role of competitive amplification in explaining sustained performance heterogeneity. Strategic Management Journal, 42(10), 1769–1792. https://doi.org/10.1002/smj.3311
Yu, P., Zhang, Z. J., & Li, Q. (2022). Traffic channeling under uncertain conversion rates on e-commerce platforms. Naval Research Logistics, 70(1), 34–52. https://doi.org/10.1002/nav.22079
Zheng, L., Mai, F., Yan, B., & Nickerson, J. V. (2023). Stigmergy in open collaboration: An empirical investigation based on Wikipedia. Journal of Management Information Systems, 40(4), 983–1008.
Zhou, L., Lei, X., Liu, M., Huang, X., & Hou, R. (2025). Algorithmic competency of on-demand labor platform workers: Scale development, antecedents, and consequences. Asia Pacific Journal of Human Resources (forthcoming).
