By René Menozzi • 13 min read
I once killed a community in an afternoon.
Not intentionally. I cut the cash incentives from an event programme at Talus to test what we actually had. Three hundred people had been showing up regularly. After the cut, ten people showed up. Mostly moderators. People who were there for other reasons entirely.
I stood in that empty room, metaphorically speaking, and realised I had been measuring the wrong thing for months. The 300 were not a community. They were a transaction. The ten were something different, something real. But I had no framework at the time to explain why, or how to build more of them.
So I went looking for one.
People Gather for Reasons That Predate the Internet
The first thing I found in the research surprised me: none of this is new.
In 1995, psychologists Roy Baumeister and Mark Leary published a paper arguing that the need to belong is as fundamental to human survival as hunger or shelter. The brain processes social exclusion through the same neural pathways as physical pain. We did not evolve to be alone. Communities, digital or otherwise, are proxies for the tribal groups that kept our ancestors alive.
This sounds abstract until you watch it operate in a product. Think about why people switch banks. The standard assumption is that users leave Revolut for Monzo or Starling because of better rates, better cashback, better features. Sometimes that is true. But users who feel genuinely connected to a product, who identify with it, who feel it understands them, resist competitive offers that look better on paper. Both layers have to be present. Strong extrinsic value with no intrinsic connection means any competitor with a better deal can take your user. Strong intrinsic connection with poor extrinsic value means users stay but feel underserved. When both are working, switching becomes genuinely costly in a way that has nothing to do with cancellation fees.
The ten people who stayed after I cut the incentives at Talus were there because of the intrinsic layer. They had built relationships. They felt part of something. The 290 who left had only ever had the extrinsic layer. And I had been feeding that layer exclusively, congratulating myself on the numbers, without ever building the thing underneath.
Extrinsic Rewards Get People In. They Cannot Keep Them.
In 1973, researchers Lepper, Greene, and Nisbett ran an experiment that every product builder should know about.
They observed nursery school children who genuinely loved drawing with felt-tip markers. They were intrinsically motivated, in the language of psychology: they drew because drawing felt good. The researchers split the children into groups. One group was promised a reward for drawing. One received a surprise reward after. One received nothing.
Weeks later, during free play, the children who had been promised a reward spent significantly less time drawing than before the experiment. The ones who received nothing had not changed at all.
The promised reward had reclassified drawing from play to work. And work, without payment, is not worth doing.
This is the overjustification effect, and it operates in every product community I have ever studied. American Express understands this intuitively. The card offers real extrinsic rewards: points, lounge access, purchase protection. These get people to apply. But the product invests heavily in the intrinsic layer too: the weight of the metal card, the quality of customer service, the subtle identity signal of the brand. Cardholders develop a sense of belonging to a particular kind of person. The rewards keep them interested; the identity keeps them loyal.
Most products invert this. They launch with referral bonuses, token incentives, and cashback programmes before anyone feels any genuine connection to the product or to each other. The people who arrive are mercenaries. They came for the incentives. When the incentives are reduced, they leave, and they tell others on their way out.
I had built a mercenary army at Talus. The question was how to build something else.
Belonging Comes First. Everything Else Follows.
The framework I developed from this experience is one I call Belong, Grow, Succeed. It describes the sequence in which community value compounds, and the order matters more than anything else.
Belonging comes first. People need to feel connected, welcomed, and seen before they care about anything the product can offer them. Grow comes second: once people feel safe, they want to develop, contribute, improve. Succeed comes third: rewards, status, and outcomes follow naturally from a community where the first two are already present.
The sequence is not arbitrary. It maps directly onto what self-determination theory, developed by Deci and Ryan, identifies as the three universal psychological needs driving sustained motivation: relatedness (connection to others), competence (mastery and growth), and autonomy (control over one's actions). Products that satisfy these needs in sequence produce users who stay without being paid to do so. Products that skip straight to succeed produce the Talus situation: strong numbers, hollow core.
| Stage | What the user needs | What the builder must provide |
|---|---|---|
| Belong | Connection, recognition, safety | Welcoming onboarding, peer introductions, clear identity signals |
| Grow | Skill development, feedback, progress | Educational content, contribution pathways, visible improvement |
| Succeed | Status, rewards, tangible outcomes | Ambassador programmes, leaderboards, financial incentives |
Knowing this changed how I rebuilt the Talus community. We stopped leading with incentives and started leading with connection: introductions, shared experiences, contributor pathways that gave people a role rather than a reward. The numbers looked worse for a while. The community became real.
The Brain Is Running a Different Programme Than You Think
Here is something that still unsettles me about this work.
Even after you build genuine belonging, you are still competing with mechanisms the user cannot fully perceive or control. B.F. Skinner demonstrated in the 1950s that variable ratio reinforcement, rewards delivered after an unpredictable number of responses, produces the highest and most persistent rates of behaviour of any reinforcement schedule. A slot machine operates on this principle. So does every notification feed, every social platform, every engagement loop in modern software.
A 2021 study by Lindström et al. analysed over one million posts from more than 4,000 users across social platforms and confirmed that human online behaviour maps precisely onto Skinner's models. When social reward rates rise (more likes, more replies, more responses), users post more frequently. The platform functions as an operant conditioning chamber. The users inside it respond accordingly, whether they are aware of it or not.
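As a toy illustration (mine, not drawn from Skinner's apparatus or the Lindström study), a variable-ratio schedule can be sketched in a few lines of Python: a reward is delivered after a random number of responses, so no single response predicts the next payout. The mean ratio of 5 and the uniform threshold range are arbitrary choices for the sketch.

```python
import random

def simulate_variable_ratio(mean_ratio, n_responses, seed=0):
    """Deliver a reward after a random number of responses,
    averaging `mean_ratio` responses per reward (a VR schedule)."""
    rng = random.Random(seed)
    rewards = []
    # The response requirement is redrawn after every reward,
    # so the user can never predict when the next one lands.
    threshold = rng.randint(1, 2 * mean_ratio - 1)
    count = 0
    for i in range(n_responses):
        count += 1
        if count >= threshold:
            rewards.append(i)  # reward delivered on this response
            count = 0
            threshold = rng.randint(1, 2 * mean_ratio - 1)
    return rewards

# Roughly one reward per `mean_ratio` responses on average,
# but the gaps between rewards vary unpredictably.
rewards = simulate_variable_ratio(mean_ratio=5, n_responses=1000)
gaps = [b - a for a, b in zip(rewards, rewards[1:])]
print(len(rewards), min(gaps), max(gaps))
```

The unpredictability is the whole point: on a fixed schedule, behaviour pauses after each reward; on a variable one, the very next response might pay, so responding never stops.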
Nir Eyal's Hook Model is the applied version of this science. A trigger, initially external (a notification, a badge, an email) and eventually internal (boredom, loneliness, the vague feeling that something might have happened), prompts an action. The action is rewarded variably. The user invests something: time, content, reputation, money. That investment raises the psychological cost of leaving and loads the next trigger. Repeated cycles shift the trigger from external to internal. The user no longer opens the product because their phone vibrated. They open it because they feel a pang of isolation, and the product is where that feeling gets addressed.
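The loop above can be caricatured in code. This is a deliberately crude sketch with made-up parameters (a 30% reward probability, an investment threshold of 20 cycles), not Eyal's model itself; its only purpose is to show the structural shift from external to internal triggers as investment accumulates.

```python
import random

def hook_cycle(state, rng):
    """One pass through a trigger -> action -> variable reward -> investment loop.
    Accumulated investment makes internal triggers progressively likelier."""
    # Trigger: external (notification) early on, internal (habit) as investment grows
    p_internal = min(1.0, state["investment"] / 20)
    trigger = "internal" if rng.random() < p_internal else "external"
    # Action + variable reward: the reward arrives unpredictably
    rewarded = rng.random() < 0.3
    # Investment: content, reputation, time -- raises the cost of leaving
    state["investment"] += 1
    return trigger, rewarded

rng = random.Random(42)
state = {"investment": 0}
triggers = [hook_cycle(state, rng)[0] for _ in range(40)]
# The first trigger is always external; once investment passes the
# threshold, every trigger is internal.
print(triggers[0], triggers[-1])
```

In this toy model the transition is mechanical; in a real product it is statistical, but the direction is the same: the product stops needing to interrupt you, because you interrupt yourself.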
This is powerful. It is also ethically serious. These mechanisms can be used to reinforce genuinely valuable behaviour: contribution, learning, community participation. They can also manufacture compulsion around activities that serve the platform more than the user. The architecture is neutral. The intent is not.
Large Numbers Lie. Density Tells the Truth.
Metcalfe's Law states that the value of a network scales with the square of its connected users. Every new member increases the value for every existing member. This is why community compounds, and why building early pays disproportionate returns later.
But there is a constraint the law ignores. Dunbar's Number places the human brain's capacity for meaningful relationships at roughly 150 people. Beyond that threshold, average relational affinity between any two members collapses towards zero. A community of 200,000 cannot sustain the density of connection that a community of 200 can. The mathematics of network value and the biology of human attention pull in opposite directions.
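The tension can be made concrete with a few lines of arithmetic. Assuming possible ties grow as n(n-1)/2 (the quantity behind Metcalfe's Law) while each member maintains at most 150 meaningful relationships (Dunbar), the fraction of the network that can be relationally dense collapses as the community grows:

```python
def density(n, dunbar=150):
    """Fraction of possible pairwise ties that can be 'meaningful'
    if each member maintains at most `dunbar` relationships."""
    possible = n * (n - 1) / 2              # Metcalfe: grows with n^2
    meaningful = n * min(n - 1, dunbar) / 2  # Dunbar: grows linearly past 150
    return meaningful / possible

for n in (200, 2_000, 200_000):
    print(n, round(density(n), 4))
```

At 200 members, roughly three quarters of all possible ties can be meaningful; at 200,000, less than a tenth of one percent can. Nothing about the members changed; only the arithmetic did.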
The resolution is fractal structure. Large communities that sustain genuine engagement are composed of many small, dense sub-communities operating inside a larger whole. This structure forms naturally if you allow it to, but it fractures in ways that hurt you if you do not design it deliberately.
At Talus we reached 200,000 members with 50,000 consistently active and 8,000 daily active at peak. Those numbers held because of architecture. The ambassador programme created 300 deeply engaged contributors who generated 400,000 content views and acted as connective tissue between the broader community and the core team. Regional groups formed. Interest clusters formed. The large number contained many small ones, each with its own density, its own relationships, its own reasons to stay.
Most of the 200,000 were observers. That is normal. Research frames so-called lurkers not as disengaged users but as active learners: people assessing the environment, deciding whether it is safe enough to contribute. The transition from observer to participant is almost always triggered by a direct invitation or a specific moment where their particular knowledge or perspective is needed. Systems that create those moments consistently turn the passive majority into contributors over time. The 90% are not absent. They are waiting.
An Empty Room Teaches People to Stay Away
None of this works without social proof. And social proof can be manufactured, at least at the beginning.
Solomon Asch's 1951 conformity experiments showed that individuals will give answers they know to be factually wrong in order to align with group consensus. The evolutionary logic is clear: being wrong with the group is safer than being right alone. Digital communities run on the same mechanism. Visible activity signals (concurrent users, trending topics, response indicators) all tell new arrivals whether this is a place worth joining.
Reddit's founders understood this acutely. In the platform's earliest days, Alexis Ohanian and Steve Huffman created hundreds of fake user accounts to populate the site with content. When real users arrived, they found what appeared to be an active, established community. The social proof was artificial. The effect on real user behaviour was entirely genuine. Once organic participation crossed the threshold of self-sustainability, the fake accounts were abandoned. The community ran on its own.
This is not a recommendation to deceive your users. It is an observation about the mechanics of early-stage communities. The appearance of activity matters as much as the activity itself, particularly at launch. Seeding content, inviting high-quality early contributors, and concentrating activity in visible channels are all legitimate ways to generate the social proof conditions that attract real participation. An empty room teaches people to stay away. A room with ten engaged people in it teaches people to come in.
Transparency Is a Tool, Not a Virtue
The last thing I learned, and probably the hardest, is that honesty requires the same deliberate design as everything else.
A 2025 study on institutional trust documents what it calls the transparency paradox: transparency about performance and progress builds trust, while transparency about failure erodes it faster than silence would. The asymmetry is not a moral judgment. It is a psychological fact about how people, especially people who are new to a product, form and update their assessments.
Every community contains people at every stage. Long-term members have context. They have seen the product face difficulties before and recover. They interpret hard news against a history of trust. New members have no such history. The first announcement they see becomes the anchor for everything they believe about the product. A communication about internal difficulties or a failed campaign, framed carelessly, teaches a new member one thing: this product has problems.
I managed this in real time during the post-TGE period at Talus. A token launch compresses every anxiety a community carries into a single moment: price action, claim issues, expectation gaps, team changes. The question I returned to constantly was not what is true, but what does this announcement teach the newest person in the room, and does it help or harm their confidence in what we are building together.
Transparency about milestones belongs in public. Transparency about internal difficulties belongs in contexts where the audience has the history to hold it. The timing and framing of honesty are as consequential as the honesty itself.
The ten people who stayed after I cut the cash at Talus already knew this. They had earned the context. They stayed through the hard moments because the belonging was real, the growth was real, and they trusted us enough to weather the rest. That is what I was trying to build towards all along. It just took an empty room to show me what it actually looked like.
Sources
- Ryan & Deci — Intrinsic and Extrinsic Motivations: Classic Definitions and New Directions (2000)
- Lepper, Greene & Nisbett — The Overjustification Effect and the Felt-Tip Marker Study (1973)
- Lindström et al. — A Computational Reward Learning Account of Social Media Engagement (2021)
- Mitchell et al. — Reduction of Financial Health Incentives and Changes in Physical Activity (2023)
- Baumeister & Leary — The Need to Belong (1995)
- Nielsen — Participation Inequality: The 90-9-1 Rule (2006)
- Globig et al. — Changing the Incentive Structure of Social Media Platforms (2023)
- Hyde — The Transparency Paradox (2025)
- Nir Eyal — The Hook Model (2013)