Social is no longer just what you do on Facebook; it’s what you do in every app you use. Think of the experience on Venmo, Strava, Duolingo or even Sephora.
Companies that build social components into their apps and services, known as social+ companies, are thriving because they establish connections with users and enable interaction among them.
D’Arcy Coolican of Andreessen Horowitz explained the appeal of social+ companies, writing:
“[Social+] can help us find community in everything from video games to music to exercise. Social+ is when that spark of joy or utility is thoughtfully integrated into this essential human connection. This is powerful because, ultimately, the more ways we find to authentically and positively connect, the better.”
Social+ will soon permeate every aspect of our lives, accelerating at a breakneck pace in the coming months. I would bet that adoption will continue to the point of ubiquity – where every business is a social+ business. This is very exciting, but only if we plan accordingly. As we’ve seen with the influence of social in the past, it’s amazing… until it’s not.
What is an incredibly addictive user experience today can turn into an absolute nightmare if apps that embrace social don’t find religion on solid moderation practices and invest the necessary resources to ensure they build in the right technology and processes from the start.
Learning from Facebook
As the OG social pioneer, Facebook has redefined how society works. In doing so, it has suffered some very painful lessons. Notably, it must shoulder the burden of monitoring posts from individuals, groups and organizations across 1.93 billion daily active users — all while trying to cultivate an uncensored sense of community and drive adoption, engagement and profits on its platform. While social+ businesses likely won’t see that kind of volume, at least in the short term, they’ll still have to deal with the same issues – only they no longer have the excuse of not being able to predict that these things might happen.
Let’s look at some areas where Facebook has stumbled in moderation:
- Failing to account for bad user behavior amid rapid growth: In Facebook’s early days, moderation was not considered necessary on what was seen as a free, user-driven space; the company viewed itself as just a channel for connection. Facebook failed to recognize the potential for user harm until it was too late to manage effectively. Even with the most advanced software and a workforce of 15,000 employees solely dedicated to reviewing content in 70 languages, content moderation remains a huge issue that has cost the company users, ad dollars and considerable reputational damage.
- Underestimating the language barrier: Even as we live in an increasingly global society connected through online services and networks, documents released to Congress showed that 87% of Facebook’s global budget for identifying disinformation has been earmarked for the United States. Only 13% goes to moderation practices for the rest of the world, even though North Americans make up just 10% of its daily users. Facebook has tried to solve the problem by applying AI-based software to content moderation in markets where the language is incredibly nuanced, which has not gone well. In Facebook’s biggest market (India, with 350 million users), misinformation and calls for violence proliferated because of that language gap. It is even worse with the varied dialects of North Africa and the Middle East. As a result, human and automated content reviews have mistakenly allowed hate speech to spread while benign posts were removed for supposedly promoting terrorist activity.
- Becoming a political football: Speech has become a political weapon in the U.S. Deepfakes and disinformation campaigns have been normalized, yet posts that Facebook legitimately removes or flags under its terms of service draw the ire of users who feel their free speech rights are being violated and their voices suppressed. This has caused significant public backlash, along with a handful of new legal proceedings. On December 1, for example, a federal judge blocked a Texas law from taking effect that would have allowed residents of the state to sue Facebook for damages if their content were removed based on political beliefs. A similar law in Florida, which sought to hold Facebook accountable for censoring political candidates, news sites and users, was also struck down. These attempts, however, show just how irritated people have become with content moderation practices that they don’t like or that they perceive to be changing over time to work against them.
- Determining what to do with banned content: There’s also the question of what happens to content once it’s removed, and whether a company has an ethical responsibility to report objectionable content or alert authorities to potentially illegal activity. For example, prosecutors are currently demanding that Facebook hand over data that would help them identify members of a group, the New Mexico Civil Guard, that was involved in a violent incident in which a protester was shot. Facebook says it can’t help because it erased records of the group after it was banned. Tensions continue to mount between law enforcement and social businesses over who owns what, reasonable expectations of privacy, and whether companies can release content.
All of these issues should be carefully considered by companies planning to incorporate a social component into their application or service.
The next generation of social apps
Social engagement is critical for sales, adoption and more, but we must not forget that humans are flawed. Trolling, spam, pornography, phishing and money scams are as much a part of the internet as browsers and shopping carts. Left unchecked, they can destroy a community.
Consider: if Facebook and its army of developers, moderators and AI tech struggle, what kind of chance do you have if you don’t prioritize moderation and community guidelines from the start?
Companies should build moderation capabilities – or partner with providers of robust solutions – that can scale with the business, especially as services go global. This cannot be overstated. It is critical to the long-term success and viability of a platform – and to the future of the social+ movement.
For moderation tools to do their part, however, companies must create clearly defined community codes of conduct that minimize gray areas and are written concisely so that all users understand expectations.
Transparency is vital. Companies should also have a framework for dealing with misconduct – what are the processes for removing posts or blocking users? How long will offenders be locked out of their accounts? Can they appeal?
And then the big test – companies must apply these rules consistently from the start. Wherever there is ambiguity, or inconsistency between cases, the company loses.
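To make that consistency point concrete, here is a minimal sketch, in Python, of what it can look like to encode a code of conduct as published data rather than as ad hoc judgment calls. Every rule name, action and suspension period below is a hypothetical illustration, not any particular platform’s policy or tooling.

```python
# Hypothetical sketch: community guidelines expressed as data so that
# enforcement decisions are consistent, auditable and explainable to users.
from dataclasses import dataclass
from datetime import timedelta

@dataclass(frozen=True)
class Rule:
    code: str              # short identifier cited in audit logs and appeal notices
    description: str       # plain-language wording shown to the user
    action: str             # e.g., "remove_post" or "suspend_account"
    suspension: timedelta   # how long the user is locked out, if at all
    appealable: bool        # whether the user can contest the decision

# Illustrative policy only; a real code of conduct would be far more detailed.
POLICY = [
    Rule("spam", "Repetitive or unsolicited promotion", "remove_post", timedelta(0), True),
    Rule("harassment", "Targeted abuse of another user", "suspend_account", timedelta(days=7), True),
    Rule("illegal", "Content that may be unlawful", "suspend_account", timedelta(days=30), False),
]

def enforce(rule_code: str) -> Rule:
    """Look up the single published rule for a violation so every case is
    handled the same way and can be cited back to the user."""
    matches = [rule for rule in POLICY if rule.code == rule_code]
    if not matches:
        raise ValueError(f"No published rule for '{rule_code}'; do not improvise.")
    return matches[0]

if __name__ == "__main__":
    decision = enforce("harassment")
    print(decision.action, decision.suspension, "appealable:", decision.appealable)
```

Keeping the rules in one published place like this gives moderators the same answer for the same violation and gives users a concrete rule code to cite when they appeal.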
Organizations must also define their stance on their ethical responsibility when it comes to objectionable content. Companies need to decide for themselves how they will manage user privacy and content, particularly what may be of interest to law enforcement. This is a tricky issue, and the way for social businesses to keep their hands clean is to articulate the company’s privacy stance clearly up front rather than burying it and presenting it only when an issue arises.
Social models are being incorporated into all kinds of apps, from fintech to healthcare to food delivery, to make our digital lives more engaging and fun. At the same time, mistakes are inevitable as companies work out a whole new way of communicating with their users and customers.
What’s important now is for social+ companies to learn from pioneers like Facebook to create safer and more cooperative online worlds. It just requires some forethought and commitment.