Likes are always welcome, but social media platforms are now realizing that engagement features like the like and dislike buttons need to be redesigned with mental health in mind.
When cooking a special meal, you always want to use the best ingredients to make it great. When you think about social media, the best ingredient for social media analytics is a healthy audience. After all, an audience’s activity on a social media profile creates metrics from which marketers can learn to better connect with people. A safe environment is one where permission-based marketing is accepted, which also improves brand safety.
But when people don’t feel safe from trolls, more than simple metrics are at risk. Social media platforms have begun adjusting their features to discourage abusive trolling, making their platforms safer for influencers by curbing harassment and improving protection of users’ mental health overall.
YouTube removed ‘dislike’ counts from videos and live streams hosted on the platform as part of a beta experiment, and in August it rolled out the removal across all accounts. The change highlights social media’s attempt to protect the mental health of its users while also addressing brand safety.
Related article: Mastering brand reputation management in the age of social media
Why YouTube’s Decision to Remove the ‘Dislike’ Count Matters
Viewers can indicate that they dislike a video by clicking the thumbs-down icon. The dislike count used to appear next to that icon; now only the icon is displayed, part of an updated video player design. The like count and its thumbs-up icon remain. Removing dislike counts from public view is meant to discourage trolls from inflating them to target individual creators.
Trolls have looked for many ways to abuse people online. This usually meant rude comments, which can be reported on a given social media platform. But new aggressive behaviors emerged as social media platforms added features. On YouTube, trolls click the dislike button on videos hosted by their targets; coordinated dislikes drive up the count, creating a negative perception of the video. Because likes have become a social currency, a visible sign of support from many people, trolls weaponize dislikes to humiliate their targets wherever possible.
Related article: Are social media ruining our lives?
When likes and lives are at stake
So while social media algorithms treat downvotes as a signal for demoting a video or post, platforms now recognize that downvotes can also be a sign of harassment. In response, they have begun reevaluating how certain features can be misused to create an atmosphere of harassment.
Over the years, various tech insiders have commented on the need to refine social media metrics as they relate to mental health. Evan Williams, co-founder of Twitter, noted the need for better metrics in 2012. In 2015, Mark Zuckerberg, CEO and founder of Meta, announced that he was considering removing likes (I explained some of the reasons in my post on sentiment analysis).
But those early observations date back to 2012, when the impact of social media on mental health was not yet well documented; social media was simply too new. Today, people are questioning the online behaviors that contribute to poor mental health.
Bad human behavior has not changed despite a more enlightened understanding of the negative consequences social media can have on mental health. In 2017, a Pew Research study reported that 4 in 10 adults surveyed said they had been harassed online, and 66% said they had witnessed harassment. This was a slight increase from Pew’s previous study in 2014. Fast forward to 2020: Pew found that although the overall rates were similar, the intensity of harassment had increased, with 41% reporting being harassed and 25% experiencing more severe harassment.
Related article: How Influencers Help Create a Better Customer Experience
What social media platforms are doing to protect mental health
Like YouTube, other social media platforms have been testing features experimentally. In 2021, Instagram introduced the option to hide like counts on posts. Users can still like a post, but the number of likes is not displayed.
In February, Twitter experimented with downvotes, a variation on YouTube’s dislike button. Users of the Twitter app can vote down replies to tweets. As with the YouTube and Instagram examples, the downvote count isn’t visible to other users, nor is it visible to the reply’s author. Twitter uses downvotes to gauge the relevance of a reply. In May, it decided to roll the feature out to website users, with remaining app users to follow.
More and more researchers are examining activity on the major platforms to better identify mental health traits and risks. Studies have indicated that people who spend a lot of time on social media and have fewer interpersonal activities face an increased risk of mental health problems such as depression. Platforms need to weigh mental health when launching a feature against the competing interest of retaining an audience, while still maintaining safety.
More harassment-prevention features are being introduced to better align with the societal influence of social media. For example, Adam Mosseri, head of Instagram, announced the extension of parental controls on Instagram. Instagram’s parent company, Meta, later extended parental control features to Facebook. The controls limit who can see young users’ friends lists and the pages they follow, placing abuse-prevention measures at the heart of teen social media use.
Brands with beauty and fashion offerings are also taking steps to ensure their marketing activity doesn’t condone bad behavior. Ogilvy announced that it won’t work with influencers who alter their bodies or faces in ads. Two well-known brands, Lush and Bottega Veneta, have even left social media entirely, despite how naturally their products lend themselves to image- and video-driven marketing.
As a result, the developer’s notion of “move fast and break things” that has underpinned software product design, including the development of user features in social media, is evolving to incorporate more insights from psychology and health science into the way the customer experience is delivered. Additionally, executives of the biggest social media platforms are realizing they can’t sit on negative information, as shown by data scientist Frances Haugen, who testified before the US Congress and UK Parliament about decisions taken within Facebook regarding emerging research into the mental health impact of social media.
Social media platforms are paying more attention to how people respond to feature design. They still have a long way to go, especially at a time when new regulations are being considered, so marketers also need to stay current on feature changes as a matter of brand safety. The shift in tone is timely, as the public increasingly recognizes how the technology embedded in the products and services they use affects key decisions and their well-being.