
How AI could fix social media... or break it even more


Despite many of us swearing off Facebook following multiple data breaches and privacy scandals, social media use in general has not diminished. In fact, around 2.7 billion people have at least one social media account, and that number is projected to grow. But it's not just humans scanning through tweets and Facebook posts. Although you may not know it, artificial intelligence is already present on multiple platforms and its role is expanding - for better or for worse.

Social media itself can be immensely powerful - its role in global events can no longer be denied. Since the early 2010s, we've seen it used as a tool for activism and civil resistance - helping organize protests and revolutions, such as Egypt's January 25 Revolution; it has been a platform for citizen journalism and, of course, a way to connect with people across the globe. However, we have also seen it used for propaganda and as a platform for hate speech and misinformation. This is why, as artificial intelligence continues to evolve and its presence on social media grows, we have to be mindful of the direction it steers these platforms in.

A helping hand?

We've all gotten into heated online arguments with trolls at least once in our lives, and we've all probably come across highly offensive content. It's no wonder, then, that social media is often labeled a toxic and depressing space. Like many others, I believe that some people simply feel empowered by anonymity or the lack of consequences online - leading them to behave terribly. Of course, companies like Twitter are responsible for moderating such content, but I don't blame them when things slip through the cracks. The amount of content posted on these websites every day is immense. According to Internet Live Stats, around 6,000 tweets are posted every second on average.

AI could help moderate social media posts. / © Phonlamai Photo/Shutterstock

This is where artificial intelligence comes in. In my opinion, a well-trained AI could be of great help - especially if it is designed to assist human moderators. Artificial intelligence can scan vast amounts of data incredibly quickly and potentially identify things such as doxxing, calls to violence and other illegal or questionable content. Human moderators can then choose what action to take in response.
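
To make that division of labor a little more concrete, here is a minimal, purely illustrative sketch in Python (using scikit-learn; the example posts, labels, threshold and the triage() helper are all invented for this article, not any platform's real system). The idea is simply that a model scores incoming posts and queues the suspicious ones for a human moderator, rather than deleting anything on its own:

```python
# Rough sketch of a human-in-the-loop moderation assistant (hypothetical data and labels).
# A simple classifier scores incoming posts; anything above a threshold is queued
# for a human moderator instead of being removed automatically.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny illustrative training set - a real system would need far more, carefully labeled data.
posts = [
    "I will find out where you live and post your address",   # doxxing threat
    "Everyone should show up and burn the place down",        # call to violence
    "Had a great time at the concert last night",
    "Here is my review of the new phone, battery life is solid",
]
labels = [1, 1, 0, 0]  # 1 = needs human review, 0 = fine

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(posts, labels)

REVIEW_THRESHOLD = 0.5  # arbitrary; tuning it trades false alarms against misses

def triage(new_posts):
    """Return posts the model thinks a human moderator should look at."""
    scores = model.predict_proba(new_posts)[:, 1]
    return [(post, score) for post, score in zip(new_posts, scores) if score >= REVIEW_THRESHOLD]

for post, score in triage(["Post your home address or else", "Lovely weather today"]):
    print(f"flag for human review ({score:.2f}): {post}")
```

The crucial design choice in a setup like this is that the model only prioritizes the review queue; the final decision stays with a person.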

There have already been similar attempts - researchers at Cornell University developed a bot tasked with anticipating when a civil conversation might turn toxic. It performed worse than humans, getting it right only 65% of the time, compared to people's 72% accuracy. However, as the tech improves, and especially if humans and machines collaborate, the results could be great. Of course, there are concerns and potential ways such a system could be gamed or used for nefarious purposes, but that is the case with almost every technology.

I also believe artificial intelligence could potentially help combat fake news by scanning online data and fact-checking. However, how well it performs would depend on how it's trained - it's not unheard of for humans to pass their biases down to AI. I personally think it might be a bit too early to see this use of AI in action, but as time goes on, it could play a positive role.

A disaster in the making?

As social media platforms have matured, they have transformed from websites where people post their selfies into spaces that aggregate enormous amounts of data about users' every action, like and preference. AI has only helped speed up this process. What is the harvested data used for? First and foremost, to customize what you see in your newsfeed and in advertisements.

According to MarTech Advisor, Twitter, for example, is working on a machine learning algorithm that categorizes every tweet. "The idea is to provide content people most care about at the top of their timeline. It could mean a significant shift in the way that people currently view tweets within the chronological timeline format." Facebook is also reportedly using artificial intelligence in its newsfeed.
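
To illustrate what such a shift means in practice, here is a tiny, hypothetical Python sketch (the Tweet class, the sample tweets and the relevance scores are all invented, not Twitter's actual system) contrasting a chronological feed with one sorted by a model's predicted relevance:

```python
# Hypothetical illustration of a chronological feed versus a relevance-ranked one.
from dataclasses import dataclass

@dataclass
class Tweet:
    text: str
    timestamp: int              # higher = newer
    predicted_relevance: float  # score a trained model might assign for this user

timeline = [
    Tweet("Breaking news everyone is discussing", timestamp=1, predicted_relevance=0.9),
    Tweet("What I had for lunch", timestamp=3, predicted_relevance=0.2),
    Tweet("Thread about a topic you engage with often", timestamp=2, predicted_relevance=0.8),
]

chronological = sorted(timeline, key=lambda t: t.timestamp, reverse=True)
ranked = sorted(timeline, key=lambda t: t.predicted_relevance, reverse=True)

print([t.text for t in chronological])  # newest first
print([t.text for t in ranked])         # what the model predicts you care about most
```

The same content appears in both feeds; what changes is the order, and therefore what you are most likely to actually see and engage with.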

Facebook has never had a great track record with user data. / © MichaelJayBerlin/Shutterstock

However, while delivering relevant content is great, in my opinion there are potential problems with personalization to such a degree. It can isolate people in bubbles where their views are constantly affirmed and regurgitated. The narrative can then become extremely toxic, as we have already seen with Reddit subforums like 'Fat People Hate'. It can also lead to social media companies selling even more of our data, as advertisers insist on ultra-personalized ads.

But these are not the only issues - isolated online communities are often easier to manipulate, as long as they are fed information that aligns with their preexisting worldviews. In my opinion, it's healthy to be presented with opposing ideas and have your biases challenged. Yet artificial intelligence could make this increasingly hard in the future. As Clint Watts, a research fellow at the Foreign Policy Research Institute, writes for the Washington Post, discussing AI's capacity for voter manipulation: "Every like, retweet, share and post on all social media platforms will be merged and matched with purchase histories, credit reports, professional résumés and subscriptions. Fitness tracker data combined with social media activity provides a remarkable window into just when a targeted voter might be most vulnerable to influence, ripe for just the right message or a particular kind of messenger."

Although Twitter has been criticized less than Facebook, the platform has its problems too. / © mkhmarketing, Flickr

This, combined with the ability to create so-called 'deepfakes' with AI, could have disastrous consequences for democracy and society. And if artificial intelligence is left unsupervised, censorship - whether accidental or intentional - could further exacerbate the problem. However, it's not all hopeless. In the end, the outcome of this match between social media and AI will be in the hands of humans - artificial intelligence today is not Skynet; we still control it. That means its uses can be studied, changed and legislated. We can only hope we make the right choices.

What do you think about the relationship between AI and social media? Share your thoughts in the comments.

4 comments

  • Mana Got (Jan 19, 2019)

    The author has hit the bull’s eye again. Well done 👍


  • Mana Got (Jan 19, 2019)

    AI can easily track language of hatred or any key words of interest, as long as it has been trained to do it. There’s nothing scary about AI on the social media yet.


  • Dean L. (Jan 19, 2019)

    AI can't predict the emotional side of a topic because it's artificial and has no emotions.


  • storm (Jan 19, 2019)

    It will be tied to monetization and advertising so it will lack the proper focus and fail.

