Multimedia Platforms vs Misinformation

I decided to look at LinkedIn and Twitch to see what they do to combat misinformation. I use LinkedIn regularly but do not have much experience with Twitch. Let’s see how they compare and what they are doing to fight misinformation today.

LINKEDIN

First, a quick Google search turned up an article on LinkedIn’s official blog with an update on how the company is fighting fake accounts. It revealed that 21.6 million fake accounts were stopped between January and June of 2019 alone. I was not even aware that this was an issue on the platform.

(Image source: https://www.linkedin.com/blog/member/product/an-update-on-how-were-fighting-fake-accounts)

In the article “How We’re Protecting Members From Fake Profiles,” Paul Rockwell mentions that “97% of all fake accounts were stopped through automated defenses.”

What does that mean? I had to go look for myself and read the next linked article, titled “Automated Fake Account Detection” by Janelle Bray.

(Image source: https://www.linkedin.com/blog/engineering/trust-and-safety/automated-fake-account-detection-at-linkedin)
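LinkedIn does not publish its actual models, but the idea behind a layered “funnel” of automated checks can be illustrated with a toy sketch. Every stage, feature name, and threshold below is my own invention for illustration; the real system uses machine-learned models, not hand-written rules:

```python
# Toy sketch of a layered "funnel" of automated fake-account checks.
# All rules and thresholds are hypothetical, not LinkedIn's.

def registration_check(account: dict) -> bool:
    """Stage 1: block obvious bulk signups at registration time."""
    return account.get("signups_from_same_ip_last_hour", 0) < 20

def profile_check(account: dict) -> bool:
    """Stage 2: flag profiles with no photo and no written content."""
    return bool(account.get("has_photo")) or len(account.get("summary", "")) > 0

def behavior_check(account: dict) -> bool:
    """Stage 3: flag accounts that spray connection requests."""
    return account.get("connection_requests_today", 0) < 100

def passes_funnel(account: dict) -> bool:
    """An account must clear every stage of the funnel to remain."""
    return all(check(account) for check in
               (registration_check, profile_check, behavior_check))

bot = {"signups_from_same_ip_last_hour": 500, "has_photo": False,
       "summary": "", "connection_requests_today": 0}
human = {"signups_from_same_ip_last_hour": 1, "has_photo": True,
         "summary": "Marketing lead", "connection_requests_today": 3}

print(passes_funnel(bot))    # False
print(passes_funnel(human))  # True
```

The point of the funnel structure is that each stage catches a different class of fake account early, so later, more expensive checks only run on accounts that survive the earlier ones.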

Of course, these are older figures, and I wanted up-to-date information from 2020 onward. I clicked the “Trust & safety” link in the article, which took me to the latest news on what they are doing to combat fake profiles, in an article titled “New LinkedIn profile features help verify identity, detect and remove fake accounts, boost authenticity” by Oscar Rodriguez. It referred back to the “Funnel of Defense” shown above and to that same older article. New features include an “About this profile” section for verifying individuals through phone and email, an AI photo-detection tool that flags unnatural images, and better ways of notifying users of potential scams in their messages.

(Image source: https://www.linkedin.com/blog/member/trust-and-safety/new-linkedin-profile-features-help-verify-identity-detect-remove-fake-accounts-boost-authenticity)

In the example, we see a message sent to a LinkedIn user named Sarah King. The message is from Wendy Chou, a marketing and crypto specialist. The algorithm detects that it is a potential scam and notifies the user, who is then given the option to review safety tips or view the message anyway. As you can see, the scammer wants to get the person to move off the platform. You can then mark the content as safe or report it to LinkedIn.
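LinkedIn has not disclosed how this warning actually works, but a filter of this general kind could combine simple signals, such as crypto keywords and requests to move the conversation off-platform. The signal phrases, weights, and threshold below are entirely my own invention, just to make the idea concrete:

```python
# Hypothetical sketch of a scam-warning heuristic for messages.
# Signal phrases and scoring are invented for illustration only.

SCAM_SIGNALS = {
    "crypto": 2, "bitcoin": 2, "investment": 1,
    "whatsapp": 2, "telegram": 2,   # attempts to move off-platform
    "guaranteed returns": 3,
}

def scam_score(message: str) -> int:
    """Sum the weights of every signal phrase found in the message."""
    text = message.lower()
    return sum(weight for phrase, weight in SCAM_SIGNALS.items()
               if phrase in text)

def should_warn(message: str, threshold: int = 3) -> bool:
    """Show a 'possible scam' banner once the score crosses a threshold."""
    return scam_score(message) >= threshold

msg = "Hi Sarah! I can offer guaranteed returns on crypto. Message me on WhatsApp."
print(should_warn(msg))  # True
```

A real system would use trained models rather than a keyword list, but the user-facing flow is the same: score the message, and above some threshold, interpose a warning instead of showing the message directly.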

I want to know what LinkedIn is doing to protect me from these types of scammers. So, I head over to my LinkedIn profile to review the policies on the platform. What do they have to say for themselves?

I log in to LinkedIn, go to my profile, click home, and look for anything that might direct me to some type of policy. I am not finding anything. I click the home icon, which takes me to a page with recent posts and a sidebar showing my profile and other information. I don’t see anything about policies, so I click “Discover More” at the bottom of the sidebar. That routes to a “Trending Pages in your network” page. No policy information there either. In the sidebar I click a link called “Privacy & Terms,” which leads to another link titled “Privacy,” so I click that too. My brain and eyes are both strained trying to find information on what they are doing to stop the spread of misinformation.

(Image source: https://www.linkedin.com/legal/professional-community-policies)

When I finally found the policies and read them, I was surprised by how short they were. Not all that helpful, really. They rely on the user to report anything they feel may be a violation. The site states, “Depending on the severity of violation, we may limit the visibility of certain content, label it, or remove it entirely. Repeated or egregious offenses will result in account restriction.” (Source: https://www.linkedin.com/legal/professional-community-policies) The page covers most of the standard items: harassment, violent or graphic content, misleading content, fake profiles, scams, fraud, hateful content, unwanted advances, nudity, and spam.

LinkedIn is Out of Touch

What LinkedIn could improve is putting this information up front on the main LinkedIn page, where it is easily accessible and readily available. They should have links that people can use to report such activity, and an anti-fraud team or something similar. As it stands, their approach to curbing misinformation leaves everything up to the user; the platform is not taking active responsibility. They need to make things easier for the user. They should also place these links or contacts in the Messages section, because that is where scammers make contact. This would offer a level of protection for the user that doesn’t exist at this time. Until I did this research, I was not even aware that I could report someone on LinkedIn, and I have had to block (which was an easy and available option) connections because they are salespeople (likely running scams) whose identity I cannot verify. The policies are not working, and the platform has a lot of scammers now. My recommendations for LinkedIn, to provide easier ways to connect and report and to be more transparent, would bring them into the current year and help make them a more trusted platform.

TWITCH

The next platform I chose to research was Twitch, so right to Google I went. I was very surprised that what popped right up on my screen was the “Twitch Safety Center.”

I clicked over to the Safety Center and was very impressed with the article about what Twitch is doing to stop the spread of misinformation on the platform. They have partnered with experts to understand how misinformation spreads online and how best to curb it. They look for the following three things: “(1) persistently sharing (2) widely disproven and broadly shared (3) harmful misinformation topics, such as conspiracies that promote violence.” They also state, “Our goal is to prohibit individuals whose online presence is dedicated to spreading harmful, false information from using Twitch.”

(Source: https://safety.twitch.tv/s/article/Preventing-Misinformation-Actors-from-Using-Twitch?language=en_US)
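Notice that the policy targets accounts meeting all three criteria at once: persistent sharing, of widely disproven and broadly shared claims, on harmful topics. A minimal sketch of that conjunction logic (the data class and field names are my own illustration, not anything Twitch publishes):

```python
# Sketch of Twitch's three-part test for "Harmful Misinformation Actors."
# The dataclass and fields are illustrative; only the three-criteria
# logic comes from the published policy.
from dataclasses import dataclass

@dataclass
class MisinfoAssessment:
    persistent_sharer: bool   # (1) persistently shares misinformation
    widely_disproven: bool    # (2) claims are widely disproven and broadly shared
    harmful_topic: bool       # (3) e.g. conspiracies that promote violence

def is_harmful_misinfo_actor(a: MisinfoAssessment) -> bool:
    # All three conditions must hold; any one alone is not enough.
    return a.persistent_sharer and a.widely_disproven and a.harmful_topic

print(is_harmful_misinfo_actor(MisinfoAssessment(True, True, False)))  # False
print(is_harmful_misinfo_actor(MisinfoAssessment(True, True, True)))   # True
```

Requiring the conjunction of all three criteria keeps the policy narrow: sharing a debunked claim once, or discussing a harmful topic critically, does not by itself make someone a misinformation actor.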

Twitch has an actual investigations team. It looks to me like they take the spread of misinformation very seriously. Users can report directly to Twitch through the email address OSIT@twitch.tv (Off-Service Investigation Team), with supporting documentation for their claims. This page references a “Scams, Spam and Malicious Conduct Policy,” which specifically prohibits botting.

Twitch says they take a “layered approach” that involves the community. They take responsibility, and the actions they take “include removal of content, removal of monetization tools, a warning, and/or suspension of their account”.

(Source: https://safety.twitch.tv/s/article/Community-Guidelines?language=en_US#30HarmfulMisinformationActors)

There is a process for an appeal. Reports of violations are reviewed by the team around the clock every day of the year. They are not messing around. They also update the Guidelines regularly and consider it a living document.

As you can see from the menu on the left, they take Authenticity seriously. Again, if there are “Harmful Misinformation Actors” as they are referred to by Twitch and those individuals meet the three criteria, they will be dealt with swiftly. Twitch is a heavy hitter when it comes to violations. If you are suspended and try to get back on through evasion, they will go after you and you could be permanently suspended.

Twitch Has it Right

It seems to me that Twitch is doing a really good job of combating the spread of misinformation on its platform. I do not have a lot of experience using the platform, but I would feel very confident doing so. There seems to be a lot of transparency. They engage the community of users but take full responsibility for being at the forefront of stopping the spread of disinformation and misinformation. I am impressed by how thorough they are in their efforts; I can’t find room for improvement. I think other platforms should look at what they are doing and follow suit.

I have no recommendations for Twitch except that perhaps they should start a channel of their own where the 24/7/365 Off-Service Investigation Team shares the best practices Twitch uses in the fight against misinformation. And they should invite the people running LinkedIn to follow the stream. Twitch is on top of the game and can help other platforms be as proactive and forward-thinking as they are. Twitch has got it right.

