Professional social network LinkedIn has announced plans to roll out a series of enhancements to its platform to protect its community from inauthentic, or even malicious, profiles and activity.
Its objective is to help users make more informed decisions about those they choose to interact with when accepting connection requests, learning more about business or career opportunities, or just exchanging contact details.
The enhancements include providing additional information about user profiles, back-end features to detect fake profiles, and systems to stop suspicious messages.
“I have the pleasure of hearing from you and members across LinkedIn how much value you get from the professional community we’re building,” said Oscar Rodriguez, LinkedIn vice-president of product management.
“Equally important is that in order for you to find that next job, learn new skills or connect with others, you want to feel confident that the people you come across are authentic.”
Going forward, LinkedIn users will start to see an “About This Profile” feature showing when a profile was created and last updated, and whether it is attached to a verified phone number or workplace email account. The workplace email verification will initially be piloted with a limited number of companies.
“Starting this week, you can find the ‘About This Profile’ feature on each LinkedIn member’s profile page, and over the coming weeks you’ll see it in more places, including when viewing invitations and messages,” said Rodriguez.
“We hope that viewing this information will help you make informed decisions, such as when you are deciding whether to accept a connection request or reply to a message.”
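The profile signals described above amount to a small record of metadata surfaced to the viewer. As a rough sketch only (the field names and summary format here are hypothetical; LinkedIn's actual schema is not public), the information could be modelled like this:

```python
from dataclasses import dataclass
from datetime import date


# Hypothetical representation of the "About This Profile" signals:
# creation/update dates plus the two verification flags the article mentions.
@dataclass
class AboutThisProfile:
    created_on: date
    last_updated: date
    phone_verified: bool
    work_email_verified: bool

    def summary(self) -> str:
        """One-line summary a viewer might use to judge a connection request."""
        verified = []
        if self.phone_verified:
            verified.append("phone")
        if self.work_email_verified:
            verified.append("work email")
        joined = ", ".join(verified) or "none"
        return (f"Profile created {self.created_on:%b %Y}, "
                f"updated {self.last_updated:%b %Y}; verified: {joined}")
```

A very new profile with no verified contact details would stand out immediately in such a summary, which is the kind of informed decision the feature is meant to support.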
The platform is also implementing a new deep-learning-based model that will proactively scan new profile photo uploads in an attempt to determine whether or not a user’s profile picture has been generated using artificial intelligence (AI).
Rodriguez said rapid advances in AI-based synthetic image generation have enabled scammers and fraudsters to create essentially unlimited numbers of exceptionally convincing, high-quality profile photos that do not, in reality, correspond to any human, living or dead. Many operators of fake LinkedIn accounts are using such technology to disguise their trickery.
“Our new deep-learning-based model uses cutting-edge technology designed to detect subtle image artifacts associated with the AI-based synthetic image generation process without performing facial recognition or biometric analyses,” said Rodriguez.
“This model helps increase the effectiveness of our automated anti-abuse defences to help detect and remove fake accounts before they have a chance to reach our members.”
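LinkedIn has not published how its detector works, but one widely known artifact of synthetic image generation is an unusual distribution of energy in the frequency domain, left behind by the generator's upsampling layers. The heuristic below is a deliberately simplified illustration of that idea using a Fourier transform, not LinkedIn's model; the function name and cutoff value are assumptions for the sketch:

```python
import numpy as np


def high_freq_energy_ratio(image: np.ndarray, cutoff: float = 0.25) -> float:
    """Fraction of spectral energy above a radial frequency cutoff.

    Generative upsampling can leave periodic high-frequency artifacts,
    so an unusually high ratio may hint at a synthetic image.
    Illustrative heuristic only -- real detectors are learned models.
    """
    # Power spectrum, with the zero frequency shifted to the centre.
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(image))) ** 2
    h, w = image.shape
    yy, xx = np.mgrid[0:h, 0:w]
    # Radial distance from the spectrum centre, normalised to [0, 1].
    r = np.hypot(yy - h / 2, xx - w / 2) / (np.hypot(h, w) / 2)
    total = spectrum.sum()
    return float(spectrum[r > cutoff].sum() / total) if total else 0.0
```

A smooth, natural-looking gradient concentrates its energy at low frequencies and scores near zero, while an image full of high-frequency structure scores much higher. A production system would feed such signals, and many others, into a trained classifier rather than thresholding a single statistic.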
Finally, LinkedIn will add a warning to messages that are detected as containing potentially high-risk content, such as those that ask to redirect a conversation to a different platform – a common sign that a scammer may be at work. These warnings will also give their recipients the option to report the content without the sender knowing.
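LinkedIn has not described how these warnings are triggered, but the platform-redirect signal mentioned above can be illustrated with a simple pattern match. The patterns and function below are hypothetical stand-ins for what would, in practice, be a trained classifier:

```python
import re

# Hypothetical patterns hinting that a sender wants to move the chat
# off-platform -- a common scam signal noted in the article.
REDIRECT_PATTERNS = [
    r"\b(whatsapp|telegram|signal|wechat)\b",
    r"\btext me at\b",
    r"\bcontinue (this|our) (chat|conversation) on\b",
]


def flag_high_risk(message: str) -> bool:
    """Return True when a message matches an off-platform redirect pattern."""
    text = message.lower()
    return any(re.search(pattern, text) for pattern in REDIRECT_PATTERNS)
```

A message like “Let’s continue our chat on WhatsApp” would be flagged, while an ordinary reply would not; the real system would then show the warning and offer the silent-report option rather than blocking outright.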
“We’re excited to start rolling out these new features,” said Rodriguez. “What you’ll see in the future are updates on how we’re finding new ways to improve automated systems for fake account detection and removal, plus more member features to help keep everyone safe.”
Rachel Jones, CEO of SnapDragon Monitoring, said that while LinkedIn primarily attracts legitimate users, it also offers criminals an easy avenue to commit fraud.
“Today criminals will regularly use social media platforms to con people out of money, so any move to protect users is a positive step,” she said.
“These scams range from criminals setting up fake profiles and advertising the sale of counterfeit goods, which can endanger people’s lives, to monetary scams where fraudsters set up spoofs of legitimate websites, advertise them on social media, and then con people into visiting them and handing over their financial information.
“Criminals also use social media to lure people into romance scams, or even to set up fake profiles pretending to be recruiters for legitimate organisations.
“These scams cause irreparable emotional and financial damage, and more needs to be done by businesses and online platforms to protect users. If a business sees its site being duplicated by criminals, it must ensure the copy is taken offline quickly, before it causes harm.”