REUTERS: TikTok, the short-video sharing app, announced on Friday that it will employ more automation to remove videos that violate its community guidelines. Currently, videos uploaded to the platform are analyzed by technology tools that identify and flag potential infractions, which are then assessed by a member of the safety team. If a violation is found, the video is taken down and the user is notified, according to TikTok.
Over the next few weeks, the ByteDance-owned company will begin automatically removing certain types of content that breach its rules on minor safety, adult nudity and sexual activities, violent and graphic content, illegal activities, and regulated goods.
This will be in addition to the removals that the safety team has confirmed.
This, according to the firm, will allow its safety team to focus on areas that are more contextual and nuanced, such as bullying and harassment, misinformation, and hateful behavior.
TikTok also stated that the app will issue a warning after a first violation. If the user violates the rules repeatedly, however, the user will be notified and the account may be permanently removed.
The changes come as social media firms such as Facebook and TikTok face criticism around the world for propagating hate speech and disinformation on their platforms.
(Reporting by Tiyashi Datta in Bengaluru; editing by Krishna Chandra Eluri.)