X Improves Tagging Process for Videos That Have Received a Community Note

X has added another valuable element to its Community Notes user-led moderation process: any video that receives a Community Note will now display that note across all instances of the clip, including re-shares and other posts.

Now, when a Community Notes contributor adds a note to a video in the app, they’ll have the option to specify that the note is about the video clip itself, not the specific post.

As explained by X:

Notes written on videos will automatically show on other posts containing matching videos. A highly-scalable way of adding context to edited clips, AI-generated videos, and more.

That’s an efficient and effective way to surface advisory notes to more users, with X’s system now able to match both re-shared images and videos in the app, and tag them with any corresponding contextual notes.
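X hasn’t detailed how it matches re-shared media, but the basic flow is a note attached to the media itself rather than to a post. A minimal sketch of one plausible approach, a content-fingerprint lookup, is below; all names are hypothetical, and a real system would likely use perceptual hashing to catch re-encoded or lightly edited copies rather than the exact hash used here.

```python
import hashlib

# Hypothetical in-memory store mapping a media fingerprint to its Community Note.
notes_by_fingerprint: dict[str, str] = {}

def fingerprint(media_bytes: bytes) -> str:
    """Return a stable fingerprint for a media file (exact-match stand-in)."""
    return hashlib.sha256(media_bytes).hexdigest()

def attach_note(media_bytes: bytes, note_text: str) -> None:
    """Record a note against the media itself, not a specific post."""
    notes_by_fingerprint[fingerprint(media_bytes)] = note_text

def note_for_post(media_bytes: bytes) -> str | None:
    """Any post containing matching media surfaces the same note."""
    return notes_by_fingerprint.get(fingerprint(media_bytes))

# Example: a note added to the original clip also shows on a re-upload of it.
clip = b"...video bytes..."
attach_note(clip, "This clip is edited; the original speech is unaltered.")
print(note_for_post(clip))  # same note, regardless of which post carries the clip
```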

Community Notes, which had been in development under the name “Birdwatch” for years before Elon Musk took over the app, has become a much bigger focus under Musk’s leadership. The billionaire hopes to use community-led moderation to combat more types of platform misuse without the X team having to impose its own rules around what’s allowed and what’s not, leaning into his own free speech ethos.

Which has merit. As previous Twitter management explained:

We believe that a transparent, community-driven approach to identifying misleading information and elevating helpful context can help us all create a better-informed world.

This, in large part, is how Reddit has operated for years, with volunteer moderators helping to weed out junk, and upvotes and downvotes reflecting community sentiment, as opposed to Reddit management stepping in.

But there are limits to this as well.

As per analysis by the Poynter Institute, the vast majority of Community Notes that are created are never actually seen by users in the app, because the review system requires consensus from contributors of opposing perspectives before a note is displayed.

As explained by Poynter’s Alex Mahadevan:

Essentially, [Community Notes] requires a cross-ideological agreement on truth, and in an increasingly partisan environment, achieving that consensus is almost impossible.

X determines a Notes contributor’s political leaning based on their past behavior in the app, which is not always the best proxy, and the system then requires ratings from both sides before a note is approved for display.
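In rough terms, a note only surfaces once raters from both camps independently mark it helpful. The sketch below illustrates that gating logic; the threshold, field names, and two-sided split are invented for illustration, and X’s real bridging algorithm is considerably more involved.

```python
from dataclasses import dataclass

@dataclass
class Rating:
    helpful: bool
    leaning: str  # "left" or "right", inferred from past behavior in the app

def note_is_displayed(ratings: list[Rating], threshold: float = 0.6) -> bool:
    """Hypothetical gate: a note shows only if raters on BOTH sides
    found it helpful at or above the threshold."""
    for side in ("left", "right"):
        side_ratings = [r for r in ratings if r.leaning == side]
        if not side_ratings:
            return False  # no cross-ideological signal at all
        helpful_share = sum(r.helpful for r in side_ratings) / len(side_ratings)
        if helpful_share < threshold:
            return False
    return True

# A note on a divisive topic, rated helpful by one side only, never surfaces:
divisive = [Rating(True, "left")] * 8 + [Rating(False, "right")] * 8
print(note_is_displayed(divisive))  # False

# A note clarifying satire, which both sides agree on, does:
satire = [Rating(True, "left")] * 5 + [Rating(True, "right")] * 5
print(note_is_displayed(satire))  # True
```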

Poynter’s research found that this works well for low-stakes content, like clarifying satire or flagging AI-generated images (again, a good use of this new, blanket tagging), things that everybody is generally in agreement on. But some of the most harmful misinformation, along more divisive lines (e.g. COVID vaccine impacts, election interference, the gender debate), is never likely to get that critical consensus.

Thus, in the areas where they’re most needed, the majority of Community Notes are never displayed.

Yet, despite this, Musk seems confident that Community Notes is the way forward, which will essentially enable the X community to govern itself on moderation concerns.

Anyone making materially false statements on this platform will get Community Noted, including you, me, Tucker, advertisers, head of state, etc. No exceptions.

Convince the people and let the chips fall where they may. @CommunityNotes https://t.co/GLK8o7D2FS

— Elon Musk (@elonmusk) April 27, 2023

That’s a lot of trust to place on a system with known flaws that are still being worked through. So while it is an interesting concept, with a lot of potential in a range of key areas, the reliance that Musk and Co. are placing on Community Notes could be too much, as it’s unlikely to catch every instance of misinformation and misuse.

Though it has proven particularly effective in one area: policing misleading claims in ads.

Which Elon has admitted is not “super helpful” for X’s revenue intake, and with the company’s ad revenue down 60% YoY in the U.S., that’s probably not the ideal use of the function, from a business perspective.

But Elon seems willing to take the good with the bad, with the good in this case being a more hands-off moderation approach, which relies on hope and ideological consensus to police false claims.

There is a lot to like about the project, but X may also be putting too much reliance, too early, on a still-in-development system.

And amid broader reports of X allowing more harmful content to be shared in the app under Musk’s leadership, this will remain a key area of focus for the platform, and ad partners, moving forward.
