In this article, we’re going to walk through a mini investigation into A.I. slop disinformation.
Before we begin, a note: this research is difficult to write about, talk about, or show because it deals with disinformation. We’re going to do our best to describe what we’ve observed, but in a way that doesn’t give the subjects very much oxygen. Here’s why.
What disinformation is
Disinformation is false information that someone, or a group, deliberately creates to confuse or mislead.
For example, one tactic disinformants like to use, shown in the simulated post below, is taking a screenshot of a real headline while omitting where the article actually lives. Then, in the text of the social media post itself, they imply that what they’re posting comes from the article shown in the screenshot.
This is disinformation: it’s fabricated by taking a real headline and loosely attributing the text of the social media post to the real article. Depending on how the post is framed, it can elicit anger, fear, despair, hate, or even positive emotions, as long as it gets the viewer to react and share the post. In some cases, disinformation can motivate people to carry out an action in the physical world.
Here’s our simulated post.
How do you combat something like this?
Specific to our sample post: when you come across something like this, bust out your favorite search engine, pop in the title of the article and the author, and see if the article shows up in the search results. Then read the article and compare it to the social media post. Is it an exact match, or is the post completely fabricated?
If it’s fabricated, start asking yourself why the post made you feel a certain way, what the motive of the person posting it might be, and where else it shows up.
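If you want to script that check instead of doing it by hand, here’s a minimal sketch in Python using Google’s Custom Search JSON API. The API key, search engine ID, headline, and author below are all placeholders you’d swap in yourself; any search API with a query endpoint would work just as well.

```python
# Minimal sketch: check whether a headline actually appears at a real outlet.
# The credentials, headline, and author are placeholders, not from a real post.
import requests

API_KEY = "YOUR_API_KEY"           # placeholder credential
CX = "YOUR_SEARCH_ENGINE_ID"       # placeholder Programmable Search Engine ID

def search_headline(headline: str, author: str) -> list[dict]:
    """Query the Custom Search JSON API for the headline plus the author."""
    resp = requests.get(
        "https://www.googleapis.com/customsearch/v1",
        params={"key": API_KEY, "cx": CX, "q": f'"{headline}" {author}'},
        timeout=10,
    )
    resp.raise_for_status()
    # Each result item carries 'title', 'link', and 'snippet' to compare
    # against what the social media post claims.
    return resp.json().get("items", [])

for item in search_headline("Example Headline From The Screenshot", "Jane Doe"):
    print(item["link"], "-", item["title"])
```

If nothing plausible comes back, or the article exists but says something different, you’re likely looking at a fabricated attribution.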
This is just the tip of the iceberg with disinformation and if you’d like to read more, you can check out another one of our articles here.
Now that you have a super basic understanding of disinformation, let’s turn our attention to A.I. slop.
What is A.I. slop?
A.I. slop is low-quality content churned out with generative artificial intelligence. It takes the form of images, text, video, and audio, and it’s often rife with errors.
Because A.I. slop is so easy to create, it’s becoming a problem. Social media feeds are inundated with this junk, and slop videos on YouTube are getting out of hand.
Here’s an example of slop we created using generative A.I.
Everything up to this point sets the stage for the first part of this mini investigation.
The Target
Our target is not one, but three YouTube channels.
This is where the difficulty begins, since we want to describe things without sending them traffic.
YouTube Channel 1
We stumbled onto this channel around the first week of April 2025 in the recommended videos on the side panel of YouTube’s web interface. The thumbnail caught our eye which made us give the video a click.
Immediately, we could tell that the video used an A.I.-generated voice and that the image was synthesized as well.
This piqued our interest, so we looked at the channel’s main page.
At the time of the first visit, it had 353 videos. As of this writing it has 376 videos.
The channel description includes a disclaimer that all the videos are fiction and for entertainment purposes only, and it goes on to say that any similarity to real events is coincidental.
The disclaimer ends by saying that what’s portrayed in the content is not based on actual people, events, or entities.
Stats of YouTube Channel 1
- Alleged location of the account: the UK
- Account creation date: late 2024
- Subscribers around our first visit: a little over 10,000
- Total views around our first visit: a little over 1.2 million
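If you’d like to verify numbers like these yourself, here’s a minimal sketch that pulls a channel’s public stats with the YouTube Data API v3 (channels.list). The API key and channel ID are placeholders; we’re intentionally not including the real channel’s ID.

```python
# Minimal sketch: fetch a channel's public stats via the YouTube Data API v3.
import requests

API_KEY = "YOUR_API_KEY"       # placeholder credential
CHANNEL_ID = "UCxxxxxxxxxxxx"  # placeholder, not the real channel

resp = requests.get(
    "https://www.googleapis.com/youtube/v3/channels",
    params={"part": "snippet,statistics", "id": CHANNEL_ID, "key": API_KEY},
    timeout=10,
)
resp.raise_for_status()
channel = resp.json()["items"][0]

print("Created:", channel["snippet"]["publishedAt"])   # account creation date
print("Country:", channel["snippet"].get("country"))   # self-declared location
print("Subscribers:", channel["statistics"]["subscriberCount"])
print("Total views:", channel["statistics"]["viewCount"])
print("Video count:", channel["statistics"]["videoCount"])
```

Keep in mind the country field is self-declared by the channel owner, which is why we say “alleged” location.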
Here’s what the channel looks like at the time of this writing.
Let’s tackle the general themes and subject matter of this channel.
If you scroll all the way back to the first video, or sort by “Oldest,” the earliest one is from Q3 2024.
From the first published video to the middle of March 2025, the themes revolve around health and intimacy tips for a certain demographic.
From the middle of March 2025 to the present day, the themes are celebrity and politics.
The questions we don’t have answers to: why the shift in themes, and why choose the middle of March 2025 to make the change?
What follows are our observations of this channel.
Observations
- Account behavior: The account posts roughly one to three videos a day.
- Visuals: Some videos carry disclaimers or tags (for lack of a better word) saying that what’s presented is made up. A quick sampling of current and older videos shows that YouTube also adds a label to the video description: “Altered or synthetic content,” with the note that “Sound or visuals were significantly edited or digitally generated.”
- Topics: The first batch of topics focused on health and intimacy for a specific demographic. Later it’s politics and celebrity, including political figures, religious figures, and celebrities.
- View counts: The majority of videos sit between roughly 20 views and a few thousand. The outlier is a video about a current congressperson which, as of this writing, has 1.1 million views. (The sketch after this list shows one way to pull these numbers.)
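Here’s the sketch we promised: one way to measure posting cadence and view-count spread with the YouTube Data API v3, by walking the channel’s uploads playlist. Again, the API key and channel ID are placeholders.

```python
# Minimal sketch: measure posting cadence and view-count spread for a channel
# using the YouTube Data API v3.
from collections import Counter

import requests

API_KEY = "YOUR_API_KEY"       # placeholder credential
CHANNEL_ID = "UCxxxxxxxxxxxx"  # placeholder, not the real channel
BASE = "https://www.googleapis.com/youtube/v3"

def get(path: str, **params) -> dict:
    """Small helper around the API's GET endpoints."""
    resp = requests.get(f"{BASE}/{path}", params={**params, "key": API_KEY}, timeout=10)
    resp.raise_for_status()
    return resp.json()

# Every channel has an "uploads" playlist containing all of its videos.
channel = get("channels", part="contentDetails", id=CHANNEL_ID)["items"][0]
uploads = channel["contentDetails"]["relatedPlaylists"]["uploads"]

# Page through the uploads playlist, collecting publish dates and video IDs.
dates, video_ids, token = [], [], None
while True:
    kwargs = {"part": "contentDetails", "playlistId": uploads, "maxResults": 50}
    if token:
        kwargs["pageToken"] = token
    page = get("playlistItems", **kwargs)
    for item in page["items"]:
        dates.append(item["contentDetails"]["videoPublishedAt"][:10])  # YYYY-MM-DD
        video_ids.append(item["contentDetails"]["videoId"])
    token = page.get("nextPageToken")
    if token is None:
        break

# Posting cadence: average videos per day on the days the channel posted.
per_day = Counter(dates)
print("Average videos per posting day:", sum(per_day.values()) / len(per_day))

# View counts, fetched in batches of 50, to spot outliers like the 1.1M video.
views = []
for i in range(0, len(video_ids), 50):
    batch = get("videos", part="statistics", id=",".join(video_ids[i:i + 50]))
    views += [int(v["statistics"]["viewCount"]) for v in batch["items"]]
views.sort()
print("Min / median / max views:", views[0], views[len(views) // 2], views[-1])
```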
Where the disinformation exists
The disclaimer created by the channel owner states, to paraphrase, that the content is not based on actual people, events, or entities. We allege that this is not true when everything from the video title to the images and audio uses a sitting politician’s, legal entity’s, or celebrity’s name or likeness. In fairness, this applies to videos dating from the middle of March 2025 to the present; anything before that does not appear to resemble any public figures.
Given the channel owner’s disclaimer that everything is fiction, the health and intimacy advice should not be followed or put into practice. On the same track, there are videos that contradict the advice of previously published ones.
Among the videos featuring public figures, some of the thumbnail text emotes anger. There are many thumbnails with “F*CK YOU” in the image; 13, to be exact, as of April 14th, 2025. This framing can push someone to watch and/or share the video.
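Counting those thumbnails by hand is easy at this scale, and that’s what we did. But if you wanted to automate it, a rough first pass could run OCR over each thumbnail. This sketch assumes you have the Tesseract binary installed along with the pytesseract and Pillow packages; OCR on stylized thumbnail text misses a lot, so treat any automated count as a floor. The video IDs are placeholders.

```python
# Minimal sketch: count thumbnails containing a given phrase via OCR.
# Assumes Tesseract is installed, plus the pytesseract and Pillow packages.
from io import BytesIO

import pytesseract
import requests
from PIL import Image

video_ids = ["dQw4w9WgXcQ"]  # placeholder list, e.g. gathered via the API above
phrase = "F*CK YOU"

hits = 0
for vid in video_ids:
    # YouTube serves a predictable thumbnail URL for each video ID.
    resp = requests.get(f"https://i.ytimg.com/vi/{vid}/hqdefault.jpg", timeout=10)
    resp.raise_for_status()
    text = pytesseract.image_to_string(Image.open(BytesIO(resp.content)))
    if phrase.lower() in text.lower():
        hits += 1

print(f"Thumbnails containing {phrase!r}: {hits}")
```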
The video titles themselves are misleading because the events they describe never happened. Think of it as “fan fiction” if you’d like, or a Dhar Mann YouTube video.
If you’ve never seen a Dhar Mann YouTube video, oh boy are you in for a treat. He has titles like “Gold Digger Dumps Broke Boyfriend, Then Regrets Her Decision,” and “Stranger Makes Fun Of Nerd, Lives To Regret His Decision” which reminds us of how the videos are titled in what we’re calling YouTube Channel 1. They’re very clickbaity to entice someone to watch the content.
To dig a little deeper: the video with 1.1 million views on what we’re calling YouTube Channel 1 has 3,961 comments as of this writing.
Some of the comments are either in support of the congressperson or in support of their adversary/opponent.
Other comments fall, for us, into two categories: those from people who commented based only on the video title/thumbnail, and those from people who watched the 20-plus-minute video. While we do not have evidence to support this claim, we feel both categories are plausible.
Then you have some commenters rightfully calling out the content as A.I. slop or saying it’s fake.
This amount of engagement increases the chances of the video being shared further. Unfortunately, we are unable to see how many times this video was shared. If you’re a YouTube superhero and know of a way to find this data, we’d love to learn, but we digress.
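While YouTube doesn’t expose share counts, it does expose comments. Here’s a minimal sketch that pulls a video’s top-level comments with the YouTube Data API v3 (commentThreads.list) so you can bucket them yourself; the keyword filter at the end is a crude stand-in for the manual read we actually did. The API key and video ID are placeholders.

```python
# Minimal sketch: fetch a video's top-level comments via the YouTube Data API v3.
import requests

API_KEY = "YOUR_API_KEY"   # placeholder credential
VIDEO_ID = "xxxxxxxxxxx"   # placeholder, not the real 1.1M-view video

comments, token = [], None
while True:
    params = {
        "part": "snippet",
        "videoId": VIDEO_ID,
        "maxResults": 100,
        "textFormat": "plainText",
        "key": API_KEY,
    }
    if token:
        params["pageToken"] = token
    resp = requests.get(
        "https://www.googleapis.com/youtube/v3/commentThreads",
        params=params,
        timeout=10,
    )
    resp.raise_for_status()
    data = resp.json()
    for item in data["items"]:
        comments.append(item["snippet"]["topLevelComment"]["snippet"]["textDisplay"])
    token = data.get("nextPageToken")
    if token is None:
        break

print(len(comments), "top-level comments fetched")

# Crude first pass: flag comments that call the video out as synthetic.
keywords = {"ai", "a.i.", "fake", "slop"}
flags = [c for c in comments if keywords & set(c.lower().split())]
print(len(flags), "comments mentioning ai/fake/slop")
```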
To shift gears a little before we wrap up YouTube Channel 1, we looked at Social Blade, a service that tracks statistics for various social media platforms, including YouTube, for more information about this channel.
What we were interested in was how much money YouTube Channel 1 brought in. According to Social Blade, the monthly earning range is $380-$6.1K as of this writing. It’ll be interesting to see what happens next month.
For now, this is where we leave YouTube Channel 1 and take a little look at YouTube Channel 2 and 3.
YouTube Channel 2 and 3
Both of these channels follow YouTube Channel 1 in the type of content they publish.
What’s interesting about YouTube Channel 2 is that the very first video posted has the exact same title as the video that has 1.1 million views on YouTube Channel 1 as of this writing.
Unlike YouTube Channel 1, this one’s content is solely focused on one person, which we find interesting.
Another interesting observation is that YouTube Channel 2 started on March 16th, 2025. This is the same timeframe in which YouTube Channel 1 switched over to posting only content about public figures. This does not connect the two channels, but it does warrant a closer look.
YouTube Channel 2 has far fewer subscribers; clocking in at 1.33K with only a little over 88,000 views as of this writing.
The channel’s alleged location is in France.
We found YouTube Channel 2 the same way as the first one. We saw an odd looking video thumbnail in our recommended videos in the side pane.
Lastly let’s look at YouTube Channel 3.
Again, we found this channel the same way as the first two through the recommended video side pane.
YouTube Channel 3 is roughly the same size as YouTube Channel 1. It has 16,000 subscribers and change.
The channel started in the beginning of 2025.
There isn’t much else to cover about how these channels operate that we didn’t already touch on when discussing YouTube Channel 1.
While these are simple observations, they help us understand how these channels function and how far their reach extends.
Why we’re doing this
We’re doing this to show you some simple steps you can take to identify A.I. slop, along with some easy checks for whether what you’re looking at is disinformation or something else on the information disorder spectrum.
Also, this level of research is the nuts and bolts of Open Source Intelligence (OSINT) and Social Media Intelligence (SOCMINT). Exploring something like this at a foundational level is a good start to learning about this practice.
What we didn’t do
To start wrapping this article up: we did not listen to, or transcribe, the narration scripts. We also didn’t hunt down where any of the video images were used elsewhere or what they were generated with.
What’s next
We’re working on part two of this mini investigation into A.I. slop disinformation. This may take a little more time and focus.
The question we want to answer in part two: Is it possible to figure out what text-to-speech platform is used in these videos?
Your next steps
- First, sign up for our cybersecurity and research newsletter to get updates on this little project. You’ll also get tips, tricks, tools, alerts, and news in your inbox. Here’s the link: https://bsquaredintel.com/newsletter-signup/
- Lastly, fill out the form below to schedule a free strategy call whether you’re looking for training/education, Information Security services, or, if you’re part of a legal team, litigation support for your cases.
Contact Us | Bsquared Intel
Please fill out the form below, or call 203.828.0012, to learn how Bsquared Intel can assist you.