There was a Doctor Who story called “Time-Flight” — the one with the Concorde — where a plane and its crew get sucked into Earth’s prehistoric past through some sort of time vortex, but they are under the illusion that they are back at Heathrow airport. In order to break the spell, the Doctor encourages his companions to remember the Indian rope trick.
This came to mind when I was contemplating this post, so I looked up exactly what the “Indian rope trick” was. The gist of it is that a magician purportedly makes a rope stand vertically, and a boy climbs up the rope and disappears. In reality this was an instance of fake news… the story had been made up as a hoax in 1890 to sell newspapers.
Humans are attracted to novelty and the unexplained. That is the appeal of such illusions. Our senses are susceptible to being fooled by a skilled magician who can distract our focus from what they may really be doing, and the allure of such performances is that they don’t conform to our expectations. This is something that is evident in developmental psychology — there is a developmental shift between the ages of two and three, when children begin directing their attention towards novel outcomes. Having gathered enough of an understanding of the world, they take notice and show interest when something doesn’t conform to it.
Alas, it is this appeal to novelty that makes us susceptible to some insidious and, well, bizarre practices in the modern digital age, a few examples of which have recently come to my attention.
One particular case is YouTube content farms producing so-called “life hack” videos.
Ann Reardon is a talented baker with a food science background, based in Australia, who creates wonderful YouTube videos on her channel “How to Cook That.” She comes up with innovative cooking ideas, “rescues” cakes, and recreates historical dishes. Unfortunately she’s up against content farms that produce massive numbers of life hack videos containing recipes that simply do not work and may even be outright dangerous.
Multiple clusters of interconnected channels feature short videos that are designed to provoke equal parts fascination and rage. The content is slickly produced, each hack shown in rapid succession with cheerful music and happy participants (there’s an extreme cheese factor to any acting involved). The primary lure is that these videos demonstrate how to do something allegedly quick and low-effort, like creating a cake in a cup, a custard in a carton, or some sort of flavored popcorn.
The results are staged, sometimes using obvious camera trickery. Anyone attempting to actually do the hack will inevitably fail and likely be left with a sticky, inedible mess. If they don’t wind up in the hospital. Please folks, don’t try to poach your eggs in the microwave.
Ann has been able to parlay this situation into a series of debunking videos that have proved popular. Mocking such videos is its own cottage industry, but Reardon uses her expertise to actually attempt these supposed life hacks and show the inevitable, atrocious results. It does mean that she spends an inordinate amount of time cleaning pans, microwaves, and waffle irons — and her husband Dave, who is an exceptionally good sport, serves as her guinea pig for trying the end products of these recipes.
Dave is actually a journalist, and he has delved into the weird world of these content farms, exploring how they have hacked the YouTube algorithms. It’s a lucrative business, referred to as “a machine that prints money.” And Ann points out that a lot of their content would, on its face, appear to violate YouTube’s terms of service (such as the clickbait-y thumbnails that seem to have little or nothing to do with the content).
It seems like such madness when you see the volume at which these videos are produced and the number of views they are able to generate, not to mention the disturbing narration that accompanies some of them.
Manipulating each other to get something from one another is obviously nothing new; in fact, it’s inherent to being part of a social species. And worries about how new technology can be used to enhance that manipulation are certainly not new either. We’ve always lived in an attention economy; it’s just that there is now so much more competing for ever-decreasing slices of bandwidth.
Another example of how this sort of online deception manifests itself is a video Eddy Burback produced on his investigation into the rise of ghost kitchens and virtual restaurants. There has been a proliferation of “brands” operating without an actual restaurant location or acting as a front for another, more familiar brand. Probably the most infamous of the latter was Chuck E. Cheese operating as “Pasquale’s” pizza (the name itself actually has a connection to Chuck E. Cheese lore). Other familiar restaurant chains such as IHOP and Chili’s, not exactly known for their cultural cachet, have also been involved in this phenomenon. What is perhaps most amazing is the sheer volume of brands being created, many operating out of the exact same location. Eddy actually found one location in L.A. operating 44 “restaurants.”
Burback went through the exercise of ordering nine beer-battered cod sandwiches, supposedly from different businesses but all coming out of the same location. He dutifully took a bite of each one so he could determine whether they tasted the same to him. These restaurants are actually extraordinarily easy to set up. It is a sort of Taco Bell system where you have the same ingredients just put together in different ways — you can mix and match items to create whatever menu you want under your own branding, in order to appeal to a given market or fandom, even though the food is all the same. Mr. Beast is perhaps the best-known personality leveraging these ghost kitchens, and he is somewhat unique in his openness about it.
The way this system operates can lead to major quality control problems, and it creates complications for food safety inspectors, who liken it to “smoke and mirrors or a shell game.”
Facebook offers other examples of how platforms can be hijacked and how their internal algorithms can have detrimental effects.
Facebook has gained a reputation as a place where conservative news stories seem to trend disproportionately, with the posts of certain well-known names regularly taking the top spots in lists of the most popular posts and news articles. A lot of this is blamed on Facebook’s aging “boomer” user base.
However, given the recent spat between Steven Crowder and the Daily Wire, in which Crowder tried to make hay out of being offered what sounded like an exorbitant sum of money as if it were insufficient compensation, it seems clear that the conservative online media ecosystem is awash in money. So it would not surprise me at all if publications such as the Daily Wire are buying their way to the top of those lists with targeted ads. After all, if the bottom line is irrelevant, why not spend vast sums to ensure your content gets distributed? If it allows them to influence the body politic and win popular support for their agenda, then from their perspective it’s probably a worthwhile investment.
While it’s impossible to know the full extent of its efficacy, we do know that the Russian government saw the Facebook platform as an opportunity to try and influence the 2016 election — and at the time, Facebook was apparently not concerned enough about the fact that political ads were being bought in rubles. Given the closeness of the election and the amount of money spent in the effort, it may have been a remarkably effective expenditure.
Even if part of the equation is just the amount of money being spent, the sort of content that the conservative ecosystem pushes exploits another aspect of our psyches that is susceptible to modern social media. It’s not the appeal of novelty that creators of this content are leveraging; instead, it is rage baiting.
We’ve learned from a whistleblower at Facebook that the algorithms tended to promote content that made people angry and caused conflict. To the Facebook algorithm, an impassioned back-and-forth that ends up invoking Godwin’s law looks like compelling engagement. By getting our amygdalae to react to a perceived threat, our decision making and rational thinking are short-circuited, leading to impulsive responses. Moreover, it affects our social behavior, causing us to seek out others who share our views and opinions, which further isolates us into our silos.
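To make that dynamic a little more concrete, here is a toy sketch in Python of the kind of engagement-weighted ranking being described. Everything in it — the post fields, the weights, the sample posts — is invented for illustration and is not Facebook’s actual system; the only grounding is the whistleblower-era reporting that comments, reshares, and reaction emoji counted for more than plain likes, which is enough to make outrage float to the top.

```python
# Toy illustration only: an engagement-weighted ranking in which comments,
# reshares, and "angry" reactions outweigh plain likes. The weights and
# sample posts are made up; this is not Facebook's real code.

from dataclasses import dataclass

@dataclass
class Post:
    title: str
    likes: int
    comments: int
    reshares: int
    angry_reactions: int

# Hypothetical weights: reactions that signal conflict count for more than likes.
WEIGHTS = {"likes": 1, "comments": 15, "reshares": 30, "angry_reactions": 5}

def engagement_score(post: Post) -> int:
    """Score a post by weighted engagement, not by how much people liked it."""
    return (WEIGHTS["likes"] * post.likes
            + WEIGHTS["comments"] * post.comments
            + WEIGHTS["reshares"] * post.reshares
            + WEIGHTS["angry_reactions"] * post.angry_reactions)

feed = [
    Post("Cute dog photo", likes=900, comments=40, reshares=10, angry_reactions=2),
    Post("Outrage-bait headline", likes=200, comments=350, reshares=120, angry_reactions=400),
]

# The outrage post "wins" despite having far fewer likes.
for post in sorted(feed, key=engagement_score, reverse=True):
    print(post.title, engagement_score(post))
```

Under those made-up weights the argument-generating post ranks first even though far fewer people actually liked it, which is the whole problem in miniature.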
That platforms like YouTube and Facebook allow for this manipulation is a problem. They see it as views, stickiness, and ultimately advertising dollars, so whatever the negative externalities of people wasting time and resources on life hacks that will never work might be, apparently the powers that be don’t think it’s their concern.
A certain so-called free speech crusader who recently bought Twitter has made it one of his missions to rid the platform of those pesky bots. I haven’t gotten any sense that progress has been made on that front, and it seems doubtful that it will happen: the bots are a convenient scapegoat, and there is a dearth of engineering resources available for the effort after he has whittled down the workforce to an insane degree. I suspect the new “verification” system only puts bot farms at that much more of a premium and will induce certain actors to engage in more nefarious activities to influence the platform (e.g., credit card fraud to purchase Twitter Blue subscriptions for bots). It’s a constant arms race against those who would seek to exploit and manipulate these platforms through scams, spam, and other inauthentic behaviors.
I don’t know exactly what the answer to all this is. I don’t think we can rely on self-regulation on the part of these entities, and we’ll have to see where discussions about reforming Section 230 go, but I’m not highly confident there’s much sophistication on the part of certain legislators on the subject.
I am a bit more heartened by the possibility of greater adoption of the platforms in the Fediverse — leveraging the distributed nature of the Internet, based on protocols and not individual sites, has an inherent, old-school appeal harkening back to the days of a different sort of Internet. That would at least take some of the decision making out of the hands of monolithic corporate entities and allow a consensus of norms and standards to develop. While adoption of Mastodon may be suffering from some usability issues, both in terms of understanding the Fediverse model and certain features (a lot of journalists have expressed unhappiness about the lack of an equivalent to quote tweeting), I’m hoping it will ultimately gain enough traction that platforms start adopting ActivityPub instead of continuing to cultivate their own walled gardens.
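For the curious, here is roughly what “protocols, not sites” looks like at the wire level — a minimal, hypothetical sketch of an ActivityPub delivery, where one server POSTs a JSON activity to an actor’s inbox on another server. The actor names and URLs are made up, and a real implementation would also need to sign the request (HTTP Signatures), discover inboxes via WebFinger, and handle failures; the point is only that any server speaking the protocol can participate, no matter who runs it.

```python
# Minimal sketch of ActivityPub federation: one server delivers a "Create"
# activity wrapping a "Note" to another server's inbox. Actor names and URLs
# are hypothetical; real delivery also requires HTTP Signatures.

import requests

activity = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Create",
    "actor": "https://example.social/users/alice",
    "to": ["https://www.w3.org/ns/activitystreams#Public"],
    "object": {
        "type": "Note",
        "attributedTo": "https://example.social/users/alice",
        "content": "Hello from my corner of the Fediverse",
    },
}

# Any server that speaks the protocol can receive this, regardless of who runs it.
requests.post(
    "https://another.example/users/bob/inbox",
    json=activity,
    headers={"Content-Type": "application/activity+json"},
)
```

That interoperability is the appeal: the norms live in the protocol, not in any one company’s product decisions.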
Anyway, I’ve been thinking about all this in the context of a song I’m still trying to finish — it’s one of those where the third verse never felt quite right, was even a bit mean, and overall it felt like I hadn’t really given the song “a point” that was worthwhile. It’s still not quite there, so all I’ll say about it for now is that when I tried to enlist ChatGPT’s help for some suggestions, it got mad at me for objectifying a mannequin.
— E.G.