it turns out you cannot replicate the human spirit by sorting images a trillion times

has anyone said artificial isn’telligence yet

a technological post · 7 min read

When I poke fun at AI as a concept, I usually go for the words “copyright infringement algorithm”, because for the most part, that’s the only goal it has provably accomplished so far. It’s a terrible advice-giver, bad enough for companies to be successfully sued over it; it can’t stand in for even the most basic of flowcharts the Siris of olden times would handle; and because people remembered you can just lie on the internet without an image of a six-fingered freak being generated, it isn’t even particularly known for being used for misinformation.

For it to be good at any of those things, it would first have to truly know things, which it doesn’t. It just happens to have, in extremely large quantities, a bunch of stuff (text, images, video, etc.) that is vaguely tagged into categories, and when you tell it to, like, draw a dog, it just makes an averaging-out of however many things tagged “dog” it has. If this is AI, then so is the YouTube algorithm, or the Facebook timeline. It’s the exact same kind of sorting, only, thanks to the amount of data it needs, far more expensive.
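To be clear about what I mean by “averaging-out”, here’s a deliberately dumb sketch (random arrays standing in for a tagged training set; actual image models are fancier than this, but the spirit holds):

```python
import numpy as np

# Stand-ins for a training set: 1000 tiny 8x8 grayscale "dog" images.
# (Real models train on scraped photos; random noise will do for the joke.)
rng = np.random.default_rng(0)
dog_images = rng.random((1000, 8, 8))

# The "model": one big pixel-by-pixel averaging-out of everything
# that happens to be tagged "dog".
generated_dog = dog_images.mean(axis=0)

print(generated_dog.shape)  # one brand-new, perfectly average "dog"
```

Every “dog” it can ever produce is some blend of the dogs it was fed; nothing in there knows what a dog is.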

Thus, the copyright infringement algorithm: GitHub’s Copilot is getting sued for shamelessly ignoring software licenses, a chatbot company is getting sued for being trained on copyrighted music, the New York Times is suing OpenAI for ripping from its articles,[1] and, possibly my favorite example of all, Midjourney had a literal written-down list of all the copyrights they were infringing. (I may not be a lawyer, but I can also see some, uh, interesting implications in both having that list, and in publicly talking about laundering it in a Discord channel.)

the CEO of midjourney saying, and i quote, "Just need to launder it through a fine tuned codex", it being the LITERAL FUCKING EVIDENCE of copyright infringements that they HAVE WRITTEN DOWN OH MY GOD
very good messages for your lawyer to see dot png

When all you have is a machine that can take a billion images and spit out something in between all of them, that’s a billion different ways you can violate copyright. The AI companies’ defense is that it’s all under fair use, citing a court case so disconnected from the use cases of AI that it hurts; the argument is that something like DALL-E, explicitly designed to mimic the works of real artists, is the same as, like, scanning books for a search engine. Especially given that the other side of this argument includes record labels, entities stubborn enough to basically ruin people’s lives for an extra thirty bucks, I have my doubts that this will hold up for too long.

It’s already a massive black hole of wasted energy, one merely subsidized for the current moment. How expensive will it get, once any given piece of AI generation becomes something you can sue over?

And it has to infringe on those copyrights, for the record. Every time AI gets trained on its own output, it gets worse; every time its training data gets willingly fucked with to manipulate it, it gets worse; and now, even the barebones copyright violations that AI loves to commit can, assuming the artists it steals from use the right tools, make it so, SO much worse. The only way for these machines to grow is by collecting data, far too much of it, and it needs to involve unwilling human participants, because anything less is suicide.

If none of this seems sustainable, that’s because it’s not. To give you an idea of how unavoidable AI poisoning itself truly is, consider that the very people meant to be refining this AI—because they ARE extremely human people, and there’s quite a lot of them—are also using AI to do so.

Another Kenyan annotator said that after his account got suspended for mysterious reasons, he decided to stop playing by the rules. Now, he runs multiple accounts in multiple countries, tasking wherever the pay is best. He works fast and gets high marks for quality, he said, thanks to ChatGPT. The bot is wonderful, he said, letting him speed through $10 tasks in a matter of minutes. When we spoke, he was having it rate another chatbot’s responses according to seven different criteria, one AI training the other.

It’s why, when I hear so much doomsday shit about AI, I can’t help but laugh. If you’re one of those people, ask yourself this: why is it that the biggest doomsday truthers out there are the people actively developing it? It’s not because they believe it, because if they did, they wouldn’t be pushing it; it’s because, when you peel back the marketing and see the actual meat and bones, you realize that AI, even in this half-functioning state, has already peaked. There’s only so much an extremely complicated sorting algorithm can truly accomplish, and it’s nowhere near the level of an actual human person.

They need you to believe the hyperbole, in the hopes that everyone involved can make enough money before you notice there are no bullets in the rifles they wield. It’s the same way Uber once needed to convince you that they’d be cheaper to take than a taxi… that is to say, they need you to believe it just long enough that, when it comes time for them to attempt a real profit, they can stick you with the bill, and you’ll have to pay it.

No, if there are to be industries destroyed thanks to AI, it’s going to be for far more boring reasons. Here’s what’s going to happen: some movie executive, or book publisher, or game studio suit, or whatever, is going to think they can replace all their low-to-mid-level workers with AI. This will not work well, and it will result in the few human beings left in the studio having to clean up after a stupid little sorting machine that performs worse than even the saddest, dumbest intern, but the suits will tell themselves that it’s cheaper, so it’s fine.

And then, as investor money stops pouring in, it will suddenly, very quickly, stop being fine. These companies will stop being able to use programs like ChatGPT for pennies, and will start having to spend very real, very expensive amounts of money. That price can only climb as those AI models require more and more data to be trained on, and it will only get worse as the models themselves get worse, requiring more prompts to get a good-enough output, which will cost even more money. All the while, they’ll be struggling to bring back the workers they already fired, because they’ll catch on to the fact that, if a human needs to fix all the dumb shit AI creates, then they’ve effectively spent the money of dozens of workers on what amounts to an extremely expensive draft generator.

Companies will, almost definitely, shutter as a result of this, I don’t doubt that. If you wanted to be pedantic, technically, it will be AI’s fault—but is it really? Or is it merely the same reasons your boss was always going to doom you anyway?

  1. This one also doubles as a libel lawsuit; since AI is so good at spewing out complete bullshit, the NYT has a very real argument to be made that ChatGPT has been giving people “information” that NYT articles themselves don’t actually provide. Two crimes for the price of one!

by nomiti ityool, 2024
CC BY-SA 4.0 unless marked otherwise
made with Eleventy v2.0.1

rss feed lives here
if you use this for AI i hope my dogshit sentence structure poisons it