




Artificial Intelligence & Why "No AI Webring" - The Crystal Website

thought i should maybe write something here given the "No AI Webring" badge; when ChatGPT first dropped and everyone started talking about it, me being the nerd i am, i gave it a try; my first thought was that it was kinda "neat", and after playing around with it i actually tried doing something useful: i ran a javascript "hello world" program through an obfuscation utility, then asked it what the code does and whether it could de-obfuscate it, and it managed to do so fairly well,
which i initially thought was very exciting, as it could save hours of unnecessary work and effort, i thought-
however, there was one thing that annoyed me: it would try to refuse to do it, saying it's "unethical" to de-obfuscate code, when in the real world there is almost no ethical use for obfuscating code in the first place (all i can think of is CTF-style challenges or something)
and it seems they programmed it with the corporate fake "ethics" and ""ethical hacking"" bullshit built-in; y'know, the kind where helping the government hack journalists and build a surveillance state to figure out who to brutally murder is 'ethical', but cracking DRM on an abandoned, forgotten game distribution platform, or jailbreaking your own device to make it do what you want and to keep it from becoming complete e-waste, is not.
initially, that was my only real complaint at the time
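for the curious, the sort of thing i fed it looked roughly like this - a toy sketch i've made up for illustration, not the actual program or obfuscator i used (real obfuscation utilities mangle code far harder than this):

```javascript
// toy "obfuscated" hello world: strings are hidden behind hex escapes in a
// lookup array, and the method call is indirected through a string index
// instead of plain dot syntax
var _0x4f2a = ["\x48\x65\x6c\x6c\x6f\x2c\x20\x77\x6f\x72\x6c\x64\x21", "\x6c\x6f\x67"];
(function () {
    console[_0x4f2a[1]](_0x4f2a[0]); // prints "Hello, world!"
})();

// what the de-obfuscated version boils down to:
// console.log("Hello, world!");
```

the "trick" here is purely mechanical - decode the escapes, inline the array lookups - which any patient human (or an LLM) can undo, which is part of why it handled my test fairly well.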
i bring this up because i think it gives context as to why there is an AI bubble in the first place; initial reception was, from what i remember, fairly positive, and ""open""AI became a big hit overnight
but then, after a while, the cracks began to show; you begin to realize that since it's sometimes wrong, you need to verify and fact-check every answer, which means most of the things you initially thought it could be fairly useful for, it suddenly isn't
going back to the obfuscation example: i have to already know what the obfuscated code does in order to know that what it says is correct, and if i knew that, i wouldn't be asking in the first place
and this applies to pretty much every seemingly optimistic use case you could come up with: you can't ask it general questions, since you have to fact-check and research the answers it gives you anyway; you can't ask it how to do something, since you'd have to already know how to do it; and so on. that leaves the actual remaining use cases as, effectively:
- a crappy chatbot, where whenever you ask questions like "what do you think of .." you get the same "as a large language model i cannot .." answer, because the people who made it and are pushing it seemingly, explicitly, don't want you to use it this way (you're probably better off just using something like cleverbot)
- suggestions on otherwise subjective things (like e.g. naming things) where you don't actually need to verify the answers
- creating kinda poorly written stories, code, or art that you can't tell sucks, in a way that completely negates your ability to learn anything from it
not the total revolutionary thing that's gonna make everything so much easier after all, is it. so naturally it got its 10 seconds of fame, and then everyone kinda moved on; but then stuff also started coming out about its unreasonably high energy consumption, giving it an absolutely horrible impact on the climate and the environment in general - something that's already in a pretty bad state, and that we should really be trying not to make worse
and then there was the whole drama about the training data being derived mostly from copyrighted content, which, admittedly, when i first heard it, i was like "okay, but fuck copyright?" like, you're talking about state-sanctioned violence to grant someone a complete monopoly over how something is used, to the level where it infringes on other people's rights to express themselves and make shit just because it's derivative of some other thing.
yes, i know that in practice it's more nuanced than that, with companies gobbling it up to try to get around having to pay their workers; it's basically copying or barely changing something and claiming you made it, i.e. plagiarism (which is actually different from copyright infringement, and i hate how the two get conflated a lot. but that's a topic for another time)
but this has been talked about to death, and probably better than i could; and it doesn't interest me too much
instead, i would like to focus on ... a much more serious and more pressing issue; CW: (state-sanctioned) murder & suicide, mass surveillance, C/SA, general state of the world, conspiracy theories and propaganda

going back to the 'crappy chatbot' use case: it seems some of the people producing this stuff have actually run with that idea, but unlike most of what came before, which was primarily just for fun, this time some people have taken it as a replacement for therapy and actual crisis support;
this is not a blind endorsement of therapists, the field of psychology, or crisis lines in general; i know first-hand that there are valid things to be said about them, especially when it comes to matters like coercion, autonomy, and human rights more generally, but replacing them with AI is not a good solution (especially considering that AI vendors are building said coercion and human rights & autonomy violations directly into the AIs, meaning it won't even let you avoid that.)
but more generally, AI being used in this way has gotten people killed, multiple times.
wanna know another thing AI is really good at? .. image recognition, specifically facial recognition. it's also really good at parsing through large amounts of text and finding everything relating to a certain topic or whatever (of course, it gets details blatantly wrong here too, but still, it's fairly alright at it). so it's very good for, as an example, looking over footage of a protest and then cross-referencing it with people who posted about certain political issues on social media; it's a surveillance state's pipe dream;
more pressingly, these same AI vendors - the same ones who kept warning about how a rogue AI could turn evil and kill everyone - are now turning around and training AI for the express purpose of murdering people. guess we'd better give them a head start, right? AI vendors have also fully embraced the growing rise of fascism & bigotry, and have started pushing xenophobic conspiracy theories, fearmongering about a "rogue superintelligent AI" teaming up with the chinese communist party to, idfk, take away your totally real "american freedoms" and "rights" (which the state can just strip from you and blatantly violate at any time)
and finally, i'd want to point out that AI is really effective at writing really convincing propaganda.
don't even get me started on deepfakes and their potential for propaganda, and for creating C/SA imagery of people (honestly, i can't think of a single legitimate use for them; there probably isn't one - i mean, they only seem to be expressly useful when you can't get someone to record / photograph themselves doing something on their own, so... for violating autonomy and consent, and not much else?)
this goes way beyond art theft. as it currently exists, AI is genuinely harmful to people and a threat to people's safety, autonomy, and human rights in general, and i feel like this aspect of it is often overlooked, with a huge focus on the AI art thing (or maybe i'm in a bubble). still, i wanted to speak about it, because AI, as it currently exists, has an insanely horrific human cost. that's why Anti-AI, that's why "No AI Webring". the fact that they bombarded my Forgejo instance with millions of requests indistinguishable from a DDoS - somehow managing to create hundreds of gigabytes of temporary files and repeatedly crashing the whole server (which resulted in data corruption a couple of times) with their completely overzealous, over-the-top scraping, ultimately forcing us to completely destroy the accessibility of our site and use utilities like Anubis - is also related; that is all true too, but it's not the main factor.