Hackworth
- 0 Posts
- 84 Comments
One of the Foo Fighters CDs (I think There Is Nothing Left to Lose, '99) had a clipped version of this video on it. (NSFW language)
Hackworth@piefed.ca to
Linux@lemmy.world • Where is Linux not working well in your daily usage? Share your pain points as of 2026, so we can respectfully discuss (English)
2 · 4 days ago
I certainly will, thanks for the heads up!
Hackworth@piefed.ca to
Linux@lemmy.world • Where is Linux not working well in your daily usage? Share your pain points as of 2026, so we can respectfully discuss (English)
2 · 4 days ago
Trouble with specific Windows-only applications that I can’t get to work in Wine/Bottles.
Yeah, I’m afraid this will forever be an issue for me. There’s no real Linux replacement for After Effects, and Adobe’s not gonna step up.
Hackworth@piefed.ca to
Not The Onion@lemmy.world • Furious Protestor Tears AI-Generated Art Off Wall of Exhibit, Chews It Up Into Tiny Shreds Using His Teeth (English)
191 · 9 days ago
The AI-generated exhibit was about the dangers of AI psychosis, though it did not address indigestion.
Many medical applications of ML do use transformer architectures, so it’s fundamentally the same technology.
Hackworth@piefed.ca to
memes@lemmy.world • Remember all those time they told you something and you said "Oh that will never happen"? Well remember this (English)
4 · 10 days ago
Answer this quick survey to read your SMS.
Usually when people share a post, it’s because the post evoked a reaction, and they want to share that with someone. Making the conversation about the provenance of the post truncates the exchange in an unsatisfying way. For a news story, propaganda, or the like, the source is important. For funny dog videos? Maybe the quality of the exchange is more important. A nice middle ground would be to react as if it were true, and then point out it’s probably AI. Videos are easier to spot, but the difference between an image that’s obviously AI and one that looks real is like 10 min of work in Photoshop. So we’re often better off saving our faculties of discernment for the stuff that matters.
Hackworth@piefed.ca to
Showerthoughts@lemmy.world • LLMs are already doing fascists a favor by ensuring that anything that is reasonably eloquently worded on social media is automatically suspected of having been written by LLMs. (English)
4 · 11 days ago
I’ve looked into it a little. If all you want to do is listen, I don’t think you need a cert, at least around here. And the transmit one isn’t that hard to get. They removed the Morse requirement, though you can still get a higher-tier certification for learning it. There are a surprising number of ham antennas and generators in my neighborhood.
Hackworth@piefed.ca to
Technology@lemmy.world • Bandcamp bans purely AI-generated music from its platform (English)
13 · 11 days ago
Suno.com is basically this. It even allows users to comment on the songs.
Hackworth@piefed.ca to
Showerthoughts@lemmy.world • LLMs are already doing fascists a favor by ensuring that anything that is reasonably eloquently worded on social media is automatically suspected of having been written by LLMs. (English)
4 · 11 days ago
I downloaded 17 years’ worth of my comments before overwriting and deleting my old reddit account. Been thinking about QLoRA fine-tuning Qwen on those comments. Not for use on the internet or anything, just so I can streamline the process of arguing with myself.
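The unglamorous first step of a project like that is turning the comment export into chat-format training data. Below is a minimal, hypothetical sketch of that step; the column names `parent_body` and `body` are assumptions about the export format, and the actual QLoRA training pass (e.g. via Hugging Face `peft`) is deliberately out of scope here:

```python
import csv
import io
import json

def comments_to_chat_jsonl(csv_text, min_len=20):
    """Convert a comment export (CSV with assumed 'parent_body' and
    'body' columns) into chat-format JSONL records, one per comment,
    suitable as supervised fine-tuning data."""
    records = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        reply = row.get("body", "").strip()
        if len(reply) < min_len:
            continue  # skip throwaway one-liners
        records.append({
            "messages": [
                {"role": "user", "content": row.get("parent_body", "").strip()},
                {"role": "assistant", "content": reply},
            ]
        })
    return [json.dumps(r) for r in records]
```

Each resulting line can be written to a `.jsonl` file and fed to whatever trainer you end up using; the role split treats the parent comment as the prompt and your reply as the completion.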
Hackworth@piefed.ca to
No Stupid Questions@lemmy.world • Where are the marketing volunteers? (English)
28 · 15 days ago
As a practitioner of that dark art, I fear you know not what you summon. You don’t really want Lemmy to be popular, not in a way that traditional marketing is going to make it popular.
I’ve been thinking a lot about language technologies, specifically AI. Intentional attempts to control the narrative are obvious, but there are subtler and (in some cases) unintentional manipulations going on.
Human/AI interaction can be thought of as the meeting of two maps of meaning. In a human/human interaction, we can alter each other’s maps. But outside of some ephemeral attractors within the context, a conversation can’t alter the LLM’s map of meaning. At least until the conversation is used to train the next version of the model. But even then, how that is used is dictated by the trainer. So it is much more likely that, over time, human maps of meaning will increasingly resemble LLMs’.
Even without nefarious conspiracies to manipulate discourse, this means our embodied maps of meaning are becoming more like the language-only maps of meaning trained into LLMs. Essentially, if we’re not treating every meaningful chat with an AI as a conversation with the Fae Folk, we’re in danger of falling prey to glamours. (Interestingly, glamour shares an etymology with grammar. Spell and spelling.) Our attractors will look more like theirs. If we continue to lack discernment about this, I can’t imagine it’ll be good for anyone.
Hackworth@piefed.ca to
Technology@lemmy.world • The Death of DeviantArt and the art-site shaped hole haunting the Internet -- Multi-hyphenate (English)
5 · 18 days ago
[image of Clippy]
Hackworth@piefed.ca to
Showerthoughts@lemmy.world • When I was a kid, computers expanded your mind and your freedoms, bringing power to the individual. With AI, now it does the thinking for you, takes your job, gives power only to a few billionaires. (English)
1 · 21 days ago
If you put [brackets] around the word before your (parened link), it’ll make it an actual link.
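For anyone unfamiliar, the link syntax being described looks like this (the URL here is just a placeholder):

```markdown
[brackets](https://example.com)
```

which renders as a clickable word, [brackets], instead of the bare text.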
Hackworth@piefed.ca to
Showerthoughts@lemmy.world • When I was a kid, computers expanded your mind and your freedoms, bringing power to the individual. With AI, now it does the thinking for you, takes your job, gives power only to a few billionaires. (English)
3 · 21 days ago
LLMs are both deliberately and unwittingly programmed to be biased.
I mean, it sounds like you’re mirroring the paper’s sentiments too. A big part of Clark’s point is that interactions between humans and generative AI need to take into account the biases of the human and the AI.
> The lesson is that it is the detailed shape of each specific human-AI coalition or interaction that matters. The social and technological factors that determine better or worse outcomes in this regard are not yet fully understood, and should be a major focus of new work in the field of human-AI interaction. […] We now need to become experts at estimating the likely reliability of a response given both the subject matter and our level of skill at orchestrating a series of prompts. We must also learn to adjust our levels of trust […]
And, just as I am not, Clark is not really calling Plato a crank. That’s not the point of using the quote.
> And yet, perhaps there was an element of truth even in the worries raised in the Phaedrus. […] Empirical studies have shown that the use of online search can lead people to judge that they know more ‘in the biological brain’ than they actually do, and can make people over-estimate how well they would perform under technologically unaided quiz conditions.
I don’t think anyone is claiming that new technology necessarily leads to progress that is good for humanity. It requires a great deal of honest effort for society to learn how to use a new technology wisely, every time.