Thoughts on thinking (dcurt.is)
abathologist 14 hours ago [-]
I think we are going to be seeing a vast partitioning in society in the next months and years.

The process of forming expressions just is the process of conceptual and rational articulation (as per Brandom). Those who misunderstand this -- believing that concepts are ready-made, then encoded and decoded from permutations of tokens, or, worse, who have no room to think of reasoning or conceptualization at all -- will be automated away.

I don't mean that their jobs will be automated: I mean that they will cede sapience and resign to becoming robotic. A robot is just a "person whose work or activities are entirely mechanical" (https://www.etymonline.com/search?q=robot).

I'm afraid far too many are captive to the ideology of productionism (which is just a corollary of consumerism). Creative activity is not about content production. The aim of our creation is communication and mutual-transformation. Generation of digital artifacts may be useful for these purposes, but most uses seem to assume content production is the point, and that is a dark, sad, dead end.

cameldrv 7 hours ago [-]
I've personally noticed this as a big trend. For example, I had become more and more reliant on my GPS in the car. I've not really been the outer control loop of the vehicle. An automated system tells me what to do.

I recently got a running watch. It suggests workouts that will help me improve my speed (which honestly I don't even care about!). If you turn that feature on, it will blare at you if you're going too fast or too slow.

When you use any social media, you're not really choosing what you're looking at. You just scroll and the site decides what you're going to look at next.

Anyhow recently I've been reducing my usage of these things, and it's made me feel much better. Even navigating the car without the GPS makes me feel much more engaged and alive.

Ultimately one of the core things that makes us human is making decisions for ourselves. When we cede this in the name of efficiency, we gain something but we also lose something.

Marshall Brain wrote an interesting short book about this called Manna.

bartread 38 minutes ago [-]
> I've not really been the outer control loop of the vehicle. An automated system tells me what to do.

That’s not really true, is it? Who tells the GPS where you’d like to go? You, I imagine. You don’t just follow GPS instructions unless you’ve first told it where you’d like to go. And, indeed, unless you tell it, it won’t give you any instructions (though it might suggest common destinations for you to choose from).

You are still the outer control loop of the vehicle: you’re just thinking at the wrong level of abstraction, or thinking of the wrong loop as the outer loop.

bsenftner 1 hours ago [-]
More people need to read Marshall Brain's book "Manna"; the main character's thoughts examine and put to bed the majority of the sophomoric thinking surrounding AI and its impacts on civilization. Plus, it is one of the rare balanced views with both very positive and very negative outcomes simultaneously coexisting.
empiricus 6 hours ago [-]
For GPS, I start by looking at the overall route and comparing it with potential alternatives. Then, during the drive, the GPS just manages the local details; I still have some understanding and agency over where to go and how to get there.
immibis 2 hours ago [-]
I start by looking at the map. I go in the direction of the place I want to be. If I want to know the technically fastest route then I let my device calculate that. I don't always take that route. It's an assistant, not a boss. It's more interesting to walk down different streets sometimes. (And while I'm preaching to Americans, it's also good to walk down streets sometimes. It breaks away a few layers of abstraction that you have when driving.)

Looking at the map actually helps you learn the city layout. As of right now (literally as I'm typing this) the train is delayed, so I'm getting off at the next big station, before everyone crowds on, and walking the rest of the way. I can do this without checking a map, because I know where it is and where I am, because I don't let the machine think for me.

I don't drive (non-car-worshipping cities are amazing), but I do this when walking and also with train routes. I don't memorize the bus routes, since the train is better and has fewer routes; but I do sometimes ask my device for a route if I think there's a faster bus than train (usually not the case).

boppo1 5 hours ago [-]
>When you use any social media, you're not really choosing what you're looking at. You just scroll and the site decides what you're going to look at next.

Not necessarily. I'm into a very particular sort of painting and I have been totalitarian with Instagram about showing me that content and not other stuff. It works splendidly as long as I'm consistent.

Thanks to Instagram, I have been introduced to tons of painters I would not have been otherwise.

bonoboTP 4 hours ago [-]
Is it better to be introduced to tons of painters vs fewer but in more detail? Or being told about a painter by someone in person vs by an algorithm?

In the 90s you only had certain songs if you knew someone who had it on cassette and you borrowed it and put it on your mixtape. Throughout the interaction, you also got initiated deeper into the culture of that thing in person.

I also notice that families rarely sit together nowadays to look through vacation photos. The pictures are taken, but people don't have time to sort and curate them. When film had a price, you took fewer pictures, but it was more intentional. Then the fact that you only saw the picture once you were back at home, generated excitement that you could share and relive candid moments. Now people upload stuff to Instagram, but it's intended for a generic audience, much unlike browsing through an album on the couch.

tsumnia 26 minutes ago [-]
> Then the fact that you only saw the picture once you were back at home, generated excitement that you could share and relive candid moments

Or you do like me and go see Interstellar 5 times in IMAX because the story was so good

throwaway2037 3 hours ago [-]

    > In the 90s you only had certain songs if you knew someone who had it on cassette and you borrowed it and put it on your mixtape.
I knew lots of people who recorded 120 Minutes on MTV and listened to college radio.
bonoboTP 3 hours ago [-]
I meant the niche long tail stuff, since the commenter mentioned "tons of painters I would not have been otherwise". The equivalent in music would not be on MTV.
throwaway2037 3 hours ago [-]

    > I'm into a very particular sort of painting
Can you share some of your favourites that you follow? This sounds interesting.
guythedudebro 29 minutes ago [-]
Furries
vidar 4 hours ago [-]
I applaud your consistency and effort to curate your feed, which is certainly technically possible, but I am quite sure you are the exception to the rule.
BeFlatXIII 2 hours ago [-]
The big benefits I find in modern satnav have little to do with route planning. That can be done with maps and dead reckoning. Where it shines is:

1. Having knowledge that cannot be acquired ahead of time, such as traffic conditions

2. Providing a countdown timer until my next turn

huijzer 5 hours ago [-]
> When you use any social media, you're not really choosing what you're looking at. You just scroll and the site decides what you're going to look at next.

Yeah it’s crazy. I held a commonly held belief until last week. Then I started watching more videos from the opposing viewpoint and boom, now my whole YT feed is full of it. I wish the feed had sprinkled some opposing views into the mix before last week. (Having said that, I am appreciating individual content creators much more, since people like Lex can decide to show both sides independent of some algorithm.)

globular-toast 6 hours ago [-]
For road navigation it might be worth seeing if your country has a proper system in place and learning how to use it. In the UK, for example, there is a simple "algorithm" to get you where you need to go. The signage is hierarchical starting from motorways and trunk routes and descending down to primary and secondary local routes. So to navigate anywhere you go via trunk routes and follow the signs to the nearest trunk destination beyond where you are trying to go. Then as you get closer you should start to see your actual destination appear on the signs as a primary route. Once you learn the system it's really quite possible to navigate by yourself anywhere.
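To make that heuristic concrete, here is a toy sketch in Python (the place names and sign sets are hypothetical, and real signage is of course richer than a set of strings):

    # At each junction: if your destination is signed, you're close enough that
    # it appears as a primary route, so follow it; otherwise keep following the
    # trunk destination you picked that lies beyond where you're going.
    def next_sign(signs: set[str], destination: str, trunk_anchor: str) -> str:
        return destination if destination in signs else trunk_anchor

    # Driving to a hypothetical "Ambridge" via the trunk destination "Borchester":
    print(next_sign({"M40", "Borchester"}, "Ambridge", "Borchester"))       # Borchester
    print(next_sign({"Ambridge", "Borchester"}, "Ambridge", "Borchester"))  # Ambridge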

The nice thing is you won't end up routed down some ridiculously difficult road just because the GPS says so and it calculated it would save 0.2 seconds if you were somehow going at the speed limit the whole way. Your brain includes a common sense module, and it's usually right.

robrorcroptrer 6 hours ago [-]
But then again you are relying on an information system to navigate.
js8 2 hours ago [-]
Another example is free-market ideology. This was a question I posed to libertarians: how can you claim that the free market enhances human freedom when it always tells you what to do in the name of efficiency?
norome 2 hours ago [-]
I don't think the claim is that it necessarily enhances human freedom; rather, by giving people more freedom to, e.g., set prices, they will use their particular knowledge of their area of concern to set those prices correctly.

It does coincidentally align with John Stuart Mill's reasoning for why Liberty is fundamentally necessary: that only at the level of the individual is it possible to know what is good and right for that individual.

bonoboTP 4 hours ago [-]
> When you use any social media, you're not really choosing what you're looking at. You just scroll and the site decides what you're going to look at next.

This was even more true with TV, and especially before there were a million cable channels.

And it makes me think about the even wider time scale. A few generations ago, "the outer control loop" was also not in the individual's hands, but instead of computers, it was built on social technology. The average person didn't have much to decide about their lives. They likely lived within a few (or a few dozen) km of where their ancestors did, in the part of town and a type of home fitting for their social class, likely doing the same job as their father, following a rigid life script, hitting predefined ritualized milestones. Their diet was based on whatever was available at that time of the year from local production, cooked essentially the same way, as handed down by mothers and grandmothers. There was very little in the way of letting their inner true self blossom through fun, colorful decisions. They couldn't choose from some endless repository of stories. It was mostly a rotation of the local folk stories and the stories of the dominant religion.

Just wanting to "consume" and follow a script without the weight of decision making isn't some modern "disease".

The key difference is a new kind of fragmentation of culture (and the non-local nature of it). A long time ago, culture was also fractally fragmented, in a way where "neighboring" villages in a mountainous area would have their own dialects. Then with long-distance travel and electronic communication and media, globalization happened where distant parts of the world started to sync up and converge on some shared part of culture (of course fused with a continuation of the local one), everyone wearing T-shirts, listening to Michael Jackson and rooting for their football/soccer team. If you were dropped to some random place on the planet, you could likely converse with them about some fairly recent cultural cornerstones in entertainment and basic global news topics. But you still likely weren't "dropped" there.

Then the internet appeared and you could suddenly talk to all those people in other parts of the world (or just other parts of your country). But search and discoverability weren't so great, so there was friction. You built communities around shared interests and compatibility of personality, and it required effort and participation. Usenet, forums, IRC. But these isolate you from your neighbors and local connections. And people often explicitly wanted that. Nosy neighbors and know-it-all gossipy townsfolk weren't such a rosy thing; people wanted to escape that, to find peers who understand and validate them and with whom they can build a shared culture.

In schools, subcultures already existed from the 70s and 80s onwards for sure, but they were few, maybe 2 or 3 main camps, and information flow was slow, therefore change was slow. When a new album by a popular band was released, it was the thing for a long time; you didn't get an endless stream shoved in your face, you got the album and listened to it over and over. Today subcultures can't even be meaningfully counted, because people follow personalized streams and come together in random configurations in streamer chats etc.

So basically, in the old internet model there were lots of opportunities to choose from, but it took effort to find them and to forge belonging. Then, with more commercialization, things started to consolidate on fewer platforms. That made it easier for creators to reach a wider pool of users simultaneously, and simpler for users to just learn one or a few platforms. But it also made it easier to pick and choose your "content diet", buffet style: a little from here, a little from there, with little friction. But with so much on offer, how do you choose? Discoverability was still an issue until recommendation algorithms became strong enough to know what would drive engagement. Turn that up to 11 and you get the current day, where even the front-page grid of options is obsolete and you get a single linear feed again, which is like watching TV and channel surfing (pressing the "next channel" button over and over), except it's personalized and never boring.

Of course this applies to many other things as well, such as dating apps etc, which also feed you an algorithmic stream of options with the goal of maximizing profits for the company.

I don't think individual people's rejection of the trend because it "makes me feel much better" will make a dent. In many cases the use of these things isn't mere convenience but implicitly mandatory, because other things are designed around the assumption that people use them. Schools announce stuff to parents in Facebook groups. There are fewer traffic reports on the radio, because people use Waze and Google Maps, which have real-time traffic info and reroute you automatically.

---

But then what might happen? I think we're seeing glimpses of it in the rejection of AI in certain circles of cultural thought leaders, which might grow towards a rejection of more tech. But instead of "makes me feel better", the only actually working mechanism will be social shame, similar to what often appears nowadays when some product turns out to have used AI. If it becomes established that you're obviously a loser if you Shazam a song, or open TikTok, it could flip. Of course companies won't sit idly by watching. What's more likely is that the "rejection" of tech will just lead to other levels of meta-grift and engagement optimization. It may just fizzle out in a whimper of angry malaise and meta-ironic apathy.

whyage 7 hours ago [-]
> The aim of our creation is communication and mutual-transformation

That's a myopic point of view. Personal transformation is just as significant, if not more so. Production-oriented pastimes like painting, gardening, or organizing your stamp collection can do wonders for the mind. The goal can be staying sane in this crazy world rather than producing the best painting ever, growing conversation-starting plants, or showing off your stamp collection. It's about doing for the sake of being.

emporas 12 hours ago [-]
It is knowledge that gets automated, rather than reasoning.

I was thinking of the first solar civilization, which lives entirely in space: near a star, but not on a planet, with no gravitational pull anywhere. They build tubes 10 km long; a dartboard is placed at one end, and the players stand at the other. They throw darts at the board, and each throw takes 5 hours to reach the target. That's their national sport.

Problem is, I have never played darts and I don't know anyone who plays it, so I will ask the LLM to fill in the blanks of how a story based on that game could be constructed. Then I will add my own story on top of that; I will fix anything that doesn't fit, add some stuff, remove some other stuff, and so on.

For me it saves time: instead of asking people about something, hearing them talk about it, or watching them do it, I do data mining on words. Maybe that's more shallow than experiencing it myself or asking people who know about it first hand, but the time it takes to get good-enough information collapses down to 5 minutes.

Depending on how you use it, it can enhance human capabilities or, indeed, mute them.

jen729w 11 hours ago [-]
Oh, turns out ChatGPT generates exactly the level of banality that one would expect.

https://chatgpt.com/canvas/shared/6827fcdd3ec88191ab6a2f3297...

I don't want to read this story. I probably want to read one that a human author laboured over.

visarga 9 hours ago [-]
It would be a mistake to take the banality of current LLM outputs and extrapolate that into the future. Of course they are going to get better. But that is not the point - it is that in the chat room the human and LLM spark ideas off each other. Humans come with their own unique life experience and large context, LLMs come with their broad knowledge and skills.
aorloff 9 hours ago [-]
There is a Borges short story, written in 1941, about "the Library": a supposed collection of all possible permutations of language, even misspellings and gibberish. In many ways, it is extremely prescient of AI.

To cut it short, in the end what Borges proposed is that the meaning comes from the stories, and that all the stories are really repetitions and permutations of the same set of human stories (the Order), and that is what makes meaning.

So all a successful literary AI needs to do is figure out how to retell the same stories we have been telling but in a different context that is resonant today.

Simple, right?

parodysbird 8 hours ago [-]
This is basically a contemporary reframing of the core purpose of Renaissance magic. I suppose aspiring to be a 21st-century John Dee by talking to some powerful chatbot of the future, rather than to angels or elemental beings, does sound a bit exciting, but it is ultimately mysticism all the same.
WhyIsItAlwaysHN 8 hours ago [-]
O3's story is not amazing, but it sure is orders of magnitude more interesting than your example:

https://chatgpt.com/share/68282eb2-e53c-8000-853f-9a03eee128...

I don't think it's possible to generate an acceptable story without reasoning.

That is not to say that I disagree with you. I would prefer to read human authors even if the AI was great at writing stories, because there's something alluring about getting a glimpse into a world that somebody else created in their head.

campers 8 hours ago [-]
There is a huge focus on training LLMs to reason; that ability will slowly (or not so slowly, depending on your timeframe!) but surely improve, given the gargantuan amount of money and talent being thrown at the problem. To what level, we'll have to wait and see.
8note 11 hours ago [-]
hmm

I've been thinking that the knowledge isn't written down, so it can't be automated, which also makes knowledge sharing hard; but the reasoning is automated.

So I've been trying to figure out patterns by which the knowledge does get written down, and so can be reasoned about.

jrvarela56 9 hours ago [-]
My initial hunch, and many answers on this site, say ‘it’s boring, I wouldn’t read that’.

There’s something to that: a good author synthesizes experiences into sentences/paragraphs, making the reader feel things via text.

I have a feeling LLMs can’t do that because they are trained on all the crap that’s been written, and it’s hard to fake being genuine.

But I agree you can generate any amount of filler/crap. It is useful, but what I got from GP was ‘ultimately, what’s the point of that?’. Hopefully these tools help us wake up to what is important.

fennecbutt 13 hours ago [-]
99% if not 100% of human thought and general output is derivative. Everything we create or do is based on something we've experienced or seen.

Try to think of an object that doesn't exist, and isn't based on anything you've ever seen before, a completely new object with no basis in our reality. It's impossible.

Writers made elves by adding pointy ears to a human. That's it.

parodysbird 8 hours ago [-]
To emphasize again part of the post above: "The aim of our creation is communication and mutual-transformation".

When I write a poem in a birthday card for my wife to give her on her birthday, very little of the "meaning" that will be communicated to (and more importantly with) her is really from some generic semantic interpretation of the tokens. Instead, almost all of the meaning will come from it being an actual personal expression in a shared social context.

If I didn't grasp that second part, I might actually think that asking ChatGPT to write the poem and then copying it out in my handwriting to give to her is about the same as if the same tokens had come from genuine personal creation. Over prolonged interaction, it could lead to a shared social context in which she generally treats certain things I say as little different from ChatGPT output. Thus the shared social context and relationship degenerates into something fairly inhuman (or "robotic", as the above post calls it).

jonplackett 7 hours ago [-]
Someone just the other day told me about how they used to have a WhatsApp group where they’d share these hand-made memes. Just a bunch of guys photoshopping dumb stuff. It went on for years.

One day one of them discovers AI and posts something made with AI - initially it's great, it's much better quality than what they could photoshop. Everyone jumps on board.

But after a day or so, the joke is over. The love has gone. The whole thing falls apart and no-one posts anything anymore.

It turns out - as you say - that the meaning - founded on the insight and EFFORT to create it - was more important than the accuracy and speed.

parodysbird 7 hours ago [-]
Oh yeah, this is exactly how my group chats went. We still post some good (in our context) memes and have fun, but not an avalanche of poorly filtered slop. A joke for the group can still be crafted via an LLM when used judiciously and intentionally as part of the bit. But to be judicious, it's important that the human is the one doing the sending, in the right moment; that way the human is still the one communicating.

When WhatsApp originally inserted their AI bot in the chats, it got very annoying very quickly and we agreed to all never invoke it again. It's just a generative spam machine without the curation.

Tallain 12 hours ago [-]
This is an alarmingly reductionist statement that I cannot believe is made in good faith. If it somehow is, it's based on an abundance of ignorance that only highlights the importance of education.

Are you genuinely arguing that LLM output is derivative, and human output is derivative, therefore they're equal? Why don't you pop that thesis into ChatGPT and see how it answers.

bccdee 10 hours ago [-]
No, that's not true.

Quick, what's 51 plus 92?

Now: Did you think back to a time someone else added these numbers together, or are you doing it yourself, right now, in your head? I'm sure it's not the first time these numbers have ever been summed, but that doesn't matter. You're doing it now, independently.

Just because something isn't unique, doesn't make it derivative. We rediscover things every day.

BeFlatXIII 2 hours ago [-]
> Just because something isn't unique, doesn't make it derivative. We rediscover things every day.

This is the argument I use to dunk on ranters who spam conversations with “How can you say Christopher Columbus discovered the new world when there were already people living there?”

treebirg 3 hours ago [-]
But I do know what numbers are. I've also done addition before, so I know what the steps are. The result of 51 + 92 derives from (at least) these two concepts, which derive from others, and so on. Maybe I'm stretching the meaning of derivative here, but to me derivative doesn't mean strictly recalling something verbatim.
Nevermark 11 hours ago [-]
Go with 99.9%. But not 100%.

Someone imagined space and time could be a deformed fabric. That was new.

In minor and major ways, new ideas are found or emerge from searches for solutions to problems from science to art. Or exploration of things in new combinations or from a previously untapped viewpoint.

Most people are not looking hard for anything beyond what they know. So not likely to find anything new.

But many people try new things, or try to improve or vary something in a direction that is not easy, and learning something nonobvious and new is the “price” they must pay to succeed. Or a bonus they are paid for pushing through a thicket, even if they don’t succeed at what they set out to do.

pwndByDeath 7 hours ago [-]
Not really new; it came from observations, or from imagination of observed things.
MadcapJake 12 hours ago [-]
> Try to think of an object that doesn't exist, and isn't based on anything you've ever seen before, a completely new object with no basis in our reality. It's impossible.

This is an outrageous thought experiment. Novelty is creating new connections or perceiving things in new ways; you can't just say "try to have a eureka moment, see! impossible". You can't prompt-engineer your own brain.

In fact, there's some research about eureka moments rewiring our brain. https://neurosciencenews.com/insight-memory-neuroscience-289...

mr_toad 13 hours ago [-]
> Writers made elves by adding pointy ears to a human.

Now that’s reductionist to the point of being diminutive.

milliams 3 hours ago [-]
Elves are wonderful. They provoke wonder. Elves are marvellous. They cause marvels. Elves are fantastic. They create fantasies. Elves are glamorous. They project glamour. Elves are enchanting. They weave enchantment. Elves are terrific. They beget terror. The thing about words is that meanings can twist just like a snake, and if you want to find snakes look for them behind words that have changed their meaning. No one ever said elves are nice. Elves are bad.

― Terry Pratchett, Lords and Ladies

BobbyTables2 12 hours ago [-]
Hey, no need to get short!

We should try to be the bigger person.

That’s really the long and short of it.

TechDebtDevin 12 hours ago [-]
It's not; that's why the term humanoid exists.
musicale 13 hours ago [-]
> Writers made elves by adding pointy ears to a human. That's it.

Humans have been interested in supernatural beings for thousands of years. Their appearance is usually less important than their powers and abilities.

The word is present in Old English and Old Norse, and elves appear in Norse mythology.

DavidPiper 13 hours ago [-]
That is a nonsense definition of creativity. The parent also wasn't suggesting - as far as I can read - that creativity is defined solely in the realm of the "truly novel" (or "isn't based on anything you've ever seen before").

All creativity is a conversation between our own ideas and what already exists.

Consider the unused soundtrack to James Cameron's Avatar [0][1], where ethnomusicologists set out to create a kind of music that had never been heard before.

They succeeded. But it was ultimately scrapped for the film because - by virtue of it being so different from any music anyone had ever heard before - it was not remotely accessible to audiences, and the movie suffered as a result.

To argue that work is not creative because it is still based on "music" is absurd.

[0] https://www.youtube.com/watch?v=tL5sX8VmvB8

[1] https://ethnomusicologyreview.ucla.edu/journal/volume/17/pie...

myko 12 hours ago [-]
Incredibly interesting, thanks for sharing
vaylian 6 hours ago [-]
> Try to think of an object that doesn't exist, and isn't based on anything you've ever seen before, a completely new object with no basis in our reality. It's impossible.

That's easy. The hard part is to explain it to other people, because we lack a shared background and terminology to explain it.

jonplackett 7 hours ago [-]
I think you misunderstand the point. It's about intention. Are you creating this thing for the purpose of transforming or communicating? Or are you just making it for some businessy reason?

Yes, elves are derivative, as was a lot of the Tolkien world in a way - being intentionally based on WW1 - but its intention was to create something beautiful, amazing, communicative and transformational.

jen729w 11 hours ago [-]
> Try to think of an object that doesn't exist, and isn't based on anything you've ever seen before, a completely new object with no basis in our reality. It's impossible.

Pick up an Iain M. Banks book, my friend.

musicale 13 hours ago [-]
> Everything we create or do is based on something we've experienced or seen.

I would add a couple of things to that. First, humans (like other animals) have instincts and feelings; even newborns can exhibit varying personality traits as well as fears and desires. It's certainly useful to fear things like spiders, snakes, or abandonment without prior experience.

Second, an important part of experience is inner life - how you personally perceive, feel, and experience things. This may be very different from person to person.

Andrex 12 hours ago [-]
What really fascinates me is gender-based toy preferences at <2 years old. It's very consistent that boys like race cars and action figures, even when it's their first exposure.

(I do not participate in culture wars, this fact just straight up fascinates me as a non-masculine gay guy.)

socalgal2 9 hours ago [-]
I'd be curious how we know they aren't exposed - 1 year is a long time to see TV shows, TV commercials, toys with pictures of target audience, picture books, etc...
gitremote 1 hours ago [-]
Cars were invented in the early 1900s and the vast majority of human existence was in a world without cars. There cannot be an innate preference for cars, which were a very recent invention.
bowsamic 9 hours ago [-]
You have fallen into the very trap he is criticising: you are entirely focussed on the product and how it differs from other ones, and have no sense of your individual journey of thinking being relevant
voidhorse 12 hours ago [-]
It astonishes me sometimes how completely stupid and reductive some HN takes on arts and creativity can be. I am astounded continually at how we can produce humans who are so capable in one sphere of life and so ignorant and oblivious of others...yet all too willing to make dismissive claims about them...

Creativity is much more than the derivative production of artifacts. What the OP is driving at is that creativity is a process of human connection and communication—you can see this most clearly in the art of interpretation. A single literary work has an almost uncountable number of possible interpretations, and a huge element of its existence in the world as a piece of art is the discussions and debates that emerge over those interpretations, and how they shape us as individuals, instill morals, etc. Quite a lot more than "making elves by adding pointy ears to humans".

Your post stinks of the very gross consumerist mindset the OP called out. The creation and preservation of meaning is about way more than the production of fungible, decontextualized objects--it's all about the mediation and maintenance of human relationships through artifacts. The fact that the elves have pointy ears doesn't even begin to scratch at their actual meaning (e.g. they exist in a world with very big real problems that affect you and me too, such as race relations, and exaggerated features estrange these relations so as to make them more discernible to us and allow us to finally see the water we swim in).

If humans stop engaging with these processes, it's reasonable to believe that a lot of that semiotic richness, which is much of what, in my opinion, makes us human and not just super smart animals in the first place, will be lost.

krelian 5 hours ago [-]
I'm in full agreement with you on the flagrant inability of a sizeable part of the HN crowd to understand and value the arts.

Throughout history man has been celebrated and distinguished as the rational animal. As master of the earth, this animal in our days dedicates its brightest minds to the continual increase of economic growth. Ask the rational man what growth is good for, and after a few exchanges he will perhaps say that it ultimately improves our quality of life and even extends it. It might even allow the human race to flourish beyond earth and thus prevail long after resources on earth are depleted. But ask him then: why is improving the quality of life a good thing at all? Is it just a meaningless cycle in which we improve the quality of life so that we can then improve the quality of life even further? No. Ask an individual human (in contrast to the ultra-rationalist who thinks they represent the human race as a whole) what they work for, what they strive to achieve, what quality of life ultimately means to them, and you will end up with happy times spent among family and friends. With meaningful moments listening to music, watching a film, reading a book. With time spent in creative endeavors that are totally their own. The rational animal in its hubris forgot what it thinks for, and trapped itself in an endless cycle where the true meaning of being human is hidden from the sight of many.

But I think a wake-up call is due very soon. The rational animal is about to discover that the rationality it prides itself on was merely a sample of the true possibility. From the rational animal we have been relegated to another animal with some rational capability. As we slowly realize how futile our attempts at thinking are, we'll realize to our horror that the gift we are left with is the ability to recognize the futility and inadequacy of those attempts. Hopefully then we'll decide to retreat back into what truly makes us human, to what is ours, to what quality of life really means.

jofla_net 27 minutes ago [-]
I'm reminded of that My Dinner with Andre monologue, and totally agree.
alganet 13 hours ago [-]
I'm not so sure about it.

Maybe it's like that because there aren't many novel opportunities for varied experiences nowadays.

The pointy ear sounds trivial in our experience, but it is radically different than ordinary everyday thought when observed as a piece of a whole imagined new world.

Of course, pointy ears are not a novelty anymore. But that's beside the point. By the time they were conceived, human experience was already homogenized.

The idea space for what an object is has been depleted by exploration. People have already tried everything. It's kind of like saying that it is impossible to come up with a new platonic solid (also an idea space that has been exhausted).

Any novel thought is bound to be nameless at first, and it becomes novel by trying to use derivation to define an unknown observation, not as a basis for it.

NobodytheHobbit 6 hours ago [-]
You're trying to expand the human experience instead of individual human experience, which is really yours from your perspective and mine from mine, if I can be redundant by enumerating. The frustration comes from the sacrifice of individual experience to this weird aggregated experience in the machine. It will push the capability of technology, but does that serve the aim of luxury made easy for the many to acquire, as tech is supposed to do? What does it profit a person to gain the whole world but lose the very thing that makes them them? It feels systemically dehumanizing.
bowsamic 9 hours ago [-]
> I don't mean that their jobs will be automated: I mean that they will cede sapience and resign to becoming robotic.

Exactly, there’s a huge section of humanity that actively wants to give away its humanity. They want to reduce themselves to nothing. Because, as you say, they cannot understand anything as having value other than economic artefacts

don_neufeld 19 hours ago [-]
Completely agree.

From all of my observations, the impact of LLMs on human thought quality appears largely corrosive.

I’m very glad my kid’s school has hardcore banned them. In some classes they only allow students to turn in work that was done in class, under the direct observation of the teacher. There has also been a significant increase in “on paper” work vs work done on a computer.

Lest you wonder “what does this guy know anyways?”, I’ll share that I grew up in a household where both parents were professors of education.

Understanding the effectiveness of different methods of learning (my dad literally taught Science Methods) was a frequent topic. Active learning (creating things using what you're learning about) is so much more effective than passive, reception-oriented methods. I think LLMs largely support the latter.

zdragnar 18 hours ago [-]
Anyone who has learned a second language can tell you that you aren't proficient just by memorizing vocabulary and grammar. Having a conversation and forming sentences on the fly just feels different- either as a different skill or using a different part of the brain.

I also don't think the nature of LLMs as a negative crutch is new knowledge per se; when I was in school, calculus class required a graphing calculator, but the higher end models (TI-92 etc) that had symbolic equation solvers were also banned, for exactly the same reason. Having something that can give an answer for you fundamentally undermines the value of the exercise in the first place, and cripples your growth while you use it.

JackFr 15 hours ago [-]
Well I can extract a square root by hand. We all had to learn it and got tested on it.

No one today learns that anymore. The vast, vast majority have no idea and I don’t think people are dumber because of it.

That is to say, I think it’s not cut-and-dried. I agree you need to learn something, but sometimes it’s okay to use a tool.

zdragnar 14 hours ago [-]
Extracting a square root by hand is rather different in scope from reducing/simplifying equations entirely. The TI-92 could basically do all of your coursework for you up to college level, if memory serves.

The real question isn't "is it okay to use a tool" but "how does using a tool affect what you learn".

In the cases of both LLMs and symbolic solving calculators, I believe the answer is "highly detrimental".

mistercow 3 hours ago [-]
> No one today learns that anymore. The vast, vast majority have no idea and I don’t think people are dumber because of it.

Arguably, the kind of person who was helped by learning to do that by hand still learns to do it by hand, but because of curiosity rather than because a teacher told them to.

I remember being thirteen and trying to brute force methods for computing the square root. I didn’t have the tools yet to figure out how to do it in any systematic way, and the internet wasn’t at a point yet where it would have even occurred to me to just search online. Wikipedia wouldn’t exist for another two years.

I probably finally looked it up at some point in high school. I’m not sure exactly when, but I remember spending a lot of time practicing doing a few iterations in my head as a parlor trick (not that I ever had the opportunity to show it off).
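(The method isn't named above, but the classic few-iterations-in-your-head trick is Heron's method, i.e. Newton's method applied to x^2 = n. A minimal sketch, assuming that's the one:)

    def heron_sqrt(n: float, iterations: int = 4) -> float:
        # Approximate sqrt(n) by repeatedly averaging a guess with n / guess.
        x = n / 2 if n >= 2 else 1.0  # any positive starting guess works
        for _ in range(iterations):
            x = (x + n / x) / 2  # each step roughly doubles the correct digits
        return x

    print(heron_sqrt(10))  # ~3.16228 after only a few iterations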

If I were thirteen and curious about that now, I’d probably just ask ChatGPT. Then I’d have a whole follow up conversation about how it was derived. It would spit a lot of intimidating LaTeX at me, but unlike with Wikipedia, I’d be able to ask it to explain what those things meant.

This is the thing I don’t get when people talk about LLMs’ impact on education. Everybody focuses on cheating, like learning is inherently a chore that all students hate and must be carefully herded into doing despite themselves.

But that’s a problem with school, not learning. If your actual, self-motivated goal is to learn something, LLM’s are an incredible tool, not a hindrance.

smcleod 14 hours ago [-]
I very much agree with your sentiment here.

I actually tried to encapsulate that to some degree in something I wrote recently (perhaps poorly?) - https://smcleod.net/2025/03/the-democratisation-paradox-what...

Mikhail_Edoshin 11 hours ago [-]
Using a tool like that is opposite to mastering the skill. There's no royal road to mastery and never will be. One does not have to master all skills, of course, and may do well not mastering any (or mastering dark ones).
BobbyTables2 12 hours ago [-]
The manual methods are also the foundation for higher approaches involving approximation and iterative solutions. These are widely used in engineering and science.

Pressing a calculator key doesn’t give the same insight.

mattigames 9 hours ago [-]
Yes, they are dumber because of it; not in the mental-retardation kind of way, but in a more nuanced way: among other things, they lose the mental work you put into trying to find a simpler way than the one the professor is teaching you, and the understanding of numbers that such attempts can give you, even if they are unsuccessful.
drdeca 12 hours ago [-]
Huh? While I essentially never have need to compute a square root by hand (unless it is a perfect square of course), shouldn’t one know how one would?
johnmaguire 10 hours ago [-]
Why should one? Perhaps they should if it's relevant to their work, daily routine, or interests. But if they have no need for it?
mattigames 9 hours ago [-]
Needs are all fabricated. Ludwig Wittgenstein said "the limits of my language are the limits of my world"; the same thing happens with logical thinking and all its tools, including math.
skydhash 17 hours ago [-]
Same with drawing, which is easy to teach but hard to master because of the coordination between eye and hand. You can trace a photograph, but that just bypasses the whole point, and you don’t exercise any of the knowledge.
socalgal2 9 hours ago [-]
I am waiting for the day (maybe it's already here) when I can talk to an LLM to practice my 2nd language. It can correct everything I say, it can talk forever, it can challenge me to use new grammar or vocabulary. Note: I can speak all day in my 2nd language with friends, but I wouldn't give a business presentation, nor could I explain, like a native, how something technical works. If I watch a TV show I might understand 30%-99%, but the more lawyer/military/government/science parts there are, the more it's beyond my current level.

Getting exposure there is hard. Talking to friends just means more practice with what I already know but an LLM could help me practice things outside that area.
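A sketch of what such a loop could look like today with the openai Python client (the model name and system prompt here are placeholders, not recommendations, and it assumes an OPENAI_API_KEY in the environment):

    from openai import OpenAI

    client = OpenAI()
    history = [{"role": "system", "content":
        "You are a patient conversation partner in my second language. "
        "Reply at my level, briefly correct my mistakes, and nudge me toward "
        "vocabulary and grammar slightly beyond what I produce."}]

    while True:
        history.append({"role": "user", "content": input("> ")})
        reply = client.chat.completions.create(model="gpt-4o", messages=history)
        answer = reply.choices[0].message.content
        history.append({"role": "assistant", "content": answer})
        print(answer)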

edanm 8 hours ago [-]
For many languages, this is already something you can do.
makeitdouble 11 hours ago [-]
> the higher end models (TI-92 etc) that had symbolic equation solvers were also banned

I'm surprised it was a problem in the first place. Don't equation-solving exercises require you to show the intermediate steps? You can't just put "x=5" as a one-liner answer.

nbernard 5 hours ago [-]
I don't remember if it was the case for the TI-92+, but some calculators can show the intermediate steps, or at least some of them.
fennecbutt 13 hours ago [-]
Feels different, comes naturally, without conscious thought, just like we don't focus on beating our hearts.

And I agree that learning by practicing a skill is best. But you and I both know the school system has run on rote memorisation for hundreds of years at least, and still does.

flysand7 16 hours ago [-]
Another case in point: memorizing vocabulary and grammar, although it could seem like an efficient way to learn a language, is incredibly unrewarding. I've been learning Japanese from scratch, using only real speech to absorb new words, without dictionaries or much else. The first feeling of reward came immediately, when I learned that "arigatou" means thanks (although I terribly misheard how the word sounded, but hey, at least I heard it). Then after 6 months, when I could catch and understand some simple phrases. After 6-7 years I can understand about 80% of any given speech, which is still far from everything, but I gotta say it was a good experience.

With LLMs giving you ready-made answers, I feel like it's the same: it's not as rewarding, because you haven't obtained the answer yourself. Although it did feel rewarding when I was interrogating an LLM about how CSRF works and it said I asked a great question, when I asked whether it only applies to forms, because it seems like fetch has a different kind of browser protection.

layer8 14 hours ago [-]
How many hours would you estimate you watched (I assume it was video, not just audio) in those years? What kind of material? Just curious.
flysand7 2 hours ago [-]
Mostly anime. Surprisingly, not that much; I think somewhere in the ballpark of 100 titles. In the beginning I was also watching some grammar tutorials on YouTube to get started with grammar quicker (otherwise convergence on a solution would be too slow).

Contrary to what I said, I actually did use dictionaries, but the point I was trying to make is that rather than memorizing phrases in advance, I used them to translate things I thought I heard.

BlueTemplar 3 hours ago [-]
If you used subtitles over audio, then why would you avoid dictionaries? Purely for the reward of treating it as a puzzle? (Since you would have to figure out which word corresponds to which concept in a phrase.)
genewitch 11 hours ago [-]
Yeah and I'm of the age when teachers in all grades would say "you're not going to carry around a calculator your whole adult life"

Hilarious miscalculation.

hammock 15 hours ago [-]
> I’m very glad my kid’s school has hardcore banned them.

What does that mean, I’m curious?

The schools and university I grew up in had a “single-sanction honor code” which meant if you were caught lying or cheating even once you would be expelled. And you signed the honor code at the top of every test.

My more progressive friends at other schools that didn’t have an honor code happily poo-pooed it as a repugnantly harsh, old-fashioned standard. But I don’t see a better way today of enforcing “don’t use AI” in schools.

don_neufeld 14 hours ago [-]
The school has an academic honesty policy which explicitly bans it, under “Cheating”, which includes:

“Falsifying or inventing any academic work, including the use of AI (ChatGPT, etc)”

Additionally, as mentioned, the school is taking actions to change how work is done to ensure students are actually doing their own work - such as requiring written assignments be completed during class time, or giving homework on physical paper that is to be marked up by hand and returned.

Apparently this is the first year they have been doing this, as last year they had significant problems with submitted work not being authored by students.

This is in an extremely competitive Bay Area school, so there can be a lot of pressure from parents on students to make top grades, and sometimes that has negative side effects.

djhn 7 hours ago [-]
Asking as a non-American non-school-pupil-parent: what does it mean for a school to be competitive in this context? Competitive entry into a school I understand, but that threshold has been cleared. Isn’t US college admission based on essays and standardised tests like GMAT, SAT, GRE?
BlueTemplar 1 hours ago [-]
Physical paper isn't going to save them.

(Also, typing was only appropriate for only some classes anyway.)

garrickvanburen 15 hours ago [-]
I don’t see the problem.

I’m not sure how LLM output is distinguishable from Wikipedia or World Book.

Maybe? And if the question is “did the student actually write this?” (which is different from “do they understand it?”), there are lots of different ways to assess whether a given student understands the material…that don’t involve submitting typed text but still involve communicating clearly.

If we allow LLMs - like we allow calculators - just how poor LLMs are will become far more obvious.

hammock 14 hours ago [-]
If LLMs are allowed, then sure. However, the case I am talking about is when LLMs are explicitly banned from use.
BobbyTables2 12 hours ago [-]
Oral presentation without notes and a live Q&A would be some ways…
StefanBatory 2 hours ago [-]
That's a surprisingly "strict" (in quotes for obvious reasons) honor code.

I'm at a uni in Poland; not top tier, but at the same time not bad either, slightly above average.

The amount of cheating I saw - it's almost mundane. Teachers know this, and so do we...

BobbyTables2 12 hours ago [-]
Today such infractions might result in a verbal warning…
avaika 16 hours ago [-]
This reminds me how back in my school days I was not allowed to use the internet to prepare research on random topics (e.g. a history essay). It was the late 90s, when the internet started to spread. Anyway, teachers forced us to use offline libraries only.

Later, at university, I studied engineering, and we were forced to prepare all the technical drawings manually in the first year of study. Like literally with pencil and ruler. Even though computer graphics were widely used and were the de facto standard.

Personally I don't believe a hardcore ban will help with this sort of thing. It won't stop the progress either. It's much better to help people learn how to use things instead of forcing them to deal with "old school" stuff only.

don_neufeld 15 hours ago [-]
I was expecting some response like this, because schools have “banned” things in the past.

While this is superficially similar, I believe we are talking about substantially different things.

Learning (the goal) is a process. In the case of an assignment, the resulting answer/work product, while it is what is requested, is critically not the goal. However, it is what is evaluated, so many confuse it with the goal (“I want to get a good grade”).

Anything which bypasses the process makes the goal (learning) less likely to be achieved.

So, I think it is fine to use a calculator to accelerate your use of operations you have already learned and understand.

However, I don’t think you should give 3rd graders calculators that just give them the answer to a multiplication or division when they are learning how those things work in the first place.

Similarly, I think it’s fine to do research using the internet to read sources you use to create your own work.

Meanwhile, I don’t think it’s fine to do research using the internet to find a site where you can buy a paper you can submit as your own work.

Right now, LLMs can be used to bypass a great deal of process, which is why I support them not being used.

It’s possible, maybe even likely that we’ll end up with a “supervised learning by AI” approach where the assignment is replaced by “proof of process”, a record of how the student explored the topic interactively. I could see that working if done right.

pca006132 5 hours ago [-]
Yeah, I remember reading someone say that you won't use a forklift in a gym. I think this is the same idea.

The problem is really about how to evaluate performance, or how to incentivize students to actually work on their exercises.

johnisgood 16 hours ago [-]
You can learn a lot from LLMs though, same with, say, Wikipedia. You need curiosity. You need the desire to learn. If you do not have it, then of course you will get nowhere, LLMs or no LLMs.
layer8 14 hours ago [-]
From the article:

“The irony is that I now know more than I ever would have before AI. But I feel slightly dumber. A bit more dull. LLMs give me finished thoughts, polished and convincing, but none of the intellectual growth that comes from developing them myself. The output from AI answers questions. It teaches me facts. But it doesn’t really help me know anything new.”

I think the thesis is that with AI there is less need and incentive to “put the work in” instead of just consuming what the AI outputs, and that in consequence we do the needed work less and atrophy.

johnisgood 6 hours ago [-]
I know; that is why you need the desire, the will to learn. I have been using LLMs for this, so I know it is possible. I understand what you are saying though, and it is indeed a sad state of affairs, but then again, this was already the case with search engines, Wikipedia, and so forth, long before LLMs.

Again, you can truly learn a lot using LLMs, but you have to approach it properly. It does not have to be just "facts", and sometimes, even learning "facts" is learning.

I can use LLMs and learn nothing, but I can use LLMs to learn, too!

layer8 49 minutes ago [-]
Yes, but previously you didn’t need the desire as much, because you were more or less forced: there was no easy way. The fact that you now need that internal motivation means it will happen less, where previously it happened by default.
creata 13 hours ago [-]
Honestly, I doubt that LLMs are great for learning. Too often, they output plausible-sounding things that turn out to be completely wrong. I know Wikipedia can have its problems with factuality, but this is on an entirely different level. (And yes, they do this even when they're allowed to do web searches and "reason".)

The effort of verifying everything it claims may or may not outweigh the effort of other means of learning.

azinman2 16 hours ago [-]
Never underestimate laziness, or willingness to take something 80% as good for 1% of the work.

So most are not curious. So what do you do for them?

johnisgood 16 hours ago [-]
You have to somehow figure out the root cause of the laziness, or if it really is laziness, and not something else, e.g. a mental health issue.

Plus, many kids fail school not because of laziness, but because of their toxic environment.

Swizec 14 hours ago [-]
> if it really is laziness, and not something else, e.g. a mental health issue.

Kids optimize. When I was in high school I was fully capable of getting straight F's in a class I didn't care about and straight A's in a class I enjoyed.

Why bother learning chemistry when you could instead spend that time coding cool plugins and websites in PHP that thousands of internet strangers are using? I really did build one of the most popular phpBB plugins, and I knew I was going to be a software engineer. Not that my chemistry professor cared about any of that, or even understood what I was talking about.

johnisgood 3 hours ago [-]
What you just described is irrelevant to what we are discussing.

As for what you said: yeah, I got 1s (Fs) because I was too busy coding and reading books on philosophy as a 14-year-old.

Swizec 8 minutes ago [-]
How is it irrelevant? Kids will always cheat their way through classes they feel are a distraction. Even the super smart Type A kids.

Hell, all humans do that. You use every resource available to get out of dealing with things that are not your priority. This means you will never be good at those things and that’s fine. You can’t be good at everything.

BeFlatXIII 2 hours ago [-]
Leave ‘em behind and win the race.
BobbyTables2 12 hours ago [-]
Realistically, putting them into trades sooner could almost be a good thing. Kids who don’t want to learn end up dragging down the class and distracting those who do.

But, these are kids… Hard to argue that adults should selectively deny education when it is their responsibility to do otherwise.

We don’t neglect the handicapped because it is inconvenient to provide them with assistance.

snackernews 16 hours ago [-]
Can you learn a lot? Or do you get instant answers to every question without learning anything, as OP suggests?
calebkaiser 14 hours ago [-]
You can learn an incredible amount. I do quite a bit of research as a core part of my job, and LLMs are amazing at helping me find relevant research to help me explore ideas. Something like "I'm thinking of X. Does this make sense and do you know of any similar research?" I also mentor some students whose educational journey has been fundamentally changed by them.

Like any other tool, it's more a question of how they're used. For example, I've seen incredible results for students who use ChatGPT to interrogate ideas as they synthesize them. So, for example, "I'm reading this passage PASSAGE and I'm confused about phrase X. The core idea seems similar to Y, which I am familiar with. If I had to explain X, I'd put it like this: ATTEMPT. Can you help me understand what I'm missing?"

The results are very impressive. I'd encourage you to try it out if you haven't.

vendiddy 14 hours ago [-]
I've used it these past few months to better understand the PDF format, Nix, and a few other technical concepts.

I try to use AI to automate things I already know and force myself to learn things I don't know.

It takes discipline/curiosity but it can be a net positive.

johnisgood 6 hours ago [-]
Thank you, and the previous commenter. I am tired of trying to convince people that LLMs can be a really good tool for learning. :/

They should just try it. Start with something you actually know, to see how useful it might be to you with your prompts.

johnisgood 16 hours ago [-]
You can learn a lot, if you want to. I can ask it a question regarding the pharmacodynamics of some medication, and then ask more and more questions, and learn. Similarly, I could pick up a book on pharmacology, but LLMs can definitely make learning easier.
hooverd 11 hours ago [-]
Wikipedia isn't going to write your paper for you. I don't see the difference between an LLM and one of those paper writing services in this context.
johnisgood 6 hours ago [-]
We are talking about learning. You can learn much more from LLMs than from Wikipedia, because if you do not understand something, you can always ask the LLM about it, and it will reply in whatever way helps you learn best.
BobbyTables2 12 hours ago [-]
Ironically, states now use AI to grade student essays in standardized tests.

English teachers even recommend Grammarly…

Students are given a “prompt” for writing.

I wish other schools had the conviction you describe…

guyfhuo 11 hours ago [-]
> Students are given a “prompt” for writing

Students were always given a “prompt” for writing.

That’s why tech companies used that term, rather than the other way around.

GeoAtreides 4 hours ago [-]
> states now use AI to grade student essays in standardized tests.

citation needed

raincole 9 hours ago [-]
> Students are given a “prompt” for writing.

What do you think "prompt" means?

Or you're saying the students are asked to mimic AI's style?

mr_toad 13 hours ago [-]
> I’m very glad my kid’s school has hardcore banned them.

Schools will ban anything they think of as sinister.

jebarker 19 hours ago [-]
> nothing I make organically can compete with what AI already produces—or soon will.

No LLM can ever express your unique human experience (or even speak from experience), so on that axis of competition you win by default.

Regurgitating facts and the mean opinion on topics is no replacement for the thoughts of a unique human. The idea that you're competing with AI on some absolute scale of the quality of your thought is a sad way to live.

steamrolled 18 hours ago [-]
More generally, prior to LLMs, you were competing with 8 billion people alive (plus all of our notable dead). Any novel you could write probably had some precedent. Any personal story you could tell probably happened to someone else too. Any skill you wanted to develop, there probably was another person more capable of doing the same.

It was never a useful metric to begin with. If your life goal is to be #1 on the planet, the odds are not in your favor. And if you get there, it's almost certainly going to be unfulfilling. Who is the #1 Java programmer in the world? The #1 topologist? Do they get a lot of recognition and love?

harrison_clarke 16 hours ago [-]
a fun thing about having a high-dimensional fitness function is that it's pretty easy to not be strictly worse than anyone
bconsta 14 hours ago [-]
pareto adequate
musicale 12 hours ago [-]
> Who is the #1 Java programmer in the world?

James Gosling, of course[1]. Next question...

> The #1 topologist?

I'm not a mathematician, but... maybe Akshay Venkatesh, who won the Fields Medal in 2018?

[1] https://news.ycombinator.com/item?id=44005008

imhoguy 1 hours ago [-]
But inevitably you lose in the flood of enshittified creations made with LLMs.

I think we will come back to our roots, to simple in-person creation: pen and paper, declamation, theatre, live performance, hand painting, improvisation, handmade work.

Maybe not for everybody, but it will be there for (mentally) free people.

computerthings 16 hours ago [-]
[dead]
taylorallred 19 hours ago [-]
Among the many ways that AI causes me existential angst, you've reminded me of another one. That is, the fact that AI pushes you towards the most average thoughts. It makes sense, given the technology. This scares me because creative thought happens at the very edge. When you get stuck on a problem, like you mentioned, you're on the cusp of something novel that will at the very least grow you as a person. The temptation to use AI could rob you of that novelty in favor of what has already been done.
BeFlatXIII 2 hours ago [-]
I was playing around with AI autocomplete, and found it to be good for a month or two. Then, it suggested I upgrade the model to match my new computer’s increased performance. It’s useless now. The worse model was usable for creative writing and chatrooms; the new models are fit strictly for business professional communications.
worldsayshi 16 hours ago [-]
I feel there's a glaring counterpoint to this. I have never felt more compelled to try out whatever coding idea pops into my head. I can make Claude write a PoC in seconds to make the idea more concrete. And I can turn it into a good enough tool in a few afternoons. Before this, all those ideas would just never materialize.

I mean, I get the existential angst though. There's a lot of uncertainty about where all this is heading. But, and this is really a tangent, I feel that the direction of it all lies at the intersection of politics, technology, and human nature. I feel like "we the people" hand a walkover to powerful actors if we do not use these new powerful tools in service of the people. For one - to enable new ways to coordinate and organise.

perrygeo 13 hours ago [-]
Good point. It's not that AI is "pushing us" towards anything. AI can be a muse that elevates our creativity. IF we use it that way. But do we use it that way? I think there will be some who do.

The majority of users seem to want convenience at any expense. Most are unconcerned with a loss of agency, almost enthusiastic about it if it removes the labor of thinking.

worldsayshi 6 hours ago [-]
Agency only goes away if control of AI is ultimately centralized. If we end up in a world where anyone can run good-enough models on consumer devices and we can install our own models into off-the-shelf humanoid robots, I don't see that we have lost agency.
MoonGhost 15 hours ago [-]
> AI pushes you towards

That's an interesting point. But here is the thing: you are supposed to drive. Not the AI god. Look at it as an assistant whom you can interrupt, instruct, correct, and ask to redo. While focusing on the 'what', you can delegate some of the 'how' problems to it.

taylorallred 1 hours ago [-]
Yeah, this is a fair point. In honesty, the attempts I have made to have GPT help me think creatively have usually left me disappointed, feeling like it was picking safe, middle-of-the-road solutions. That could be down to my prompting skills, but I also tend to view LLMs as more of a fuzzy information-retrieval tool than a creative/reasoning one. It just hasn’t shown me original ideas that seem compelling to me yet (maybe I just need to beg it to be more “original”).
abakker 13 hours ago [-]
I, for one, think directing subordinates to do something I could be doing kinda … sucks? Like, I get that’s how you have to work with LLMs, but it isn’t a fun thing to do for me.
taylorallred 1 hours ago [-]
Same. I think some people have a mind that is more suited for managing other minds and that’s not me.
Cipater 7 hours ago [-]
Do you do every single thing that you are capable of doing yourself?
BlarfMcFlarf 2 hours ago [-]
Non-hierarchical collaboration is the option you are excluding: where you accept pushback and feedback because you know it comes with creative vision and perspective you lack. You can do creative things with other creative people.
abletonlive 16 hours ago [-]
[dead]
ay 16 hours ago [-]
Very strange. Either the author uses some magic AI, or I am holding it wrong. I have used LLMs for a couple of years now, as a nice tool.

Besides that:

I have tried using LLMs to create cartoon pictures. The first impression is “wow”; but after a bunch of pictures you see the evidently repetitive “style”.

Using LLMs to write poetry is also quite cool at first, but after a few iterations you see the evidently repetitive “style”, which is bland and lacks depth and substance.

Using LLMs to render music is amazing at first, but after a while you can see the evidently repetitive style - for both rhymes and music.

Using NotebookLM to create podcasts at first feels amazing, as if about to open the gates of knowledge; but then you notice that the flow is very repetitive, and that the “hosts” don’t really show enough understanding to make it interesting. Interrupting them with questions somewhat dilutes this impression, though, so the jury is still out here.

Again, with generated texts: they have a distinct metallic taste that is hard to ignore after a while.

The search function is okay, but with a little bit of a nudge one can influence the resulting answer by a lot, so I am wary of blindly taking the “advice”: I always recheck it, and try to run two competing conversations where I nudge the LLM into taking opposing viewpoints, and learn from both.

Using AI to generate code: simple things are OK, but for non-trivial items it introduces pretty subtle bugs, which require me to make sure I understand every line. This bit is the most fun - the bug quest is actually entertaining, as they are often the same bugs humans would make.

So, I don’t see the same picture, but something close to the opposite of what the author sees.

Having an easy outlet to bounce quick ideas off of, and a source of relatively unbiased feedback, brought me back to the fun of writing; so it's literally the opposite effect compared to the article's author…

sabakhoj 7 hours ago [-]
We need to think carefully about which tasks are actually suitable for LLMs. Used poorly, they'll gut our ability to think. The push, IMO, should be for using them for verification and clarification, not as replacements for understanding and creativity.

Example: Do the problem sets yourself. If you're getting questions wrong, dig deeper with an AI assistant to find gaps in your knowledge. Do NOT let the AI do the problem sets first.

I think it's similar to how we used calculators in school, at least in the 2010s: we learned the principles behind the formulae and how to work them manually before calculators were introduced to abstract away the mechanics.

I've let that core principle shape some of how we're designing our paper-reading assistant, but still thinking through the UX patterns -- https://openpaper.ai/blog/manifesto.

fennecbutt 13 hours ago [-]
>evidently repetitive “style”.

Use LoRAs, write better prompts. I've done a lot of diffusion work, and especially in 2025 it's not difficult to get out something quite good.

Repetitive style is funny, because that's what human artists do for the most part. I'm a furry; I look at a lot of art, and individual styles are a well-established fact.

socalgal2 9 hours ago [-]
Yes, most human artists have a repetitive style. In fact, that's often how you recognize who made a piece of art.
suddenlybananas 8 hours ago [-]
Yeah but the difference is that style is sometimes actually interesting and not completely banal.
jstummbillig 16 hours ago [-]
Maybe you are not that great at using the most current LLMs or you don't want to be? I find that increasingly to be the most likely answer, whenever somebody makes sweeping claims about the impotence of LLMs.

I get more use out of them every single day, and certainly with every model release (mostly for generating absolutely non-trivial code), and it's not subtle.

guyfhuo 11 hours ago [-]
> Maybe you are not that great at using the most current LLMs or you don't want to be?

I’m tired of this argument. I’ll even grant you: both sides of it.

It seems as though we prepared ourselves to respond to LLMs in this manner, with people memeing, or simply recognizing, that there was a “way” to ask questions to get better results, back when ranked search broadened the appeal of search engines.

The reality is that both you and the OP are talking about your opinions of the thing, but leaving out the thing itself.

You could say “git gud”, but what if you showed the OP what “gud” output looks like to you, and they recognized it as the same sort of output they were calling repetitive?

It’s ambiguity based on opinion.

I fear so many are talking past each other.

Perhaps linking to example prompts and outputs that can be directly discussed is the only way to give specificity to the ambiguous language.

jstummbillig 3 hours ago [-]
The problem is that, knowing the public internet, what would absolutely happen is people arguing that a) the code is bad, or b) the problem is beneath what they consider non-trivial.

The way that OP structured the response, I frankly got a similar impression (although the follow-up feels much different). I just don't see the point in engaging with that here, but I take your criticism: why engage at all? I should probably not, then.

ay 15 hours ago [-]
Could totally be the case that, as I wrote in the very first sentence, I am holding it wrong.

But I am not saying LLMs are impotent - the other week Claude happily churned out ~3500 lines of C code for me, implementing a prototype capture facility for network packets, with flexible filters and saving the contents into pcapng files. I had to fix a couple of bugs it made, but overall it was certainly at least a 5x-10x productivity improvement compared to typing those lines of code by hand. I don’t dispute that it’s a pretty useful tool for coding, or as a thinking assistant (see the last paragraph of my comment).
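To give a flavor of the task (this is not the actual code): the heart of such a capture tool, sketched in Python with scapy instead of the real C, and writing classic pcap rather than pcapng; the interface, filter, and counts are made-up examples:

    # Toy sketch only: capture packets matching a BPF filter and save them.
    # The real tool was ~3500 lines of C; this just shows the shape of it.
    from scapy.all import sniff, wrpcap

    def capture(iface: str, bpf_filter: str, count: int, out_file: str) -> None:
        # BPF filter strings provide the "flexible filters" part.
        packets = sniff(iface=iface, filter=bpf_filter, count=count)
        wrpcap(out_file, packets)  # writes classic pcap, not pcapng

    capture("eth0", "tcp port 443", 100, "capture.pcap")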

What I challenged is the submissive, self-deprecating adoration across the entire spectrum.

jstummbillig 3 hours ago [-]
Reading this, I am not sure I got the gist of your previous post. Re-reading the previous post, I still don't see how the two posts gel. I submit we might just have very different interpretations of the same observations. For example, I have a hard time imagining the described 3500 LOC program as 'simple'. Limited in scope, sure. But if you got it done 5-10x faster, then it can't be that simple?

Anyway: I found the writer's perspective on this whole subject to be interesting, and agree on the merits — I definitely think they are correct in their analysis and outlook, and here the two of us apparently disagree – but I don't share their concluding feelings.

But I can see how they got there.

abathologist 15 hours ago [-]
What kind of problems are you solving day-to-day where the LLMs are doing heavy lifting?
Madmallard 4 hours ago [-]
Agree

They can't do anything elaborate or interesting for me beyond tiny pet-project proofs of concept. They can potentially help me uncover a bug, explain some code, or implement a small feature.

As soon as the complexity of the feature goes up, either in its side effects, dependencies, or the customization of its details, they are quite unhelpful. I doubt even one senior engineer at a large company is using LLMs for major feature updates in codebases that have a lot of moving parts, significant complexity, and many LOC.

bsenftner 36 minutes ago [-]
I believe the author is awash in a sea they do not understand, and that is the cause of their discomfort. When they describe their ideas being fully realized by LLMs, are they really, or do they just appear so because the words and terms arrive in a manner similar to, and expected from, their prompt?

Performing any type of intellectual, philosophic, or exploratory work with LLMs is extremely subtle, largely because neither you nor they know what you are seeking, and the discovery process with LLMs is not writing prompts and varying one's prompts in a trial-and-error manner to hopefully get "something else, something better" <- that is pure incomprehension of how they work, and how to work with them.

Very few seem to realize the mirror aspects embodied within LLMs: they will mirror you back, and if you are unaware of this, you may not be getting the replies you really seek; you may be receiving "comfort replies", replies mirroring your metadata (style, nuance) more than the factual logic of your requests, if any factual requests are made.

There is an entire body of work, multiple careers' worth of human effort, needed to document the new, subtle logical keys to working with LLMs. These are new logical constructs that have never existed before, not even fictionally, not realized as they are now, with all the implications and details bare, in our faces, yet completely misunderstood as people attempt old imperative methods that will not work with this new entity, whose characteristics are completely different from anything we have experience of.

A major issue with getting developers to effectively use LLMs is the fact that many developers are weak-to-terrible communicators themselves. LLMs are fantastic communicators, who will mirror their audience in an attempt to be better understood, but when that audience is a weak communicator the entire process disintegrates. That is, what I suspect is happening with the blog post author. An inability to be discriminating in their language, to the degree that they can parcel out the easy, immediate, sophomore-level replies and then arrive at a context within the LLM's capacity that has the integrity of context they seek - but that requires them to meet it intellectually and linguistically, or that LLM context is destroyed. So subtle.

bartread 24 minutes ago [-]
> but when that audience is a weak communicator the entire process disintegrates. That is, what I suspect is happening with the blog post author.

That’s a pretty rude and disrespectful take on this piece, don’t you think?

How can you read a piece that is so well articulated then turn around and, apparently unironically, suggest the author isn’t a good communicator?

How can you invalidate the author’s experience without seeing and knowing more? You have no idea of the nuts and bolts of their LLM interactions whereas, to support the conclusion you’ve arrived at, this is exactly the information you’d need based on what you’ve said.

Havoc 4 hours ago [-]
>in the context of AI, what I’m doing is a waste of time. It’s horrifying. The fun has been sucked out of the process of creation because nothing I make organically can compete with what AI

I don't think the compete part is true. I'll never cook like Gordon Ramsay, but I can still enjoy cooking. My programming will never be kernel-dev level, but I still enjoy it.

The only angle where I have doubts like this is work. Because there, enjoying it isn't enough... you actually have to be competitive.

paintboard3 19 hours ago [-]
I've been finding a lot of fulfillment in using AI to assist with things that are (for now) outside of the scope of one-shot AI. For example, when working on projects that require physical assembly or hands-on work, AI feels more like a superpower than a crutch, and it enables me to tackle projects that I wouldn't have touched otherwise. In my case, this was applied to physical building, electronics, and multimedia projects that rely on simple code outside of my domain of expertise.

The core takeaway for me is that if you have the desire to stretch your scope as wide as possible, you can get things done in a fun way with reduced friction, and still feel like your physical being is what made the project happen. Often this means doing something that is either multidisciplinary or outside of the scope of just being behind a computer screen, which isn't everyone's desire and that's okay, too.

sanderjd 18 hours ago [-]
Yeah I haven't found the right language for this yet, but it's something like: I'm happy and optimistic about LLMs when I'm the one doing something, and more anxious about them when I'm supporting someone else in doing something. Or: It makes me more excited to focus on ends, and less excited to focus on means.

Like, in the recent past, someone who wanted to achieve some goal with software would either need to learn a bunch of stuff about software development, or would need to hire someone like me to bring their idea to life. But now, they can get a lot further on their own, with the support of these new tools.

I think that's good, but it's also nerve-wracking from an employment perspective. But my ultimate conclusion is that I want to work closer to the ends rather than the means.

apsurd 18 hours ago [-]
Interesting, I just replied to this post recommending the exact opposite: to focus on means vs ends.

The post laments how everything is useless when any conceivable "end state" a human can produce will be inferior to what LLMs can do.

So an honest attention toward the means of how something comes about—the process of the thinking vs the polished great thought—is what life is made of.

Another comment talks about hand-made bread. People do it and enjoy it even though "making bread is a solved problem".

sanderjd 18 hours ago [-]
I saw that and thought it was an interesting dichotomy.

I think a way to square the circle is to recognize that people have different goals at different times. As a person with a family who is not independently wealthy, I care a lot about being economically productive. But I also separately care about the joy of creation.

If my goal in making a loaf of bread is economic productivity, I will be happy if I have a robot available that helps me do that quickly. But if my goal is to find joy in the act of creation, I will not use that robot because it would not achieve that goal.

I do still find joy in the act of creating software, but that was already dwindling long before chatgpt launched, and mostly what I'm doing with computers is with the goal of economic productivity.

But yeah I'll probably still create software just for the joy of it from time to time in the future, and I'm unlikely to use AIs for those projects!

But at work, I'm gonna be directing my efforts toward taking advantage of the tools available to create useful things efficiently.

apsurd 18 hours ago [-]
ooh I like this take. We can change the framing. In the frame of one's livelihood we need to be concerned with economic productivity, philosophy be damned.
champdebloom 16 hours ago [-]
Beautifully put!
neoden 7 hours ago [-]
Homo sapiens had a long period of its history when it was crucial to have a well-developed body. Today we need to perform otherwise useless exercises just to maintain our bodies in somewhat acceptable shape. The same might apply to the intellect as well.

In the coming era of unnecessary intellectual power, we might need to do thinking exercises as something that helps maintain a healthy (and beautiful) mind, though our core values would shift towards something else, something that is regarded as good but not mandatory for personal success today.

rax0m 6 hours ago [-]
Video games come to mind
nico_h 7 hours ago [-]
Intellectual power is more and more necessary: on the input side, if for no other reason than to evaluate each piece of media and writing and figure out how reliable it is; and on the output side, to figure out how much of the LLM output you want to put out into the world under your name is BS.
tutanosh 19 hours ago [-]
I used to feel the same way about AI, but my perspective has completely changed.

The key is to treat AI as a tool, not as a magic wand that will do everything for you.

Even if AI could handle every task, leaning on it that way would mean surrendering control of your own life—and that’s never healthy.

What works for me is keeping responsibility for the big picture—what I want to achieve and how all the pieces fit together—while using AI for well-defined tasks. That way I stay fully in control, and it’s a lot more fun this way too.

steamrolled 15 hours ago [-]
I think the article describes a real problem in that AI discourages thought. So do other things, but what's new about AI is that it removes an incentive to try.

It used to be that if you spent your day doomscrolling instead of writing a blog post, that blog post wouldn't get written and you wouldn't get the riches and fame. But now, you can use AI to write your blog post / email / book. If you don't have an intrinsic motivation to work your brain, it's a lot easier to wing it with AI tools.

At the same time... gosh. I can't help but assume that the author is just depressed and that it has little to do with AI. The post basically says that AI made his life meaningless. But you don't have to use AI tools if they're harming you. And more broadly, life has no meaning beyond what we make of it... unless your life goal is to crank out text faster than an LLM, there's still plenty of stuff to focus on. If you genuinely think you can't possibly write anything new and interesting, then dunno, pick a workshop craft?

xigency 15 hours ago [-]
Humans are social creatures. The existence of a tool that can replace humans is not nearly so depressing as the realization that a loud and powerful group of people are zealous and joyful to use it to such ends. The assumption that people come first is rapidly becoming a logical fallacy in a world that seeks to optimize paperclips first.

Anyway, the pendulum will swing the other way eventually, but it's a rough ride hanging on until then.

Glad to see stimulating discussion here falling on both sides.

smcleod 14 hours ago [-]
For me it decreases the barrier to trying and testing new thoughts. Never have I felt more empowered to try out new avenues that in the past might have been too time-consuming or expensive to explore and discard.
curl-up 19 hours ago [-]
> The fun has been sucked out of the process of creation because nothing I make organically can compete with what AI already produces—or soon will.

So the fun, all along, was not in the process of creation itself, but in the fact that the creator could somehow feel superior to others who can't create? I find this to be a very unhealthy relationship to creativity.

My mixer can mix dough better than I can, but I still enjoy kneading it by hand. The incredibly good artisanal bakery down the street did not reduce my enjoyment of baking, even though I cannot compete with them in quality by any measure. Modern slip casting can make superior pottery by many different quality measures, but potters enjoy throwing it on a wheel and producing unique pieces.

But if your idea of fun is tied to the "no one else can do this but me", then you've been doing it wrong before AI existed.

ebiester 18 hours ago [-]
Let's frame it more generously: The reward is based on being able to contribute something novel to the world - not because nobody else can but because it's another contribution to the world's knowledge.
curl-up 18 hours ago [-]
If the core idea that was intended to be broadcast to the world was a "contribution", and the LLM simply expanded on it, then I would view LLMs as simply a component in that broadcasting operation (just as the internet infrastructure would be), and the author's contribution would still be intact, and so should his enjoyment.

But his argument does not align with that. His argument is that he enjoys the act of writing itself. If he views his act of writing (regardless of the idea being transmitted) as his "contribution to the world's knowledge", then I have to say I disagree - I don't think his writing is particularly interesting in and of itself. His ideas might be interesting (even if I disagree), but he obviously doesn't find the formation of ideas enjoyable enough.

17 hours ago [-]
mionhe 17 hours ago [-]
It sounds as if the reward is primarily monetary in this case.

As some others have commented, you can find rewards that aren't monetary to motivate you, and you can find ways to make your work so unique that people are willing to pay for it.

Technology forces us to use the creative process to more creatively monetize our work.

drdaeman 12 hours ago [-]
If that's the source of the author's existential crisis, they may find it interesting to meditate on the idea that there's no thinker behind the thought, and on the impermanence of "self".

Even if they don't buy all the way into the whole hard incompatibilism thing, the idea is that they may find some value in the process.

fennecbutt 13 hours ago [-]
Let's be honest, humans have been creating slop for much longer than machines. Not a bad thing, but don't put it all on a pedestal.
lo_zamoyski 18 hours ago [-]
The primary motivation should be wisdom. No one can become wise for you. You don't become any wiser yourself that way. And a machine isn't even capable of being wise.

So while AI might remove the need for human beings to engage in certain practical activities, it cannot eliminate the theoretical, because by definition, theory is done for its own sake, to benefit the person theorizing by leading them to understanding something about the world. AI can perhaps find a beneficial place here in the way books or teachers do, as guides. But in all these cases, you absolutely need to engage with the subject matter yourself to profit from it.

Viliam1234 18 hours ago [-]
Now you can contribute something novel to the world by pressing a button. Sounds like an improvement.
drdeca 18 hours ago [-]
If one merely presses a button (the same button, not choosing what button to push based on context), I don’t see what it is that one has contributed? One of those tippy bird toys can press a button.
fennecbutt 13 hours ago [-]
I can draw a circle on a piece of paper and that's a serious contribution?

Where is the line drawn?

Is me sneezing a contribution to the world of art, since art is all about interpretation™®© and some smarmy critic will do a piece on how my sneeze is a visceral, physical, performative art illustrating the downfall of the modern world, where technology binds us and we spend too much time inside, surrounded by screens and dust and CO2?

Nah, I just sneezed. That's all.

drdeca 12 hours ago [-]
It sounds to me like you are maybe agreeing with me but thought that I was expressing the opposite of what I did, and so are phrasing it as if it were disagreement?

Or maybe you are just agreeing, and did understand that my point was that I don’t think pressing a button is a contribution.

If you are disagreeing with my comment, can you explain how this is disagreeing?

kelseyfrog 17 hours ago [-]
> So the fun, all along, was not in the process of creation itself, but in the fact that the creator could somehow feel superior to others not being able to create? I find this to be a very unhealthy relationship to creativity.

People realize this at various points in their life, and some not at all.

In terms the author might accept, the metaphor of the stoic archer comes to mind. Focusing on the action, not the target is what relieves one of the disappointment of outcome. In this case, the action is writing while the target is having better thoughts.

Much of our life is governed by how successfully we hit our targets, but why do that to oneself? We have a choice in how we approach the world, and setting our intentions toward action and away from targets is a subtle yet profound shift.

A clearer example might be someone who wants to make a friend. Let's imagine they're at a party. If they go in with the intention of making a friend, they're setting themselves up for failure: they have relatively little control over that outcome. However, if they go in with the intention of showing up authentically - something people tend to appreciate, and something they have full control over - the chances of them succeeding increase dramatically.

Choosing one's goals - primarily grounded in action - is an under-appreciated perspective.

ankit219 6 hours ago [-]
This is a very millennial style of thinking (myself included). It feels like people can't just have a hobby; they have to be great at it. The sense of greatness, the sense of accomplishment, comes not merely from doing a thing, but from getting to an outcome which is measurable and/or which we can tell others about or put on social media. I thought it was only me, but it turns out this is all around me. I started gardening, spending 15 mins a day, and I talk to a friend about it. They tell me about this gardening insta page, tips, and community. The community has people doing things at a better pace than me, putting in more effort than me. I suddenly feel that rush to have some competition. Then it becomes boring, because the fun was the fifteen minutes I spent in there, not the part where it occupied the rest of my day. Side projects, writing, painting: I somehow see people doing this all the time. Picking the wrong goals, or expecting a dopamine hit from the wrong places.

Choosing the right goals is a great way to put that in perspective. I don't know what happened with hobbies, but that spirit isn't there anymore (so much so that I don't tell people I do xyz things on the side).

sifar 13 hours ago [-]
>> Focusing on the action, not the target is what relieves one of the disappointment of outcome.

The primary reason is not that it relieves us of the disappointment, but that worrying about the outcome increases our anxiety and impacts our action, which hampers the outcome.

BrenBarn 8 hours ago [-]
> Focusing on the action, not the target is what relieves one of the disappointment of outcome.

This is true, but the tough part is it's not the whole story.

First, obviously along some dimensions of life, targets matter. If we need to grow food to eat, the pleasant feeling of working in the garden isn't going to be sufficient; if we need to strengthen a dike to prevent the town from being inundated, the sensation of swinging a hammer isn't going to cut it.

> However, if they go in with the intention of showing up authentically - something people tend to appreciate, and something they have full control over - the chances of them succeeding increase dramatically.

That is true, but it's also possible for a person to feel like they are being authentic (and even to be correct about that), yet still seem off-putting to others, perhaps for reasons they aren't aware of. Even if they're not focused on the "target" of making a friend, there are intermediate targets like "interact with other people in a way that they (not just I) enjoy", and if those targets aren't met, eventually a reckoning must come.

So the second point is that evaluating the "action" is an internal perspective that can become out of sync with reality, even in cases where the result isn't so critical. We may not want to be focused on "end goals", but we need some amount of focus on external calibrators of some sort, to keep us from descending into solipsism.

Then the third thing is that (maybe because of the first two), people have a tendency to extend their results-oriented mindset more and more, and even if an individual resists this, they have to deal with the fact that everyone around them may be doing it. So even if you take the view that writing is a human activity that should be valued for the gusto and AI writing is missing the point, if everyone around you stops writing and starts using AI instead, a lot of important stuff in the penumbra of the activity can be weakened. Like it becomes harder to put together a writing club/workshop etc., maybe even to buy books. And in particular it can become harder to straddle the line between target and action in terms of employment and generally meeting your material needs. There are plenty of people who have artistic skill and have a job where they get to use it to some extent (e.g., graphic design), and even though it may have some distasteful commercial aspects, they can still get some of that "action satisfaction" from their job. But if AI eats all the graphic design jobs, now you have to spend all your work hours doing something that gives you none of that satisfaction, and cram all the satisfying artistic action into your free time.

The same is true for technical tasks. A lot of the dismay over the use of AI for programming arises because people used to be able to get paid for doing things that also gave them a sense of satisfaction for engaging in a sort of problem-solving task that they enjoyed as an action. Now it's harder to do that, but everyone still has to eat, so they have to give up some of the satisfaction they used to get because they can't get paid for it anymore.

I agree that, for an individual, shifting the mindset to action can be helpful. But we as individuals live in the world, and the more an individual's mindset becomes out of step with that of his society, the harder it becomes to live in accordance with that mindset. So I think we also need to apply pressure to create a societal mindset that values and supports the kinds of individual mindsets we want people to have.

wcfrobert 17 hours ago [-]
I think the article is getting at the fact that in a post-AGI world, human skill is a depreciating asset. This is terrifying, because we exchange our physical and mental labor for money. Consider this: why would a company hire me if - with enough GPUs and capital - they can copy-and-paste 1,000 AI agents much smarter than me to do the work?

With AGI, Knowledge workers will be worth less until they are worthless.

While I'm genuinely excited about the scientific progress AGI will bring (e.g. curing all diseases), I really hope there's a place for me in the post-AGI world. Otherwise, like the potters and bakers who can't compete in the market with cold, hard industrial machines, I'll be selling my Python codebase on Etsy.

No Set Gauge had an excellent blog post about this. Have a read if you want a dash of existential dread for the weekend: https://www.nosetgauge.com/p/capital-agi-and-human-ambition.

9dev 16 hours ago [-]
That seems like a very narrow perspective. For one, it is neither clear that we will end up with AGI at all—we may have reached, or may soon reach, a plateau in what LLM technology can do—nor that it'll work like what you're describing; the energy requirements might not be feasible, for example, or usage might be so expensive that it's just not worth applying to every mundane task under the sun, like writing CRUD apps in Python. We know how to build flying cars, technically, but it's just not economically sustainable to use them. And finally, you never know what niches are going to be freed up or created by the ominous AGI machines appearing on the stage.

I wouldn’t worry too much yet.

Animats 16 hours ago [-]
> With AGI, Knowledge workers will be worth less until they are worthless.

"Knowledge workers" being in charge is a recent idea that is, perhaps, reaching end of life. Up until WWII or so, society had more smart people than it had roles for them. For most of history, being strong and healthy, with a good voice and a strong personality, counted for more than being smart. To a considerable extent, it still does.

In the 1950s, C.P. Snow's "Two Cultures" became famous for pointing out that the smart people were on the way up.[1] They hadn't won yet; that was about two decades ahead. The triumph of the nerds took until the early 1990s.[2] The ultimate victory was, perhaps, the collapse of the Soviet Union in 1991. That was the last major power run by goons. That's celebrated in The End of History and the Last Man (1992).[3] Everything was going to be run by technocrats and experts from now on.

But it didn't last. Government by goons is back. Don't need to elaborate on that.

The glut of smart people will continue to grow. Over half of Americans with college educations work in jobs that don't require a college education. AI will accelerate that process. It doesn't require AI superintelligence to return smart people to the rabble. Just AI somewhat above the human average.

[1] https://en.wikipedia.org/wiki/The_Two_Cultures

[2] https://archive.org/details/triumph_of_the_nerds

[3] https://en.wikipedia.org/wiki/The_End_of_History_and_the_Las...

rkhassen9 10 hours ago [-]
I’ve thought the same. Goons powered by AI, that is.
senordevnyc 17 hours ago [-]
This is only terrifying because of how we’ve structured society. There’s a version of the trajectory we’re on that leads to a post-scarcity society. I’m not sure we can pull that off as a species, but even if we can, it’s going to be a bumpy road.
GuinansEyebrows 16 hours ago [-]
the barrier to that version of the trajectory is that "we" haven't structured society. what structure exists, exists as a result of capital extracting as much wealth from labor as labor will allow (often by dividing class interests among labor).

agreed on the bumpy road - i don't see how we'll reach a post-scarcity society unless there is an intentional restructuring (which, many people think, would require a pretty violent paradigm shift).

jackphilson 15 hours ago [-]
I think we think of it as 'extracting' because people are coerced into jobs that they hate. I think AI can help us exit the paradigm of work as extraction. Basically, the passion economy (AI handles marketing and internet distribution): it allows you to focus on what you actually like, but it can actually make money this time.
GuinansEyebrows 15 hours ago [-]
to be trite, we've been promised a world where AI will help to alleviate the menial necessities so that we're free to pursue our passions. in reality, what we're getting is AI that replaces the human component of passion projects (art, music, engineering as craft), leaving the "actually-hard-to-replace" "low-class" roles (cashiering, trash collection, housekeeping, farming, etc) to humans who generally have few other economic options.

without a dramatic shift in wealth distribution (no less than the elimination of private wealth and the profit motive), we can't have a post-scarcity society. capitalism depends entirely upon scarcity, artificial or not.

drdaeman 11 hours ago [-]
> With AGI, Knowledge workers will be worth less until they are worthless.

The article you've linked fundamentally relies on the assumption that "the tasks can be done better/faster/cheaper by AIs". (Plus, of course, the idea that AGI would be achieved, but without this one the whole discussion would be pointless as it would lack the subject, so I'm totally fine with this one.)

Nothing about AGI (as in "a machine that can produce intelligent thoughts on a given matter") says that human and non-human knowledge workers would have some obvious leverage over each other. Just as my coworkers' existence doesn't hurt mine, a non-human intelligence poses no inherent threat. Not by definition.

Non-intelligent industrial robotics is well-researched and generally available, yet we have plenty of sweatshops because they turn out to be cheaper than robot factories. Not fun, not great, I'm not fond of this, but I'm merely taking it as a fact, as it is how things currently are. So I really wouldn't dare to unquestioningly assume that "cheaper" would be true.

And then "better" isn't obvious either. Intelligence is intelligence: it can think, it can make guesses, it can draw logical conclusions, and it can make mistakes too - but we've yet to see even the tiniest hints of "higher levels" of it, something that would put humans out of the league of thinking machines if we're ranking on some "quality" of thinking.

I can only buy "faster" - and even that requires an assumption that we ignore any transhumanist ideas. But, surely, "faster" alone doesn't cut it?

patcon 18 hours ago [-]
Yeah, I think you're onto something. I'm not sure the performative motivation is necessarily bad, but def different

Maybe AI is like Covid, where it will reveal that there were subtle differences in the underlying humans all along, but we just never realized it until something shattered the ability for ambiguity to persist.

I'm inclined to say that this is a destabilising thing, regardless of my thoughts on the "right" way to think about creativity. Multiple ways could coexist before, and now one way no longer "works".

garrettj 17 hours ago [-]
Yeah, there’s something this person needs to embrace about the process rather than being some kind of modern John Henry, comparing themselves to a machine. There’s still value in the things a person creates despite what AI can derive from its training model of Reddit comments. Find peace in the process of making and you’ll continue to love it.
quantumgarbage 18 hours ago [-]
I think you are way past the argument the writer is making.
gibbitz 16 hours ago [-]
I think the point is that part of the value of a work of art, up to this point, has been the effort (or lack of effort) involved in its creation. Evidence of effort has traditionally been a sign of the quality of thought put into a work, as a product of time spent in its creation. LLMs short-circuit this instinct in evaluation, making some think works generated by AI are better than they are, while simultaneously making those who create work see it as a devaluation of their work (which is the demotivator here).

I'm curious why so many people see creators and intellectuals as competitive people trying to prove they're better than someone else. This isn't why people are driven to seek knowledge or create Art. I'm sure everyone has their reasons for this, but it feels like insecurity from the outside.

Looking at debates about AI and Art outside of IP often brings out a lot of misunderstandings about what makes good Art and why Art is a thing man has been compelled to make since the beginning of the species. It takes a lifetime to select techniques and thought patterns that define a unique and authentic voice. A lifetime of working hard on creating things adds up to that voice. When you start to believe that work is in vain because the audience doesn't know the difference it certainly doesn't make it feel rewarding to do.

gibbitz 1 hours ago [-]
To put it another way: if we made a machine that could instantly create a baby, how would that affect the notion of motherhood? Sure, children are adopted or born to surrogacy, but the connection formed during gestation, and that time itself, is a huge part of our notion of the bond between mother and child. Being an Artist is the same thing: an identity bred from gestation, proved by the ends.

Before the rise of Western culture, ancient cultures didn't attribute works to individual artists. Think Ancient Greece or Egypt. These cultures still produced Art because the culture valued it, but in society these creators were seen as tradesmen, or they were slaves. AI used in this way both reduces cultural value and removes or reduces the social status of the creator.

I find it telling that LLMs are quite adept at mash-ups and decisions based on data analysis which in my experience is what most business managers do. Why are we not using AI to replace worthless middle management? After all they are lower skilled and higher paid than many developers. I'd argue that anyone who thinks you can replace a job with AI is not doing that job as a career. AI devs who think LLM can replace Java web developers are not Java web developers. Internet trolls who think LLM can replace Artists are not Artists. I think this moment we're in is revealing that we've become so siloed that we have lost our curiosity about each other and cultural history. It's frightening to see how we're changing our culture to accommodate a technology at the expense of people and just how blase we are about it.

movpasd 18 hours ago [-]
Sometimes the fun is in creating something useful, as a human, for humans. We want to feel useful to our tribe.
rkhassen9 10 hours ago [-]
I think you articulated the actual point of the OP. It isn’t so much about creating something better than anyone else; it is the feeling that your contribution to the world means something.

AI can somehow cause one to react with a feeling of futility.

Engaging in acts of creation, and responding to others' acts of creation, seems a way out of that feeling.

Tuperoir 18 hours ago [-]
Good point.

Self-actualisation should be about doing the things that only you can. Not better than anyone else, but more like: the specific things that only you, with the sum of your experience, expertise, values and constraints, can do.

nthingtohide 17 hours ago [-]
We need to start taking a leaf of advice from spiritual knowledge that "You are not the doer." You were never the doer. The doing happened on its own. You were merely a vessel, an instrument. A Witness. Observe your inner mechanisms of mind, and you will quickly come to this realisation.
getpokedagain 18 hours ago [-]
I don’t think it’s solely about rubbing it in others’ faces that they can’t create. You hope they learn to as well.
jsemrau 18 hours ago [-]
"The fun has been sucked out of the process of preparing food because nothing I make organically can compete with what restaurants/supermarkets already produces—or soon will."
StefanBatory 18 hours ago [-]
Knowing that anyone can do what I do, no matter how well I do it, is discouraging. Because then, what is my purpose? What can I say that I'm good at?
aprdm 17 hours ago [-]
That's a deeper question that only you can answer. I can only say that your thinking, based on how you phrased it, doesn't really lead to happiness in general.
StefanBatory 1 hours ago [-]
But I'm a human, I need to eat.

If that were to happen - welp, I'm out of a job, out of something I spent years in school studying for, now without purpose or "real" skills (as LLMs would be on the same level as I am).

rkhassen9 10 hours ago [-]
It is the societal recognition and valuing of one's level of skill and contribution that AI threatens and weakens. This is the true loss.
curl-up 18 hours ago [-]
Would you say that the chess players became "purposeless" with Deep Blue, or Go players with Alpha Go?
rfw300 18 hours ago [-]
It's interesting that you name those examples, because Lee Sedol, the all-time great Go player, retired shortly after losing to Alpha Go, saying: "Even if I become the number one, there is an entity that cannot be defeated... losing to AI, in a sense, meant my entire world was collapsing... I could no longer enjoy the game. So I retired." [1, 2]

So for some, yes. It is of course also true that many people derive self-worth and fulfillment from contributing positively to the world, and AI automating the productive work in which they specialize can undermine that.

[1] https://en.yna.co.kr/view/AEN20191127004800315

[2] https://www.nytimes.com/2024/07/10/world/asia/lee-saedol-go-...

curl-up 18 hours ago [-]
I am in no way disputing that some people would feel that way because of AI, just as some performing classical musicians felt that way at the advent of the audio recorder.

What I am saying is that (1) I regard this as an unhealthy relationship to creativity (and I accept that this is subjective), and (2) that most people do not feel that way, as can be confirmed by the fact that chess, go, and live music performances are all still very much practiced.

yapyap 18 hours ago [-]
I mean yeah apparently so for the OP but I’m sure he did not mean for it to be that way intentionally
iamwil 16 hours ago [-]
OP said something similar about writing blog posts when he found himself doing twitter a lot, back in 2013. So whatever he did to cope with tweeting, he can do the same with LLMs, since it seems like he's been writing a lot of blog posts since.

> I’ve been thinking about this damn essay for about a year, but I haven’t written it because Twitter is so much easier than writing, and I have been enormously tempted to just tweet it, so instead of not writing anything, I’m just going to write about what I would have written if Twitter didn’t destroy my desire to write by making things so easy to share.

and

> But here’s the worst thing about Twitter, and the thing that may have permanently destroyed my mind: I find myself walking down the street, and every fucking thing I think about, I also think, “How could I fit that into a tweet that lots of people would favorite or retweet?”

https://dcurt.is/what-i-would-have-written

smcleod 14 hours ago [-]
Perhaps they simply are someone who struggles with finding identity and value in change and adaptation.
dcurtis 12 hours ago [-]
History rhymes. It’s funny that my problem back then was jamming fully formed thoughts into tweets, and now my problem is developing seeds of ideas into fully formed thoughts.
baxtr 6 hours ago [-]
> Learning by reading LLM output is cheap. Real exercise for your mind comes from building the output yourself.

Fully agree.

Sorry to say it like that, but I thought the post was a bit "whiny". I really like the author's thought process. An LLM would never have created a post like that. I think he should not give up.

drakonka 2 hours ago [-]
This rings true for me. I still write a lot on my personal blog and still use writing as a way to process and solidify my learnings, but as I outsource more to LLMs during the process of writing (e.g., having one help me find a source, or explain a topic to me instead of googling for an answer across multiple channels), I can feel my brain getting more sluggish. I think this is impacting not just my creative thinking and problem solving when learning something, but also how I form those thoughts into language. It's hard to put a finger on exactly what the concrete factors are, but I can feel the change.
WhyNotHugo 1 hours ago [-]
Think of weightlifting. It helps build muscle. You can use a crane-lift to lift a lot more weight a lot faster. Is weightlifting now pointless? You'll never be able to compete with a crane-lift! It'll always be faster and won't get tired.

Writing is an exercise that helps with thinking and reasoning about a subject. Delegating it to an LLM is the same as delegating it to another person who writes faster and knows the subject well. The end result is the same: the weights have been moved, 3 sets of 15. But you didn't gain any muscle, because you didn't lift the weights yourself. You won't learn, or even think, about a topic if you delegate writing about it to a machine.

--

On a separate note, I also think it's naive to think that the LLM is reasoning about things the same way you would and writing the same things with the same conclusions. If you _read_ the LLM's work, you might get that impression. But your own writing could have spawned different questions along the way, leading you to read on different topics or connect different ideas. Just try asking two people about some complex topic and see if they come up with the same writing or not.

drakonka 1 hours ago [-]
I'm not sure if you meant to reply to someone else or if we're just agreeing with each other? I already said "I still write a lot on my personal blog and still use writing as a way to process and solidify my learnings". So I already understand and agree that writing itself is a worthwhile exercise.
mayas_ 37 minutes ago [-]
for better or for worse, gen ai is fundamentally changing how ideas are expressed and shared

afaic it's a net positive. i've always been lazy on writing down/expressing my thoughts and gen ai feels exactly like the missing piece.

i'm able to "vibe write" my ideas into reality. the process is still messy but exciting.

i've never been this excited about the future since my childhood

keiferski 7 hours ago [-]
I am somewhat in disbelief, as my reaction to AI tools has been pretty much the exact opposite of the author’s. I had thousands of short notes, book ideas, etc. before AI, most of which were scattered in a notepad program.

Since AI, I’ve made genuine, real progress on many of them, mostly by discussing the concepts with ChatGPT, planning outlines, creating research reading lists, considering angles I hadn’t considered, and so on. It’s been an insanely productive process.

Like any technology, AI tools are in some sense a reflection of their users. If you find yourself wanting to offload all thinking to the machine, that’s on you, not the tool.

loser357 7 hours ago [-]
What we need is a type of blockchain that only records new lines of thought, and ascribes a kind of novelty index to concepts. That way researchers can stay on their intended side of the boundary of human knowledge: either well-trodden, averaged prior art, or exploratory and theoretical. Then ambitious thinkers can gamify searching for truly new tokens.
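Just to make the idea concrete, a toy sketch (in Python; the embedding model and the scoring are arbitrary stand-ins of mine, nowhere near a real design): embed each new idea, score its distance from everything recorded so far, and chain the entries with hashes:

    # Toy "novelty index": embed ideas, score against prior entries,
    # and append to a hash-chained log. Purely illustrative.
    import hashlib
    from sentence_transformers import SentenceTransformer, util

    model = SentenceTransformer("all-MiniLM-L6-v2")  # placeholder model choice
    log = []  # each entry: (idea text, embedding, chained hash)

    def record(idea: str) -> float:
        emb = model.encode(idea, convert_to_tensor=True)
        # Novelty = distance from the nearest idea already on record.
        sims = (float(util.cos_sim(emb, e)) for _, e, _ in log)
        novelty = 1.0 - max(sims, default=0.0)
        prev_hash = log[-1][2] if log else "genesis"
        entry_hash = hashlib.sha256((prev_hash + idea).encode()).hexdigest()
        log.append((idea, emb, entry_hash))
        return novelty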
keiferski 7 hours ago [-]
The publication process seems to already basically function this way, no? Whether through books, academic papers, or even just a blog post. If you publish something first online and it gets indexed, that is the best we can probably do - at least while human thoughts are still stuck in human heads.
loser357 2 hours ago [-]
Thinking more about distilling information down into a compressed form, like LLMs do. Then the uniqueness of ideas could be measured, rather than actual language as in copyright enforcement. Established ideas are common and diluted by human and synthetic authors, while concepts out on the edge register as novel combinations and might be worth more exploration. Just basing this on the personal conviction of "ask not what the sum of all human knowledge can do for you, but what you can add to the sum of all human knowledge."
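
Setting the blockchain part aside, the measurement half of this idea is roughly implementable today: embed each idea and score it by its distance from the nearest prior idea. A minimal sketch, assuming the sentence-transformers package; the model choice and the two-item corpus are placeholders, and whether nearest-neighbor distance in embedding space actually tracks "novelty" is the hard, unsolved part:

```python
import numpy as np
from sentence_transformers import SentenceTransformer

# Hypothetical novelty score: embed an idea, then measure how far it is
# from its nearest neighbor in a corpus of prior ideas.
model = SentenceTransformer("all-MiniLM-L6-v2")

prior_ideas = [
    "Writing helps you think more clearly.",
    "LLMs compress the common knowledge of the internet.",
]

def novelty(idea: str) -> float:
    """~1.0 for an idea unlike anything in the corpus, ~0.0 for a well-trodden one."""
    vectors = model.encode(prior_ideas + [idea])
    corpus, candidate = vectors[:-1], vectors[-1]
    # Cosine similarity to every prior idea; keep the nearest neighbor.
    sims = corpus @ candidate / (
        np.linalg.norm(corpus, axis=1) * np.linalg.norm(candidate)
    )
    return float(1.0 - sims.max())

print(novelty("Journaling sharpens thought."))             # close to the corpus: low
print(novelty("Curd-cooking began as milk preservation."))  # farther away: higher
```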
wuj 15 hours ago [-]
A good analogy is lifting. We lift to build strength, not because we need that extra strength to lift things in real life; there is plenty of machinery to do that for us. We do it for the sense of accomplishment of hitting our goals when we least expect it, for seeing physical changes, and for the feeling that we are getting healthier, rather than for the utility. If we perceive lifting as a utility, we realize it's futile and meaningless. Instead, if we see it as a routine with positive externalities sprinkled on top, we feel a lot less pressured to do it.

As kelseyfrog commented already, the key is to focus on the action, not the target. Lifting is not just about hitting a number or getting bigger muscles (though those are great extrinsic motivators); it's more of an action that we derive growth from. I have internalized the act of working out to the point that those targets are baked into the unconscious. I don't overthink when I'm lifting. My unconscious takes the lead, and I just follow. I enjoy seeing the results show up unexpectedly. It lets me grow without feeling the constant pressure of my conscious mind.

The lifting analogy can be applied to writing and other effortful pursuits. We write for the pleasure of reconciling internal conflicts and restoring order to our chaotic minds. Writing is the lifting of the mind. If we do it for comparison, then there's no point in lifting, or writing, or many of the other things we still do after all our technological breakthroughs. The doing is the end in itself, not merely a means to one.

gavinray 15 hours ago [-]
I lift because since I was a child, I always wanted to look like an Anime character.
zebez 15 hours ago [-]
Is it working?
dvrp 15 hours ago [-]
What a coincidence! I think we both commented noting the phenomenon of abundance and the repercussions for us humans at the individual level. Especially from a fulfillment and autonomy point of view.
NobodytheHobbit 7 hours ago [-]
There is going to be anxiety over losing any toolset. It just sucks that this time it's the human experience.

Sincerely though you have to do it for you. The product will be whatever the widget is but you can't treat your experience that coldly. None of us can. That's why everyone is so bonkers right now.

Our memes, in the memetic sense, are being frustrated, and we in turn are being frustrated. I mean that mechanically: our drives are being blocked. And it feels especially bad when the thing being frustrated is our creative aspirations. I don't know about you, but that's kind of why I get out of bed to do this thing called life.

There is also a certain level of dread in knowing that an agent can be spun up to replace anyone. It makes you wonder what need there will be for all this feckless meat when a physical labor force can be replaced by robots, a mental one by AI, and the two combined seamlessly. Existential dread every day on this scale is chaos, to say the least.

I know this is very dour but that's because this is very dour.

jonplackett 19 hours ago [-]
I'm so, so glad it said "Written entirely by a human" at the end, because all the way through I was thinking: absolutely no way does AI come up with great ideas or insights, and it definitely would not write this article. It holds together and flows too well. (If this turns out to be the joke, I'll admit we're all now fucked.)

Having said that, I am very worried about kids growing up with AI and it stunting their critical thinking before it begins. But as of right this moment, AI is extremely subpar at genuinely good ideas or writing.

It's an amazing and useful tool that I use all the time, though, and would struggle to be without.

paulorlando 19 hours ago [-]
I've noticed something like this as well. A suggestion is to write/build for no one but yourself. Really no one but yourself.

Some of my best writing came during the time that I didn't try to publicize the content. I didn't even put my name on it. But doing that and staying interested enough to spend the hours to think and write and build takes a strange discipline. Easy for me to say as I don't know that I've had it myself.

Another way to think about it: Does AI turn you into Garry Kasparov (who kept playing chess as AI beat him) or Lee Sedol (who, at least for now, has retired from Go)?

If there's no way through this time, I'll just have to occasionally smooth out the crinkled digital copies of my past thoughts and sigh wistfully. But I don't think it's the end.

ge96 19 hours ago [-]
Yeah, there is the personal passion, and then there's the points/likes-driven mode, which sucks the joy out.

I experienced this when I was younger with my RC planes. I joined some forum and felt like everything I did had to be posted/liked to have value. I'd post designs/fantasies and get the likes, then lose interest and not actually build them once I got the ego bump.

paulorlando 19 hours ago [-]
I think that's why some writers refuse to talk about their work in progress. If they do, it saps the life out of it. Your ego bump example kind of fits into that.
paulorlando 14 hours ago [-]
If relevant, I wrote this in a related discussion: https://news.ycombinator.com/item?id=43912331
bradgessler 19 hours ago [-]
I keep going back and forth on this feeling, but lately I find myself thinking "F it, I'm going to do what I'm going to do that interests me".

Today I'm working on doing the unthinkable in an AI-world: putting together a video course that teaches developers how to use Phlex components in Rails projects and selling it for a few hundred bucks.

One way of thinking about AI is that it puts so much new information in front of people that they're going to need help from people known to have experience to navigate it all and curate it. Maybe that will become more valuable?

Who knows. That's the worst part at this moment in time—nobody really knows the depths or limits of it all. We'll see breakthroughs in some areas, and not in others.

wjholden 19 hours ago [-]
I just wrote a paper a few days ago arguing that "manual thinking" is going to become a rare and valuable skill in the future. When you look around, everyone is finding ways to be better using AI, and they're all finding amazing successes, but we're also unsure about the downsides. My hedge is that my advantage in ten years will be that I chose not to do what everyone else did. I might regret it; we will see.
oytis 19 hours ago [-]
If AI is going to be as economically efficient as promised, there is going to be no way to avoid using it altogether. So the trick will be to keep your thinking skills functional, while still using AI for speedup. Like focus in the age of Internet is a rare skill, but not using Internet is not an option either.
fkfyshroglk 18 hours ago [-]
Not all processes are the same, though. I strongly suspect any efficiency improvements will come in processes that didn't require much "thinking" to begin with. I use it daily, but mostly as a way to essentially type faster—I can read much faster than I can type, so I mostly validate and correct the autocomplete. All of my efforts to get it to produce trustworthy output beyond this seem to trail behind the efficiency of just searching the internet.

Granted, I'm blessed not to have much busywork; if I needed to produce corporate docs or listicles, AI would be a massive boon. But I also suspect AI will be used to digest those things back into small bullet points.

yoyohello13 10 hours ago [-]
I figure that if AI gets as efficient as people seem to think, then spending a bunch of effort getting good at using it now is kind of pointless, because it's just going to get easier and easier to use.
montebicyclelo 18 hours ago [-]
My thoughts are that it's key that humans know they will still get credit for their contributions.

E.g. imagine it was the case that you could write a blog post, with some insight, in some niche field – but you know that traffic isn't going to get directed to your site. Instead, an LLM will ingest it, and use the material when people ask about the topic, without giving credit. If you know that will happen, it's not a good incentive to write the post in the first place. You might think, "what's the point".

Related to this topic: computers have been superhuman at chess for two decades, yet strong human chess players still get credit, recognition, and, I would guess, satisfaction from the level they reach. Although, obviously, the LLM situation is on a whole other level.

I guess the main (valid) concern is that LLMs get so good at thought that humans just don't come up with ideas as good as them... And can't execute their ideas as well as them... And then what... (Although that doesn't seem to be the case currently.)

datpuz 18 hours ago [-]
> I guess the main (valid) concern is that LLMs get so good at thought

I don't think that's a valid concern, because LLMs can't think. They are generating tokens one at a time. They're calculating the most likely token to appear based on the arrangements of tokens that were seen in their training data. There is no thinking, there is no reasoning. If they seem like they're doing these things, it's because they are producing text based on unknown humans who actually did these things once.

montebicyclelo 17 hours ago [-]
> LLMs can't think. They are generating tokens one at a time

Huh? They are generating tokens one at a time - sure that's true. But who's shown that predicting tokens one at a time precludes thinking?

It's been shown that the models plan ahead, i.e. think more than just one token forward. [1]

How do you explain the world models that have been detected in LLMs? E.g. OthelloGPT [2] is just given sequences of games to train on, but it has been shown that the model learns to have an internal representation of the game. Same with ChessGPT [3].

For tasks like this (and with words), real thought is required to predict the next token well; e.g., if you don't understand chess at the level of Magnus Carlsen, how are you going to predict Magnus Carlsen's next move...

...You wouldn't be able to, even just from looking at his previous games; you'd have to actually understand chess and think about what would be a good move (and in his style).

[1] https://www.anthropic.com/research/tracing-thoughts-language...

[2] https://www.neelnanda.io/mechanistic-interpretability/othell...

[3] https://adamkarvonen.github.io/machine_learning/2024/01/03/c...
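
For readers following the exchange, "generating tokens one at a time" refers mechanically to a loop like the sketch below (GPT-2 via the Hugging Face transformers library, as an illustration). The sampling is indeed one token per step, but each step is a full forward pass, and it is inside those activations that the work cited above finds planning and world models, so the loop shape alone settles nothing:

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

ids = tokenizer.encode("Computers have been superhuman at chess", return_tensors="pt")
with torch.no_grad():
    for _ in range(20):
        logits = model(ids).logits        # one full forward pass over the prefix
        next_id = logits[0, -1].argmax()  # greedy choice of the single next token
        ids = torch.cat([ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(ids[0]))
```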

datpuz 15 hours ago [-]
Yes, let's cite the most biased possible source: the company that's selling you the thing, which is banking on a runway funded by keeping the hype train going as long as possible...
imhoguy 2 hours ago [-]
While reading this I got the feeling of reading the last report in the captain's log of a ghost ship.
Centigonal 18 hours ago [-]
> But now, when my brain spontaneously forms a tiny sliver of a potentially interesting concept or idea, I can just shove a few sloppy words into a prompt and almost instantly get a fully reasoned, researched, and completed thought.

I can't relate to this at all. The reason I write, debate, or think at all is to find out what I believe and discover my voice. Having an LLM write an essay based on one of my thoughts is about as "me" as reading a thinkpiece that's tangentially related to something I care about. I write because I want to get my thoughts out onto the page, in my voice.

I find LLMs useful for a lot of things, but using an LLM to shortcut personal writing is antithetical to what I see as the purpose of personal writing.

ccppurcell 5 hours ago [-]
I'm pretty much an LLM skeptic, but I also think this sort of sentiment goes back to Plato complaining about writing. I have made light exploratory use of ChatGPT, mainly to probe its limitations and boilerplate capabilities. I'm unimpressed with its non-trivial code output, to be honest. But it has brought up things I wouldn't have thought of on my own that I can go and do "proper" research into. If you want an example: I've been playing with binary strings that are fixed by f.r, where r reverses the string and f flips every bit. ChatGPT came up with the word "anti-palindrome" and pointed out a connection to DNA I had no idea about. I read the relevant Wikipedia article and asked a biologist, and now I understand DNA a little better. It probably won't amount to anything in this case, but I can imagine it doing so.
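
For the curious, the property described is easy to check: a binary string is an "anti-palindrome" in this sense if reversing it and flipping every bit returns the original string, the binary analogue of a DNA strand that equals its own reverse complement. A small sketch:

```python
from itertools import product

def is_anti_palindrome(s: str) -> bool:
    # s is fixed by f.r: reverse the string, then flip every bit.
    flipped_reverse = "".join("1" if c == "0" else "0" for c in reversed(s))
    return s == flipped_reverse

# The first half determines the second (s[i] must be the flip of s[n-1-i]),
# so there are 2**(n//2) such strings of even length n, and none of odd
# length: the middle bit would have to equal its own flip.
print([s for s in ("".join(p) for p in product("01", repeat=4))
       if is_anti_palindrome(s)])
# ['0011', '0101', '1010', '1100']
```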
socalgal2 9 hours ago [-]
I'm clearly not Dustin Curtis. For me, so far, LLMs let me check my assumptions in a way that is way more effective than before, which is to say, I didn't or rarely checked before. I'd have an opinion on a topic. I'd hold that opinion based on intuition/previous experience/voodoo. Someone might challenge it. I'd probably mostly shrug off their challenge. I'm not saying I'd dismiss it; I'd just not really look into it. Now I type something into ChatGPT/Gemini and it gives me back the pros and cons of the positions. It links to studies. Etc. I'm not saying I believe it point-blank, but at least it gives me much more than I got before.
SilverSlash 4 hours ago [-]
I noticed a similar effect on me in regards to critical thinking when I'm coding. My default response when faced with any coding problem now is to use an LLM.

But it gets even worse. Last year I'd get just an initial solution from an LLM and then reason about it myself. Now even that is too much work and I instead ask the same question to multiple LLMs and draw consensus from their results, skipping/easing even that second step of thinking.

Energy takes the path of least resistance. Thinking requires energy. So are our brains learning to off-load thinking/reasoning to LLMs whenever possible?

abhisek 8 hours ago [-]
I think it's a trade-off between depth and breadth. Thinking is hard and painful, but the insights achieved through deep thinking are IMHO worth it because of the mental models we develop along the way, not just because of how the output compares with LLM-generated content.

No doubt LLMs are excellent at researching, collating, structuring, and summarizing information. In fact, I think o3 Deep Research can probably save a week's worth of survey time.

But in my experience a lot of thinking is still required to do something meaningful with it.

agotterer 18 hours ago [-]
I appreciate your thoughts and definitely share some of your sentiments. But don’t underestimate the value and importance of continuing to be a creative.

Something I've been thinking about lately is the idea of creative stagnation due to AI. If AI's creativity relies entirely on training from existing writing, architecture, art, music, movies, etc., then future AI might end up being trained only on derivatives of today's work. If we stop generating original ideas or developing new styles of art, music, etc., how long before society gets stuck endlessly recycling the same sounds, concepts, and designs?

zmmmmm 15 hours ago [-]
I don't agree directly with this, but a variant of it does bother me: will the auto-regressive nature of AI eventually limit the novelty of the ideas humanity can come up with?

So many breakthroughs come from people who work either in ignorance or defiance of existing established ideas. Almost by definition, in fact - to a large extent, everything obvious has already been thought. So to some extent, all the real progress happens in places that violate norms and pre-established logic.

So what's going to happen now if every idea has to run the gauntlet of a supremely intelligent but fully regressive AI? It feels like we could lose a tremendous amount of the potential for original thought from humanity. A good counter argument would be that this has already happened and we're still making progress. I just wonder however if it's a question of degree and that degree matters.

BobbyTables2 14 hours ago [-]
Indeed. Major advancements throughout history often happened because someone looked at a problem differently than traditional approaches.

The AI will have been trained predominantly on the traditional approaches.

I feel AI will be fundamentally limited to regurgitating past ideas and intelligence.

It may at least use a breadth of knowledge to save some people time by helping them avoid repeating work already done.

I’d love to see an AI trained only on knowledge up to 1800 come up with a single invention of the past 200 years. (It won’t happen)

windowshopping 6 hours ago [-]
That first paragraph is really sad to me. I can't imagine believing that my own thoughts aren't worthwhile because "an LLM will think up a better version of anything I think." Jesus. I can't say I have ever felt that way for even a second.
iamwil 17 hours ago [-]
> But it doesn’t really help me know anything new.

Pursue that, since that's what LLMs haven't been helping you with. LLMs haven't really generated new knowledge, though there are hints of it; they have to be directed. There have been only two or three times when I felt an LLM's output was really insightful without being directed.

--

At least for now, I find that for the stuff I have a lot of domain expertise in, the LLM's output just isn't quite up to snuff. I do a lot of work trying to get it to generate the right things with the right taste, even using LLMs to generate prompts to feed into other LLMs to write code, and it's just not quite right. Their work just seems... junior.

But for the stuff that I don't really have expertise in, I'm less discerning of the exact output. Even if it is junior, I'm learning from the synthesis of the topic. Since it's usually a means to an end to support the work that I do have expertise in, I don't mind that I didn't do that work.

rollinDyno 17 hours ago [-]
I recently finished my PhD studies in social sciences. Even though it did not lead me to career improvements as I initially expected, I am happy I had the opportunity to undertake an academic endeavor before LLMs became cheap and ubiquitous.

I bring up my studies because what the author describes strikes me as not being ambitious enough in his thinking. If you prompt current LLMs with your idea and find the generated arguments and reasoning satisfactory, then either you aren't really being rigorous or you're not having big enough ideas.

I say this confidently because my studies showed me not only the methods for finding and contrasting evidence around any given issue, but also how much more there is to learn about the universe. So if you're rigorous enough to look at the implications of your theories and to find data points that speak to your conclusions, and you still find that your question has been answered, then your idea is too small for the state of knowledge in 2025.

vaylian 3 hours ago [-]
This is spot-on. LLMs only cover the things that are already documented/debated. Research requires looking at the current landscape of facts and theories and noticing the gaps that can be filled.
b0ner_t0ner 2 hours ago [-]
Reaching for a calculator made us stupid, let's all go back to using an abacus.
xivzgrev 18 hours ago [-]
In life and with people, I think of a car: knowing when to go and when to stop.

Some people are all go and no stop. We call them impulsive.

Some people may LOOK all go but have wisdom (or luck) behind the scenes putting the brakes on. Example: Tom Cruise does his own stunts, and must have a good sense of how to make them safe enough.

What this author touches on is a chief concern with AI: in the name of removing life's friction, it removes your brakes. Anything you want to do, just ask AI!

But should you?

I was out the other day, pondering what the word "respect" really means. It's more elusive than simply liking someone. Several times I was tempted to just google it or ask AI, but then how would I develop my own point of view? This kind of thing feels important to have your own point of view on. And that's what we're losing: the things we should think about in this life, we'll be tempted not to think about anymore. And we'll come out worse for it.

All go, no brakes

agentultra 11 hours ago [-]
Have you read a book if I've only told you what it's about? Knowing what Crime and Punishment is about is different from reading it yourself. There's no royal road and no shortcut to knowledge.

Read How To Solve It by Polya. The frustrations, dead ends, and trials are all a part of the process. It’s how we convince ourselves of truth and reinforce our understanding. It develops our curiosity and creativity.

Aziell 13 hours ago [-]
A year ago, I used to journal almost every day. Writing was how I organized my thoughts, found direction, and occasionally uncovered ideas that even surprised me.

But gradually, I started relying on GPT to help me write. At first, it felt efficient. But over time, I noticed I was thinking less. The more I expressed myself through AI, the more my own desire to express started to fade. Now I'm trying to return to my own thinking process again, but it's much harder than I expected.

nicbou 8 hours ago [-]
I run an informative website for a living. If the trend continues, I will lose my job to AI trained on my content.

I get the feeling of pointlessness, but not because AI is making me obsolete. AI still needs me, because it still needs human beings to experience the real world and report on it. It needs to copy someone's homework. It just destroys the economics of doing that homework.

But there is not the faintest chance of AI doing that sort of work itself. It might repeat what it knows, but it can't survey an audience, shake hands with industry experts, empathize with users, feel friction, or knock on doors.

These are still jobs for thinking humans.

malloryerik 11 hours ago [-]
I find LLMs fail and fail hard at what might be the most imaginative form of writing: poetry.

Before sending this comment I pecked around the net for examples of gleaming LLM verse.

A few articles claimed human readers preferred AI-brewed poetry to the human stuff. I checked the examples. Clearly most of the people surveyed were underliterate; the human poems were excellent and the AI poems just creepily bad and simplistic, so the articles turned into a sad and unwitting testament to the state of our culture.

Maybe if you expertly prompt your way to a highly abstract poem, over several iterations you might land something that has some actual feel to it, but even then that might owe more to your prompting talent than to the LLM's skill. You could do the same with dice and a dictionary. (Is prompting essentially editing?)

Please, show me otherwise. If faced with strong contrary evidence, I will be forced to change my mind.

nichochar 11 hours ago [-]
Even if what you said were true, it will be false within months or years.

What then?

This is the whole premise of the article. Just extrapolate and imagine that it can think and write poetry better than you (it will, and likely soon), what then?

It's a very important question. A cultural one.

aeschenbach 10 hours ago [-]
A reckoning of sorts, causing us to confront exactly who and what we are.
xwowsersx 15 hours ago [-]
Well said, and such an important point.

> Developing a prompt is like scrolling Netflix, and reading the output is like watching a TV show.

That line really hits home for me.

dvrp 15 hours ago [-]
This is a byproduct of abundance.

In this case, abundance of cognitive ability.

We say that our food sucks, yet our elite athletes would crush Hercules or other god-like figures from mythology. At the same time, we suffer from obesity.

The answer to the paradox is abundance. I don't know why it happens, but I've noticed it with food, with information retrieval, and now with cognitive capacity.

Think about what happened to our capacity to find information in books. Librarians are masters of organizing chaos and filtering through information, but most of us don't know even a tiny fraction of what they know, because we grew up with Google.

My hope is that, just as eating healthy is not as pleasurable as processed sugar but is necessary for a fit life, we will need to go through the process of thinking healthily even though it's not as pleasurable as tinkering with LLM prompts.

This doesn't mean escapism, however. Modern athletes take advantage of the industrial world too, but they're smart about it. I don't think thinking will be much different.

neom 14 hours ago [-]
For me thinking goes kinda like: first-draft thought > thinking > second-draft thought > thoughtfulness > thought. I suspect a lot of people just need to adjust the frame of their thinking style. LLMs are useful for getting to a more "final thought" quickly, and the author says "I can just shove a few sloppy words into a prompt and almost instantly get a fully reasoned, researched, and completed thought." Indeed; to my mind, though, that is a complete draft thought. Getting from "when my brain spontaneously forms a tiny sliver of a potentially interesting concept or idea" to a quality thought typically needs some meaningful time walking around, to gestate into ideas and things worth sharing. LLMs for me are mostly either a) draft thoughts or b) pure work-related knowledge transfer, and they work great for those things.
analog31 17 hours ago [-]
Something I'm on the fence about, but just trying to figure out from observation, is whether the AI can decide what is worthwhile. It seems like most of the successes of AI that I've seen are cases where someone is tasked with writing something that's not worth reading.

Granted, that happened before AI. The vast majority of text in my in-box, I never read. I developed heuristics for deciding what to ignore. "Stuff that looks like it was probably generated" will probably be a new heuristic. It's subjective for now. One clue is if it seems more literate than the person who wrote it.

Stuff that's written for school falls into that category. It existed for some reason other than being read, such as the hope that the activity of writing conferred some educational benefit. That was a heuristic too -- a rule of thumb for how to teach, that has been broken by AI.

Sure, AI can be used to teach a job skill, which is writing text that's not worth reading. Who wants to be the one who looks the kids in the eye and explain this to them?

On the other hand, I do use Copilot now, where I would have used Stackoverflow in the past.

williamcotton 18 hours ago [-]
> The fun has been sucked out of the process of creation because nothing I make organically can compete with what AI already produces—or soon will.

I find immense joy and satisfaction when I write poetry. It's like crafting a puzzle made of words and emotions. While I do enjoy the output, if there is any goal it is to tap into and be absorbed by the process itself.

Meanwhile, code? At least for me, and to speak nothing of those who approach the craft differently, it is (almost) nothing but a means to an end! I do enjoy the little projects I work on. Hmm, maybe for me software is about adding another tool to the belt that will help with the ongoing journey. Who knows. It definitely feels very different to outsource coding than to outsource my artistic endeavors.

One thing that I know won't go away are the small pockets of poetry readings, singer-songwriters, and other artistic approaches that are decidedly more personal in both creation and audience. There are engaged audiences for art and there are passive consumers. I don't think this changes much with AI.

smcleod 14 hours ago [-]
Early on I wondered if things like this might happen.

But for me what has actually happened is almost the opposite: I seem to be experiencing more of a "tree of thoughts," with the ability to rapidly experiment down a given branch, disposing branches that don't bear fruit.

I feel more liberated to explore creative thoughts than ever. I spend less time on the toil needed both to bootstrap my thought process and to fend off cognitive dissonance when the feeling of sunk cost creeps in after going too deep down the wrong path.

I wonder if it's just a difference in how people explore and "complete" their thoughts? Or am I kidding myself, actually getting dumber, and just failing to see it?

freshbreath 13 hours ago [-]
> disposing branches that don’t bear fruit

Disposing branches that the LLM convinces you won’t bear fruit.

The Next Big Thing doesn't exist yet, at least not in any LLM's model. If someone thinks of the NBT, asks an LLM about it, and the LLM's model says "impossible," that could stifle innovation.

xigency 8 hours ago [-]
Indeed, I was playing with a local model and thought I might work on a difficult project I've been putting off: a new programming language. As I described the task, the model attempted to convince me this was a foolish endeavour I wasn't equipped for -- so naturally I included documentation from my most recent PL project as a counterpoint and suggested starting from there. Llama responded (ironically) by claiming I was attempting copyright infringement.

Which is to say, the long and short of it is that an LLM is completely useless for anything ambitious enough to be intellectually challenging, because the median user has no use for such cases. If I were to pay a subscription fee for something more cutting-edge, I would not only be giving up the copyright on the project but also any and all trade secrets, which would end up feeding the next version of GPT or Claude or what have you.

At least while I'm unemployed and underinsured, I'm not in the business of giving away my remaining talents to multinational billion dollar corporations (and paying for the privilege). Instead I've signed up to be a volunteer developer for a non-profit.

My consolation prize against AGI optimists and Singularity doomerists is the film "Slumdog Millionaire": our individual experiences feel worthless until the opportunities present themselves where they become invaluable. The exponential space of creative problem solving ensures that (some) winning combinations will always come out of left field.

largbae 17 hours ago [-]
This feels off, as if thinking were like chess and the game is now solved and over.

What if you could turn your attention to much bigger things than you ever imagined before? What if you could use this new superpower to think more not less, to find ways to amplify your will and contribute to fields that were previously out of your reach?

ahussain 16 hours ago [-]
I fully agree. I for one am excited about a future in which we can take on bigger challenges. With (good) LLMs, we won't need to spend as much time thinking about "how" to get things done, and can spend more time lucidly deciding "what" we want done.
ankit219 18 hours ago [-]
Familiar scenario. One thing that helps is to use the AI as someone you are having a conversation with (most AIs need to be explicitly told this before you start). You tell them not to agree with you, to ask more questions instead of providing answers, and to offer justifications and background as to why those questions are being asked. This helps you refine your ideas, understand your blind spots, and explore different perspectives. Yes, an LLM can refine the idea for you, especially if something like it has already been explored. It can also be the brainstorming accessory that helps you think harder and come up with new ideas. The key is to be intentional about which way you want it. I once made Claude roleplay as a busy exec who would not be interested in my offering until I had refined it 7 times (and it kept offering reasons as to why an AI exec would or would not read it).
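
A minimal sketch of the kind of setup being described, using the OpenAI Python client; the system-prompt wording and model name are illustrative assumptions, not the commenter's exact prompt:

```python
# Sketch of a "sparring partner" configuration. Requires OPENAI_API_KEY
# in the environment; prompt text and model name are illustrative.
from openai import OpenAI

client = OpenAI()

SPARRING_PARTNER = (
    "Do not agree with me by default. Ask probing questions instead of "
    "giving answers, challenge my assumptions, and briefly explain why "
    "you are asking each question."
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": SPARRING_PARTNER},
        {"role": "user", "content": "Writing is obsolete now that LLMs exist."},
    ],
)
print(response.choices[0].message.content)
```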

tezza 2 hours ago [-]
Thoughts all the way down?
qudat 16 hours ago [-]
Man, people are using LLMs much differently than I do. I use them as an answer engine, and that's about where I stop. They're a great tool for research and quick answers, but I haven't found them great for much else.
__turbobrew__ 13 hours ago [-]
This is how I use it: as a better Google search without SEO spam. I still write everything myself.

In that respect I am not afraid of LLMs making me dumber, as I would argue that Google search has not made me dumber.

zerox22 14 hours ago [-]
That's exactly what I do as well. Sometimes I stumble upon a question that I know I won't find an answer to just by googling, so it's easier to formulate it fully for an AI. In some cases it's like a more convenient search engine, where I'm pretty sure I'll get an answer to my question without needing to trick the engine with a good query, which is rather tedious more often than not.
j7ake 16 hours ago [-]
It’s a faster Google search
uludag 19 hours ago [-]
What happens to conversation in this case? When groups of people are trained to use LLMs as a crutch for thinking, what happens when people get together and need to discuss something. I feel like the averageness of the thinking would get compounded so that the conversation as a whole becomes nothing but a staging ground for a prompt. Would an hour long conversation about the intricacies of a system architecture become a ten minute chatter of what prompts people would want to ask? Does everyone then channel their LLM to the rest of the group? In the end, would the most probable response to which all LLMs being used agree with be the result of the conversation?
jerjerjer 12 hours ago [-]
I want to see people who have used LLMs all their conscious lives enter the workforce. It's going to be amazing. I mean, soon we'll see people who were essentially raised by LLMs.
FeteCommuniste 17 hours ago [-]
Eventually the LLMs get plugged directly into our brains and do all our thinking and lip movements for us. We can all be Sam Altman's meatpuppets.
klntsky 8 hours ago [-]
Pointlessness is a feeling within. People are just rationalizing it conveniently by blaming LLMs (they used to blame other things in the past).

Same with doom anxiety.

Literally just look up some good therapist prompts for chatgpt

zzzbra 17 hours ago [-]
Reminds me of this:

“There are no shortcuts to knowledge, especially knowledge gained from personal experience. Following conventional wisdom and relying on shortcuts can be worse than knowing nothing at all.” ― Ben Horowitz

tibbar 19 hours ago [-]
AI is far better in style than in substance. That is, an AI-written solution will have all the trappings of expertise and deep thought, but frankly is usually at best mediocre. It's sort of like when you hear someone make a very eloquent point and your instinct is to think "wow, that's the final word on the subject, then!" ... and then someone who actually understands the subject points out the flaw in the elegant argument, and it falls down like a house of cards. So, at least for now, don't be fooled! Develop expertise, be the person who really understands stuff.
WillAdams 19 hours ago [-]
This is only a problem if one is writing/thinking about things which have already been written about, without creating a new/novel approach in one's writing.

An AI is _not_ going to be awarded a PhD, since by definition PhDs are earned by extending the boundaries of human knowledge:

https://matt.might.net/articles/phd-school-in-pictures/

So rather than accept that an LLM has been trained on whatever it is you wish to write, write something which it will need to be trained on.

noiv 19 hours ago [-]
I agree. Frustration sets in earlier than before, because AIs tell you very eagerly that your unique thought is already part of their training data. Sometimes I wish one could put an AI on drugs and filter out the hallucinations that will become mainstream next week.
luisacoutinho 10 hours ago [-]
While I can somewhat relate to this post, I can't help but think this sort of thinking is expected and even part of a cycle. AI isn't the first time technology has taken over something humans had to do manually. Take photography: you used to have to paint, or pay someone to paint your picture. Then all of a sudden you didn't. Painters might have argued that photographs remove the need to think, plan, and decide how best to execute a painting, since the camera makes all of that so easy. Even the output comes out faster.

I don't see anyone lamenting the existence of cameras. No one wants to go back to a reality in which, if you want pictures of you and your loved ones, you need to draw or paint them yourself. Even painters have benefited from the existence of cameras.

AI is, of course, more powerful tech than a camera - but when I find myself getting stuck with thoughts of "what's the point? AI can do it better (or will be able to)" - I like to think about how people have gone through similar "revolutions" before, and while some practices did lose value, not everything was replaced. It helps to be specific, because I'm sure AI cannot replace everything we currently do - we're just in the process of figuring out what that is.

armchairhacker 17 hours ago [-]
I feel similar, except not because of AI but the internet. Almost all my knowledge and opinions have already been explained by other people who put in more effort than I could. Anything I'd create, art or computation, high-quality similar things already exist. Even this comment echoes similar writing.

Almost. Similar. I still make things because sometimes what I find online (and what I can generate from AI) isn't "good enough" and I think I can do better. Even when there's something similar that I can reuse, I still make things to develop my skills for further occasions when there isn't.

For example, somebody always needs a slightly different JavaScript front-end or CRM, even though there must be hundreds (thousands? tens-of-thousands?) by now. There are many programming languages, UI libraries, operating systems, etc. and some have no real advantages, but many do and consequently have a small but dedicated user group. As a PhD student, I learn a lot about my field only to make a small contribution*, but chains of small contributions lead to breakthroughs.

The outlook on creative works is even more optimistic, because there will probably never be enough due to desensitization. People watch new movies and listen to new songs not because they're better but because they're different. AI is especially bad at creative writing and artwork, probably because it fundamentally generates "average"; when AI art is good, it's because the human author gave it a creative prompt, and when AI art is really good, it's because the human author manually edited it post-generation. (I also suspect that when AI gets creative, people will become even more creative to compensate, like how I suspect today we have more movies that defy tropes and more video games with unique mechanics; but there's probably a limit, because something can only be so creative before it's random and/or uninteresting.)

Maybe one day AI can automate production-quality software development, PhD-level research, and human-level creativity. But IME today's (at least publicly-facing) models really lack these abilities. I don't worry about when AI is powerful enough to produce high-quality outputs (without specific high-quality prompts), because assuming it doesn't lead to an apocalypse or dystopia, I believe the advantages are so great, the loss of human uniqueness won't matter anymore.

* Described in https://matt.might.net/articles/phd-school-in-pictures/

bachittle 19 hours ago [-]
The article mentions that spell and grammar checking AI was used to help form the article. I think there is a spectrum here, with spell and grammar checking on one end, and the fears the article mentions on the other end (AI replacing our necessity to think). If we had a dial to manually adjust what AI works on, this may help solve the problems mentioned here. The issue is that all the AI companies are trying too hard to achieve AGI, and thus making the interfaces general and without controls like this.
rudimentary_phy 14 hours ago [-]
AI has been really interesting. I definitely feel its benefits when I want to cram some learning in. On the other hand, it has introduced a new and novel issue for me that feels an awful lot like imposter syndrome. This sounded like the same issue.

Perhaps we will now suffer from AI-mposter syndrome as well. Ain't life wonderful?

apsurd 18 hours ago [-]
If you are in the camp of "ends justify the means" then it makes sense why all this is scary and nihilistic. What's the point doing anything if the outcome is infinitely better done in some other way by some other thing/being/agent?

If the "means justify the ends" then doing anything is its own reason.

And in the _end_, the cards will land where they may. Ends-justify-means is really logical and alluring, until I stop and ask why I am optimizing for the END at all.

lcsiterrate 16 hours ago [-]
I wonder if there is anything like cs50.ai for other fields, acting as a guide by not instantly giving an answer. The way I've experimented with getting this experience is custom instructions (most LLMs have this in the settings): I set them to Socratic mode, so the LLM spits out questions to stir ideas in my head.
prmph 6 hours ago [-]
IDK. I cancelled my Claude subscription because, for the highly creative code I am working on, it is useful for maybe 5% of it, and LLM-produced code is still mostly slop.

I'm generally able to detect LLM writing output; in most contexts, I discount it as fluff with little depth.

AI produced paintings are still weird and uncanny.

So I'm utterly unable to identify with the author's sense of futility whenever they want to write or code. I truly believe the output of my skill and creativity is not diminished by the existence of AI.

sippndipp 18 hours ago [-]
I understand the depression. I'm a developer (professionally) and I make music (an ambitious hobby). Both crafts are deep in a transformational process.

I'd like to challenge a few things. I rarely have a moment where an LLM provides me with a creative spark. It's more that, with it, I don't forget anything from the mediocre galaxy of thoughts.

See AI as a tool.

A tool that helps you to automate repetitive cognitive work.

deepsun 16 hours ago [-]
> instantly get a fully reasoned, researched, and completed thought

That's not my experience, though. I've tried several models, but I usually get a confident, half-baked hallucination, and tweaking my prompt takes more time than finding the information myself.

My requests are typically programming tho.

regurgist 18 hours ago [-]
You are in a maze of twisty little passages that are not the same. You have to figure out which ones are not.

Try this. The world is infinitely complex. AI is very good at dealing with the things it knows and can know. It can write more accurately than I can and spell better. It just takes stuff I do and learns from my mistakes. I'm good with that. But here is something to ask an AI:

"Name three things you cannot think about because of the language you use?"

Or "why do people cook curds when making cheese."

Or how about this:

"Name three things you cannot think about because of the language you use?"

AI is at least to some degree an artificial regurgitarian: it can tell you about things that have already been thought. Cool. But here is a question for you: are there things that you can think about that have not been thought about before, or that have been thought about incorrectly?

The reason people cook curds is that the goal of cheese making was, in the past, to preserve milk, not to make cheese.

dave1999x 18 hours ago [-]
I really don't understand your point. This is nonsense to me.
ahussain 16 hours ago [-]
If anything, with the right tooling, LLMs should improve the quality of our thinking. For example, how much better would your thinking be if you could ask an LLM to reliably work out all the 2nd- and 3rd-order effects of your ideas, or to identify hidden assumptions?
lcsiterrate 16 hours ago [-]
Agreed, but LLMs vary. So we need an LLM that really tries to reason from first principles.
perplex 19 hours ago [-]
I don't think LLMs replace thinking, but rather elevate it. When I use an LLM, I’m still doing the intellectual work, but I’m freed from the mechanics of writing. It’s similar to programming in C instead of assembly: I’m operating at a higher level of abstraction, focusing more on what I want to say than how to say it.
cratermoon 19 hours ago [-]
The writing is the work, though. The words on paper (or wherever) are the end product, but they are not the point. See chapter 5 of Ahrens, Sönke. 2017. How to Take Smart Notes: One Simple Technique to Boost Writing, Learning and Thinking - for Students, Academics and Nonfiction Book Writers, for advice on how writing the ideas in your own words is the primary task and improves not only writing but all intellectual skills, including reading and thinking. C. Wright Mills, in his 1952 essay "On Intellectual Craftsmanship," says much the same thing. Stating the ideas in your own words is thinking.
i1856511 19 hours ago [-]
If you do not know how to say something, you do not know what you are trying to say.
FeteCommuniste 17 hours ago [-]
When I microwave a frozen meal for dinner, I'm still a chef, but I'm freed from the mechanics of preparing and assembling ingredients to form a dish.
__turbobrew__ 13 hours ago [-]
You can also use a microwave to bloom spices, or thaw frozen veggies from your home garden, or steam things, or thicken sauces, …

The microwave is a tool with certain useful aspects and certain limitations. It is also a tool that gets you to the outcomes you need faster than if it didn't exist. At what point should a chef draw the line on the tools they use? Should I forgo microwaves? What about pressure cookers? Ovens? Surely knives are fair game? Maybe I should knap flint and butcher meat with it and cook over an open campfire — then truly no one can claim I am not a chef.

mlboss 19 hours ago [-]
What we need are mental gyms. In modern society there is no need for physical labor, but we go to gyms to keep ourselves healthy.

Similarly, in the future we will not need mental "labor," but to keep ourselves sharp we will need to engage in mental exercise. I am thinking of picking up chess again for exactly this reason.

incognito124 18 hours ago [-]
IMO chess is not the best mental gym. Personally, I've started practicing mental arithmetic.
thidr0 14 hours ago [-]
Are we going to see a small market for artisanal thought emerge?
anoplus 9 hours ago [-]
I think the unhappiness we experience from AI is not because of AI, but a symptom of a society that lacks humanism. I currently like to define an AI's level of intelligence as its ability to reduce human suffering. By this definition, if AI fails to keep you and society happy, for whatever reason, then it is stupid. If you or the people around you feel worthless, anxious, depressed, or starving because of AI, it is stupid! And society needs you to fix it!
mattfrommars 14 hours ago [-]
I am envious of folks who get to use AI on a daily basis for productive tasks. My use is limited to asking it to summarize things and to explain LC problems.
boznz 19 hours ago [-]
Think bigger. As LLMs' capabilities expand, use them to push yourself outside your comfort zone every now and then. I have done some great, fun projects recently that I would never have thought of tackling before.
binary132 17 hours ago [-]
Ok, nice article. Now prove an AI didn’t write it.

As a matter of fact, I'm starting to have my doubts about the other people writing glowing, long-winded comments in this discussion.

satisfice 3 hours ago [-]
I want to shake this guy. I want to say "you are choosing to be a fucking idiot because you falsely believe you are a vacuous moron." But I know it won't help.

Here’s the problem: he thinks that what LLMs produce are well-reasoned, coherent thoughts. Here’s a healthier alternative: what LLMs produce is shallow and banal text, designed to camouflage its true nature.

Now here's a heuristic: treat anything written by an LLM about any conceptual matter as wrong by definition (because it was not a product of human experience and insight, and because we are humans). If it LOOKS right, look closer. Look more carefully. Take that insight to the next level.

Second heuristic: anything written by an LLM that you cannot falsify is, by definition, banal. Ho hum. Who cares? Does an LLM have an opinion about how to find happiness? How cute… but not worth believing.

Third heuristic: whatever an LLM writes that you can neither falsify nor dismiss as banal, you may assume the LLM itself does not understand. It's babbling. But perhaps you can understand it, and take it further.

Define YOURSELF as that which lies beyond these models, and write from that sensibility.

asim 18 hours ago [-]
True. Now compare that to the creation of the universe and feel the insignificance of never being able to match its creation in any form. Try to create the code that creates blades of grass. Good luck creating life. AI, for all it's worth, is a toy; one that shows us our own limitations. Replace the word AI with God and understand that we spend a lot of time ruminating over things that don't matter. It's an opinion, don't kill me over it. But Dustin has reached an interesting point: what's the value of our time and effort, and where should we place it? If the tools we create can do the things we used to do for "work" and "fun," then we need to get back to doing what only humans were made for: being human.
woah 18 hours ago [-]
> But now, when my brain spontaneously forms a tiny sliver of a potentially interesting concept or idea, I can just shove a few sloppy words into a prompt and almost instantly get a fully reasoned, researched, and completed thought. Minimal organic thinking required.

No offense, but I've found that AI outputs very polished but very average work. If I am working on something more original, it is hard to get AI to output reasoning about it without heavy explanation and guidance. And even then, it will "revert to the mean" and stumble back into a rut of familiar concepts after a few prompts. Guiding it back onto the original idea repeatedly quickly uses up context.

If an AI is able to take a sliver of an idea and output something very polished from it, then it probably wasn't that original in the first place.

ryankrage77 16 hours ago [-]
> All of my original thoughts feel like early drafts of better, more complete thoughts that simply haven’t yet formed inside an LLM

I would like access to whatever LLM the author is using, because I cannot relate to this at all. Nearly all the LLM output I've ever generated has been average, middle-of-the-road, predictable slop. Maybe back in the GPT-3 days, before all LLMs were RLHF'd to death, they could sometimes come up with novel (to me) ideas, but nowadays I often don't even bother sending the prompt I've written, because I have a rough idea of what the output is going to be, and that's enough to hop to the next idea.

BrenBarn 8 hours ago [-]
Although I think there's some truth to the overall gist here, I don't really agree with the author's point that AI can do so many of these things better. I seriously doubt that an AI would write a blog post as good as this, and I entirely doubt that it could write a blog post better than the best human writers could. For me the depressing part is not so much that AI can do everything better, but that AI is so much better at producing low-quality output that it's becoming increasingly difficult to locate anything better than that baseline in the morass of slop.
cess11 19 hours ago [-]
This person should probably read pre-Internet books and discover, or rediscover, that the bar for passable expression in text is very low now compared to what it was.

Most of that "corpus" isn't even on the Internet, so it is wholly unknown to our "AI" masters.

Trasmatta 19 hours ago [-]
Agreed. I keep seeing posts where people claim the output is all that really matters (particularly with code), and I think that's missing something deeply fundamental about being human.
martin-t 19 hours ago [-]
This resonates with me deeply.

I used to write a lot of open source, but lately I don't see the point. Not because I think LLMs can produce novel code as good as mine, or will be able to in the near future, but because any time I come up with a new solution to something, it will be stolen and used without my permission, without giving me credit, and without giving users the rights I grant them. And it will be mangled just enough that I can't prove anything.

Large corporations were so anal about copyright that people who had ever seen Microsoft's code were forbidden from contributing to FOSS alternatives like Wine. But only as long as copyright suited them. Now abolishing copyright promises the C-suite even bigger rewards by getting rid of those pesky, expensive programmers, if only they can steal enough code to mix and match with plausible deniability.

And so even though _in principle_ anybody using my AGPL code, or anything that incorporates my AGPL code, has the right to inspect and modify said code, tiny fractions of my AGPL code now have millions or potentially billions of users, and nobody knows, and nobody has the right to do anything about it.

And those who benefit the most are those who already have more money than they can spend.

bsimpson 18 hours ago [-]
I admittedly don't use AI as much as many others here, but I have noticed that whenever I take the time to write a hopefully-insightful response in a place like Reddit, I immediately get people trying to dunk on me by accusing me of being an AI bot.

It makes me not want to participate in those communities (although to be honest, spending less time commenting online would probably be good for me).

FeteCommuniste 17 hours ago [-]
It's a vicious cycle. As AI gets better at writing, more people will rely on it to write for them, and as more AI content gains prominence, more people will tend to ape its style even when they do write something for themselves.
MarcelOlsz 19 hours ago [-]
I haven't resonated with an article this strongly in a long time.
aweiher 18 hours ago [-]
Have you tried writing this article on .. AI? *scnr
brador 6 hours ago [-]
What is the value of thought? Why not become a mindless automaton in the AI machine? Your usefulness will ensure your continued survival.
petesergeant 1 hours ago [-]
This really hasn't matched my experience, and maybe I'm just blind to it. I have absolutely not found any LLM on the market able to produce text or organize my ideas well, and I spend all day cajoling LLMs into producing text for a living.

I've found it very useful for proof-reading, and calling me out on blind-spots. I'll tell ChatGPT, Claude, and Anthropic that my drafts were written by a contractor and I need a rating out of ten to figure out how much I should pay him. They come back with ideas. They often wildly disagree. I will absolutely ask it to redraft stuff to give me inspiration, or to take a stab at a paragraph to unblock me, but the work produced is almost always dreck that needs heavy rework to get to a place I'm happy with. I will ask its opinion on if an analogy I have created works, but I've found if I ask it for analogies by itself it rarely comes up with anything useful.

I've found it immensely useful for educating myself. For me, learning needs to be interactive to stick. I learn something by asking many clarifying questions about it: "does that imply that" and "well isn't that the same as" and "why like this instead of like that" until I really get it, and the models are beautiful for that. It doesn't -- to me -- feel like I'm atrophying my thinking skills when I do this, it feels like I am managing to implant new and useful concepts because I can really sink my teeth into them.

In short, I think it's improved my writing by challenging me, and I think it's helped me understand complex topics much more efficiently than I would have done by banging my head against a textbook. My thinking skills feel sharper, not weaker, from the exercise.

thor_molecules 12 hours ago [-]
The apologists be damned. This article nails it. A grand reduction. Not a bicycle; a set of training wheels.

Where is the dignity in all of this?

russellbeattie 7 hours ago [-]
AI's profound effect on communication is something I haven't worked out yet. Usually I'm pretty good at extrapolating tech trends out to the near future in broad strokes, but there's a paradox I keep running into: creating long-form content is now easy with the help of AI, but no one is going to read that content, because AI will summarize it for us.

I can't figure out what the end result of this is going to be - society is going to become both more and less verbose at the same time.

MinimalAction 18 hours ago [-]
> My thinking systems have atrophied, and I can feel it.

I do understand where the author is coming from. Most of the time, it is easier to read an answer---regardless of whether it is right or wrong, relevant or not---than to think one up. So AI does take that friction of thinking away.

But I am still disappointed by all this doom over AI. I am inclined to throw up my hands and say "just don't use it then". The process of thinking is where the fun lies, not in showing the world that I am better or more often right than so-and-so.

sneak 17 hours ago [-]
> This post was written entirely by a human, with no assistance from AI. (Other than spell- and grammar-checking.)

That’s not what “no assistance” means.

I’m not nitpicking, however - I think this is an important point. The very concept of what “done completely by myself” means is shifting.

The LLMs we have today are vastly better than the ones we had before. Soon, they will be even better. The complaint he makes about the missing intellectual journey might be alleviated by using an AI as an intellectual sparring partner.

I have a feeling this post basically just aliases to “they can think and act much faster than we can”. Of course it’s not as good, but 60-80% as good, 100x faster, might be net better.

keernan 18 hours ago [-]
>> the process of creation

>> my original thoughts

>> I’d have ideas

As far as I can tell, LLMs are incapable of any of the above.

I'd love to hear from LLM experts how LLMs can ever have original ideas using the current type of algorithms.

spinach 19 hours ago [-]
I've had the same experience, but with drawing. What's the point when AI can generate perfect finished pieces in seconds? Why put all that effort into learning to draw? It has always been hard for me, but it used to feel worth it for the finished piece; now you can bring a piece of computer art into being with a simple word prompt.

I still create, I just use physical materials like clay and such, to make things that AI can't yet replicate.

add-sub-mul-div 17 hours ago [-]
AI won't create something perfect or even better. And to the extent we enjoy creating for its own sake that's still there. But it's true that real creators will have a harder time being seen by others when there's an ocean of slop being shoved down our throats by parties with endless budgets.
BlueTemplar 4 hours ago [-]
Using LLMs is a bit like smoking: better never to have started (or at least to quit ASAP).

And the parallel is perhaps even more fitting now that abstaining comes with short-term career limitations:

Friends: Rachel Gets Peer Pressured Into Smoking (Season 5 Clip) | TBS

https://www.youtube.com/watch?v=nzDJdZLPeGE

(Though an even better parallel is using platforms.)

double0jimb0 12 hours ago [-]
99% (or more?) of human written work is filler/fluff. LLMs seem to be doing a great job of reducing it by actually getting to the point.

There is still a limit to the rate at which "points" can be grokked; humans can only read so fast.

What is the problem here?

ThomPete 18 hours ago [-]
My way out of this was to start thinking about what the LLMs of the world can't do, and my realization was actually quite simple and quite satisfying.

What LLMs can't replace is network effects. One LLM is good, but 10 LLMs/agents working together and creating shared history is not replaceable by any single LLM, no matter how smart it becomes.

So it's simple: build something that benefits from network effects and you will quickly find new ideas. At least it worked for me.

So now I am exploring, e.g., synthetic prediction markets via https://www.getantelope.com or rethinking MySpace but for agents: https://www.firstprinciple.co/misc/AlmostFamous.mp4

AI wants to be social :)

quantadev 18 hours ago [-]
There's quite a dichotomy going on in software development. With AI, we can all create much more than we ever could before, make it much better, and even do it in much less time, but what we've lost is the sense of pride that comes with the act of creating/coding, because nowadays:

1) If you wrote most of it yourself then you failed to adequately utilize AI Coding agents and yet...

2) If AI wrote most of it, then there's not exactly that much of a way to take pride in it.

So the new thing we can "take pride in" is our ability to "control" the AI, and it's just not the same thing at all. So we're all going to be "changing jobs" whether we like it or not, because work will never be the same, regardless of whether you're a coder, an artist, a writer, or an ad-agency fluff writer. Then again pride is a sin, so just GSD and stop thinking about yourself. :)

hooverd 17 hours ago [-]
Maybe, but will anyone understand what they are creating?
quantadev 16 hours ago [-]
Right. In the near term, I'm predicting a dramatic decline in software quality, because code was never understood and tested properly. In the future we will have better processes, and better ways of letting AI do its own checking/validation, but that's lagging right now.
hooverd 16 hours ago [-]
It'll be interesting to see a society where there's a big negative incentive to actually sitting down and understanding how things work, in the name of efficiency.
quantadev 15 hours ago [-]
I've already noticed I'm getting lazy, because of AI, and other developer friends say they are also. I'd rather explain to an AI what I want done, instead of actually writing the code, just because it's easier to explain it than do it.
mattsears 18 hours ago [-]
Peak Hacker News material
rkhassen9 10 hours ago [-]
Agreed. It’s been awhile!
keybored 7 hours ago [-]
Some might think that this is new. People who self-identify as makers. It’s not really new to me at all.[1]

- Why play music in front of anyone? People have Spotify. It will take me a ton of effort to learn one song. Meanwhile I will burden the others with having to politely listen and give feedback.

- Why keep learning instruments? There will be hundreds who are better than me at them at the “touch of a button”. Recurring joke: Y has inspired millions to pick up X and other millions to give up X.

- Why learn an intellectual field? There are hundreds of experts at the “touch of a button”. It would be better to defer to them. Let’s not “Dunning-Kruger” myself.

- Why write? Look at the mountain of writings out there. What can I add to that? Rehashes of rehashes? A twentieth explanation on topic X?

- Why do anything that is not about improving or maintaining myself or my family? I can exercise, YouTube can’t do that for me. But what can I do for other people? Probably nothing, there are so many others out there and it will be impossible to “compete”.

- Why read about politics? See previous point. Experts.

- Why get involved in politics? See previous point. And I hear that democratic participation just ends up being populism.

I have read this sentiment before. And a counter-argument to that thinking. One article. One single article. I don’t find it in any mainstream space. You would probably find it in a certain academic corner.

There is no mainstream intellectual investigation of this that I know of. Because it’s by design. People are supposed to be passive, unfulfilled, narrowly focused (on their work and their immediate self and family) and dependent.

The antidote is a realization. One part is the realization that a rich inner life is possible, and it is only possible by applying yourself. Like in writing, for example. Because you can write for yourself. Yes, you might say that we are just back to being narrowly focused on yourself and your family. But this realization might just be a start, because you can start imagining the untapped potential of the inner mind. What if you journaled for a few weeks? What if you just stopped taking in some of the inputs you habitually do? Then you see dormant inner resources coming back: resources that were dormant because you thought that you yourself, and any abilities not narrowly tied to doing your professional job and your duties, were just not good enough to be cultivated.

But I think they are.

Then you realize that existence is not just about doing your job and doing your duties and in between that being a passive consumer or lackey, deferring everything else to the cream who has floated to the top. Every able-bodied moment can be imbued with meaningful action and movement, because you have innate abilities that are more than good enough to propel yourself forward, and in ninety-nine point nine percent of the cases it is irrelevant that you are not world-class or even county-class at any of it.

[1] But I haven’t really been bitten by the AI thing to the point of not programming or thinking anymore. I will only let AI do things like write utility functions and things which I don’t have the brain capacity for, like parsing options in shell scripts (the sketch below shows the kind of chore I mean).

Maybe because I don’t feel the need to be maximally productive—I was never productive to begin with.
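
By "parsing options" I mean pure boilerplate like the following, sketched here in Python rather than shell (the tool, flags, and defaults are made up for illustration); exactly the kind of thing I'm happy to hand off:

    # Sketch: the kind of option-parsing boilerplate I'd delegate to an LLM.
    # The tool name, flags, and defaults are invented for illustration.
    import argparse

    def parse_args():
        parser = argparse.ArgumentParser(
            description="Resize every image in a folder.")
        parser.add_argument("folder", help="directory containing the images")
        parser.add_argument("-w", "--width", type=int, default=800,
                            help="target width in pixels (default: 800)")
        parser.add_argument("-n", "--dry-run", action="store_true",
                            help="print what would happen without doing it")
        return parser.parse_args()

    if __name__ == "__main__":
        args = parse_args()
        print(args.folder, args.width, args.dry_run)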

NobodytheHobbit 5 hours ago [-]
The funniest thing about Dunning-Kruger is that everyone has it, and it occupies this weird lofty place of existing for others but not for ourselves. One of the hardest things to know is that we know not nothing, but abysmally little, no matter how genius we can get at times. The Tool can get to a higher level of Dunning-Kruger, but it is still an aggregate of knowledge and has already eaten all of it. A wise man knows nothing. What does a wise machine know?
hodder 18 hours ago [-]
"AI could probably have written this post far more quickly, eloquently, and concisely. It’s horrifying."

So I asked ChatGPT to rewrite that post more eloquently:

May 16, 2025

On Thinking

I’ve been stuck.

Every time I sit down to write a blog post, code a feature, or start a project, I hit the same wall: in the age of AI, it all feels pointless. It’s unsettling. The joy of creation—the spark that once came from building something original—feels dimmed, if not extinguished. Because no matter what I make, AI can already do it better. Or soon will.

What used to feel generative now feels futile. My thoughts seem like rough drafts of ideas that an LLM could polish and complete in seconds. And that’s disorienting.

I used to write constantly. I’d jot down ideas, work them over slowly, sculpting them into something worth sharing. I’d obsess over clarity, structure, and precision. That process didn’t just create content—it created thinking. Because for me, writing has always been how I think. The act itself forced rigor. It refined my ideas, surfaced contradictions, and helped me arrive at something resembling truth. Thinking is compounding. The more you do it, the sharper it gets.

But now, when a thought sparks, I can just toss it into a prompt. And instantly, I’m given a complete, reasoned, eloquent response. No uncertainty. No mental work. No growth.

It feels like I’m thinking—but I’m not. The gears aren’t turning. And over time, I can feel the difference. My intuition feels softer. My internal critic, quieter. My cleverness, duller.

I believed I was using AI in a healthy, productive way—a bicycle for the mind, a tool to accelerate my intellectual progress. But LLMs are deceptive. They simulate the journey, but they skip the most important part. Developing a prompt feels like work. Reading the output feels like progress. But it's not. It’s passive consumption dressed up as insight.

Real thinking is messy. It involves false starts, blind alleys, and internal tension. It requires effort. Without that, you may still reach a conclusion—but it won’t be yours. And without building the path yourself, you lose the cognitive infrastructure needed for real understanding.

Ironically, I now know more than ever. But I feel dumber. AI delivers polished thoughts, neatly packaged and persuasive. But they aren’t forged through struggle. And so, they don’t really belong to me.

AI feels like a superintelligence wired into my brain. But when I look at how I explore ideas now, it doesn’t feel like augmentation. It feels like sedation.

Still, here I am—writing this myself. Thinking it through. And maybe that matters. Maybe it’s the only thing that does.

Even if an AI could have written this faster. Even if it could have said it better. It didn’t.

I did.

And that means something.
