![](https://mander.xyz/pictrs/image/dbeda0de-d3fb-4fab-8703-3e52e72cb4db.jpeg)
Now that I think about it, children develop critical thinking at around the age of 10. Perhaps you are right. But the question remains: will LLMs develop such critical thinking on their own, or are we still missing something?
Wishing for my death or a World War. Either will do. Because FML or this world.
Is using authoritative sources foolproof? For example, is everything written on Wikipedia factually correct? I don’t believe so unless I actually check it. And what about Reddit or Stack Overflow? Can they be considered factually correct? To some extent, yes, but not completely. That is why most of these LLMs give such arbitrary answers: they extrapolate from information they have no way of knowing or understanding.
Why do you even think that? Don’t children ask questions? Don’t they try to find answers?
This is something I already mentioned previously. LLMs have no way of fact-checking, no measure of truth or falsity built in. During training, the model probably accepts every piece of text as true. This is very different from how our minds work. When faced with a piece of text, we have many ways to deal with it, ranging from accepting it as is, to going on the internet to verify it, to actually designing and conducting experiments to prove or disprove the claim. So yeah, what ChatGPT outputs is probably bullshit.
Of course, the solution would be to train ChatGPT on text labelled with some measure of truth. But LLMs need so much data that labelling it all would be extremely slow and expensive, and suddenly the fast-moving world of AI would screech almost to a halt, which would be unacceptable to the investors.
The material is too brittle. I am at the peak of a narrow stress–strain curve and then, snap, the material breaks.
So, Todd is denser than a neutron star.
Yeah, I am skeptical. What would be the energy expenditure of actually storing CO2 in those blocks, and what about transporting them? I have a feeling this is like carbon-capture plants: great for headlines, but not really a practical solution.
Should have used toxic glue if they ran out of the nontoxic one.
So much cheese wasted. It’s really tragic.
Yeah, I have to agree with you. For example, I would have no problem using a decently tested LLM for engineering, simply because engineering usually accounts for errors and applies appropriate safety factors to accommodate them. Sure, LLMs could get more accurate in the future, but I believe the error will only shrink asymptotically: the more accurate LLMs get, the harder it becomes to improve their accuracy further. There is always a price to pay, IMO.
That’s just collateral damage.
I am a backend dev, but this shit is why I have utmost respect for frontend devs, also because they know how to center a div.
Imagine simping for a billion-dollar corporation. I mean, I love Sony games, Uncharted, Horizon, Spider-Man, but I am not going to defend the company. Their lives must be really awesome if they stoop to simping for a corporation.
I felt that. First six months of my PhD, I downloaded about 1000 papers and read about 10-12.
Just because you had a thought, doesn’t mean you should write a TED talk on it.
Why am I not a coral? I would be dead by now.
This world is a scam. I studied hard, worked hard, and in the end all I am good for is being depressed and filled with a sense of futility and worthlessness.