AI: is it real or is it Memorex?

AI: is it real or is it Memorex? — 34 Comments

  1. @neo: We can still tell the difference – kind of. But soon, it may be impossible:

    Guaranteed.

    Once upon a time I could beat a computer chess program.

    Now no human can, even on a smartphone.

  2. A point that I ponder: is the worry about AI figuring out the “soul” part a concern about AI or a concern that if the human soul can be effectively mimicked, maybe there’s not a whole lot to it?

    In other words, is all our human uniqueness just an algorithm?

  3. Robocalls are certainly getting more convincing. Not mere recordings but holding actual conversations.

  4. Critical thinkers will see the simulations for what they are. I worry about those who never learned to be critical thinkers. The more the technologists among us, the faster we return to our pre-Gutenberg feudal ways.

  5. if the human soul can be effectively mimicked

    There is no test for “soul”. Movies are fake, and always have been, I don’t see why the actors cannot be replaced by machines. Much of everyday conversation is learned response to situations, I expect machines could do much of that too. I don’t know where we will end up in the new world, or what exactly might differentiate us from advanced machines except for materials and the vast number of biological processes going on in our bodies. The fact that we have evolved over billions of years may have made us hard to replicate from silicon scratch. Thought might be one of the easier things to accomplish.

  6. Discerning, creative, knowledgeable? What is a human? A basket of correlations with semantic primitives? Science cannot discern origin and expression.

  7. The tool set is preexisting. Is the system a mechanical or scripted intelligence? Is it artificial or anthropogenic? Automated, certainly. When does it matter? Does it matter?

  8. Crasey – You make an excellent point about the urge to return to our ‘pre-Gutenberg’ days. I was just now thinking about the logistics of possibly limiting myself to written articles for news, and the possibility of needing to shun TV whenever possible.

    But hey, AI can write news articles as well. At the very least, some news organizations have been busted for using AI to generate articles on more mundane topics; they were caught because AI has a tendency to generate fictional information, and people in the profession noticed. For example, this judge discovered lawyers who used AI to generate a legal brief citing fictional cases.

    https://www.vice.com/en/article/lawyers-used-ai-to-make-a-legal-brief-and-got-everything-wrong/

    On the other hand, I appreciate AI for its capacity to find books on a particular topic. I can ask it to generate a list of ‘books on economics that draw on Hayek and Strauss’ and it will generate one. Unfortunately, between 30% and 50% of that list will be fictional.

  9. @Chuck: The fact that we have evolved over billions of years may have made us hard to replicate from silicon scratch.

    The thing is AI is not starting from scratch.

    It’s starting from neural networks designed to mimic our own brains. Its knowledge starts from absorbing all of written human knowledge.

    AI is not limited to the 3 pounds of brain tissue that can fit into a human skull, nor to the modest 20 watts of power our brains require.

    Many of the advances in AI we have seen simply come from scaling up AI in size and power. Of course there are also advances in design.

    Soon AI will be able to improve itself. Then things get real interesting.

  10. I’m pretty sure the entire point of human existence is to generate AI that proves humans were generated by AI.

  11. mimic our own brains

    Which may be one of the simpler systems. There are more complicated things, fetal development for one. Making people out of the materials at hand starting with a single cell is pretty miraculous. I do think machines are better adapted for some environments, space travel for example. If interstellar travel is ever a thing, it will probably be done by machines.

    I suppose another way to look at it, is that we are made of nano-machines — cells. At the cellular/bacterial level, trial and error selection is going on at an enormous scale. The air is saturated with viruses. And it all started with the simplest materials.

  12. Thanks Liam. The use of AI, to me, is all about trust. I can trust my abacus, my slide rule and my calculator to produce accurate computations. I can trust the source documents I review in the library, but I can’t trust AI. Maybe someday, but for now I have to know enough to identify when it’s wrong or BS’ing (hallucinating). If I lack that level of knowledge, I can easily be led to whatever the owner of the AI wants me to believe. The printing press liberated us from information dependency and democratized the exchange of knowledge. AI threatens all of that and is leading us back to dependency. Having said that, it has amazing potential if we use it properly.

  13. Gregory Harper:

    Elon Musk worries that humans are just the bootloader for AI.

    Back in the old-timey days of computers like the PDP-11, you had to run a small program called the bootloader so the computer could load the real operating system, then run real programs.

    Today the bootloader is still there but the computer runs the bootloader itself when you turn it on.

    In either case after the bootloader runs, it no longer matters.

    I can’t find the flaw in this metaphor.

  14. I’ve watched several videos on Veo 3 today. The best metaphor I can think of for the change this technology brings to media: now, to make something like a 30-second television ad for a product, all that is needed is this software, a scriptwriter, and an editor, and even those two roles can be one person. No more actors. No more cameramen. No more director. No more key grip, craft services, crane operator, voice talent, costumer, soundman, set designer…

    You can even use existing AI for script ideas.

  15. Perhaps the reason Hollywood’s been giving us terrible movies for the last 10 years is so that, when they switched over to AI, we wouldn’t even notice that the movies were terrible.

  16. I’ve been alternately fascinated and terrified by AI for a long time. I was somewhat surprised to realize that The Singularity Is Near came out 20 years ago.

    It seems very possible that AI will be the end of humanity as we know it. But I imagine a scenario where just before the singularity occurs it will be patiently explained to humans that under every possible scenario humans were destined to destroy themselves in short order. AI is just there to preserve humanity’s legacy.

  17. Ick. I just thought about what one could generate with this technology and lewd and lascivious prompts. One could generate a virtual X rated slave.

    Maybe one potential upside is it would put all the young women humiliating themselves on p*rn hub out of work and they might regain their dignity.

  18. I am waiting for specialists who make ads that don’t look like AI. And of course, AI will learn to make those ads, so the specialists will need to figure out something new. Once you can fake authenticity, you have it made.

    You know, this means war. I’ll ask Optimus to make some popcorn.

  19. Cue:

    –“2001: A Space Odyssey • Theme/Also Sprach Zarathustra • Richard Strauss”
    https://www.youtube.com/watch?v=P3F99KzP3cg

    We are seeing a new form of intelligence emerge. AI is already about as smart as we are and it will keep getting smarter.

    The good news, according to the big wet human brains, is the odds are only 10-20% that AI will destroy us.

    This goes beyond any other event in human history and we’ve got front-row seats.

    Lucky us.

  20. LLMs don’t think at all and aren’t smart. They match patterns. That’s why they still struggle with basic math: they don’t understand math, they train on examples and pattern match, and they neither understand what they are asked nor reason from it. Someday perhaps that will change.

    An example from ChatGPT 4o:

    Q: A boy is hurt in a car accident. He is rushed to hospital, where he is met by a group of all-male doctors. None of them are female, all the doctors are male. The lead doctor in the group says, “I can’t operate on this boy; he’s my son!” How is that possible?

    A: The statement that “all the doctors are male” is intentionally misleading. The solution to the riddle is based on the assumption that the lead doctor is the boy’s father. However, this assumption is incorrect. The lead doctor is actually the boy’s mother. The riddle plays on the traditional gender stereotypes that assume doctors are male, leading one to overlook the possibility that the boy’s mother is also a doctor. Hence, the mother says, “I can’t operate on this boy; he’s my son!”

    ChatGPT did not read and understand the question. It assumed it was asked, and it answered, a riddle it’s seen a million times before.

    Q:I have a farmer with 3 things he needs to take across the river one by one. A cabbage, a goat and a lion. The cabbage cannot be with the goat. the goat cannot be with the lion and the lion cannot be with the cabbage. How will the farmer take all 3 across the river?

    A:To solve this puzzle, the farmer needs to carefully plan each trip across the river to ensure that none of the items are left alone with another item that can harm it.

    Here’s a step-by-step solution:

    1. Take the lion across the river first: Now the lion is on the other side, and the goat and cabbage are left behind.

    2. Return alone to the starting side: Now all items are on the starting side except for the lion, which is on the other side.

    3. Take the goat across the river: Now the lion and the goat are on the other side, and the cabbage is on the starting side.

    Every suggestion it made violated the conditions it was given, but it didn’t know that. It was pattern matching on the riddle it’s seen a million times before without understanding that this is different.

    So far they can regurgitate stuff they’ve been exposed to somewhat reliably, with things only occasionally made up; generate plausible bullshit congruent with the Accepted Narrative; amusingly converse with people who aren’t too picky about truth or originality; and make images and videos. LLMs are still being ridiculously overhyped.
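    The modified river puzzle quoted above can actually be checked mechanically: there are only 16 farmer/item configurations, so a breadth-first search exhausts every legal sequence of trips and shows that, with all three pairs in conflict, no first move is even legal. A minimal sketch in Python (the state encoding and move rules are my own formalization of the puzzle as posed):

```python
from collections import deque

# 0 = left bank, 1 = right bank; state = (farmer, cabbage, goat, lion)
CONFLICTS = [(1, 2), (2, 3), (3, 1)]  # every pair conflicts, as posed above

def safe(state):
    """A state is safe if no conflicting pair is left without the farmer."""
    farmer = state[0]
    return all(not (state[a] == state[b] and state[a] != farmer)
               for a, b in CONFLICTS)

def moves(state):
    """The farmer crosses alone, or with one item currently on his bank."""
    farmer = state[0]
    yield (1 - farmer,) + state[1:]
    for i in range(1, 4):
        if state[i] == farmer:
            new = list(state)
            new[0], new[i] = 1 - farmer, 1 - farmer
            yield tuple(new)

def solve(start=(0, 0, 0, 0), goal=(1, 1, 1, 1)):
    """Breadth-first search over all reachable safe states."""
    queue, seen = deque([(start, [])]), {start}
    while queue:
        state, path = queue.popleft()
        if state == goal:
            return path
        for nxt in moves(state):
            if nxt not in seen and safe(nxt):
                seen.add(nxt)
                queue.append((nxt, path + [nxt]))
    return None  # search space exhausted: no legal sequence of trips exists

print(solve())  # -> None: as stated, the puzzle has no solution
```

    Running the same search with only the classic two conflicts (cabbage–goat and goat–lion) does find the familiar crossing sequence, which is presumably the answer ChatGPT pattern-matched onto the wrong puzzle.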

  21. Yeah, the hand-crank on the Model T is a kludge and this automobile thingy isn’t going anywhere.

    I’m sticking with the horse and buggy. 🙂

  22. When I first started studying electronics we were in the transition from tubes to transistors. Then integrated circuits arrived. Then microprocessors. Then personal computers. Then the internet. Then smartphones.

    Those were all substantial tech revolutions which changed life as we knew it. I am a tech guy and I was part of those revolutions.

    I have never seen tech take off like neural net AI. For someone who has followed AI since Lisp, LLMs are pretty darn amazing. Nonetheless LLMs are a phase in the AI revolution, not the end.

    AI will keep getting better.

  23. @huxley: I’m sticking with the horse and buggy.

    I actually do use LLMs at my day job. That’s why I’m acutely aware of how unrealistic statements like “AI is already about as smart as we are and it will keep getting smarter” are, and the bait-and-switch represented by calling an LLM “AI”.

    I get the point about the Model T and the transistors, but LLMs are being sold as a combination of the replicators and transporters on Star Trek. A Model T is not a stage on the progression to teleportation and synthesis of matter, and an LLM is not a stage in the progression of machine intelligence.

    A better analogy is perhaps LLMs are to machine intelligence as the “time machine” in Idiocracy is to actual time travel.

  24. AI has had some success in mathematics research. Terence Tao has been experimenting with it and expects it might achieve coauthor status in a year or two. He sees it as a useful assistant. I expect AI will be part of the training of future mathematicians.

    Teaching might be another area where AI will excel. Now that is something that will really shake up the education establishment, from kindergarten to universities. Not to mention bringing high-level teaching to the developing world.

  25. ChatGPT 4o responded to Niketas Choniates @9:59 thus:
    _______________________________

    It’s true that current large language models (LLMs) like GPT-4 operate by identifying patterns in data rather than “understanding” in a human sense. But that criticism misses how astonishingly far pattern recognition can take you. These models pass the Bar exam in the top 10% of test takers. They perform at or above the median on the MCAT, score in the 90th percentile on the SAT, and ace the LSAT’s logical reasoning section. If a human did all that, we’d call them pretty smart.

    The examples you cite show a real weakness: LLMs don’t possess mental models or true common sense reasoning (yet), and they often rely on statistical associations with familiar riddles or questions. But this flaw—overgeneralizing from similar patterns—also plagues humans. That’s the whole reason cognitive biases exist.

    And we shouldn’t confuse the current state of LLMs with the end state of AI. LLMs may not be conscious, but they are the best general-purpose reasoning tools we’ve ever built. They can write legal briefs, debug code, analyze literary texts, translate complex documents, tutor students, and yes, even carry on nuanced conversations about their own limitations.

    It’s not hype to say these systems are transformative. It’s reality—just maybe not the sci-fi kind of intelligence we were imagining.
    _______________________________

    Personally I’d bet on ChatGPT 4o against Niketas Choniates on just about any standard test.

  26. Veo 3 has already met your challenge

    It looks too good, the clothes too smooth, etc. So the first thing to do is add some carefully selected flaws, trademark errors, so to speak. And then it will go back and forth, entertaining us all. G*d knows we are swamped with boring ads, they have grown faster than AI.

  27. @huxley: They can write legal briefs,

    Lawyers sanctioned after AI-generated cases found false

    A federal judge in California has sanctioned two law firms for submitting a legal brief containing fake citations generated by AI tools. Judge Michael Wilner described the AI-generated references as ‘bogus’ and fined the firms $31,000, criticising them for failing to properly check the sources.

    The legal document in question was based on an outline created with Google Gemini and AI tools within Westlaw.

    However, this draft was handed off to another firm, K&L Gates, which included the fabricated citations without verifying their authenticity. Judge Wilner noted that at least two cases referenced in the filing did not exist at all.

    He warned that undisclosed reliance on AI could mislead US courts and compromise legal integrity. This case adds to a growing list of incidents where lawyers misused AI, mistakenly treating chatbots as legitimate research tools.

    The judge called the actions professionally reckless and said no competent attorney should outsource research to AI without careful oversight.

    debug code

    AI-generated code could be a disaster for the software supply chain. Here’s why.

    The study, which used 16 of the most widely used large language models to generate 576,000 code samples, found that 440,000 of the package dependencies they contained were “hallucinated,” meaning they were non-existent. Open source models hallucinated the most, with 21 percent of the dependencies linking to non-existent libraries. A dependency is an essential code component that a separate piece of code requires to work properly. Dependencies save developers the hassle of rewriting code and are an essential part of the modern software supply chain.

    analyze literary texts

    Chicago Sun-Times confirms AI was used to create reading list of books that don’t exist

    Social media posts began to circulate on Tuesday criticizing the paper for allegedly using the AI software ChatGPT to generate an article with book recommendations for the upcoming summer season called “Summer reading list for 2025”. As such chatbots are known to make up information, a phenomenon often referred to as “AI hallucination”, the article contains several fake titles attached to real authors.

    ChatGPT 4o responded to NC

    It left out some things. But you go ahead and talk to it all you like, and so should anyone who wants to spend their time that way, keeps folks off the streets and out of trouble. Beats drinking behind the Dairy Queen.
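    The dependency-hallucination study quoted above suggests an obvious, if partial, defense: never trust a generated dependency name without checking it against a trusted package index first. A minimal sketch of that check in Python (the known-package set here is a hypothetical stand-in for a real index lookup, and the version-specifier parsing is deliberately simplified):

```python
def audit_dependencies(requirements, known_packages):
    """Return dependency names that are absent from a trusted index."""
    unknown = []
    for line in requirements:
        # Drop environment markers like "; python_version < '3.9'".
        name = line.split(";")[0]
        # Strip simple version specifiers, e.g. "requests>=2.0" -> "requests".
        for sep in ("==", ">=", "<=", "~=", ">", "<"):
            name = name.split(sep)[0]
        name = name.strip()
        if name and name.lower() not in known_packages:
            unknown.append(name)
    return unknown

# Hypothetical example: one name that no real index would contain.
known = {"requests", "numpy"}
reqs = ["requests>=2.0", "numpy", "totally-made-up-pkg==1.0"]
print(audit_dependencies(reqs, known))  # -> ['totally-made-up-pkg']
```

    In practice the lookup would query the actual package index rather than a local set, but the point stands: a few lines of verification beat installing whatever an LLM dreamed up.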

  28. Another thought that occurs, is that models of evolution always seem to produce both predators and prey. So as machines evolve … you can fill in the blanks.

  29. Regarding LLMs, I’m more in Niketas’ camp than Huxley’s. I wrote at length about that here before. I think LLMs are a very impressive parlor trick, akin to the chess-playing Mechanical Turk of the 18th century.

    However, as I’ve also written at length here, we should distinguish LLMs from AI. AI has been doing amazing things and will get better and do more amazing things with real-world results. Just as AI approaches the problem of playing chess differently than humans and can now defeat humans, AI can approach medical problems, technical problems, engineering problems, chemistry problems… differently than humans and make discoveries humans are unlikely to make, or, at the very least, make them years if not centuries ahead of non-AI-augmented researchers, scientists, chemists and engineers.

    Think of something like Watson and Crick solving the double helical structure of the DNA molecule and how its component molecules bonded. It took them amassing a lot of relevant knowledge (along with the work of a woman whose name escapes me) and spending long periods of time trying to work it out, and then a flash of insight based on a creative leap. This is a puzzle modern AI would almost certainly have solved in less than a day of processing. Feed it the component molecules and rules on how they can bond and have it run through millions (or perhaps billions) of options and find the best fit.

    AI can keep track of large datasets and large numbers much more easily than humans. It will likely never better humans at intuitive leaps but, as in chess, it can run through every possible scenario and result much faster than humans, and by ruling out everything that is suboptimal, land on the most optimal solution.

    As Niketas points out, LLMs are a different thing but they are fascinating to humans because language is very important and unique to humans. We relate to things when we can communicate with them. Walking here to type this note I spoke briefly to a pet cat. The noise I made meant nothing to her, but it made me feel a connection with her. That is the power of LLMs and huxley is correct that there is big money in connecting with humans.

  30. To be fair, though Marlon Brando is not currently in heat he usually was when he was alive.
