We already knew that the content of his cranium was a nonfunctional void. Legal ratification would be a nice touch.
Banned: I agree with you, but remain skeptical we will see this get anywhere. Too bad: it would be a good conclusion to Biden’s political career.
WHOEVER CONTROLLED THE AUTOPEN CONTROLLED THE PRESIDENCY
It was Ex-P.F.C. Wintergreen.
I actually used that in a political conversation with an intelligent left-wing friend of mine a few weeks ago. She replied, “Let’s leave Biden out of the discussion.” O….K…. But that is kind of a big issue, isn’t it?
She wanted to talk about Trump & why people support him.
I think it’s appropriate to be skeptical of “we can legally erase a Presidency we don’t like using this one weird trick”, since we’ve seen more than one of them in the last few administrations.
There has been a fair amount of discussion about AI here; especially from huxley and karmi. I use it a fair amount professionally and personally, and have been hesitant to make any type of future predictions. It seems very safe to predict it will have a huge impact on many things we humans do, but specifics beyond that are beyond my prognostication abilities.
I recently came to the conclusion that “AI” is too broad a term for what we are encountering and discussing, and that there are actually two very different developments occurring. It could be helpful to distinguish between the two when discussing and pontificating on AI.
One aspect, and perhaps the most shocking, has been a topic of widespread discussion since ChatGPT was made available to the public several years ago: the LLM component of AI. I think of this segment as “the ability to converse with the Internet.” Just this morning I had a delightful conversation with Grok on theories explaining how New World monkeys may have gotten to the New World (Americas).
At a foundational level I obtained nothing from this that I could not have obtained from the Internet a decade ago. Ten years ago I would have gone to a Google search window, typed in “theories on New World monkey origins” and sifted through the search results, clicking on references that seemed focused on what I was searching to understand, and then reading through the contents.
But what is different (and truly fun and enjoyable) is that I got the information through an engaging conversation. It was precisely as if I had an expert in primate evolution and migration in the room with me. A very entertaining and effervescent expert in primate evolution and migration. And if that conversation led me to want to know more about plate tectonics and the location of Antarctica and South America at various points in history I instantly had an entertaining, engaging expert in plate tectonics in the room with me (also Grok).
This is LLM (Large Language Model) computer science, and it is an amazing breakthrough. I have been following this area, off and on, since the mid-80s, and the leap in the past several years is immense. There is a lot to discuss regarding the LLM nature of what we call AI (and I’ll do a wee bit of that in a follow-up comment). This thing (the LLM) is “intelligence” in that it uses computer logic to intelligently (very intelligently) mimic human conversational patterns and techniques. However, it is not intelligence in the sense that it will ever advance a new idea or theory. It is an Internet search engine that talks rather than spitting out a list of links. It also listens. It also adapts based on the flow of the conversation. But it will never put forth a theory on New World monkeys that has not already been put forth by a human and placed on the Internet. The LLM is a parlor trick*. A parlor trick that may have immense impact on humanity, but a parlor trick nevertheless.
The second division of AI is data analytics. This also has the potential for immense impacts on humanity. But, like the Internet-search aspect of LLMs, it has been with us for a while. This is closer to the literal term “artificial intelligence.” It has nothing to do with LLMs. LLMs can be coupled with it, so you can interact with the data analyzer in a conversational mode, but they are two different things. This was the first real use of computers, and it has been progressing since the 1950s. This is where true “ideas” can actually arise. This is where AI has the potential to bring forth knowledge beyond what any individual human has yet pieced together.
Computers are dumb and fast. At their foundational level they can only perform rudimentary calculations, but they can do rudimentary calculations much faster than a human. Since their inception humans have been improving computer hardware and software to make them faster and faster. Most of you likely know the story of CBS using UNIVAC to predict the 1952 Presidential election: https://www.npr.org/sections/alltechconsidered/2012/10/31/163951263/the-night-a-computer-predicted-the-next-president
To lay people it seemed like UNIVAC was “thinking,” but it was just doing simple mathematics off of myriad data inputs very quickly. Most of you are probably familiar with the evolution of computer chess. Initial chess software assigned basic point totals to move outcomes, played out all possible moves and the opponent’s possible responses, and then made the move with the highest point total. With 32 pieces on a 2-dimensional board the options scale up very quickly. Early computers could only “think” 2 or 3 moves ahead. But as hardware and software got faster they could think many moves ahead and beat most humans. Then developers started feeding past games into their databases, so their programs could reference any number of moves from the great chess masters of the past as well as play games forward. Many of you also probably know that the programmed instructions for chess computers are at a point where the men and women who developed the machines don’t always understand why a machine does what it does. It’s that complex and that fast: billions of instructions per second.
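That score-every-move-and-reply search has a name, minimax, and the core idea fits in a few lines of Python. This is a toy sketch, not how modern engines work (they add alpha-beta pruning, evaluation heuristics, and opening databases), and the little game tree below is a made-up example rather than real chess positions:

```python
def minimax(node, maximizing=True):
    """Toy minimax: score a game tree assuming both sides play optimally.

    A node is either a number (the point total of a finished line of play)
    or a list of child nodes (the moves available from this position).
    """
    if isinstance(node, (int, float)):  # leaf: a scored outcome
        return node
    scores = [minimax(child, not maximizing) for child in node]
    # Our move: take the best score; opponent's move: assume the worst for us.
    return max(scores) if maximizing else min(scores)

# A tiny two-ply tree: we pick a branch, then the opponent replies.
tree = [[3, 12], [2, 4], [14, 5]]
best = minimax(tree)  # the opponent drags each branch down to 3, 2, 5; we take 5
```

Everything beyond this skeleton, deeper trees, pruning, and stored Grand Master games, is speed and bookkeeping layered on the same loop, which is why “thinking many moves ahead” scaled with hardware.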
As I wrote, this “data analytics” AI has been around for over 70 years. It’s what computers were built to do. IBM’s Deep Blue chess computer beat Garry Kasparov in 1997. And chess computing has only gotten better since.
So, just as you can give a computer the rules of how chess pieces move in 2 dimensions and it can play move-and-response iterations going forward, as well as compare against a database of thousands of past games by Grand Masters, you can give a computer the DNA of a tomato and data on chemicals and ask it to play forward iterations of how it will interact with combinations of those chemicals to put forth a viable product that will foster growth, or protect against drought… There are incredible videos online of Google engineers giving their AI detailed instructions, and it nearly instantly pores through thousands of peer-reviewed scientific papers and presents a graph with plots synthesizing the data.
In 10 minutes a properly programmed computer can read and “comprehend” more scientific articles than a human will in a lifetime. And, like in a chess game, a computer can simulate billions of moves forward and analyze the outcomes.
This aspect of AI is where a lot of the theory and conjecture lies. Since 1997, there have been chess computers that do things Grand Masters do not “understand,” but that result in success. What if AI can do that with biology, medicine, chemistry, engineering? It can, and already does in certain areas. And, if it is able to do science, design and engineering better than us, will it then design AI that is better than anything we have currently developed, leading to very fast, exponential breakthroughs?
We may or may not be on the cusp of a paradigm shift in the data analytics aspect of AI, but I think people are coupling it with the amazing things they see in LLM and confusing the two. Just because Grok 3.0 is 1,000 times the conversationalist Grok 1.0 was just a year ago, doesn’t mean Grok 3.0 can design an anti-gravity field.
LLM is not AI.

*And, speaking of artificial intelligence, parlor tricks and chess, the Mechanical Turk was fooling people almost 250 years ago: https://en.wikipedia.org/wiki/Mechanical_Turk

Another aspect of AI – it never forgets.
I had some weird health issues last fall after taking an antibiotic for a misdiagnosis of MRSA, which turned out to be shingles, which might have caused an acute kidney injury. Then a bout of some upper respiratory virus in January. I started asking Grok about it, looking for connections among the various symptoms.
The most concerning in the short term was my BP dropping very low while sleeping, even though I had quit taking one of the medicines I was on (as suggested by my doctor) and reduced the dose of the other (cutting the pill in half). (It has improved.)
Any question I ask, Grok references the potential connectedness of previous questions.
Be careful what you reveal to Grok. It’s in its memory forever. Well, maybe not forever – but for the last couple of months anyway.
Follow-up comment on LLMs.
Even though I exclude LLMs from “artificial intelligence,” I see immense potential for both human benefit and harm.

They will be so attractive to humans seeking companionship that they may lead to greater human isolation. Robert Putnam was worried we were bowling alone in 1995. LLMs may make 1995 seem like a golden age of human interaction.
We all know of OnlyFans and other similar sites. John Henry beat the steam-powered drill but died in the process. Henry Ford’s best assembly-line worker is no match for the robots assembling his company’s cars today. The most energetic college co-ed will soon be no match for her AI competition on OnlyFans, who never needs to take a break to go to class, or eat, or sleep.
What if anyone, for a relatively low cost, could have a companion who is available whenever they want, and never bothers them when they want to be left alone? And that companion knows them better than a spouse who has lived with them for 30 years, or a childhood friend they grew up with? That companion has not only read every book, seen every movie, heard every song they have, and is willing to engage in endless conversations about them, but that companion will tell them about new books, movies and songs they have never heard of and that they will love! That companion can even read the book, watch the movie with them or play the song for them.
It doesn’t take a lot of imagination to see how this can do great harm to humans. And, for the same reasons, one can envision good. A lot of humans are isolated from human contact for valid reasons; someone has to tend that lighthouse in Nova Scotia. Disability makes it difficult for many people to get out and socialize. LLMs will be a source of solace, comfort and happiness to many.
I spent long periods of my life alone with books. For the most part that seemed to make me a better human; a better thinker, a better conversationalist, a better friend, employee, husband and father. There is no doubt LLMs have the same potential and could be more efficient than books or other conventional ways of learning. As I conversed with Grok about New World monkeys this morning I thought about how much a tool like this could have sped up my formal education. It wasn’t just that I didn’t have to walk to a library and search a card catalog. The information Grok fed me was really well tuned to precisely what I was trying to learn. And I could follow up with questions down other paths and nearly instantly get nearly perfectly relevant information.
To quote huxley’s great uncle, it’s a brave, new world.
Rufus – As you point out, AI can correctly do some things that people cannot do and do not understand. I think the danger is that people will infer from this that AI is just like a super-intelligent human being, which it is not.
The auto pen controversy is fun, but to invalidate all those signatures, they’d have to produce incontrovertible evidence that Biden didn’t authorize the signing, and that’s going to be difficult to impossible.
Rufus T. Firefly–appreciate the overview, and the distinctions you pointed out, they were very helpful.
Naturally, observing all of these rapid advances in computer science (and robotics), which seem to be accelerating (and heading toward the “Singularity” some have predicted?), and quite aside from the whole SKYNET thing, I am very uneasy with the idea of computer code so complex that those who create it have a hard time understanding it and what it will do.

But even more worrisome: I am not happy with the idea of computers so complex that their human programmers cannot totally understand what they are doing and why, and then having these computers design the next generation of even more complex and less understandable computers, which, it would seem, might be totally out of human control.
The idea of just cutting off a computer’s power source, of “pulling the plug” of a computer system which may have achieved some form of sentience, and thereby acquired a sense of self-preservation, seems extremely naive.
I’d imagine such a sentient entity would, as its first priority, take steps, in the first couple of nanoseconds of its existence, to ensure that it could not, in effect, be “killed” and that its power source could not be cut off, by arranging for many alternative backup power sources, as well as by replicating itself and distributing these replications among a number of other computer systems it might be, or could arrange to be, connected to.
A pretty standard science fiction plot, yet, to me, it does not seem all that far-fetched.
Brian E,
Really sorry to read of your medical struggles. It seems like most everyone I know older than 70* has had a dire medical outcome as a result of taking medication to treat an unrelated problem. Coincidentally, this is an area where AI will likely bring great benefits. AI can keep track of medicines, complications and side effects, a patient’s past medical history and a huge database of users who have suffered complications better than any human physician.
* I have no idea how old you are**. I only list the age 70 as a reference point. No doubt most all of us have similar issues from medications prior to age 70, but they seem to compound and can become life threatening when we are older and less able to rebound from medical mishaps.
** But apparently Grok does. 🙂
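The bookkeeping described above, cross-referencing a patient’s medication list against known complications, is, at its simplest, a pairwise lookup. Here’s a minimal sketch; the drug names and the interaction table are entirely made up for illustration (real systems draw on curated pharmacology databases, not a hard-coded set):

```python
from itertools import combinations

# Hypothetical interaction table: unordered pairs of drugs that conflict.
INTERACTIONS = {
    frozenset({"drug_a", "drug_b"}),
    frozenset({"drug_b", "drug_c"}),
}

def flag_interactions(prescriptions):
    """Return every conflicting pair among a patient's current prescriptions."""
    return [tuple(sorted(pair))
            for pair in map(frozenset, combinations(prescriptions, 2))
            if pair in INTERACTIONS]

flags = flag_interactions(["drug_a", "drug_b", "drug_d"])
```

The hard part, of course, is not this loop but building and maintaining the table, which is exactly the scale at which machines beat any single physician’s memory.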
Re: Autopen – I think Kate nailed the evidence-related challenges. Even presuming that you could prove to whatever applicable standard that Biden didn’t authorize his own signature, I don’t think the courts would touch that matter with a 10-foot pole.
The consequences of invalidating even a significant chunk of executive signatures during the Biden presidency would just break too many things.
Thanks, Rufus. Yeah, I’m 75. I am doing much better.
I’m scheduled to see a cardiologist later in the month (it takes 2 months to get in to see them). Even my GP takes a month.
So in the meantime, you fret about all these symptoms. I wouldn’t say Grok cured me, but it certainly kept the anxiety level in check.
Brian E… just for comparison purposes
I made an appointment to see my cardiologist this past week (early March). My appointment is in November, late November, and the nice lady doing the scheduling said they prefer to work 12 months ahead.
It’s Australia…so there’s that. God bless your healing. I’m not over 70 yet but I can see it from here.
From what I understand, as of now we know far too little, down to the granular level, about what actually happens inside your body when you are taking several different medications (some older people are prescribed 10, 12, or even more): how they interact with each other and, in particular, how your particular genetics and physiology might interact with and affect those interactions.
The real AI fun will be when I can take any movie and insert myself into the story any way I please. Now it is the Magnificent Eight and I have a BAR.
Rufus T. Firefly wrote:
Just this morning I had a delightful conversation with Grok on theories explaining how New World monkeys may have gotten to the New World (Americas).
At a foundational level I obtained nothing from this that I could not have obtained from the Internet a decade ago. Ten years ago I would have gone to a Google search window, typed in “theories on New World monkey origins” and sifted through the search results, clicking on references that seemed focused on what I was searching to understand, and then reading through the contents.
But what is different (and truly fun and enjoyable) is that I got the information through an engaging conversation. It was precisely as if I had an expert in primate evolution and migration in the room with me.
Personally I’ve been using an LLM (specifically Grok) to help fine-tune another LLM (specifically a 4-bit version of “meta-llama-3.1-8b”) for my business, using Python with the “Unsloth” library, which builds on Hugging Face’s tooling. It’s been a difficult learning experience to say the least, but it probably would have been next to impossible for me without the assistance of Grok.
For a coding job like mine it really is like having a friendly, very knowledgeable expert in the room with you. I just give it my code along with the errors it barfs out and Grok responds with an extremely well written response replete with explanations, examples and even a full rewrite of the code to address the errors. I can then ask it specific questions about anything I might not fully grasp. And then I just rerun the corrected code it provides. It has helped me tremendously.
To be fair, I still have yet to produce a final “fine-tuned” model that doesn’t immediately hallucinate with nonsensical responses, but I’m getting closer. Problem is that on my current hardware it takes something like 3 hours to train the new model each time. Fortunately, Grok is able to “remember” the full chat history so I can just report my results to it and it can then provide helpful suggestions and ideas to improve it. It’s a slow, iterative process, but I’m fairly confident that I’ll get there in another week or so.
Nonapod,
I have only used AI (Copilot) for help with coding once, but my experience was akin to yours (at a much, much more basic level). It hallucinated incorrect code often, but as I pointed out its errors, and specifically what was wrong, it improved on each iteration, and it didn’t take too long to have working code that did a fairly complex thing.
Because coding is akin to human language (that’s why source code was developed, after all, as a bridge between common* language and object {machine} code), it’s no surprise that one of the engineering tasks LLMs are mastering fastest is computer programming. In a way, your text inputs to Grok are designing a 4GL based on how you uniquely communicate.
* Admiral Grace Hopper even put the words “common” and “language” in the title of her source code innovation.
Snow on Pine,
Physics struggles with prediction when only 3 bodies orbit one another with known velocities and paths.
10 medications in a human with unique DNA and a unique diet and physical regimen is a lot of variables. At least 15 years ago I was promised virus-sized nanobots that would circulate in my bloodstream like the ship from Fantastic Voyage and eliminate any unwanted foreign pathogens.
Rufus T Firefly – I’ve really been impressed with some of the animations which have been made showing how the machinery of a cell works, and I do mean that these moving parts look and work a lot like extremely intricate and complex machines.
Add to this the thousands, maybe tens of thousands or more biochemicals which the various cells and organs in your body produce, and understanding how it all works together would seem to be an extremely daunting task.
But, then, adding pharmaceuticals into the mix takes it to a whole new level.
Neo,
I had bookmarked your “Tracing the Use of the Anonymous Source”, and linked to it several times on other blogs.
Seems to have disappeared. Sure would come in handy these days if it could be resurrected.
Personally I’ve been using an LLM (specifically Grok) to help fine-tune another LLM (specifically a 4-bit version of “meta-llama-3.1-8b”) for my business, using Python with the “Unsloth” library, which builds on Hugging Face’s tooling.
Nonapod:
Cool! You’re living la vida AI loca. Please keep us apprised.
[AI] really is like having a friendly, very knowledgeable expert in the room with you.
Exactly. I can’t tell Chat to write the perfect code I want, then drop it in, no worries. We have to iterate, iterate, iterate.
But it’s faster than when I have to remember the APIs, the structure of standard blocks, the peculiarities of the language, and avoid off-by-one errors, etc. on my own.
I find Chat excels when it comes to all the fiddly bits of setting up the compiler/linker/IDE and libraries along with the OS. Back in the day you almost had to have a local guru. Later there was Stack Overflow.
I’ll take Chat.
And just in time.
IMO today’s programming tests the limits of complexity which human brains can handle.
Re: AI diffusion programming
AI coding is not like human coding.
Here’s a fascinating advance which handles code the way it would handle refining an image, where the image starts out blurry and random, then sharpens to crystal clarity (as the “Outer Limits” voice used to say).
Now imagine AI code generation which starts out as gibberish text then sharpens into actual working code. Here we go:
Cute trick, sure, but it also works about 10x faster than the usual method.
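To make the blurry-to-sharp idea concrete, here is a deliberate caricature in a few lines of Python, not the actual technique from the linked work (real diffusion decoding learns which tokens to predict at each step; this toy just reveals a known target a few positions at a time so you can watch gibberish resolve into code):

```python
import random

def unmask_in_steps(target, reveals_per_step=4, seed=0):
    """Caricature of diffusion-style decoding: start fully masked ('#'),
    then reveal a few randomly chosen positions each step until the whole
    target string is visible. Returns the sequence of snapshots."""
    rng = random.Random(seed)
    order = list(range(len(target)))
    rng.shuffle(order)  # positions get revealed in random order
    current = ["#"] * len(target)
    snapshots = ["".join(current)]
    while order:
        for _ in range(min(reveals_per_step, len(order))):
            i = order.pop()
            current[i] = target[i]
        snapshots.append("".join(current))
    return snapshots

steps = unmask_in_steps("print('hello')")
```

Each snapshot in `steps` is one denoising step: the first is all `#`, the last is the finished line. The claimed speedup comes from filling many positions per step instead of one token at a time, left to right.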
AI is going so fast too.
Whether you are talking LLMs or Data Analysis, what your AI Aide will produce depends on what it had for inputs.
“Garbage in, garbage out” always rules.
However, as explained by Nonapod and huxley, the garbage can at least be strained out, IF you know you need to do that.
Naïve users (aka non-computer-geeks) won’t know that the hallucinations aren’t valid responses, unless they are as blatant as fake court case citations or Vikings of Color.
And the way our educational systems are going, pretty soon the public won’t be able to recognize those either.
Other than that, the AI stories are really cool 😉
Is it just me, or, according to what I see on Youtube, are airports, check-in counters, and the airlines, these days, increasingly becoming just one gigantic mental ward?
I’ve heard of “in-flight entertainment” but this is just getting ridiculous.*
“The auto pen controversy is fun, but to invalidate all those signatures, they’d have to produce incontrovertible evidence that Biden didn’t authorize the signing, and that’s going to be difficult to impossible.”
It would take a whistleblower, and possibly an arrest or two and a plea bargain with a canary. I wouldn’t bet on it.
Wendy K Laubach wrote, re: auto pen controversy
“It would take a whistleblower, and possibly an arrest or two and a plea bargain with a canary. I wouldn’t bet on it.”
I sadly agree.
That we lived through the horrid 4 years of an obviously brain-addled POTUS demands more attention.
I wonder if anyone who could be a whistleblower will begin to feel some obligation.
Also, what is the statute of limitations for such a crime?
Oof. Me, too.
Absolutely precious.
She ought to try some Gershwin on him, maybe Summertime, or My Man’s Gone Now and see how that goes?
This might have legs. Maybe:
Could A Bombshell Discovery Render All of Biden’s Presidential Actions ‘Null and Void’?
We already knew that the content of his cranium was a nonfunctional void. Legal ratification would be a nice touch.
Banned: I agree with you, but remain skeptical we will see this get anywhere. Too bad: it would be a good conclusion to Biden’s political career.
WHOEVER CONTROLLED THE AUTOPEN CONTROLLED THE PRESIDENCY
It was Ex-P.F.C. Wintergreen.
I actually used that in a political conversation with an intelligent left-wing friend of mine a few weeks ago. She replied, “Let’s leave Biden out of the discussion.” O….K…. But, that is kind of a big issue isn’t it?
She wanted to talk about Trump & why people support him.
I think it’s appropriate to be skeptical of “we can legally erase a Presidency we don’t like using this one weird trick”, since we’ve seen more than one of them in the last few administrations.
There has been a fair amount of discussion about AI here; especially from huxley and karmi. I use it a fair amount professionally and personally, and have been hesitant to make any type of future predictions. It seems very safe to predict it will have a huge impact on many things we humans do, but specifics beyond that are beyond my prognostication abilities.
I recently came to a conclusion that “AI” is too broad a term for what we are encountering and discussing, and there are actually two, very different developments occurring. It could be helpful to distinguish between the two developments when discussing and pontificating on AI.
One aspect, and perhaps the most shocking aspect that’s been a topic of widespread discussion since ChatGPT was made available to the public several years ago is the LLM component of AI. I think of this segment as “the ability to converse with the Internet.” Just this morning I had a delightful conversation with Grok on theories explaining how New World monkeys may have gotten to the New World (Americas).
At a foundational level I obtained nothing from this that I could not have obtained from the Internet a decade ago. Ten years ago I would have gone to a Google search window, typed in “theories on New World monkey origins” and sifted through the search results, clicking on references that seemed focused on what I was searching to understand, and then reading through the contents.
But what is different (and truly fun and enjoyable) is that I got the information through an engaging conversation. It was precisely as if I had an expert in primate evolution and migration in the room with me. A very entertaining and effervescent expert in primate evolution and migration. And if that conversation led me to want to know more about plate tectonics and the location of Antarctica and South America at various points in history I instantly had an entertaining, engaging expert in plate tectonics in the room with me (also Grok).
This is LLM (Large Language Model) computer science and it is an amazing breakthrough. I have been following this area, off and on, since the mid-80s and the leap in the past several years is immense. There is a lot to discuss regarding the LLM nature of what we call AI (and I’ll do a wee bit of that in a follow-up comment). This thing (LLM) is “intelligence” in that it’s using computer logic to intelligently (vey intelligently) mimic human conversational patterns and techniques. However, this is not intelligence in that it will ever advance an idea or theory. It is an Internet search engine that talks, rather than spitting out a list of links. It also listens. It also adapts based on the flow of our conversation. But it will never put forth a theory on New World monkeys that has not been put forth by a human and placed on the Internet prior. LLM is a parlor trick*. A parlor trick that may have immense impact on humanity, but a parlor trick nevertheless.
The second division of AI is data analytics. This also has potential for immense impacts on humanity. But, like the Internet search engine aspect of LLM, it has been with us for awhile. This is akin to the literal term, “artificial intelligence.” It has nothing to do with LLM. LLMs can be coupled with it, so you can interact with the data analyzer in a conversational mode, but they are two, different things. This was the first, real use of computers, and it has been progressing since the 1950s. This is where true “ideas” can actually arise. This is where AI has potential to bring forth knowledge beyond what an individual human has yet pieced together.
Computers are dumb and fast. At their foundational level they can only perform rudimentary calculations, but they can do rudimentary calculations much faster than a human. Since their inception humans have been improving computer hardware and software to make them faster and faster. Most of you likely know the story of CBS using UNIVAC to predict the 1952 Presidential election: https://www.npr.org/sections/alltechconsidered/2012/10/31/163951263/the-night-a-computer-predicted-the-next-president
To lay people it seemed like UNIVAC was “thinking,” but it was just doing simple mathematics off of myriad data inputs very quickly. Most of you are probably familiar with the evolution of computer chess. Initial chess software assigned basic point totals to move outcomes and played all possible moves and the opponent’s possible responses and then made the move with the highest point total. With 32 pieces and 2 dimensions the options scale up very quickly. Early computers could only “think” 2 or 3 moves ahead. But as hardware and software got faster they could think many moves ahead and beat most humans. Then developers started feeding past games into their databases so their programs could reference any number of moves from great chess masters of the past as well as play games forward. Many of you also probably know the programmed instructions for chess computers is at a point that the men and women who developed the machines don’t always understand why the machine does what it does. It’s that complex and that fast, billions of instructions per second.
As I wrote, this “data analytics” AI has been around for over 70 years. It’s what computers were built to do. IBM’s Deep Blue chess computer beat Gary Kasparov in 1997. And chess computing has only gotten better since.
So, just as you can give a computer the rules of how chess pieces can move in 2 dimensions and it can play move and response iterations going forward as well as comparing with a database of thousands of past games by Grand Masters; you can give a computer the DNA of a tomato and data on chemicals and ask it to play forward iterations of how it will interact with combinations of those chemicals to put forth a viable product that will foster growth, or protect against drought… There are incredible videos online of Google engineers giving their AI detailed instructions and it nearly instantly pours through thousands of peer reviewed scientific papers and presents a graph with plots synthesizing the data.
In 10 minutes a properly programmed computer can read and “comprehend” more scientific articles than a human will in a lifetime. And, like in a chess game, a computer can simulate billions of moves forward and analyze the outcomes.
This aspect of AI is where a lot of theory and conjecture is. Since 1997, there are chess computers that do things Grand Masters do not “understand,” but result in success. What if AI can do that with biology, medicine, chemistry, engineering? It can and already does in certain areas. And, if it is able to do science, design and engineering better than us, will it then design AI that is better than we have currently developed, leading to very fast, exponential breakthroughs?
We may or may not be on the cusp of a paradigm shift in the data analytics aspect of AI, but I think people are coupling it with the amazing things they see in LLM and confusing the two. Just because Grok 3.0 is 1,000 times the conversationalist Grok 1.0 was just a year ago, doesn’t mean Grok 3.0 can design an anti-gravity field.
LLM is not AI.
*And, speaking of artificial intelligence, parlor tricks and chess, the Mechanical Turk was fooling people almost 250 years ago; https://en.wikipedia.org/wiki/Mechanical_Turk
Another aspect of AI– it never forgets.
I’ve had some weird health issues last fall following taking an antibiotic from a misdiagnosis of MRSA, which turned out to be shingles which might have caused an acute kidney injury. Then a bout of some upper respiratory virus in January. I started asking Grok about it, looking for connections of the various symptoms.
The most concerning in the short term was my BP dropping very low while sleeping, even though I had quit taking one of the medicines I was on (suggested by my doctor) and reducing the dose of the other (cutting the pill in half). (It has improved).
Any question I ask, Grok references the potential connectedness of previous questions.
Be careful what you reveal to Grok. It's in its memory forever. Well, maybe not forever, but for the last couple of months anyway.
Follow up comment on LLMs.
Even though I exclude LLMs from "artificial intelligence," I see immense potential for human benefit and harm.
They will be so attractive to humans seeking companionship and may lead to greater human isolation. Robert Putnam was worried we were bowling alone in 1995. LLMs may make 1995 seem like a golden age of human interaction.
We all know of OnlyFans and other, similar sites. John Henry beat the steam-powered hammer but died in the process. Henry Ford's best assembly line worker is no match for the robots assembling his company's cars today. The most energetic college co-ed will soon be no match for her AI competition on OnlyFans, who never needs to take a break to go to class, or eat, or sleep.
What if anyone, for a relatively low cost, could have a companion who is available whenever they want, and never bothers them when they want to be left alone? And that companion knows them better than a spouse who has lived with them for 30 years, or a childhood friend they grew up with? That companion has not only read every book, seen every movie, heard every song they have, and is willing to engage in endless conversations about them, but that companion will tell them about new books, movies and songs they have never heard of and that they will love! That companion can even read the book, watch the movie with them or play the song for them.
It doesn’t take a lot of imagination to see how this can do great harm to humans. And, for the same reasons, one can envision good. A lot of humans are isolated from human contact for valid reasons; someone has to tend that lighthouse in Nova Scotia. Disability makes it difficult for many people to get out and socialize. LLMs will be a source of solace, comfort and happiness to many.
I spent long periods of my life alone with books. For the most part that seemed to make me a better human; a better thinker, a better conversationalist, a better friend, employee, husband and father. There is no doubt LLMs have the same potential and could be more efficient than books or other conventional ways of learning. As I conversed with Grok about New World monkeys this morning I thought about how much a tool like this could have sped up my formal education. It wasn’t just that I didn’t have to walk to a library and search a card catalog. The information Grok fed me was really well tuned to precisely what I was trying to learn. And I could follow up with questions down other paths and nearly instantly get nearly perfectly relevant information.
To quote huxley's great-uncle, it's a brave new world.
Rufus – As you point out, AI can, correctly, do some things that people cannot do and do not understand. I think the danger is that people will infer from this that AI is just like a super-intelligent human being, which it is not.
The auto pen controversy is fun, but to invalidate all those signatures, they’d have to produce incontrovertible evidence that Biden didn’t authorize the signing, and that’s going to be difficult to impossible.
Rufus T. Firefly–appreciate the overview, and the distinctions you pointed out, they were very helpful.
Naturally, observing all of these rapid advances in computer science (and robotics), which seem to be accelerating (and heading toward the "Singularity" some have predicted?), and setting aside the whole SKYNET thing, I am very uneasy with the idea of computer code so complex that those who create it have a hard time understanding it and what it will do.
But, even more worrisome, I am not happy with the idea of computers so complex that their human programmers cannot fully understand what they are doing and why, and then having those computers design the next, even more complex and less understandable generation of computers, which, it would seem, might be totally out of human control.
The idea of just cutting off a computer’s power source, of “pulling the plug” of a computer system which may have achieved some form of sentience, and thereby acquired a sense of self-preservation, seems extremely naive.
I'd imagine such a sentient entity would, as its first priority, take steps in the first couple of nanoseconds of its existence to ensure that it could not, in effect, be "killed" and that its power source could not be cut off, by arranging for many alternative backup power sources, as well as replicating itself and distributing those replications among a number of other computer systems it might be, or could arrange to be, connected to.
A pretty standard science fiction plot, but, to me, it does not seem all that far-fetched.
Brian E,
Really sorry to read of your medical struggles. It seems like most everyone I know older than 70* has had a dire medical outcome as a result of taking medication to treat an unrelated problem. Coincidentally, this is an area where AI will likely bring great benefits. AI can keep track of medicines, complications and side effects, a patient’s past medical history and a huge database of users who have suffered complications better than any human physician.
* I have no idea how old you are**. I only list the age 70 as a reference point. No doubt most all of us have similar issues from medications prior to age 70, but they seem to compound and can become life threatening when we are older and less able to rebound from medical mishaps.
** But apparently Grok does. 🙂
Re: Autopen – I think Kate nailed the evidence-related challenges. Even presuming that you could prove to whatever applicable standard that Biden didn't authorize his own signature, I don't think the courts would touch that matter with a 10-foot pole.
The consequences of invalidating even a significant chunk of executive signatures during the Biden presidency would just break too many things.
Thanks, Rufus. Yeah, I’m 75. I am doing much better.
I'm scheduled to see a cardiologist later in the month (it takes two months to get in to see them). Even my GP takes a month.
So in the meantime, you fret about all these symptoms. I wouldn’t say Grok cured me, but it certainly kept the anxiety level in check.
https://www.dailymail.co.uk/news/article-14469211/sanctuary-city-migrant-fire-woman-slept-nyc-subway-avoids-deportation.html
Brian E… just for comparison purposes
I made an appointment to see my cardiologist this past week (early March). My appointment is in November, late November, and the nice lady doing the scheduling said they prefer to work 12 months ahead.
It’s Australia…so there’s that. God bless your healing. I’m not over 70 yet but I can see it from here.
From what I understand, as of now we know far too little, down to the granular level, about what actually happens inside your body when you are taking several different medications (some older people are prescribed 10, 12, or even more): how they interact with each other and, in particular, how your particular genetics and physiology might interact with and affect those interactions.
The real AI fun will be when I can take any movie and insert myself into the story any way I please. Now it is the Magnificent Eight and I have a BAR.
Rufus T. Firefly wrote:
Personally I’ve been using an LLM (specifically Grok) to help fine tune another LLM (specifically a 4 bit version of “meta-llama-3.1-8b”) for my business using Python with Huggingface’s “Unsloth” library. It’s been a difficult learning experience to say the least, but it probably would have been next to impossible for me without the assistance of Grok.
For a coding job like mine it really is like having a friendly, very knowledgeable expert in the room with you. I just give it my code along with the errors it barfs out and Grok responds with an extremely well written response replete with explanations, examples and even a full rewrite of the code to address the errors. I can then ask it specific questions about anything I might not fully grasp. And then I just rerun the corrected code it provides. It has helped me tremendously.
To be fair, I still have yet to produce a final “fine-tuned” model that doesn’t immediately hallucinate with nonsensical responses, but I’m getting closer. Problem is that on my current hardware it takes something like 3 hours to train the new model each time. Fortunately, Grok is able to “remember” the full chat history so I can just report my results to it and it can then provide helpful suggestions and ideas to improve it. It’s a slow, iterative process, but I’m fairly confident that I’ll get there in another week or so.
Nonapod,
I have only used AI (Copilot) for help with coding once, but my experience was akin to yours (at a much, much more basic level). It often hallucinated incorrect code, but as I pointed out its errors and specifically what was wrong, it improved on each iteration, and it didn't take too long to have working code that did a fairly complex thing.
Because coding is akin to human language (that’s why source code was developed, after all, as a bridge between common* language and object {machine} code) it’s no surprise one of the engineering tasks LLMs are mastering fastest is computer programming. In a way, your text inputs to Grok are designing a 4GL based on how you uniquely communicate.
* Admiral Grace Hopper even put the words “common” and “language” in the title of her source code innovation.
Snow on Pine,
Physics struggles with prediction when only 3 bodies orbit one another with known velocities and paths.
10 medications in a human with unique DNA, unique diet and physical regimen is a lot of variables. At least 15 years ago I was promised virus-sized nanobots that would circulate in my bloodstream like the Fantastic Voyage submarine and eliminate any unwanted foreign pathogens.
Rufus T Firefly—I've really been impressed with some of the animations that have been made showing how the machinery of a cell works, and I do mean that these moving parts look and work a lot like extremely intricate and complex machines.
Add to this the thousands, maybe tens of thousands or more biochemicals which the various cells and organs in your body produce, and understanding how it all works together would seem to be an extremely daunting task.
But, then, adding pharmaceuticals into the mix takes it to a whole new level.
Neo,
I had bookmarked your "Tracing the Use of the Anonymous Source," and linked to it several times on other blogs.
Seems to have disappeared. Sure would come in handy these days if it could be resurrected.
Personally I’ve been using an LLM (specifically Grok) to help fine tune another LLM (specifically a 4 bit version of “meta-llama-3.1-8b”) for my business using Python with Huggingface’s “Unsloth” library.
Nonapod:
Cool! You’re living la vida AI loca. Please keep us apprised.
[AI] really is like having a friendly, very knowledgeable expert in the room with you.
Exactly. I can’t tell Chat to write the perfect code I want, then drop it in, no worries. We have to iterate, iterate, iterate.
But it’s faster than when I have to remember the APIs, the structure of standard blocks, the peculiarities of the language, and avoid off-by-one errors, etc. on my own.
I find Chat excels when it comes to all the fiddly bits of setting up the compiler/linker/IDE and libraries along with the OS. Back in the day you almost had to have a local guru. Later there was Stack Overflow.
I’ll take Chat.
And just in time.
IMO today’s programming tests the limits of complexity which human brains can handle.
Re; AI diffusion programming
AI coding is not like human coding.
Here’s a fascinating advance which handles code like it would handle refining an image, where the image starts out blurry and random, then sharpens to crystal clarity (as the “Outer Limits” voice used to say).
Now imagine AI code generation which starts out as gibberish text then sharpens into actual working code. Here we go:
–Matthew Berman, “LLM generates the ENTIRE output at once (world’s first diffusion LLM)”
https://youtu.be/X1rD3NhlIcE?t=268s
Cute trick, sure, but it also works about 10x faster than the usual method.
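The coarse-to-fine refinement described above can be sketched with a toy. To be clear, this is NOT the model from the video; it is a made-up illustration of the masked-diffusion idea, where generation starts fully masked and whole groups of tokens are filled in, in parallel, by confidence, instead of strictly left to right. The target string and the "confidence" scores here are invented stand-ins for what a real model would predict.

```python
# Toy masked-diffusion sketch: start as all-[MASK] "gibberish" and
# sharpen toward working code over a few parallel refinement steps.

TARGET = ["def", "add(a,", "b):", "return", "a", "+", "b"]
CONFIDENCE = [0.9, 0.6, 0.5, 0.8, 0.7, 0.95, 0.65]  # pretend model scores

def diffusion_steps(tokens_per_step=3):
    """Return the sequence of intermediate strings, from fully
    masked to fully revealed."""
    out = ["[MASK]"] * len(TARGET)
    history = [" ".join(out)]
    # Reveal positions in order of confidence, several at a time,
    # rather than one token at a time left to right.
    order = sorted(range(len(TARGET)), key=lambda i: -CONFIDENCE[i])
    for start in range(0, len(order), tokens_per_step):
        for i in order[start:start + tokens_per_step]:
            out[i] = TARGET[i]
        history.append(" ".join(out))
    return history

for step in diffusion_steps():
    print(step)
```

Because several tokens land per step instead of one, you can see where the claimed speedup over ordinary one-token-at-a-time generation would come from.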
AI is going so fast too.
Whether you are talking LLMs or Data Analysis, what your AI Aide will produce depends on what it had for inputs.
“Garbage in, garbage out” always rules.
However, as explained by Nonapod and huxley, the garbage can at least be strained out, IF you know you need to do that.
Naïve users (aka non-computer-geeks) won't know that the hallucinations aren't valid responses, unless they are as blatant as fake court case citations or Vikings of Color.
And the way our educational systems are going, pretty soon the public won’t be able to recognize those either.
Other than that, the AI stories are really cool 😉
Is it just me, or, according to what I see on Youtube, are airports, check-in counters, and the airlines, these days, increasingly becoming just one gigantic mental ward?
I've heard of "in flight entertainment" but this is just getting ridiculous.*
* See https://www.thegatewaypundit.com/2025/03/wth-woman-loses-it-strips-naked-during-episode/
“The auto pen controversy is fun, but to invalidate all those signatures, they’d have to produce incontrovertible evidence that Biden didn’t authorize the signing, and that’s going to be difficult to impossible.”
It would take a whistleblower, and possibly an arrest or two and a plea bargain with a canary. I wouldn’t bet on it.
Well, he’s a guy sooooo…
is AI self aware
https://www.youtube.com/watch?v=pBgr7itiUWg&t=1293s
two AI’s reviewing the latest offering,
now would AI’s do this
https://x.com/DataRepublican/status/1898771731340460361
it’s not about emotional fragility, but malice,
Wendy K Laubach wrote, re: auto pen controversy
“It would take a whistleblower, and possibly an arrest or two and a plea bargain with a canary. I wouldn’t bet on it.”
I sadly agree.
That we lived through four horrid years of an obviously brain-addled POTUS demands more attention.
I wonder if anyone who could be a whistleblower will begin to feel some obligation.
Also, what is the statute of limitations for such a crime?