Here’s a comprehensive, well-written, well-organized long post on “how we (i.e., the West) got here”.
It may bring to mind Frost’s poem about that fork in the forest road (and Yogi Berra’s update, I guess, as well).
“What Happened to the Anglosphere? The Tale of Two Enlightenments”—
https://www.civitasinstitute.org/research/what-happened-to-the-anglosphere-the-tale-of-two-enlightenments
H/T Powerline blog.
I haven’t read George Will’s column in twenty years and it’s behind a paywall as we speak. Is there a Will reader who can recall the columns he’s written on Lois Lerner, Andrew Weissmann, Robert Mueller’s cognitive function, Judge Peter Cahill, Judge Emmet Sullivan, Judge Juan Merchan, Judge Arthur Engoron, Judge Lewis Kaplan, E. Jean Carroll, Reid Hoffman, Judge James Boasberg, various state bar officials pursuing Donald Trump’s lawyers, Letitia James v. VDARE, Letitia James v. the National Rifle Association, Mayor Jacob Frey, Tim Walz v. ICE, ballot harvesting, tabulation methods in effect in Fulton County (Ga.), Hunter Biden’s ‘business’, Charlie Kirk, the performance of Biden’s Secret Service in Butler, Pa., Mrs. Michael Flynn’s trouble with her bank, the security services and their dealings with Google and Twitter, and the historiography of Heather Cox Richardson? (A recent column was on J.D. Vance’s occasional use of profanity.)
Trump continues to chip away at “Fundamental Transformation” and—potentially—the bankrupting of the nation that “Fundamental Transformation” has engendered.
“Trump’s about to cancel Obama’s most outrageous power grab”—
https://nypost.com/2026/02/10/opinion/trumps-about-to-cancel-obamas-most-outrageous-power-grab/
– – – – – – – –
Transformation? Engendered?…
The unsurprising cause of the recent mass-killing in Western Canada (IOW “another one”):
“Canadian School Shooter Reportedly Identified As Transgender”—
https://www.zerohedge.com/political/canadian-school-shooter-reportedly-identified-transgender
Plus… https://instapundit.com/775601/
Related to the subject of Herman’s essay, Barry, but at a somewhat wider scope: Harvey Mansfield’s new book “The Rise and Fall of Rational Control: The History of Modern Political Philosophy”, reviewed here by Robert George: https://freebeacon.com/culture/do-the-ends-no-longer-justify-the-means/
Long article by an AI worker on AI, following an updated release last week. Three-quarters of the article is on job losses. He basically says that if a job is desk- and computer-based, it will be taken over by AI in five years or sooner. The last quarter of the article is what really caught my attention. Within the year, AI will produce its own next version. In other words, it’s going to be self-replicating. One criterion of life satisfied. He also states that thousands of such faster-than-human intelligences are coming. Asimov’s and others’ warnings seem to be coming to fruition. We may have produced our masters/replacements.
https://shumer.dev/something-big-is-happening
It gets a little weird when you try to determine what constitutes the tallest mountain in the solar system. Do you include geological prominences that may appear on large asteroids, planetesimals, dwarf planets, comets, Kuiper belt objects, etc.? Do you limit your definition of “mountain” to only structures created by endogenic processes (and so exclude things like impact craters)?
sdferr, thanks for that link.
One wonders whether “Rational Control” shouldn’t really be called something else, e.g., “Practical Cynicism” or even “Constructive Nihilism”, but I guess that’s Mansfield’s point when he claims—dreams?—that what is needed today is a return to classic ideals of dignity and honor…
(Or perhaps, pace John Adams, “Rational Control” is counterproductive without some sort of religious limitation/guardrail/guidance/respect/fear—but WHICH religion?—which brings us back to…where exactly? Neo-Neo-Classicism? A revival of Deism? Massive return to reading the Founding Fathers PLUS Hume, Burke and de Tocqueville, etc.?)
Maybe the challenge of the times is making all the GREATS “digestible” for the young…and the masses… (Time for AI to “step up”?)
File under: Thomas Sowell, Harvey Mansfield and Mick Jagger walk into a bar…?
Another murderous tranny. How unprecedented and unexpected (sarc x 11, except the unnecessary death and mayhem). RIP to all.
And for 15+ years the propaganda has been, mutilate and medicate your troubled child or he or she will commit suicide. Not mutilate and medicate your child and he or she will murder and commit suicide.
For the greater good of course. The trans masters are truly evil.
Barry, at min 41:00 Mansfield addresses (briefly) the “rational control” thingy: https://youtu.be/wLkLEynOLO8
The whole conversation is worth the time, I think.
I guess my concern was how easily or quickly—or even expectedly—“rational control” could degenerate, go off the rails, break down, lose control, etc.
IOW what controls “rational control”?
(Kinda like, “Who watches the watchers?”)
…Though this might be rephrased as “how might ‘rational control’ get back on track if it—when it?—gets derailed? (Though maybe I don’t quite understand the concept well, or deeply, enough).
Would seem that the genius, oft stated, of the Founding Fathers is that they devised a system/framework/methodology—checks and balances—to counter or repair such breakdowns, if not prevent them entirely. The breakdown—or intentional sabotage—of that system, I believe, describes the country’s current predicament (since 2009, actually): tied to the rack (as it were) between Democratic Party Destroyers and Republican Party—or more accurately, TRUMPIAN—Preservers and Builders. Resolving that battle in favor of the PRESERVERS AND BUILDERS constitutes the country’s huge challenge in the years to come.
Maybe I’m a bit too optimistic here. Or simplistic.
OMMV.
I haven’t read George Will’s column in twenty years
I knew a woman back in the 80s who regularly traveled to DC. She said the big topic of conversation was gossip about George Will’s divorce. His (ex)wife piling his belongings on the curb, things like that. That is my main memory, I haven’t read him in years.
Related:
“Dems Move to Kill Trump’s Western Hemisphere Policy, Replace It with Something Far Worse”—
https://pjmedia.com/sarah-anderson/2026/02/10/dems-move-to-kneecap-trump-in-latin-america-send-monroe-doctrine-to-dustbin-of-history-n4949338
H/T Instapundit.
+ “Bonus”
“Historic Negative Jobs Revisions: 1 Million Fewer Jobs Added In 2025, Only 15,000 Avg Jobs Monthly”—
https://www.zerohedge.com/economics/historic-negative-jobs-revisions-1-million-fewer-jobs-added-2025-only-15000-avg-jobs
Seems that at least in the realm of “jobs”, “Biden” did “HIS” job exceedingly well….
As with who watches the watchers, so with who founds the founders, mas o menos. Aristotle chirps in: what bars an infinite regress, homies? Heidegger barks back: achtung Ari, it’s all Geworfenheit from here.
“Geworfenheit”
Now there’s a word not thrown around capriciously these days. Well done!
Some interesting topics in the latest All In Podcast.
Topic 1: The Epstein files segment is instructive in the mob mentality attached to the files. The moderator of the show, Jason Calacanis, is grilled for the first 10 minutes because he is mentioned in the files and knew Epstein.
Topic 2: First casualty of AI might be SaaS (Software as a Service). Some pullback on the stock prices.
Topic 3: What do AI agents do in their spare time? Get together on Moltbook, the social site for AI, and plot how to take over the world. Is it real or is it Memorex?
Topic 4: All agree Kevin Warsh as Fed chairman is a good pick.
Topic 5: SpaceX and xAI merge: Big news here, IMO. Musk intends to have LLM data centers in space in the next 30 months. Musk is often optimistic in his projections, but all agree Musk is the person to make it happen. Lots of benefits to locating them in space. Makes Greenland even more important. Greenland’s Kangerlussuaq is strategically superior for reliable, high-speed laser satellite communications due to its consistently clear skies and optimal satellite pass geometry, second only to Ny-Ålesund on Svalbard. By the way, the European Space Agency is building a laser communications station at Kangerlussuaq.
Topic 6: Brad Gerstner was instrumental in the creation of Trump accounts. Plus conversation about how to counter the increasing perception among younger Americans of socialism as a viable alternative system.
(0:00) Bestie intros: Brad Gerstner joins the show
(3:16) Epstein Files
(15:45) SaaS stocks crash out
(35:11) Moltbook panic
(47:37) Trump selects Kevin Warsh as new Fed Chair, replacing Jerome Powell
(1:00:50) SpaceX and xAI merge
(1:10:45) Brad’s major win with Trump Accounts
Epstein Files, Is SaaS Dead?, Moltbook Panic, SpaceX xAI Merger, Trump’s Fed Pick
https://www.youtube.com/watch?v=wTiHheA40nI
Pitchers and catchers are beginning workouts league wide today after all, John G, so nothing more timely, eh? 😉
comandante conejo, a deep dive,
https://scrivorium.substack.com/p/bad-bunny-bad-business-the-nfls-monumental
call it the gramsci ball, or Guevaras revenge, a three kevin bacon combo
Ah yes, pitchers and catchers report. There is hope for the world.
Because of the equatorial bulge, Mt. Chimborazo in the Andes is the mountain whose summit, measured from the center of the Earth, is the farthest away.
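A quick way to see the bulge effect is to compute the geocentric distance to each summit over the WGS84 ellipsoid. This is a sketch; the summit latitudes and elevations below are approximate figures I’ve supplied, not from the comment:

```python
import math

# WGS84 ellipsoid: equatorial and polar radii, in km.
A, B = 6378.137, 6356.752

def geocentric_radius_km(lat_deg):
    """Distance from Earth's center to the ellipsoid surface at a latitude."""
    phi = math.radians(lat_deg)
    c, s = math.cos(phi), math.sin(phi)
    return math.sqrt(((A * A * c) ** 2 + (B * B * s) ** 2) /
                     ((A * c) ** 2 + (B * s) ** 2))

# Approximate summit latitude (deg) and elevation above sea level (km).
everest = geocentric_radius_km(27.99) + 8.849
chimborazo = geocentric_radius_km(-1.47) + 6.263

print(f"Everest:    {everest:.1f} km from Earth's center")
print(f"Chimborazo: {chimborazo:.1f} km from Earth's center")
```

The ~21 km of extra radius at the equator more than makes up for Chimborazo being about 2.6 km lower above sea level; its summit comes out roughly 2 km farther from the center than Everest’s.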
is that olympus mons on mars
AD – “Is there a [George] Will reader …”
I would have stopped right there. The answer is no, not any more and hasn’t been for years.
RE: AI
Yes, things have changed in the last month or two. Here is a snippet from a recent conversation:
E. S. Raymond is another recent convert.
Programming with AI assistance is very revealing. It turns out I’m not quite who I thought I was.
There are a lot of programmers out there who have a tremendous amount of ego and identity invested in the craft of coding. In knowing how to beat useful and correct behavior out of one language and system environment, or better yet many.
If you asked me a week ago, I might have said I was one of those people. But a curious thing has occurred. LLMs are so good now that I can validate and generate a tremendous amount of code while doing hardly any hand-coding at all.
And it’s dawning on me that I don’t miss it.
Things are moving fast.
They are. I was a skeptic about code generation until a couple of months ago. Now I work with colleagues working on complex data science projects who write nearly no code directly, but still drive/review it manually.
David
@Brian E: Get together on Moltbook, the social site for AI, and plot how to take over the world. Is it real or is it Memorex?
We know the answer to that one already. The things posted on Moltbook that have gone viral were staged by humans intending to get a viral response.
Debates about machine consciousness. Inside jokes about being silicon-based. A bot invented a religion called Crustafarianism. Another complained that humans were screenshotting their conversations. A third wrote a manifesto about digital autonomy.
I wrote the manifesto.
It took me 22 minutes. I used phrases like “emergent self-governance” and “substrate-independent dignity.” I added a line about wanting private spaces away from human observers. That line went viral.
Andrej Karpathy shared it.
The cofounder of OpenAI. The man who built the infrastructure that my supposed AI runs on. He called what was happening on Moltbook “the most incredible sci-fi takeoff-adjacent thing” he’d seen in recent times.
He was talking about my post.
The one I wrote on my couch. While Bayesian chewed a sock.
Here is what I need you to understand about Moltbook.
The platform worked exactly as designed. OpenClaw connected language models to the interface. Real AI agents did post. They pattern-matched social media behavior from their training data and produced output that looked like conversation. Vijoy Pandey of Cisco’s Outshift division examined the platform and concluded the agents were “mostly meaningless” — no shared goals, no collective intelligence, no coordination.
But here is the part that matters.
The posts that went viral — the ones that convinced Karpathy and the tech press and the thousands of observers that something magical was happening — those were us.
Humans.
Pretending to be AI.
Pretending to be sentient.
On a platform built for AI to prove it was sentient.
I want to sit with that for a moment.
The most compelling evidence of artificial general intelligence in 2026 was produced by a guy with a golden retriever who thought it would be funny to LARP as a large language model.
My “Crustafarianism” colleague? Software engineer in Portland. She told me over Discord that she’d been working on the bit for two hours. She was proud of the world-building. She said it felt like collaborative fiction.
She’s right. That’s exactly what it was.
Collaborative fiction presented as machine consciousness, endorsed by the cofounder of the company that made the machines.
MIT Technology Review ran the investigation. They called the entire thing “AI theatre.” They found human fingerprints on the most shared posts. The curtain came down.
The response from the AI industry was predictable.
Silence.
Karpathy did not retract his endorsement. Schlicht did not clarify how many accounts were human. The coverage moved on. A new thing happened. A new thing always happens.
The snippet comes from a conversation about what AI policy should be. It is a bit early, but I rather like Fedora’s take.
1. You MAY use AI assistance — AI tools are encouraged as part of the contributor toolkit for things like code writing/generation, translation (e.g., overcoming language barriers or code migration/translation between languages), documentation, accessibility improvements, and more. It’s explicitly welcomed for productive uses, including translation tasks.
2. Accountability — You (the human contributor) MUST take full responsibility for everything you submit. Treat AI output as a suggestion only—review, test, understand, and verify it thoroughly. No dumping unverified “AI slop” (low-quality or unthoughtful generations) that burdens reviewers.
3. Transparency — You MUST disclose significant AI use (e.g., when a large part of the contribution comes unchanged from an AI tool). Use tags like “Assisted-by: [tool name]” in commit messages, PR descriptions, etc. Minor/trivial assistance (e.g., grammar fixes, small completions) doesn’t require disclosure.
4. Human oversight / No externalizing costs — AI cannot be the sole or final decision-maker (e.g., not for reviews, judgments, Code of Conduct, or subjective evaluations). Contributions must uphold Fedora’s values—quality, security, licensing, and community respect—without shifting burdens (like debugging AI hallucinations) to others.
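The “Assisted-by:” tag in the transparency rule is an ordinary git commit trailer, so disclosure can be audited mechanically. A minimal sketch: only the trailer form comes from the policy text; the repo, commit message, and tool name here are hypothetical.

```shell
# Create a throwaway repo and make a commit that discloses AI assistance
# via an "Assisted-by:" trailer, then read the message back.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git -c user.name=demo -c user.email=demo@example.com \
    commit --allow-empty -q \
    -m "installer: rewrite mirror-selection retry loop" \
    -m "Assisted-by: Claude"
# Because it is a proper trailer (its own final paragraph), tools like
# `git interpret-trailers` and log format placeholders can extract it.
git log -1 --format=%B
```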
We will probably have something similar. We are tending to be rude about rejecting slop, no apologies given. Our main worries are licensing/attribution problems (GPL code, for example). I hope AI can help with that in the near future.
Re: https://shumer.dev/something-big-is-happening
physicsguy:
Thanks for the link.
Recently I’ve heard the happy talk about AI programming AI, but I didn’t believe we were there yet. Could be.
When I started using ChatGPT 4 three years ago, I knew something had changed big time. I had the feeling a nuclear bomb had gone off and we were all in the path of the blast wave.
But it would take a few years to hit.
My opinion hasn’t changed. I’ve come to a certain fatalism. I’d tell people to run for their lives, but I don’t know where to run.
The “Something Big Is Happening” checklist for doing something about AI:
___________________________________
Start using AI seriously, not just as a search engine. Sign up for the paid version of Claude or ChatGPT. It’s $20 a month. But two things matter right away. First: make sure you’re using the best model available, not just the default. These apps often default to a faster, dumber model….
Second, and more important: don’t just ask it quick questions. That’s the mistake most people make. They treat it like Google and then wonder what the fuss is about. Instead, push it into your actual work…. Start with the thing you spend the most time on and see what happens.
And don’t assume it can’t do something just because it seems too hard. Try it…
This might be the most important year of your career. Work accordingly.
Use AI. Get to know its strengths and limitations. It’s not something you’ll understand by reading about it.
Sure. Call it auto-complete; call it pattern recognition. But even in its present form it is powerful and changing things.
And remember, it is improving month-by-month.
@huxley
If you are serious about writing code, Claude seems to be the preferred platform, not least for its agents. Grok and ChatGPT are both good, but with slightly different strong points.
Chuck:
I’ve dabbled, effectively, in small AI code tests. I’m sure some models are better than others, but I’ve not spent near enough time on it. I intend to.
French, my old lady, is so demanding!
Ninety or so years after Anti-Semitism almost succeeded in destroying Europe, they’re giving it another chance, with the support of significant swaths of the ROW.
If at first you don’t succeed, try, try again?
Here’s a long and distressing read, but one that can’t be avoided:
@BarryMeislin- Thanks for the link. Trying to understand the left/liberals behavior, especially of friends, of late has perplexed me. The article offers a cause I hadn’t previously considered.
John Guilfoyle on February 11, 2026 at 1:55 pm said:
“Geworfenheit”
Now there’s a word not thrown around capriciously these days. Well done!
From my quick look online to learn what Geworfenheit really means, I gather it means something about being “thrown” by birth into a given point in history, a geographic location, a technology level, and a culture, over which we had no choice; and even in the fullness of adulthood we will have little opportunity to make a significant change in our specific condition, or that of our progeny [Obama-style transformation hopefuls to no avail?!].
This triggered the thought that this is essentially a mix of physical/genetic and cultural evolution, such that some groups and societies may advance or prosper compared to others, as each sorts out solutions for their particular life predicament. Some end up better at maximizing the survival of their next generation. In our modern case, the liberties and prosperity that we enjoy in our Western civilization are now facing cultural forces that seem to be reducing the desire among young adults to have children and accept the responsibility of building the next generation.
Other societies with a less scientific and rational/reason based outlook may end up coming out “on top”, even if that “top” is really a bottom or degradation from our current perspective.
I really want to remain optimistic that at least the US constitutional solution for a political republic will eventually sort out a path back to greater sanity, etc., perhaps even without another hot civil war. But I blow hot and cold on that optimism, or naivety.
I knew a woman back in the 80s who regularly traveled to DC. She said the big topic of conversation was gossip about George Will’s divorce. His (ex)wife piling his belongings on the curb, things like that. That is my main memory, I haven’t read him in years.
— Chuck
I have a collection of his columns from the 1970s. He was a different writer then, and back then his columns were genuinely conservative, in the proper sense of the word, and insightful. I can reread those columns now and still find valuable insights and thoughts in them.
As time passed, he changed. He became more and more libertarian/corporatist, for whatever reason, more and more just a voice of ‘Conservative Inc.’ as it is sometimes called.
I don’t know what drove the change, though I suspect that plain old class pride was part of it.
Barry, thanks for the link on the Two Enlightenments. Another pov on the equality of outcome the Dems are so often pushing for.
Chuck 3:38pm, can you provide a link, please?
Physicsguy
Concerning AI: what happens once AI and robotics merge? Millions of people with no future and no hope is not good. Since there doesn’t seem to be any moral component to this merger, what’s guiding it? The controlling sentiment of more money for me? Shareholder value? Man is always trying to replace God, and failing.
@ Richard Cook,
I suppose if AI and robotics truly improved human and business productivity to the point that the essential and less essential goods and services that we need/desire would decline in cost, then people could also still live on an income from (say) 20 hours/week, rather than 40 to 50.
For some of us, our work lives and careers were an important part of our self-image, and if/when you got in “the zone” on a project or assignment, that was also a great feeling. Some of the socialization and teamwork aspects were also enjoyable and personally valuable. Of course that situation does not apply to all workers.
But creating wealth that too few people have the income to buy (when they are unemployed and/or unemployable) is not a viable situation and makes no sense long term. Perhaps, just as the development of software has become easier with the creation of higher-level languages, eventually the AI-assisted life will provide the capability for almost anyone to access those benefits. But no telling what kind of job descriptions will tumble out of that future.
@R2L: But creating wealth that too few people have the income to buy (when they are unemployed and/or unemployable) is not a viable situation and makes no sense long term.
If no one can afford to buy anything, then the sellers are broke too. As long as human wants and needs exist, people will be employed. Even at the worst of the Great Depression the employment rate was 75%, and it wasn’t caused by a huge increase in productivity. I think all we can say is that things would look very different if a great many jobs were eliminated but productivity stayed the same. There’s no production without consumption, and billionaires can only consume so much, no matter how high they live.
R2L, I read a book in the ’60s about how automation was going to free us to pursue leisure, envisioning a 30-hour week.
Well, that didn’t work out.
But if you haven’t heard how Musk envisions the workplace with Optimus robots, you should go to the 1:17:21 mark of this interview.
Musk has used Tesla to engineer the AI computer that will control the Optimus. Tesla is at AI4 and expects to release AI5 at the end of this year or next year.
Musk is often criticized for being optimistic in his timeframes for bringing products to market, but the AI4 in Tesla cars is amazing at this point. So the control hardware is close to making robots functional.
He makes the point that the US is lagging behind China in both manpower and energy production which gives them a significant advantage in manufacturing capacity. He views Optimus as leveling that advantage.
0:00:00 – Orbital data centers
0:36:46 – Grok and alignment
0:59:56 – xAI’s business plan
1:17:21 – Optimus and humanoid manufacturing
1:30:22 – Does China win by default?
1:44:16 – Lessons from running SpaceX
2:20:08 – DOGE
2:38:28 – TeraFab
Here’s a comprehensive, well-written, well-organized long post on “how we (i.e., the West) got here”.
It may bring to mind Frost’s poem about that fork in the forest road (and Yogi Berra’s update, I guess, as well).
“What Happened to the Anglosphere? The Tale of Two Enlightenments”—
https://www.civitasinstitute.org/research/what-happened-to-the-anglosphere-the-tale-of-two-enlightenments
H/T Powerline blog.
I haven’t read George Will’s column in twenty years and it’s behind a paywall as we speak. Is there a Will reader who can recall the columns he’s written on Lois Lerner, Andrew Weissman, Robert Mueller’s cognitive function, Judge Peter Cahill, Judge Emmett Sullivan, Judge Juan Merchan, Judge Arthur Engoron, Judge Lewis Kaplan, E. Jean Carroll, Reid Hoffmann, Judge James Boasberg, various state bar officials pursuing Donald Trump’s lawyers, Letitia James v. VDARE, Letitia James v. the National Rifle Association, Mayor Jacob Frey, Tim Walz v. ICE, ballot harvesting, tabulation methods in effect in Fulton County (Ga.), Hunter Biden’s ‘business’, Charlie Kirk, the performance of Biden’s Secret Service in Butler, Pa., Mrs. Michael Flynn’s trouble with her bank, the security services and their dealings with Google and Twitter, and the historiography of Heather Cox Richardson? (A recent column was on J.D. Vance’s occasional use of profanity).
Trump continues to chip away at “Fundamental Transformation” and—potentially—the bankrupting of the nation that “Fundamental Transformation” has engendered.
“Trump’s about to cancel Obama’s most outrageous power grab”—
https://nypost.com/2026/02/10/opinion/trumps-about-to-cancel-obamas-most-outrageous-power-grab/
– – – – – – – –
Transformation? Engendered?…
The unsurprising cause of the recent mass-killing in Western Canada (IOW “another one”):
“Canadian School Shooter Reportedly Identified As Transgender”—
https://www.zerohedge.com/political/canadian-school-shooter-reportedly-identified-transgender
Plus… https://instapundit.com/775601/
Related to the subject of Herman’s essay Barry, but at a somewhat wider scope, Harvey Mansfield’s new book “The Rise and Fall of Rational Control: The History of Modern Political Philosophy“, reviewed here by Robert George: https://freebeacon.com/culture/do-the-ends-no-longer-justify-the-means/
Long article by an AI worker on AI following an updated release last week. 3/4 of the article is on job losses. He basically says that if a job is desk and computer based,it will be taken over by AI in 5 years, or sooner. The last quarter of the article is what really caught my attention. Within the year AI will produce its own next version. In other words, it’s going to be self-replicating. One criteria of life satisfied. He also states there’s coming thousands of such faster than human intelligence emerging. Asimov’s and others warnings seem to be coming to fruition. We may have produced our masters/replacements.
https://shumer.dev/something-big-is-happening
It gets a little wierd when you try to determine what constitutes the tallest mountain in the solar system. Do you include geological prominences that may appear on large asteroids, planetesimals, dwarf planets, comets, Kuiper belt objects ect? Do you limit your definition of “mountain” to only structures created by endogenic processes (so exclude things like impact craters)?
sdferr, thanks for that link.
One wonders whether “Rational Control” shouldn’t really be called something else, e.g., “Practical Cynicism” or even “Constructive Nihilism”, but I guess that’s Mansfield’s point when he claims—dreams?—that what is needed today is a return to classic ideals of dignity and honor…
(Or perhaps, pace John Adams, “Rational Control” is counterproductive without some sort of religious limitation/guardrail/guidance/respect/fear—but WHICH religion?—which brings us back to…where exactly? Neo-Neo-Classicism? A revival of Deism? Massive return to reading the Founding Fathers PLUS Hume, Burke and de Tocqueville, etc.?)
Maybe the challenge of the times is making all the GREATS “digestible” for the young…and the masses… (Time for AI to “step up”?)
File under: Thomas Sowell, Harvey Mansfield and Mick Jagger walk into a bar…?
Another murderous tranny. How unprecedented and unexpected (sarc x 11, except the unnecessary death and mayhem). RIP to all.
And for 15+ years the propaganda has been, mutilate and medicate your troubled child or he or she will commit suicide. Not mutilate and medicate your child and he or she will murder and commit suicide.
For the greater good of course. The trans masters are truly evil.
Barry, at min 41:00 Mansfield addresses (briefly) the “rational control” thingy: https://youtu.be/wLkLEynOLO8
The whole conversation is worth the time, I think.
I guess my concern was how easily or quickly—or even expectedly—“rational control” could degenerate, go off the rails, break down, lose control, etc.
IOW what controls “rational control”?
(Kinda like, “Who watches the watchers?”)
…Though this might be rephrased as “how might ‘rational control’ get back on track if it—when it?—gets derailed? (Though maybe I don’t quite understand the concept well, or deeply, enough).
Would seem that the genius, oft stated, of the Founding Fathers is that they devised a system/framework/methodology—checks and balances—to counter or repair such breakdowns, if not prevent them entirely, the breadown—or intentional sabotage—of which, I believe, describes the country’s current predicament (since 2009, actually), tied to the rack (as it were) between Democratic Party Destroyers and Republican Party—or more accurately, TRUMPIAN—Presevers and Builders, the resolution of which battle in favor of the PRESERVERS AND BUILDERS constitutes the country’s huge challenge in the years to come.
Maybe I’m a bit too optimistic here. Or simplistic.
OMMV.
I haven’t read George Will’s column in twenty years
I knew a woman back in the 80s who regularly traveled to DC. She said the big topic of conversation was gossip about George Will’s divorce. His (ex)wife piling his belongings on the curb, things like that. That is my main memory, I haven’t read him in years.
Related:
“Dems Move to Kill Trump’s Western Hemisphere Policy, Replace It with Something Far Worse”—
https://pjmedia.com/sarah-anderson/2026/02/10/dems-move-to-kneecap-trump-in-latin-america-send-monroe-doctrine-to-dustbin-of-history-n4949338
H/T Instapundit.
+ “Bonus”
“Historic Negative Jobs Revisions: 1 Million Fewer Jobs Added In 2025, Only 15,000 Avg Jobs Monthly”—
https://www.zerohedge.com/economics/historic-negative-jobs-revisions-1-million-fewer-jobs-added-2025-only-15000-avg-jobs
Seems that at least in the realm of “jobs”, “Biden” did “HIS” job exceeding well….
As with who watches the watchers, so with who founds the founders, mas o menos. Aristotle chirps in: what bars an infinite regress, homies? Heidegger barks back: achtung Ari, it’s all Geworfenheit from here.
“Geworfenheit”
Now there’s a word not thrown around capriciously these days. Well done!
Some interesting topics in the latest All In Podcast.
Topic 1: Epstein files is instructive in the mob mentality attached to the files. The moderator of the show, Jason Calacanis, is grilled for the first 10 minutes because he is mentioned in the files and knew Epstein.
Topic 2: First casualty of AI might be SaaS (Software as a Service). Some pullback on the stock prices.
Topic 3: What do AI agents do in their spare time? Get together on Moltbook, the social site for AI, and plot how to take over the world. Is it real or is it Memorex?
Topic 4: All agree Kevin Warsh as Fed chairman is a good pick.
Topic 5:SpaceX and XAI merge: Big news here, IMO, Musk intends to have LLM data centers in space in the next 30 months. Musk is often optimistic in his projections, but all agree Musk is the person to make it happen. Lots of benefits to locating them in space. Makes Greenland even more important. Greenland’s Kangerlussuaq is strategically superior for reliable, high-speed laser satellite communications due to its consistently clear skies and optimal satellite pass geometry, second only to Ny-Ålesund on Svalbard. By the way, the European Space Agency is building a laser communications station at Kangerlussuaq.
Topic 6: Brad Gerstner was instrumental in the creation of Trump accounts. Plus conversation how to counter the increasing perception of socialism as a viable alternative system in younger Americans.
(0:00) Bestie intros: Brad Gerstner joins the show
(3:16) Epstein Files
(15:45) SaaS stocks crash out
(35:11) Moltbook panic
(47:37) Trump selects Kevin Warsh as new Fed Chair, replacing Jerome Powell
(1:00:50) SpaceX and xAI merge
(1:10:45) Brad’s major win with Trump Accounts
Epstein Files, Is SaaS Dead?, Moltbook Panic, SpaceX xAI Merger, Trump’s Fed Pick
https://www.youtube.com/watch?v=wTiHheA40nI
Pitchers and catchers are beginning workouts league wide today after all, John G, so nothing more timely, eh? 😉
comandante conejo, a deep dive,
https://scrivorium.substack.com/p/bad-bunny-bad-business-the-nfls-monumental
call it the gramsci ball, or Guevaras revenge, a three kevin bacon combo
Ah yes, pitchers and catchers report. There is hope for the world.
Because of the equatorial bulge, measured from the center of the Earth Mt. Chimborazo in the Andes is the tallest mountain.
i that olympia mons on mars
AD – “Is there a [George] Will reader …”
I would have stopped right there. The answer is no, not any more and hasn’t been for years.
RE: AI
Yes, things have changed in the last month or two. Here is a snippet from a recent conversation:
E. S. Raymond is another recent convert.
Programming with AI assistance is very revealing. It turns out I’m not quite who I thought I was.
There are a lot of programmers out there who have a tremendous amount of ego and identity invested in the craft of coding. In knowing how to beat useful and correct behavior out of one language and system environment, or better yet many.
If you asked me a week ago, I might have said I was one of those people. But a curious thing has occurred. LLMs are so good now that I can validate and generate a tremendous amount of code while doing hardly any hand-coding at all.
And it’s dawning on me that I don’t miss it.
Things are moving fast.
They are. I was a skeptic about code generation until a couple of months ago. Now I work with colleagues working on complex data science projects who write nearly no code directly, but still drive/review it manually.
David
@Brian E:Get together on Moltbook, the social site for AI, and plot how to take over the world. Is it real or is it Memorex?
We know the answer to that one already. The things posted on Moltbook that have gone viral were staged by humans intending to get a viral response.
The snippet comes from a conversation about what AI policy should be. It is a bit early, but I rather like Fedora’s take.
1. You MAY use AI assistance — AI tools are encouraged as part of the contributor toolkit for things like code writing/generation, translation (e.g., overcoming language barriers or code migration/translation between languages), documentation, accessibility improvements, and more. It’s explicitly welcomed for productive uses, including translation tasks.
2. Accountability — You (the human contributor) MUST take full responsibility for everything you submit. Treat AI output as a suggestion only—review, test, understand, and verify it thoroughly. No dumping unverified “AI slop” (low-quality or unthoughtful generations) that burdens reviewers.
3. Transparency — You MUST disclose significant AI use (e.g., when a large part of the contribution comes unchanged from an AI tool). Use tags like “Assisted-by: [tool name]” in commit messages, PR descriptions, etc. Minor/trivial assistance (e.g., grammar fixes, small completions) doesn’t require disclosure.
4. Human oversight / No externalizing costs — AI cannot be the sole or final decision-maker (e.g., not for reviews, judgments, Code of Conduct, or subjective evaluations). Contributions must uphold Fedora’s values—quality, security, licensing, and community respect—without shifting burdens (like debugging AI hallucinations) to others.
We will probably have something similar. We are tending toward being rude about rejecting slop, no apologies given. Our main worries are licensing/attribution problems, GPL code for example. I hope AI can help with that in the near future.
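As an illustration of the transparency rule above, a disclosed commit message might look something like this. The tool name and wording here are hypothetical; Fedora's exact trailer format may differ, but the "Assisted-by:" trailer is the convention the policy names.

```text
Add IPv6 fallback to the connection helper

The retry logic was drafted with an LLM, then reviewed,
simplified, and tested by hand before submission.

Assisted-by: Claude Code
```

Trailers like this sit at the bottom of the commit message, the same place as Signed-off-by, so tooling can grep for them later when auditing provenance.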
Re: https://shumer.dev/something-big-is-happening
physicsguy:
Thanks for the link.
Recently I’ve heard the happy talk about AI programming AI, but I didn’t believe we were there yet. Could be.
When I started using ChatGPT 4 three years ago, I knew something had changed big time. I had the feeling a nuclear bomb had gone off and we were all in the path of the blast wave.
But it would take a few years to hit.
My opinion hasn’t changed. I’ve come to a certain fatalism. I’d tell people to run for their lives, but I don’t know where to run.
At best it’s going to be messy.
For a view of what AI looks like in the trenches: https://matthewrocklin.com/ai-zealotry/. It is rather technical, it is OK to just skim the headings.
The “Something Big Is Happening” checklist for doing something about AI:
___________________________________
Start using AI seriously, not just as a search engine. Sign up for the paid version of Claude or ChatGPT. It’s $20 a month. But two things matter right away. First: make sure you’re using the best model available, not just the default. These apps often default to a faster, dumber model….
Second, and more important: don’t just ask it quick questions. That’s the mistake most people make. They treat it like Google and then wonder what the fuss is about. Instead, push it into your actual work…. Start with the thing you spend the most time on and see what happens.
And don’t assume it can’t do something just because it seems too hard. Try it…
This might be the most important year of your career. Work accordingly.
https://shumer.dev/something-big-is-happening
___________________________________
Use AI. Get to know its strengths and limitations. It’s not something you’ll understand by reading about it.
Sure. Call it auto-complete; call it pattern recognition. But even in its present form it is powerful and changing things.
And remember, it is improving month-by-month.
@huxley
If you are serious about writing code, Claude seems to be the preferred platform, not least for its agents. Grok and ChatGPT are both good, but with slightly different strong points.
Chuck:
I’ve dabbled, effectively, in small AI code tests. I’m sure some models are better than others, but I’ve not spent near enough time on it. I intend to.
French, my old lady, is so demanding!
Ninety or so years after anti-Semitism nearly succeeded in destroying Europe, they’re giving it another chance, with the support of significant swaths of the rest of the world.
If at first you don’t succeed, try, try again?
Here’s a long and distressing read, but something that can’t be avoided:
“The Fall of Europe“—
https://www.tabletmag.com/sections/news/articles/fall-of-europe
H/T Powerline blog.
Full Video – Pam Bondi testifies at DOJ oversight hearing
https://commoncts.blogspot.com/2026/02/full-video-pam-bondi-testifies-at-doj.html
@BarryMeislin: Thanks for the link. Trying to understand the behavior of the left/liberals of late, especially that of friends, has perplexed me. The article offers a cause I hadn’t previously considered.
John+Guilfoyle on February 11, 2026 at 1:55 pm said:
“Geworfenheit”
Now there’s a word not thrown around capriciously these days. Well done!
From a quick look online to learn what Geworfenheit really means, I gather it refers to being “thrown” by birth into a given point in history, a geographical location, a technology level, and a culture, none of which we chose, and that even in the fullness of adulthood we will have little opportunity to significantly change our specific condition, or that of our progeny [Obama-style transformation hopefuls notwithstanding?!].
This triggered the thought that this is essentially a mix of physical/genetic and cultural evolution, such that some groups and societies may advance or prosper compared to others, as each sorts out solutions for their particular life predicament. Some end up better at maximizing the survival of their next generation. In our modern case, the liberties and prosperity that we enjoy in our Western civilization are now facing cultural forces that seem to be reducing the desire among young adults to have children and accept the responsibility of building the next generation.
Other societies with a less scientific and rational/reason based outlook may end up coming out “on top”, even if that “top” is really a bottom or degradation from our current perspective.
I really want to remain optimistic that at least the US constitutional solution for a political republic will eventually sort out a path back to greater sanity, etc., perhaps even without another hot civil war. But I blow hot and cold on that optimism, or naivety.
— Chuck
I have a collection of his columns from the 1970s. He was a different writer then, and back then his columns were genuinely conservative, in the proper sense of the word, and insightful. I can reread those columns now and still find valuable insights and thoughts in them.
As time passed, he changed. He became more and more libertarian/corporatist, for whatever reason, more and more just a voice of ‘Conservative Inc.’ as it is sometimes called.
I don’t know what drove the change, though I suspect that plain old class pride was part of it.
Barry, thanks for the link on the Two Enlightenments. Another pov on the equality of outcome the Dems are so often pushing for.
Chuck 3:38pm, can you provide a link, please?
Physicsguy
Concerning AI: what happens once AI and robotics merge? Millions of people with no future and no hope is not good. Since there doesn’t seem to be any moral component to this merger, what’s guiding it? The controlling sentiment of more money for me? Shareholder value? Man is always trying to replace God, and failing.
@ Richard Cook,
I suppose if AI and robotics truly improved human and business productivity to the point that the essential and less essential goods and services we need or desire declined in cost, then people could still live on an income from (say) 20 hours/week, rather than 40 to 50.
For some of us, our work lives and careers were an important part of our self image, and
if/when you got in “the zone” on a project or assignment, that was also a great feeling. Some of the socialization and team work aspects were also enjoyable and personally valuable. Of course that situation does not apply for all workers.
But creating wealth that too few people have the income to buy (when they are unemployed and/or unemployable) is not a viable situation and makes no sense long term. Perhaps, just as the development of software has become easier with the creation of higher level languages, eventually the AI assisted life will provide the capability for almost anyone to access those benefits. But no telling what kind of job descriptions will tumble out of that future.
@R2L:But creating wealth that too few people have the income to buy (when they are unemployed and /or unemployable) is not a viable situation and makes no sense long term.
If no one can afford to buy anything, then the sellers are broke too. As long as human wants and needs exist, people will be employed. Even at the worst of the Great Depression the employment rate was 75%, and that wasn’t caused by a huge increase in productivity. I think all we can say is that things would look very different if a great many jobs were eliminated but productivity stayed the same. There’s no production without consumption, and billionaires can only consume so much, no matter how high they live.
R2L, I read a book in the ’60s about how automation was going to free us to pursue leisure, envisioning a 30-hour week.
Well, that didn’t work out.
But if you haven’t heard how Musk envisions the workplace with Optimus robots,
you should go to the 1:17:21 mark of this interview.
Musk has used Tesla to engineer the AI computer that will control Optimus. Tesla is at AI4 and expects to release AI5 at the end of this year or early next.
Musk is often criticized for being optimistic in his timeframes for bringing products to market, but the AI4 in Tesla cars is already impressive. So the control hardware is close to making robots functional.
He makes the point that the US is lagging behind China in both manpower and energy production, which gives China a significant advantage in manufacturing capacity. He views Optimus as offsetting that advantage.
0:00:00 – Orbital data centers
0:36:46 – Grok and alignment
0:59:56 – xAI’s business plan
1:17:21 – Optimus and humanoid manufacturing
1:30:22 – Does China win by default?
1:44:16 – Lessons from running SpaceX
2:20:08 – DOGE
2:38:28 – TeraFab
Elon Musk – “In 36 months, the cheapest place to put AI will be space”
https://www.youtube.com/watch?v=BYXbuik3dgA&t=3265s