A snippet of life in New York City:
My friend, David, parked his car, then helped his partner carry her luggage into their apartment building. He came straight back to his car. He was gone less than two minutes. His cell phone was gone.
Here’s how it happened:
The phone is now in the Bronx, under new ownership, and locked. David remotely wiped all the phone’s contents. You can do that with an iPhone.
There’s a lesson here somewhere.
Now to AI
My enthusiasm for Nvidia, the king of AI, continues to be rewarded.

Everyone and their uncle is waxing lyrical about AI. This is the cover of The Economist’s current edition.


Here’s The Economist’s magnificent article in full — they call it a leader:
For most of history the safest prediction has been that things will continue much as they are. But sometimes the future is unrecognisable. The tech bosses of Silicon Valley say humanity is approaching such a moment, because in just a few years artificial intelligence (AI) will be better than the average human being at all cognitive tasks. You do not need to put high odds on them being right to see that their claim needs thinking through. Were it to come true, the consequences would be as great as anything in the history of the world economy.
Since the breakthroughs of almost a decade ago, AI’s powers have repeatedly and spectacularly outrun predictions. This year large language models from OpenAI and Google DeepMind got to gold in the International Mathematical Olympiad, 18 years sooner than experts had predicted in 2021. The models grow ever larger, propelled by an arms race between tech firms, which expect the winner to take everything; and between China and America, which fear systemic defeat if they come second. By 2027 it should be possible to train a model using 1,000 times the computing resources that built GPT-4, which lies behind today’s most popular chatbot.
What does that say about AI’s powers in 2030 or 2032? As we describe in one of two briefings this week, many fear a hellscape, in which AI-enabled terrorists build bioweapons that kill billions, or a “misaligned” AI slips its leash and outwits humanity. It is easy to see why these tail risks command so much attention. Yet, as our second briefing explains, they have crowded out thinking about the immediate, probable, predictable—and equally astonishing—effects of a non-apocalyptic AI.
Before 1700 the world economy grew, on average, by 8% a century. Anyone who forecast what happened next would have seemed deranged. Over the following 300 years, as the Industrial Revolution took hold, growth averaged 350% a century. That brought lower mortality and higher fertility. Bigger populations produced more ideas, leading to yet faster expansion. Because of the need to add human talent, the loop was slow. Eventually, greater riches led people to have fewer children. That boosted living standards, which grew at a steady pace of about 2% a year.
Subsistence to silicon
AI faces no such demographic constraint. Technologists promise that it will rapidly hasten the pace at which discoveries are made. Sam Altman, OpenAI’s chief executive, expects AI to be capable of generating “novel insights” next year. AIs already help program better AI models. By 2028, some say, they will be overseeing their own improvement.
Hence the possibility of a second explosion of economic growth. If computing power brings about technological advances without human input, and enough of the pay-off is reinvested in building still more powerful machines, wealth could accumulate at unprecedented speed. Economists have long been alive to the relentless mathematical logic of automating the discovery of ideas. According to a recent projection by Epoch AI, a bullish think-tank, once AI can carry out 30% of tasks, annual growth will exceed 20%.
True believers, including Elon Musk, conclude that self-improving AI will create a superintelligence. Humanity would gain access to every idea to be had—including for building the best robots, rockets and reactors. Access to energy and human lifespans would no longer impose limits. The only constraint on the economy would be the laws of physics.
You don’t need to go to that extreme to conjure up AI’s mind-boggling effects. Consider, as a thought experiment, just the incremental step to human-level intelligence. In labour markets the cost of using computing power for a task would limit the wages for carrying it out: why pay a worker more than the digital competition? Yet the shrinking number of superstars whose skills were not automatable and could directly complement AI would enjoy enormous returns. The only people doing better than them, in all likelihood, would be the owners of AI-relevant capital, which would be gobbling up a rising share of economic output.
Everyone else would have to adapt to gaps in AI’s abilities and to the spending of the new rich. Wherever there was a bottleneck in automation and labour supply, wages could rise rapidly. Such effects, known as “cost disease”, could be so strong as to limit the explosion of measured GDP, even as the economy changed utterly.
The new patterns of abundance and shortage would be reflected in prices. Anything AI could help produce—goods from fully automated factories, say, or digital entertainment—would see its value collapse. If you fear losing your job to AI, you can at least look forward to lots of such things. Wherever humans were still needed, cost disease might bite. Knowledge workers who switched to manual work might find they could afford less child care or fewer restaurant meals than today. And humans might end up competing with AIs for land and energy.
This economic disruption would be reflected in financial markets. There could be wild swings between stocks as it became clear which companies were winning and losing winner-takes-all contests. There would be a rapacious desire to invest, both to generate more AI power and in order for the stock of infrastructure and factories to keep pace with economic growth. At the same time, the desire to save for the future could collapse, as people—and especially the rich, who do the most saving—anticipated vastly higher incomes.
Persuading people to give up capital for investment would therefore require much higher interest rates—high enough, perhaps, to make long-duration asset prices fall, despite explosive growth. Scholars disagree, but in some models interest rates rise one-for-one or more with growth. In an explosive scenario that would mean having to refinance debts at 20-30%. Even debtors whose incomes were rising fast could suffer; those whose incomes were not hitched to runaway growth would be pummelled. Countries that were unable or unwilling to exploit the AI boom could face capital flight. There could also be macroeconomic instability anywhere, because inflation could take off as people binged on their anticipated fortunes and central banks did not raise rates fast enough.
It is a dizzying thought experiment. Could humanity cope? Growth has accelerated before, but there was no mass democracy during the Industrial Revolution; the Luddites, history’s most famous machine-haters, did not have the vote. Even if average wages surged, higher inequality could lead to demands for redistribution. The state would also have more powerful tools to monitor and manipulate the population. Politics would therefore be volatile. Governments would have to rethink everything from the tax base to education to the protection of civil rights.
Despite that, the rise of superintelligence should provoke wonder. Dario Amodei, boss of Anthropic, told The Economist this week that he believes AI will help treat once-incurable diseases. The way to look at another acceleration, if it comes, is as the continuation of a long miracle, made possible only because people embraced disruption. Humanity may find its intelligence surpassed. It will still need wisdom. ■
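A back-of-envelope sketch of how the growth rates The Economist cites actually compound (my own arithmetic, not from the article): a steady 2% a year works out to roughly 624% per century, far above the 350%-a-century *average* of the whole industrial era, and Epoch AI’s bullish 20%-a-year scenario would double the economy roughly every four years.

```python
import math

def growth_over(years: float, annual_rate: float) -> float:
    """Total percentage growth after `years` of compounding at a constant annual rate."""
    return ((1 + annual_rate) ** years - 1) * 100

# The modern steady state of ~2% a year compounds to ~624% per century.
modern_century = growth_over(100, 0.02)

# In Epoch AI's bullish 20%-a-year scenario, the economy doubles
# roughly every ln(2) / ln(1.2) ~= 3.8 years.
doubling_years = math.log(2) / math.log(1.2)

print(f"{modern_century:.0f}% growth per century at 2%/yr")
print(f"economy doubles every {doubling_years:.1f} years at 20%/yr")
```

That gap — 8% a century before 1700, 350% a century after, 624% a century today — is why the article treats another acceleration as plausible rather than absurd.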
The AI uptake is extraordinary. All my friends are using it multiple times each day. Their frequency of use is escalating, and what they use it for broadens daily as they discover new uses.
I find AI to be unbelievably useful for what I do — which is research.
Here’s a recent inquiry of mine on Perplexity.ai. I find Perplexity so useful that I pay for it monthly.



This is a 2019 Kia Forte
My son got it from my favorite car rental place — Turo.

He loves the car. He’s been driving it on New York and Massachusetts highways, schlepping his kids to summer camps.
He’s averaging an amazing 46.7 MPG, which is clearly better than a slap in the belly with a cold fish. (Use that on your friends. They’ll be amazed at your erudition.)
These are dumb, but funny.



Little Johnny teaches us how to sell
The kids filed into class Monday morning. They were all very excited. Their weekend assignment was to sell something, then give a talk on salesmanship.
Little Sally led off. “I sold Girl Scout cookies and I made $30,” she said proudly. “My sales approach was to appeal to the customer’s civic spirit, and I credit that approach for my obvious success.”
“Very good,” said the teacher.
Little Debbie was next. “I sold magazines,” she said. “I made $45, and I explained to everyone that magazines would keep them up on current events.”
“Very good, Debbie,” said the teacher.
It was Little Johnny’s turn. The teacher held her breath. Little Johnny walked to the front of the classroom and dumped a box full of cash on the teacher’s desk. “$2,467,” he said.
“$2,467!” cried the teacher. “What in the world were you selling?”
“Toothbrushes,” said Little Johnny.
“Toothbrushes,” echoed the teacher. “How could you possibly sell enough toothbrushes to make that much money?”
“I found the busiest corner in town,” said Little Johnny. “I set up a Dip & Chip stand and gave everybody who walked by a free sample. They all said the same thing: ‘Hey, this tastes like dog poop!’ I would say, ‘It is dog poop. Wanna buy a toothbrush?’”
Little Johnny got five stars for his assignment.
Bless his little heart.
My stock holdings are booming
They’re booming not because I’m smart or good at picking stocks, but because we’re in the middle of an AI-fueled stock market boom. Hence even Blind Freddie is getting rich. There’s a list of the 70-odd stocks currently in my portfolio on my web site, in the right-hand column. Click here.
My dumb philosophy: The stock market is moving so fast it’s impossible to spend huge amounts of time researching particular stocks. Hence, I buy a little based on cursory checking, then watch them. I dump them or buy more as I learn and watch more.
Warren Buffett wouldn’t agree with this approach. Take the example of Robinhood. It’s got oodles of momentum powering it and I’m up substantially. But as I came to research it more — by opening an account with them and trying to give them a little money to trade with — I gave up on using them.
Take this Robinhood idiocy. I linked my bank account, then sent them $500. They told me it wouldn’t appear in my Robinhood trading account for six days. You read that right – six days. Dumb. Dumb. Don’t get me started on what I think of their “trading platform,” also known as their web site. But the dumb stock — HOOD is its symbol — keeps going up… Their stock is good. Their trading platform sucks. That’s a technical term.
Noël Coward was right.
Only mad dogs and Englishmen go out in the midday sun — or in today’s noonday heat and humidity. I tried playing tennis in the heat and the humidity. That gave dumbness a whole new meaning.
— Harry Newton