AI: The best sales tool? Expertise on tap? World’s best stockpicker? A frightening political tool? Or all four?

My friend used ChatGPT to write her resignation letter. Fast Company reports someone used it to beat a parking ticket. The judge called the defense persuasive.

Of all the game-changing technologies from the printing press to the steam engine, from electricity to the Internet, nothing has taken off faster than the new AI.

ChatGPT currently has over 100 million users. And OpenAI's website receives over 1.8 billion visits per month.

I’ve been trying to wrap my brain around the new AI. I’ve been at the forefront of new technologies, like computers, fiber optics and satellite communications. I started a magazine and trade show which we called Computer Telephony to portray what marvelous things happened when you combined computer intelligence with ultra-high-speed telecommunications lines. The iPhone and the Apple App Store are babies born of that era.

But this is a new, more powerful era. I’m writing this blog with the help of an Internet fiber line that’s running at a thousand million bits per second.

Google started by saying it could find us anything anywhere on the Internet. But finding simple stuff, as Google does, pales when compared to the power of knowing all the stuff and having the intelligence to put together complex, useful answers.

In a recent blog I wrote how ChatGPT laid out for me a beautiful Swiss mountain tour, replete with trains and hotels. All in a few seconds. No travel agent. No Google research. (For that blog, click here.)

But there’s a lot more. On the way to Switzerland, I need to drop by Amsterdam to see the grandkids. I booked a hotel for two nights — using the standard online website. It asked if I wanted an early check-in. I said “Yes.” It said it would tell the hotel and maybe a room would be available when I got there on the morning of August 10. Maybe it would be available. Maybe it wouldn’t. Really useful.

It didn’t offer me an early check-in for $50 or whatever. It didn’t ask if I was flying in from across the Atlantic with jet lag, and would I like a massage for $150 or whatever. What about booking me an Uber from the airport to the hotel? I actually asked the hotel’s chatbot how to get to the hotel from the airport. It could not answer that simple question. It kept giving me the hotel’s address. But no way of getting there.

More importantly, it did not offer to sell me a more expensive room. It probably could have sold me a pricier room, because I chose a room in the middle of the price range, not knowing precisely what the room contained. Did it have a desk and decent-speed Internet? The photos of the room I chose, and the discussion of what it contained, were slim to none.

What about breakfast in my early-check-in room? Would I want laundry done on the second day of my stay?

The hotel left so much money on the table, it made me sick.

I’ll fly KLM. Their website had a business class ticket from Boston to Amsterdam for $2,832: “Only 7 tickets available at this price.” When I tried to buy it, the site said “an error had happened.” End of story. No explanation. No nothing. I called them on the phone. Remember that thing? I actually got someone. But she couldn’t give me the price listed on the website. Her advice? Keep checking the website every few hours! (I have nothing else to do.) In the end I bought a cheaper “premium” economy ticket on Delta. KLM lost nearly $3,000 on their idiocy.

The new AI is a super sales tool, and then some. In yesterday’s New York Times, there’s a piece “The Optimist’s Guide to Artificial Intelligence and Workplace.” In it,

David Autor, a professor of economics at the Massachusetts Institute of Technology, said that A.I. could potentially be used to deliver “expertise on tap” in jobs like health care delivery, software development, law, and skilled repair. “That offers an opportunity to enable more workers to do valuable work that relies on some of that expertise,” he said.

The article continues:

New technology can lead to new jobs. Farming employed nearly 42 percent of the work force in 1900, but because of automation and advances in technology, it accounted for just 2 percent by 2000. The huge reduction in farming jobs didn’t result in widespread unemployment. Instead, technology created a lot of new jobs. A farmer in the early 20th century would not have imagined computer coding, genetic engineering or trucking. In an analysis that used census data, Autor and his co-authors found that 60 percent of current occupational specialties did not exist 80 years ago.

The big picture is great for the economists. But you and I are more interested in A.I. stocks. In that previous AI blog I posted on April 20, I listed a collection of AI stocks including (in the order I mentioned them) NVDA, SMCI, MSFT, GOOGL, META, and TSM. They’ve done well since. Thank you, Harry.

Well, now comes something new. My dear friend and financial guru Richard Grigonis did some research. This stuff is awesome:

ChatGPT’s New Hat — Stock Guru

It’s finally happening. Generative AI – ChatGPT to be specific – has successfully found its way into everything from writing essays for cheating students, to translating Shakespeare into Swahili, to answering dumb questions. So it was inevitable that investors would be tempted to treat it as a possible magic bullet or “electronic guru” to predict stock movements.

The financial comparison site finder.com ran an experiment between March 6 and April 28. Amazingly, they found that a dummy portfolio of 38 stocks selected by ChatGPT gained 4.9 percent, while 10 leading investment funds clocked an average loss of 0.8 percent.

Finder’s analysts had carefully picked the 10 most popular UK funds on the Interactive Investor trading platform. They included major, respected companies like HSBC and Fidelity, who have access to all sorts of sophisticated software and human expertise not available to you and me.

ChatGPT, on the other hand, was tasked with making its stock selections using only common criteria like growth history and low debt levels. And yet, it outperformed its competition and demonstrated its prowess by cherry-picking winners such as Microsoft, Netflix, and Walmart.

The news spread like wildfire, with CNN buzzing about the groundbreaking results and highlighting ChatGPT’s potential to revolutionize retail investors’ decision-making.

The CEO of Finder, Jon Ostler, seized the opportunity to underscore ChatGPT’s potential as a guiding light for retail investors. He confidently predicted a surge in consumers flocking to leverage ChatGPT for financial gain. He thinks it won’t be long before people catch on to its extraordinary abilities and go whole hog.

Ostler is probably right. In a Finder survey of 2,000 UK adults, eight percent had already sought financial advice from ChatGPT, and 19 percent were seriously considering it.

Moreover, an April study released by the University of Florida reveals that ChatGPT could predict the stock price movements of specific companies with far greater precision than basic analysis models. The researchers examined a data set of stock-related headlines from October 2021 to December 2022, ensuring that none of the news pieces were part of ChatGPT’s training data.

The researchers used “sentiment analysis,” feeding the chatbot 67,586 headlines relating to 4,138 unique companies during this time, and asking whether the headlines were good, bad, or irrelevant news for the companies’ stock prices. These headlines were filtered for relevance, with the team narrowing their focus to full articles and press releases, excluding any stock-gain or stock-loss headlines. They also removed duplicate news.
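The pipeline described above — dedupe the headlines, filter them, then ask the model whether each one is good, bad, or irrelevant news — can be sketched roughly like this. The prompt wording, function names, and scoring scale below are my own illustration, not the study’s exact setup:

```python
# Rough sketch of a headline-sentiment pipeline in the spirit of the
# Florida study. Prompt text and the +1/0/-1 scale are illustrative
# assumptions, not the paper's actual wording.

def dedupe_headlines(headlines):
    """Drop duplicate headlines (case-insensitive), keeping the first occurrence."""
    seen = set()
    unique = []
    for h in headlines:
        key = h.strip().lower()
        if key not in seen:
            seen.add(key)
            unique.append(h)
    return unique

def build_prompt(ticker, headline):
    """Format one headline into a good/bad/irrelevant sentiment question."""
    return (
        f"Is this headline good, bad, or irrelevant news for the "
        f"stock price of {ticker}?\nHeadline: {headline}\n"
        f"Answer GOOD, BAD, or IRRELEVANT."
    )

def score_answer(answer):
    """Map the model's one-word answer to a numeric signal: +1, -1, or 0."""
    return {"GOOD": 1, "BAD": -1, "IRRELEVANT": 0}.get(answer.strip().upper(), 0)
```

In a real run, each prompt would be sent to the chatbot and the numeric scores aggregated per company per day before being turned into trades.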

The Florida study found that ChatGPT outperformed other “traditional sentiment analysis methods” that also use data from headlines and social media to forecast stock movements. In short, “Our results suggest that incorporating advanced language models into the investment decision-making process can yield more accurate predictions and enhance the performance of quantitative trading strategies.”

The results were jaw-droppingly spectacular:

• For the October 2021 to December 2022 period, the team tested six different investing strategies.

• The Long-Short strategy, which involved buying companies with good news and short-selling those with bad news, yielded the highest returns, at over 500%.

• The Short-only strategy, focusing solely on short-selling companies with bad news, returned nearly 400%.

• The Long-only strategy, which only involved buying companies with good news, returned about 50%.

• Three other strategies resulted in net losses: the “All News” hold strategy, the Equally-Weighted hold strategy, and the Market-Value-Weighted hold strategy.
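As a back-of-the-envelope illustration of the Long-Short idea (the tickers and returns below are my own toy numbers, not the study’s data): go long the good-news names, short the bad-news names, and take the spread between the two equal-weighted legs.

```python
def long_short_return(signals, next_day_returns):
    """One period's return for a toy long-short sentiment strategy.

    signals: dict of ticker -> +1 (good news), -1 (bad news), 0 (irrelevant)
    next_day_returns: dict of ticker -> realized next-day return (e.g. 0.02 = 2%)
    Long the +1 names, short the -1 names, equal-weighted within each leg.
    """
    longs = [t for t, s in signals.items() if s == 1]
    shorts = [t for t, s in signals.items() if s == -1]
    long_ret = sum(next_day_returns[t] for t in longs) / len(longs) if longs else 0.0
    short_ret = sum(next_day_returns[t] for t in shorts) / len(shorts) if shorts else 0.0
    # Shorting means we profit when the bad-news names fall.
    return long_ret - short_ret

# Hypothetical example: good news on AAA, bad news on BBB, nothing on CCC.
signals = {"AAA": 1, "BBB": -1, "CCC": 0}
rets = {"AAA": 0.02, "BBB": -0.03, "CCC": 0.01}
spread = long_short_return(signals, rets)  # long leg gains 2%, short leg gains 3%
```

Compounding small daily spreads like this over a year is how the headline percentages in the study get so large; it does not account for trading costs or borrow fees.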

If these kinds of results continue to pour in, the implications are staggering. AI chatbots like ChatGPT could empower small investors, giving them a tool that might, with continued development, soon rival the big boys and the super-expensive computer models of, say, Jim Simons’ Renaissance Technologies hedge funds.

At first this all sounds great. Ostensibly it’s a tale of triumph, where a lively chatbot outshines investment funds, captures the hearts of retail investors, and even wows the researchers with its uncanny predictions. The future of financial decision-making seems bright, with ChatGPT leading the charge with its usual charm and flair.

But, like everything else in AI, there’s a possible wild side to all this.

Let’s say the fantasy becomes real and investors find the “magic prompt” that yields superlative financial results. The prospect of such accuracy might sound enticing, but it also raises concerns about the potential for chaos and disruption.

One of the immediate consequences could be increased volatility in the market. If a large number of investors start relying on the AI program’s predictions, it could lead to a significant influx of buying or selling activity based on those predictions. This sudden surge in trading volume could result in exaggerated price movements and heightened market instability.

Moreover, widespread use of such an AI program could potentially create a self-fulfilling prophecy. With millions of market participants acting on the program’s predictions simultaneously, their collective actions could actually influence stock prices in the predicted direction. This could further reinforce the program’s accuracy and create a feedback loop that amplifies market movements.
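The feedback loop is easy to see in a toy model (the numbers here are entirely hypothetical): if enough traders buy whatever the oracle predicts will rise, their buying itself pushes the price up, and the prediction comes true regardless of fundamentals.

```python
def feedback_price_path(start_price, followers, impact, steps):
    """Toy self-fulfilling-prophecy model. Each step the 'oracle' predicts
    'up'; `followers` traders each buy, and each purchase nudges the price
    up by fraction `impact`. The prediction is confirmed purely because
    people acted on it -- no new information enters the model."""
    prices = [start_price]
    for _ in range(steps):
        prices.append(prices[-1] * (1 + followers * impact))
    return prices

# 10,000 followers, each moving the price by 0.0001% per trade:
path = feedback_price_path(100.0, followers=10_000, impact=0.000001, steps=5)
# the price climbs roughly 1% per step with no change in fundamentals
```

The model is deliberately crude, but it captures why everyone trading on the same signal can amplify the very moves the signal predicted.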

I’m reminded of the old joke that, if everybody knew what the market was going to do, then the top of the market would become the bottom and the bottom would become the top.

Additionally, relying on an AI oracle for stock predictions might lead to a decline in market efficiency. If investors rely solely on the program’s recommendations without conducting their own analysis, it could result in herding behavior and a lack of independent thinking. This could reduce the diversity of investment strategies and hinder the market’s ability to incorporate new information efficiently.

Regulatory challenges could also pop up. Fearful regulators might step in to assess the impact of AI on market integrity and investor protection. They would have to determine if additional measures or safeguards are needed to prevent market distortions, ensure fair competition, and maintain the overall stability of the financial system.

So if this all turns out to be real, then we’re all going to have to carefully consider the implications and take measures to mitigate potential risks. Striking a balance between innovation and market stability may soon become a major topic.

But — in the meantime — give it a go. (This is Harry speaking now.)

A period of uncertainty

Crashing office values. Declining office attendance. Raging homelessness. The hollowing out of major cities (Portland, OR, San Francisco, Los Angeles, etc.). Rising interest rates. The looming debt ceiling catastrophe (or not).

This is not predictable. Except we all know we’ve been through worse before. Much worse.

But those of us with cash are eyeing huge opportunities as “distressed” real estate emerges once again from the banks and lenders who believed the stories of infinite wealth from real estate and ever-rising values.

Favorite reason to return to the office?

This is a real office. Nobody dares to enter the office. Imagine knocking over one of the piles!

Oh yes, the evil of the new A.I.?

The new AI can pump out persuasive fake news and fake images like there’s no tomorrow. If you thought separating truth from fiction was hard before, you ain’t seen nothing yet.

What about some regulations? Everyone and their uncle seems in favor of something to protect us from the fraudsters and the autocrats. So let me quote you the final paragraphs in Monday’s New York Times article:

When OpenAI’s chief executive, Sam Altman, testified in Congress this week and called for regulation of generative artificial intelligence, some lawmakers hailed it as a “historic” move. In fact, asking lawmakers for new rules is a move straight out of the tech industry playbook. Silicon Valley’s most powerful executives have long gone to Washington to demonstrate their commitment to rules in an attempt to shape them while simultaneously unleashing some of the world’s most powerful and transformative technologies without pause.

One reason: A federal rule is much easier to manage than different regulations in different states, Bruce Mehlman, a political consultant and former technology policy official in the Bush administration, told DealBook. Clearer regulations also give investors more confidence in a sector, he added.

The strategy sounds sensible, but if history is a useful guide, the reality can be messier than the rhetoric:

  • In December 2021, Sam Bankman-Fried, founder of the failed crypto exchange FTX, was one of six executives to testify about digital assets in the House and call for regulatory clarity. His company had just submitted a proposal for a “unified joint regime,” he told lawmakers. A year later, Bankman-Fried’s businesses were bankrupt, and he was facing criminal fraud and illegal campaign contribution charges.

  • In 2019, Facebook founder Mark Zuckerberg wrote an opinion piece in The Washington Post, “The Internet Needs New Rules,” based on failures in content moderation, election integrity, privacy and data management at the company. Two years later, independent researchers found that misinformation was more rampant on the platform than in 2016, even though the company had spent billions trying to stamp it out.

  • In 2018, the Apple chief Tim Cook said he was generally averse to regulation but supported more strict data privacy rules, saying, “It’s time for a set of people to think about what can be done.” But to maintain its business in China, one of its biggest markets, Apple has largely ceded control of customer data to the government as part of its requirements to operate there.

You can read the Times’ piece here.

I found these words in my reading in the Economist:

With the new generation of ai (notice no one knows how to spell it), the battlefront is shifting from attention to intimacy.

Even without creating “fake intimacy”, the new ai tools would have an immense influence on our opinions and worldviews. People may come to use a single ai adviser as a one-stop, all-knowing oracle. No wonder Google is terrified. Why bother searching, when I can just ask the oracle? The news and advertising industries should also be terrified. Why read a newspaper when I can just ask the oracle to tell me the latest news? And what’s the purpose of advertisements, when I can just ask the oracle to tell me what to buy?

What will happen to the course of history when ai takes over culture, and begins producing stories, melodies, laws and religions? Previous tools like the printing press and radio helped spread the cultural ideas of humans, but they never created new cultural ideas of their own. ai is fundamentally different. ai can create completely new ideas, completely new culture.

This could have seismic implications for the industry’s future. “The barrier to entry for training and experimentation has dropped from the total output of a major research organisation to one person, an evening, and a beefy laptop,” the Google memo claims. An llm can now be fine-tuned for $100 in a few hours. With its fast-moving, collaborative and low-cost model, “open-source has some significant advantages that we cannot replicate.” Hence the memo’s title: this may mean Google has no defensive “moat” against open-source competitors. Nor, for that matter, does Openai.

Here’s the final word on fear-mongering:

Personally, life has been pleasant recently

I’m back on the tennis court every day. And friends still send me wonderful cartoons:

The final good news:

See you soon. — Harry Newton