Where to invest in 2025

These are the stocks I like in order — no surprises here. Sorry.

+ Nvidia. Major supplier of everything in demand. Business is booming, and the stock is still cheap based on P/E.
+ Google. Gaining the most (at present) from monetizing AI.
+ Broadcom. Great networking and chipmaking expertise.
+ Amazon. AI-enhanced retailing is truly stupendous. I bet I know where all your Christmas presents came from.
+ Microsoft. Amazing growth of its cloud business.
+ Meta. Advertising is exploding across its various platforms. Its Auto-fill is amazing.
+ Apple. I’m warming to its new iPhone 16 and Apple Intelligence.

In a moment, I will want you to watch this wonderful video, which explains where the semiconductor business is going, who’s benefiting, and why.

But, first, you’ll need some definitions. So, jump below.

First, hyperscalers, which are big customers of Nvidia stuff. Here’s a definition of hyperscaler, courtesy Perplexity.ai:

A hyperscaler is a large-scale data center operator that provides massive computing resources and cloud services on a global scale. These companies specialize in delivering enormous amounts of computing power, storage capacity, and network resources to organizations and individuals worldwide.

Key characteristics of hyperscalers include:

+ Massive infrastructure: Hyperscalers operate millions of physical servers distributed across multiple data centers globally.

+ Scalability: They can rapidly scale their infrastructure to accommodate growing demands and workloads without compromising performance or efficiency.

+ Wide range of services: Hyperscalers offer various cloud services, including Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS).

+ Advanced technologies: They provide access to cutting-edge technologies such as artificial intelligence, machine learning, and big data analytics.

+ Pay-as-you-go model: Hyperscalers typically adopt a usage-based pricing model, allowing customers to pay only for the resources they consume.

+ Global presence: With data centers located worldwide, hyperscalers can offer low-latency access and improved performance to users across the globe.

Examples of prominent hyperscalers include Amazon Web Services (AWS), Microsoft Azure, Google Cloud Platform (GCP), IBM Cloud, and Alibaba Cloud.

Hyperscalers play a crucial role in modern computing by enabling businesses to access vast computing resources without the need for significant upfront investments in infrastructure. This allows organizations to focus on innovation and core business activities while leveraging the scalability and advanced capabilities provided by hyperscale cloud services.

Next, inference, which is what AI is all about. This definition is from ChatGPT.

Inference is the process of deriving logical conclusions from premises or evidence. It involves reasoning to reach new knowledge or understanding based on what is already known. Inferences can be made in various fields, such as science, mathematics, logic, and everyday decision-making.

There are three primary types of inference:

+ Deductive Inference: Drawing specific conclusions from general premises or rules. If the premises are true and the reasoning is valid, the conclusion must also be true. Example:

Premise 1: All humans are mortal.
Premise 2: Socrates is human.
Conclusion: Socrates is mortal.

+ Inductive Inference: Making generalizations based on specific observations or evidence. The conclusions are probable but not guaranteed. Example:

Observation: The sun has risen in the east every day so far.
Conclusion: The sun will rise in the east tomorrow.

+ Abductive Inference: Inferring the most likely explanation for a set of observations. This reasoning is used when forming hypotheses or explanations. Example:

Observation: The grass is wet.
Possible explanation: It rained last night.

Inference plays a crucial role in problem-solving, decision-making, and developing theories across disciplines.
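
In the AI and data-center world the video talks about, inference has a narrower meaning: running an already-trained model on new data to get answers. Training is the learning step; inference is the using step, repeated over and over on all those Nvidia chips. Here’s a tiny Python sketch of the difference; the dataset and model are arbitrary picks, just for illustration:

```python
# A tiny illustration of training vs. inference with scikit-learn.
# The dataset and model are arbitrary choices for the sketch.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)

# Training: the expensive step, done once (or again later, when retraining).
model = LogisticRegression(max_iter=1000).fit(X, y)

# Inference: the repeated step -- run the trained model on new data.
new_flower = [[5.1, 3.5, 1.4, 0.2]]   # one new measurement
print(model.predict(new_flower))      # prints the predicted class, e.g. [0]
```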

Now to synthetic data. Courtesy Microsoft Copilot:

Synthetic data refers to information that’s artificially generated rather than obtained by direct measurement or observation. It’s created using algorithms, simulations, or statistical models to mimic the characteristics of real-world data, allowing researchers and developers to test systems without compromising privacy or security.

This kind of data is particularly useful in scenarios where real data is scarce, costly to obtain, or carries privacy concerns. By leveraging synthetic data, organizations can efficiently train machine learning models, conduct experiments, and ensure robust testing environments without relying on actual user data.
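
Here’s a tiny Python sketch of the idea, with made-up numbers: generate fake data that matches the average and spread of a small “real” sample, so you can test and train without exposing the real records:

```python
# Minimal sketch: generate synthetic data that mimics the statistics
# of a (made-up) real sample, without exposing the real records.
import numpy as np

real_ages = np.array([34, 41, 29, 55, 38, 47, 31, 60])  # pretend this is sensitive

rng = np.random.default_rng(seed=0)
synthetic_ages = rng.normal(loc=real_ages.mean(),   # same average as the real data
                            scale=real_ages.std(),  # same spread as the real data
                            size=1000)              # but 1,000 fake records

print(synthetic_ages.mean(), synthetic_ages.std())  # close to the real statistics
```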

And now to tokens.

Google Gemini defined it thus:

A token can have a few different meanings depending on the context. Here are a few of the most common ones:

  • A symbol or sign: This is the most general definition. A token can be anything that represents something else, like a white flag being a token of surrender.
  • A small gift or memento: A token can be a small gift given as a gesture of appreciation or remembrance.
  • A piece of paper or card that can be exchanged for goods or services: For example, a bus token or a gift card.
  • A unit of a cryptocurrency: In the world of cryptocurrencies, a token is a unit of a digital asset that is built on top of a blockchain platform.

Which definition of “token” is most relevant will depend on the specific situation you’re referring to.
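
One meaning Gemini left out is the one that matters for the video: in AI, a token is a small chunk of text (a word, a piece of a word, or a punctuation mark) that a language model reads and writes one at a time. Chips are rated, and AI services are priced, by how many tokens they can chew through. A rough Python sketch of the idea follows; real models use learned tokenizers (such as byte-pair encoding), so this simple split is only an approximation:

```python
# Rough illustration of tokenization: chop text into small pieces a model
# can process. Real tokenizers learn their pieces from data; this naive
# split only shows the concept.
import re

text = "Nvidia's chips crunch billions of tokens per second."
tokens = re.findall(r"\w+|[^\w\s]", text)

print(tokens)
# ['Nvidia', "'", 's', 'chips', 'crunch', 'billions', 'of', 'tokens',
#  'per', 'second', '.']
print(len(tokens), "tokens")  # 11 tokens
```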

Now to retraining.

ChatGPT 4.0 defined it as:

Retraining in the context of AI refers to the process of updating a machine learning model by training it again, either from scratch or using a previously trained model as a starting point. Retraining is typically done to improve the model’s performance, adapt it to new data, or address deficiencies. Here’s a breakdown of retraining scenarios:

1. Improving Accuracy

  • Retraining is used to improve the model’s performance when new data becomes available. The additional data may help the model better generalize and make more accurate predictions.

2. Updating with New Data

  • When a model’s predictions are based on data distributions that have changed over time (a phenomenon known as data drift), retraining on more recent data ensures that the model remains relevant and accurate.

3. Transfer Learning

  • A pretrained model (e.g., a large neural network trained on a vast dataset) can be retrained on a smaller, domain-specific dataset. This allows the model to leverage its existing knowledge while adapting to a new task or domain.

4. Fine-Tuning

  • Retraining can involve fine-tuning a pretrained model. In this case, only specific parts of the model (e.g., the final layers) are updated to suit a new dataset or objective.
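
Here’s a tiny Python sketch of the fine-tuning idea in items 3 and 4, using PyTorch. The pretrained image model and the new 10-class task are assumptions, just for illustration:

```python
# Minimal fine-tuning sketch: keep a pretrained model's knowledge and
# retrain only its final layer for a new 10-class task.
import torch
import torch.nn as nn
from torchvision import models

# Start from a model pretrained on a huge dataset (downloads weights).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze everything the model already learned...
for param in model.parameters():
    param.requires_grad = False

# ...then replace and retrain only the final layer for the new task.
model.fc = nn.Linear(model.fc.in_features, 10)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One training step on a fake batch of images and labels.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 10, (8,))

optimizer.zero_grad()
loss = loss_fn(model(images), labels)
loss.backward()
optimizer.step()
```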

OK. Now that you understand the concepts in the video that I didn’t, please watch it. It’s very good.

My list of stocks to own for 2025 is above. Frankly, I can’t find any area matching the growth of these tech companies.

AI has had the fastest uptake of any technology I’ve ever seen. I arrived in technology 55 years ago, before the Internet, before PCs, before the cell phone, and, my friends say, before electricity. Not nice.

The bottom step

This is the bottom step in New York City’s subway system. Notice how the railing sticks out beyond the last step.

That’s because Janno’s smart designers want you to hold on and not fall on the last step going down. Most staircases are not designed with Janno’s brilliance. Often the railing runs out before the last step.

See you tomorrow. – Harry Newton

Legal stuff: This blog does not offer investment advice. The blog is for education and (hopefully) entertainment.