Being in the right place at the right time is one means to success (assuming you’re smart enough to recognize the opportunity). Anticipating a developing market is another, and creating a market outright is yet a third road to success. Nvidia has done all three, and more, with AI.
Before large language models (LLMs), transformers, and generative AI exploded onto the scene, Nvidia was already seeding what was then called “accelerated compute,” or GPU compute, and used CUDA, its C++-like programming language, as a catalyst and gateway to exploiting the power of parallel processing on a GPU. GPUs are complex devices, and getting many threads operating on data to behave properly and stay in sync is a tricky process. CUDA took a lot of the drudgery out of that work, and the payoff was so good that hundreds of developers in large organizations took advantage of it and built up a huge library of proprietary and open programs that ran on Nvidia GPUs.
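To give a sense of what “taking the drudgery out” means in practice, here is a minimal, illustrative CUDA sketch (not from the original editorial) of the classic SAXPY operation. The programmer writes an ordinary-looking C++ function and a thread-index calculation; CUDA handles spawning, scheduling, and retiring the thousands of GPU threads that would otherwise have to be managed by hand.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Each GPU thread computes one element of y = a*x + y.
// The runtime, not the programmer, manages thread creation and scheduling.
__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (i < n)                                      // guard against overrun
        y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;  // one million elements
    float *x, *y;
    // Unified memory: accessible from both CPU and GPU.
    cudaMallocManaged(&x, n * sizeof(float));
    cudaMallocManaged(&y, n * sizeof(float));
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    // Launch enough 256-thread blocks to cover all n elements.
    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);
    cudaDeviceSynchronize();  // wait for the GPU to finish

    printf("y[0] = %f\n", y[0]);
    cudaFree(x);
    cudaFree(y);
    return 0;
}
```

The launch syntax `<<<blocks, threads>>>` is the whole parallelization story from the programmer’s point of view; the equivalent hand-rolled multithreaded code, with explicit synchronization, is far longer and easier to get wrong.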
You can read Dr. Jon Peddie’s latest editorial HERE on the Ojo-Yoshida Report.