Believe it or not, despite all the money being poured into AI, it’s not as profitable as one might think. All this injection of funds and investment is a bet on a future where AI could become the main driving force. For now, though, most AI companies are bleeding cash, thanks to expensive hardware, the costs of keeping data centers running, and more. But Google might have found a solution to that problem by getting Samsung to build its TPUs.

Future Google TPU chips could be made by Samsung

According to a recent post on X by @jukan05, Google could outsource the manufacturing of its TPUs to Samsung. Google executives reportedly paid a visit to Samsung’s semiconductor facility in Taylor, Texas, where they also discussed how many TPUs Samsung might be able to supply.

This is good news for Samsung. The South Korean tech giant might be leading in terms of smartphone sales, but that’s only on the consumer front. On the foundry side of things, Samsung is way behind TSMC, which produces chips for companies like Apple, Qualcomm, NVIDIA, and more.

Google potentially working with Samsung could bolster the latter’s resume and, in the future, attract other companies that want to wean themselves off their reliance on TSMC. Google’s current TPUs were developed in collaboration with Broadcom. They are said to cost 80% less than NVIDIA’s H100 while offering similar, if not better, performance.

If Google were to go with Samsung instead of TSMC, it could result in even cheaper TPUs in the future. This could reduce Google’s overall costs when it comes to building future data centers or upgrading existing ones.

What is Google’s TPU, and why is it important?

Running AI models is resource-intensive. Even thanking an AI and getting it to say “You’re welcome” consumes more compute than you might expect for just a couple of words.

This is why companies like NVIDIA have shifted their focus toward developing more AI hardware. With its TPU, Google has created a custom AI chip designed to speed up AI and machine learning tasks. This includes training models like Gemini, running inference for tasks like image recognition, powering neural networks, and more.

Google’s approach is different from NVIDIA’s. Google’s TPU is purpose-built for neural network math, while NVIDIA’s GPUs are designed for broader AI-related workloads. Working with Broadcom and Samsung could result in cheaper TPUs, which could cut Google Cloud’s costs while challenging NVIDIA’s data center dominance.