Microsoft is developing its own artificial intelligence chip, internally code-named Athena, to power the large language models (LLMs) used to build generative AI products, according to a report by The Information on April 18. Citing two anonymous sources with “direct knowledge of the project,” The Information states that the project has been in the works since 2019 and that a small group of Microsoft and OpenAI employees is already testing the technology. “Microsoft is hoping the chip will perform better than what it currently buys from other vendors, saving it time and money on its costly AI efforts,” the report adds.

Microsoft has also been pushing to integrate OpenAI’s ChatGPT into Bing to enhance the search engine’s abilities, as well as into Microsoft Teams Premium to make meetings “more intelligent”. According to Gizmodo, the company is mainly looking to cut the cost of procuring chips from Nvidia, the traditionally dominant supplier of the high-end chips needed to deploy AI systems efficiently.

Why it matters: Companies are not only focusing on algorithms or LLMs to scale their generative AI efforts but are also shifting attention to developing the hardware components needed to train and run large AI models. AI analysts are of the view that this pattern indicates that big cloud service providers are “trying to cut costs by lessening their reliance on AI hardware/software vendor Nvidia and longtime chipmakers Intel and AMD while making innovations in AI”. It is said that this will also diversify and…
