Qualcomm Launches New AI Chips to Compete in the Data Center Race

2025-10-27 22:00

Qualcomm Technologies, Inc. (NASDAQ:QCOM) announced on Monday the launch of its next-generation artificial intelligence inference-optimized solutions for data centers, namely the Qualcomm AI200 and AI250 chip-based accelerator cards and racks.

Building on the company’s leadership in Neural Processing Unit (NPU) technology, these solutions offer rack-scale performance and superior memory capacity for fast generative AI inference, delivering high performance per dollar per watt, Qualcomm said.

Qualcomm AI200 introduces a purpose-built rack-level AI inference solution designed to deliver low total cost of ownership (TCO) and optimized performance for large language model (LLM) and large multimodal model (LMM) inference, as well as other AI workloads.

Also Read: Qualcomm And Valeo Broaden Collaboration To Speed Hands Off Driving Features

Performance

The AI200 supports 768 GB of LPDDR per card, offering higher memory capacity at lower cost while enabling exceptional scale and flexibility for AI inference.

The Qualcomm AI250 solution will debut with an innovative memory architecture based on near-memory computing, providing a generational leap in efficiency and performance for AI inference workloads by delivering more than 10 times higher effective memory bandwidth and significantly lower power consumption.

This enables disaggregated AI inferencing for efficient utilization of hardware while meeting customer performance and cost requirements.

Both rack solutions feature direct liquid cooling for thermal efficiency, PCIe for scale up, Ethernet for scale out, confidential computing for secure AI workloads, and a rack-level power consumption of 160 kW.

Qualcomm AI200 and AI250 will be commercially available by 2026 and 2027, respectively.

Competition

Qualcomm’s AI accelerator rivals include Nvidia Corp.’s (NASDAQ:NVDA) H100 and H200 chips, Advanced Micro Devices, Inc.’s (NASDAQ:AMD) Instinct MI300X accelerators, and Intel Corp.’s (NASDAQ:INTC) Gaudi accelerators.

Alphabet Inc.’s (NASDAQ:GOOGL) Google has developed its own Tensor Processing Units (TPUs), which are optimized for popular machine learning frameworks, including TensorFlow and PyTorch.

Amazon.com Inc.’s (NASDAQ:AMZN) Amazon Web Services (AWS) created Inferentia chips to help customers scale machine learning applications more effectively.

QCOM Price Action: Qualcomm shares were up 3.48% at $174.91 at the time of publication on Monday, according to Benzinga Pro data.

Read Next:

  • Qualcomm And Google Cloud Forge AI Alliance To Transform Cars Into Smart Agents

Photo via Qualcomm

Risk and disclaimer notice: The above content represents only the author's personal position and views and does not represent any position of 华盛, nor can 华盛 verify the truthfulness, accuracy, or originality of the above content. Before making any investment decision, investors should consider the risks of the investment products in light of their own circumstances and, where necessary, consult a professional investment advisor. 华盛 does not provide any investment advice and makes no commitments or guarantees in this regard.