IBM unveils on-chip accelerated AI processor
At the annual Hot Chips conference, IBM unveiled details of its upcoming IBM Telum Processor, designed to bring deep learning inference to enterprise workloads and help address fraud in real time.
Telum is IBM’s first processor that contains on-chip acceleration for AI inferencing while a transaction is taking place. The on-chip hardware acceleration is designed to help customers achieve business insights at scale across banking, finance, trading, insurance applications and customer interactions. A Telum-based system is planned for the first half of 2022.
The chip features a centralized design that allows clients to leverage the AI processor for AI-specific workloads, such as financial services tasks like fraud detection, loan processing, clearing and settlement of trades, anti-money laundering, and risk analysis.
According to IBM, “Clients will be positioned to enhance existing rules-based fraud detection or use machine learning, accelerate credit approval processes, improve customer service and profitability, identify which trades or transactions may fail, and propose solutions to create a more efficient settlement process.”
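The pattern described above, running an inference step synchronously while a transaction is in flight rather than flagging fraud after the fact, can be illustrated with a minimal sketch. This is a hypothetical example, not IBM's software stack: the model is a made-up logistic-regression scorer, and the feature names and thresholds are assumptions chosen for illustration.

```python
import math

# Hypothetical model weights for illustration only; a real deployment
# would load a trained model and run it on the on-chip accelerator.
WEIGHTS = {"amount_usd": 0.0008, "foreign": 1.2, "night": 0.7}
BIAS = -4.0

def fraud_score(features: dict) -> float:
    """Logistic-regression score in [0, 1]; higher means riskier."""
    z = BIAS + sum(WEIGHTS[k] * v for k, v in features.items())
    return 1.0 / (1.0 + math.exp(-z))

def process_transaction(txn: dict, threshold: float = 0.5) -> str:
    # Inference happens in the transaction path itself, before the
    # transaction completes -- the latency budget this design targets.
    score = fraud_score({
        "amount_usd": txn["amount_usd"],
        "foreign": float(txn["foreign"]),
        "night": float(txn["night"]),
    })
    return "HOLD_FOR_REVIEW" if score >= threshold else "APPROVE"

# A routine daytime domestic purchase versus a large overnight foreign one.
print(process_transaction({"amount_usd": 120.0, "foreign": False, "night": False}))
print(process_transaction({"amount_usd": 9500.0, "foreign": True, "night": True}))
```

The point of on-chip acceleration is that a scoring call like this can sit on the critical path of every transaction without adding the round-trip latency of an off-platform inference service.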
The chip contains eight processor cores with a deep, super-scalar, out-of-order instruction pipeline running at a clock frequency of over 5GHz, designed for the demands of heterogeneous enterprise-class workloads. The redesigned cache and chip-interconnection infrastructure provides 32MB of cache per core and can scale to 32 Telum chips. The dual-chip module design contains 22 billion transistors and 19 miles of wire across 17 metal layers.