Research Bits: April 19

CPU power prediction
Researchers from Duke University, Arm Research and Texas A&M University have developed an artificial intelligence method to predict CPU power consumption, returning results more than a trillion times per second while consuming very little power itself.

“This is an intensively studied problem that has traditionally relied on additional circuitry to solve,” said Zhiyao Xie, a doctoral student at Duke. “But our approach runs directly on the microprocessor in the background, which opens up many new opportunities. I think that’s why people are excited about it.”

The approach, called APOLLO, uses an artificial intelligence algorithm to identify and select just 100 of a processor’s millions of signals that correlate most closely with its power consumption. It then builds a power model from those 100 signals and monitors them to estimate the power consumption of the entire chip in real time.
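The general recipe can be sketched in a few lines of Python. The example below is only an illustration of the idea, not the published APOLLO algorithm: it uses sparse regression to pick a small subset of synthetic signal traces that best explain per-cycle power, then fits a lightweight linear proxy on just those signals. All sizes, signal data and power values are made up.

```python
# Minimal sketch of the idea (not the published APOLLO method): use sparse
# regression to pick a small subset of signal toggles that best explain
# per-cycle power, then fit a cheap linear proxy on that subset.
# All sizes and data below are synthetic and purely illustrative.
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression

rng = np.random.default_rng(0)
n_cycles, n_signals, n_selected = 2000, 3000, 100      # hypothetical scale

# Per-cycle toggle activity of every candidate signal (0 = idle, 1 = toggled).
signals = rng.integers(0, 2, size=(n_cycles, n_signals)).astype(float)

# Pretend the true per-cycle power depends on a modest hidden subset of signals.
hidden = rng.choice(n_signals, size=150, replace=False)
power = signals[:, hidden] @ rng.uniform(0.1, 1.0, size=150) + rng.normal(0, 0.5, n_cycles)

# Step 1: sparse regression ranks signals by how strongly they explain power.
selector = Lasso(alpha=0.05, max_iter=5000).fit(signals, power)
top = np.argsort(np.abs(selector.coef_))[-n_selected:]   # keep ~100 signals

# Step 2: a lightweight model over only those signals becomes the runtime proxy;
# at run time, only these ~100 toggle counts would need to be monitored per cycle.
proxy = LinearRegression().fit(signals[:, top], power)
print("proxy R^2 on training traces:", round(proxy.score(signals[:, top], power), 3))
```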

“APOLLO approaches an ideal power estimation algorithm that is both accurate and fast and can easily be integrated into a processing core at low power cost,” Xie said. “And because it can be used in any type of processing unit, it could become a common component in future chip design.”

In addition to monitoring power consumption, the researchers said it could be used as a tool to optimize processor designs.

“Once the AI has selected its 100 signals, you can look at the algorithm and see what they are,” Xie said. “Many of the selections make intuitive sense, but even if they don’t, they can provide insight to designers by letting them know which processes are most strongly correlated to power consumption and performance.”

APOLLO was prototyped on the Arm Neoverse N1 and Cortex-A77 microprocessors.

Less analog-to-digital conversion for in-memory processing
Researchers from Washington University in St. Louis, Shanghai Jiao Tong University, the Chinese Academy of Sciences and the Chinese University of Hong Kong have designed a new processing-in-memory (PIM) circuit that uses neural approximators to reduce the amount of analog information that must be converted to digital.

“Today’s computing challenges are data-intensive,” said Xuan “Silvia” Zhang, associate professor in the Department of Electrical and Systems Engineering at Washington University in St. Louis. “We have to process tons of data, which creates a performance bottleneck at the CPU and memory interface.”

The team created resistive random-access memory PIM, or RRAM-PIM. “In resistive memory, you don’t have to translate to digital or binary. You can stay in the analog domain. If you need to add, you connect two currents,” Zhang said. “If you need to multiply, you can change the resistor value.”
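Those two analog operations map directly onto Ohm’s law and Kirchhoff’s current law, which a toy crossbar model makes easy to see. The voltages and conductances below are arbitrary illustrative values, not the team’s device parameters.

```python
# Toy model of an RRAM crossbar performing analog multiply-accumulate.
# Inputs are applied as row voltages; each cell's conductance (1/resistance)
# acts as a weight; currents that share a column wire add up automatically.
# All values are illustrative, not real device parameters.
import numpy as np

voltages = np.array([0.2, 0.5, 0.1, 0.3])      # row input voltages (V)
conductances = np.array([                       # cell conductances (S), i.e. weights
    [1e-4, 2e-4, 5e-5],
    [3e-4, 1e-4, 2e-4],
    [2e-4, 4e-4, 1e-4],
    [1e-4, 1e-4, 3e-4],
])

# Ohm's law: each cell contributes I = V * G ("multiply by changing the resistor value").
# Kirchhoff's current law: currents on a shared column simply add ("connect two currents").
column_currents = voltages @ conductances       # one analog partial sum per column

print("analog column currents (A):", column_currents)
# In a conventional RRAM-PIM design, each of these column currents would then
# have to pass through an analog-to-digital converter.
```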

However, RRAM-PIM encounters a bottleneck when the information needs to be converted to digital. To reduce this, the team added a neural approximator. “A neural approximator is built on a neural network that can approximate arbitrary functions,” Zhang said.

In the RRAM-PIM architecture, after the resistors in a crossbar array have performed their calculations, the answers must be translated into a digital format. In practice, that means summing the results from each column of resistors in the circuit. Each column produces a partial result, and each of these partial sums must be converted to digital, an energy-intensive operation.

Neural approximation makes the process more efficient by performing computations within columns, across columns, or in whatever combination is most efficient. This reduces the number of analog-to-digital converters (ADCs) needed and increases computational efficiency, the researchers said.

“No matter how many analog partial sums are generated by the columns of the RRAM crossbar array – 18 or 64 or 128 – we only need one analog-to-digital conversion,” said Weidong Cao, a postdoctoral researcher at Washington University in St. Louis. “We used a hardware implementation to reach the theoretical lower bound.”
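To make the saving concrete, the sketch below counts idealized analog-to-digital conversions for a hypothetical 64-column crossbar. A conventional design digitizes every column’s partial sum; combining the partial sums in the analog domain first (a plain sum stands in here for the trained neural approximator) leaves a single value to convert. The ADC model and all numbers are assumptions for illustration only.

```python
# Schematic ADC-count comparison (illustrative numbers, not the authors' circuit).
import numpy as np

rng = np.random.default_rng(1)

def adc(x, bits=8, full_scale=64.0):
    """Idealized analog-to-digital conversion of one analog value."""
    levels = 2 ** bits
    code = np.clip(np.round(x / full_scale * (levels - 1)), 0, levels - 1)
    return code / (levels - 1) * full_scale

partial_sums = rng.uniform(0.0, 1.0, size=64)     # 64 analog column outputs

# Conventional RRAM-PIM: digitize every column, then sum digitally.
conventional = sum(adc(p, full_scale=1.0) for p in partial_sums)   # 64 conversions

# Neural-approximator path: the columns' analog values are merged in the
# analog domain (here a plain sum stands in for the trained approximator),
# so only one value ever reaches the ADC.
merged_analog = partial_sums.sum()
approximated = adc(merged_analog)                                   # 1 conversion

print(f"conventional: 64 ADC ops -> {conventional:.3f}")
print(f"approximator: 1 ADC op   -> {approximated:.3f}")
```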

The researchers say the approach could have great benefits for large-scale PIM computers.

Fast charging challenges
Researchers from Argonne National Laboratory and the University of Illinois at Urbana-Champaign have identified some of the problems that arise when batteries are charged too quickly and that shorten battery life in applications such as fast-charging electric vehicles.

Lithium-ion batteries typically use a graphite anode. The process of inserting lithium ions into the anode is called intercalation. When a battery is charged too quickly, instead of intercalating, the lithium ions tend to aggregate above the surface of the anode, creating a plating effect.

“Plating is one of the primary causes of battery performance degradation during fast charging,” said Daniel Abraham, a battery scientist at Argonne. “As we rapidly charged the battery, we found that in addition to the plating on the anode surface, there was a buildup of reaction products inside the pores of the electrode.” As a result, the anode itself experiences some degree of irreversible expansion, which adversely affects battery performance.

The researchers used scanning electron nanodiffraction to observe the battery. They discovered that at the atomic level, the network of graphite atoms at the edges of the particles deforms due to repeated fast charging, which hinders the intercalation process. “Basically what we’re seeing is that the atomic lattice in the graphite is warping, which prevents the lithium ions from finding their ‘home’ inside the particles – instead they plate onto the particles,” Abraham said.

“The faster we charge our battery, the more atomically disordered the anode will become, which ultimately prevents the lithium ions from being able to move back and forth,” Abraham said. “The key is to find ways to prevent this loss of organization or to somehow modify the graphite particles so that the lithium ions can intercalate more efficiently.”
