RONALD GOERKE, Principal Engineer, Advantest
The rapid evolution of semiconductor devices has amplified the demand for advanced automated test equipment (ATE) that can handle increasingly complex test scenarios for logic devices. ATE vector memory is becoming an increasingly valuable commodity as scan-pattern volume soars. Extrapolations based on data from the International Technology Roadmap for Semiconductors (ITRS) indicate that scan data volume will double every three years, and some new data suggests that with the growth of AI products, scan data could begin increasing tenfold over future three-year periods. Furthermore, as parallel and multiplexed scan give way to multi-gigabit high-speed serial I/O (HSIO) scan (as specified in the IEEE 1149.10 standard or in proprietary implementations), devices with fewer pins require even more vector memory behind every single device pin.
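To put those growth rates in perspective, the short sketch below compounds both scenarios over a few three-year periods. The normalized baseline and the 12-year horizon are arbitrary illustration choices, not figures from the roadmap data.

```python
# Back-of-the-envelope projection of scan-data growth. The 2x-per-3-years rate
# follows the ITRS-based extrapolation cited above; the 10x-per-3-years rate is
# the AI-driven scenario. Baseline and horizon are arbitrary, for illustration only.
baseline = 1.0  # today's scan data volume, normalized to 1

for years in range(0, 13, 3):
    periods = years // 3
    doubling = baseline * 2 ** periods   # volume doubles every three years
    tenfold = baseline * 10 ** periods   # volume grows tenfold every three years
    print(f"+{years:2d} years: {doubling:6.0f}x (doubling) vs {tenfold:8.0f}x (tenfold)")
```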
Contending with the data
Key drivers of this data explosion include higher gate counts, new and more intricate fault models, and chiplet-based designs, which demand lower DPPM (defective parts per million). Consequently, ATE systems are increasingly likely to run out of memory when testing complex devices. Several steps can help to use the available memory more efficiently: applying higher levels of pattern compression, avoiding pattern duplication, simplifying instructions, or combining patterns to avoid complex operating sequences, for example. If such steps are not sufficient, you can use site memory sharing, which must be enabled on a per-pattern basis, or traditional memory pooling, which occurs automatically, although the user must consider load-board design. In either case, sharing is restricted to one memory pool, which can create bottlenecks for data-intensive scan and functional test and can complicate load-board design.
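As a generic illustration of one of these steps, avoiding pattern duplication amounts to recognizing identical pattern payloads and loading them into vector memory only once. The sketch below shows that bookkeeping idea in plain Python; it is not a SmarTest API, and the names and payloads are invented.

```python
import hashlib

loaded: dict[str, str] = {}  # content hash -> name of the pattern already loaded

def load_pattern(name: str, payload: bytes) -> str:
    """Download a pattern only if an identical payload is not already in memory."""
    digest = hashlib.sha256(payload).hexdigest()
    if digest in loaded:
        return f"{name}: reusing vector memory of '{loaded[digest]}'"
    loaded[digest] = name
    return f"{name}: downloaded to vector memory"

print(load_pattern("scan_block_a", b"...vector data..."))
print(load_pattern("scan_block_b", b"...vector data..."))  # duplicate payload, reused
```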
Consider, as an example of the traditional approach, a Pin Scale 5000 digital card: it contains eight modules, each with 32 channels and four test processors, providing eight channels per test processor. The eight channels represent one memory pool, and traditional memory pooling can stack all eight channels of memory behind one pin, with fanout supporting up to eight channels for multisite memory sharing (FIGURE 1). With the traditional implementation, however, a given memory pool in the Pin Scale 5000 test instrument cannot extend beyond eight channels, potentially forcing a tradeoff between a costly hardware upgrade and compromised test coverage and efficiency.
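A toy model makes the eight-channel ceiling concrete. The channel count per test processor comes from the description above; the per-channel depth and the oversized pattern set are invented numbers used only to show where the tradeoff appears.

```python
from dataclasses import dataclass

CHANNELS_PER_POOL = 8        # eight channels behind each test processor (from above)
CHANNEL_DEPTH = 100_000_000  # hypothetical vector depth per channel

@dataclass
class TraditionalPool:
    channels: int = CHANNELS_PER_POOL

    def max_depth_behind_one_pin(self) -> int:
        # All channels of the pool can be stacked behind one pin, but the pool
        # can never borrow memory from a neighboring test processor.
        return self.channels * CHANNEL_DEPTH

pool = TraditionalPool()
required = 9 * CHANNEL_DEPTH  # a pattern set just beyond the pool's limit
if required > pool.max_depth_behind_one_pin():
    print("Pattern set exceeds the pool: upgrade hardware or trim test content")
```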

Extending the memory pool
Advantest developed Xtreme Pooling technology to overcome this limitation and avoid such unpleasant tradeoffs. Introduced with SmarTest release 8.7.2.0, the technology extends the vector memory pool of the Pin Scale 5000 card beyond eight channels, optimizing flexibility and efficiency for high-speed, high-data-volume test applications.
Enabled by the proprietary Xtreme Link communication-network technology for ATE systems, Xtreme Pooling exploits the fact that a test program usually does not fill the vector-memory pool of every test processor. In FIGURE 2, moving from left to right, each group of eight vertical bars represents the eight channels of memory available to test processors TP2, TP3, and TP4. The dark areas represent memory that the respective test processors utilize, while the lightly shaded areas represent unused memory that could be allocated to other test processors.
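In the terms of FIGURE 2, the extended pool is simply the unused remainder of every test processor's local memory. The occupancy figures in the sketch below are invented for illustration; only the eight-channel pool size per test processor comes from the description above.

```python
POOL_CHANNELS = 8  # channels of vector memory behind each test processor

# Hypothetical fraction of each test processor's memory that the program fills
occupancy = {"TP2": 0.85, "TP3": 0.30, "TP4": 0.55}

free_channels = {tp: POOL_CHANNELS * (1.0 - used) for tp, used in occupancy.items()}
extended_pool = sum(free_channels.values())  # all free vector memory on the card

print(f"Free memory available for pooling: {extended_pool:.1f} channel-equivalents")
```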

Several naming conventions help to clarify how Xtreme Pooling works:
- Xtreme Pool refers to all free vector memory.
- Donor refers to a test processor whose memory can store data that can be executed on other test processors.
- Recipient refers to a test processor that can execute vector data copied from other test processors.
In addition, a new pattern property describes two memory locations: local (standard, associated with a particular test processor) and remote (in the Xtreme Pool).
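Under these naming conventions, the allocation decision can be sketched as follows. The classes, capacities, and helper function are illustrative assumptions, not Advantest's implementation: a pattern that fits in the recipient's local memory stays local, and otherwise its data is placed remotely in a donor's free memory.

```python
from dataclasses import dataclass

@dataclass
class TestProcessor:
    name: str
    capacity: int  # local vector memory in channel-equivalents (arbitrary units)
    used: int = 0

    @property
    def free(self) -> int:
        return self.capacity - self.used

def place_pattern(recipient: TestProcessor, donors: list[TestProcessor], size: int) -> str:
    """Return the pattern's memory-location property: 'local' or 'remote'."""
    if recipient.free >= size:      # standard case: pattern stays local
        recipient.used += size
        return "local"
    for donor in donors:            # otherwise borrow a donor's free memory
        if donor.free >= size:
            donor.used += size
            return "remote"         # pattern data lives in the Xtreme Pool
    raise MemoryError("No free vector memory left on the card")

tp1 = TestProcessor("TP1", capacity=8, used=7)
tp3 = TestProcessor("TP3", capacity=8, used=1)
print(place_pattern(tp1, donors=[tp3], size=3))  # -> 'remote'
```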
Xtreme Pooling allows any test processor on a Pin Scale 5000 card to store vector data in other test processors’ underutilized memory. It can serve HSIO applications with data rates up to 4 Gb/s in multisite configurations, as well as any application with high data volumes.