
    Modern Systems on Chip (SoCs) commonly require either large amounts of embedded SRAM (for example, for AI applications) or ultra-low leakage (IoT), and each application comes with its own set of challenges that need to be addressed. In SRAM-heavy SoCs, the embedded memory can reach several megabits, in which case the memory accounts for a large portion of the chip area. To reduce this area impact, bitcells are often designed for highest density, using close-to-minimum-size transistors. Pelgrom's law states that the variation of a transistor's threshold voltage is proportional to 1/√(LW), where L and W are the length and width of the transistor, respectively. Smaller transistors therefore show larger variation, which increases the likelihood that a transistor's threshold voltage gets close to the supply voltage (near-threshold operation) and, consequently, the risk of bitcell instability.
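    Pelgrom's law can be illustrated with a short numerical sketch. The mismatch coefficient A_VT below is a hypothetical placeholder value; real coefficients are process-dependent and obtained from foundry characterization data.

    ```python
    import math

    def vt_sigma(a_vt_mv_um, w_um, l_um):
        """Pelgrom's law: sigma(V_T) = A_VT / sqrt(W * L), in mV."""
        return a_vt_mv_um / math.sqrt(w_um * l_um)

    # Hypothetical mismatch coefficient A_VT = 2.5 mV*um (illustrative only)
    A_VT = 2.5
    # Halving both W and L doubles the threshold-voltage sigma:
    print(vt_sigma(A_VT, 0.10, 0.10))  # 25.0 mV
    print(vt_sigma(A_VT, 0.05, 0.05))  # 50.0 mV
    ```

    This is why close-to-minimum-size bitcell transistors are the most variation-sensitive devices on the chip.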

    Figure 1: Chip failure rate as a function of the bitcell failure rate, expressed in sigma, under the assumption that the entire chip fails if a single bitcell fails.

    As can be seen in the plot, more than 6σ bitcell yield can be required for memory-intensive cuts. One of the most efficient approaches for low-leakage designs is to operate at a scaled supply voltage.
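    The relationship plotted in Figure 1 follows directly from the single-failure assumption: if any one of N bitcells failing kills the chip, the chip yield is (1 − p_bit)^N, where p_bit is the bitcell failure probability at a given sigma level. A minimal sketch (the 64 Mb cut size below is an illustrative assumption, not a specific Xenergic configuration):

    ```python
    import math

    def chip_yield(sigma, n_bits):
        """Chip yield under the assumption that a single failing bitcell
        kills the chip. p_bit is the one-sided normal tail at `sigma`."""
        p_bit = 0.5 * math.erfc(sigma / math.sqrt(2.0))
        return (1.0 - p_bit) ** n_bits

    # Illustrative 64 Mb memory: 5-sigma bitcells yield essentially no
    # working chips, while 6-sigma bitcells give high chip yield.
    n = 64 * 2**20
    print(f"{chip_yield(5.0, n):.3f}")  # 0.000
    print(f"{chip_yield(6.0, n):.3f}")  # 0.936
    ```

    This is why bitcell reliability targets are stated in sigma well beyond what typical analog blocks require.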

    However, the minimum operating voltage of modern SRAMs can barely scale before bitcells start losing stability. A common way to mitigate this issue is to use dual-rail memories; however, doing so requires on-chip DC-DC converters, which come with a penalty in both area and power. Furthermore, as the supply voltage approaches the threshold voltage, sensitivity to process variations becomes larger because of the increasing risk of reaching near-threshold operation. To mitigate the impact of variations while optimizing for power efficiency, it is necessary to minimize design margins via accurate high-sigma simulations. These need to evaluate reliability and behavior across all targeted process corners and supply voltages, as well as the effects of the required assist techniques.

    Xenergic utilizes both voltage scaling and various assist techniques to provide memories that are tailored to any given application. To guarantee the access time and robustness of our memories, it is imperative to get a complete view of the stability and performance effects of each configuration. To this end, we have developed XenVerifier, our in-house characterizer, which accurately evaluates all components of the memory with respect to power, performance and yield.

    1.1 Bitcell qualification

    In order for a bitcell to be usable with a given configuration, 6σ (or higher) reliability has to be confirmed in at least the following modes:

    • Data retention in sleep mode
    • Data retention during access
    • Read access within the time constraint
    • Write access within the time constraint

    Figure 2: With importance sampling, we sample variation cases that are normally extremely rare. Here, the dotted curve represents the distribution of threshold voltages in the physical model. The solid curve shows a generated distribution shifted 4.5 standard deviations (σ) to the left.

    1.2 Bitcell evaluation

    The conventional way of statistically characterizing any component is to run a set of Monte Carlo (MC) simulations. However, the number of MC simulations required to verify the yield of a circuit depends on the desired yield. Bitcells commonly occur in the millions on a chip, making extremely high yield necessary, typically calling for a yield of 6σ (∼10⁻⁹ probability of failure). As can be seen in Figure 1, even higher bitcell yield may be required, as the requirement varies with cut size and expected chip yield. Estimating such a yield with 10% tolerance and 95% confidence would require roughly 380 billion MC simulations and take years to simulate, which is obviously not feasible. A common way to handle this issue is importance sampling, a method in which the sampling distributions are shifted so that very improbable variation scenarios are simulated (see Figure 2). The probability of obtaining those low-probability samples is then mapped back to the original distribution and used to estimate the failure rate.
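    Both points above can be sketched numerically. The first function reproduces the standard sample-count estimate for MC yield verification (which gives the ~380 billion figure at 6σ); the second is a minimal importance-sampling estimator for a one-dimensional toy problem, estimating the probability that a standard normal variable exceeds 6σ by sampling from a distribution shifted into the failure region and mapping each hit back via its likelihood ratio. Real bitcell verification works over hundreds of variation parameters; this sketch only illustrates the mechanism.

    ```python
    import math
    import random

    def mc_samples(p_fail, eps=0.10, z=1.96):
        """Approximate MC runs needed to estimate p_fail with relative
        tolerance eps at confidence z: N ~ z^2 * (1 - p) / (eps^2 * p)."""
        return z * z * (1.0 - p_fail) / (eps * eps * p_fail)

    print(f"{mc_samples(1e-9):.2e}")  # 3.84e+11, i.e. ~380 billion runs

    def is_tail_estimate(threshold=6.0, shift=6.0, n=200_000, seed=1):
        """Estimate P(X > threshold) for X ~ N(0, 1) by sampling from
        N(shift, 1) and reweighting with the likelihood ratio
        phi(y) / phi(y - shift) = exp(-shift*y + shift^2/2)."""
        rng = random.Random(seed)
        total = 0.0
        for _ in range(n):
            y = rng.gauss(shift, 1.0)   # sample from the shifted distribution
            if y > threshold:           # a "failure" sample, now common
                total += math.exp(-shift * y + 0.5 * shift * shift)
        return total / n

    est = is_tail_estimate()
    print(f"{est:.2e}")  # close to the true 6-sigma tail, ~9.9e-10
    ```

    With the shift placed at the failure boundary, roughly half the samples land in the failure region, so a few hundred thousand simulations resolve a probability that plain MC would need hundreds of billions of runs to see at all.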

    Figure 3: XenVerifier converges much faster than MC. Here, a case with a high failure rate is shown to make a comparison with MC feasible.

    XenVerifier evaluates bitcell behavior under 6σ+ variations using our patented approach for rare-failure-event simulation. It features a simulation time that is independent of the failure rate and a speedup of over six orders of magnitude compared to MC in typical cases. The approach has been verified to accurately find the most likely points of failure in circuits with over 300 variation parameters and to dynamically modify the sampling distributions accordingly. Figure 3 shows an example comparison against MC in a case with a 4σ failure rate. It should be emphasized that at 6σ, MC would not find a single failing case, while XenVerifier would converge at a similar rate. The high speed and accuracy of XenVerifier allow us to evaluate bitcell performance and stability under a wide range of combinations of architecture choices and assist techniques. This information is then fed to XenCompiler, which chooses the most area- and/or power-efficient configuration that meets the provided specifications.
