Innovotek - News & Blog

A blog covering the latest technology developments, breaking technology news, Innovotek's latest news, and how-to tutorials.


These days, a typical corner (TT) is no longer typical for most applications. For that matter, the standard PVT corners (FF/TT/SS) generally do not represent the exact environmental conditions in which an ASIC/SoC will be operating. The voltage may not be the nominal Vdd in the typical case or Vdd±10% in the extreme case, and the temperature may not be 25°C in the typical case or 125°C/-40°C in the extreme cases. Also, in today's market, every µW of power saved and every ns of delay avoided makes a significant difference in a product's performance and cost. Therefore, it is important to know how a system behaves under its real operating PVT conditions.

One needs to characterise foundation IPs at these special (custom) corners to avoid overdesign and achieve an optimal product with the best power and performance. When estimating the power and timing numbers of an IP at a custom corner (e.g., at 95°C and Vdd+3%), it is not easy to derive values from the regular SS, TT, and FF characterisations, as these may not support linear extrapolation, and even small errors in the calculation can be very risky. One approach is to use characterisation tools (e.g., Silicon Smart from Synopsys) that can easily characterise foundation IPs and estimate the power and performance of an SoC at any custom corner with substantial accuracy, using reference ".lib" files.
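To make the extrapolation risk concrete, here is a minimal Python sketch using purely hypothetical numbers: leakage is modelled as roughly exponential in temperature, so a straight line drawn between the 25°C and 125°C characterisation points can land well off the mark at a custom corner such as 95°C.

```
# Illustrative sketch (hypothetical coefficients, arbitrary units): why deriving
# a custom-corner value from standard corners by linear interpolation can mislead.
import math

def leakage_model(temp_c, i0=1.0, k=0.03):
    """Hypothetical exponential leakage-vs-temperature model."""
    return i0 * math.exp(k * temp_c)

def linear_interp(x, x0, y0, x1, y1):
    """Straight-line estimate between two characterised corners."""
    return y0 + (y1 - y0) * (x - x0) / (x1 - x0)

t_typ, t_max, t_custom = 25.0, 125.0, 95.0
leak_typ, leak_max = leakage_model(t_typ), leakage_model(t_max)

estimated = linear_interp(t_custom, t_typ, leak_typ, t_max, leak_max)
actual = leakage_model(t_custom)
print(f"linear estimate @95C: {estimated:.2f}, model value: {actual:.2f}, "
      f"error: {100 * (estimated - actual) / actual:+.1f}%")
```

With these made-up numbers the straight-line estimate overshoots the model value by roughly 75%, which is exactly the kind of error a characterised custom-corner ".lib" avoids.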

Ensuring accuracy

In order to generate an accurate custom-corner ".lib" file, one must first ensure that the reference ".lib" file already provided by the IP vendor can be reproduced using the same setup. The better the correlation achieved, the more accurate the generated ".lib" for the custom corner will be. The various options and settings available in the tool allow the setup to be aligned with the processes followed by different vendors, so that highly accurate ".lib" files can be generated. The tool also provides the flexibility to choose between the different simulator environments available in the market (e.g., HSPICE, Spectre).
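As a minimal illustration of such a correlation check, the sketch below compares a handful of regenerated timing arcs against the vendor's reference values. The arc names and numbers are placeholders, not from any real library; a real flow would extract them from the two Liberty files.

```
# Correlation check sketch (assumed, placeholder data): before trusting a
# custom-corner .lib, regenerate the vendor's reference corner with the same
# setup and compare arc by arc against the vendor-provided values.
reference = {"AN2_A_to_Z_rise": 52.0, "AN2_A_to_Z_fall": 48.5, "DFF_CK_to_Q": 95.0}    # ps
regenerated = {"AN2_A_to_Z_rise": 52.6, "AN2_A_to_Z_fall": 49.4, "DFF_CK_to_Q": 93.8}  # ps

tolerance_pct = 2.0
for arc, ref in reference.items():
    err = 100.0 * (regenerated[arc] - ref) / ref
    status = "OK" if abs(err) <= tolerance_pct else "CHECK SETUP"
    print(f"{arc}: ref={ref} ps, regen={regenerated[arc]} ps, err={err:+.2f}% -> {status}")
```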

...

Chip design costs are expected to shoot up, but software, not hardware, is playing a much greater role in the problematic equation.

This is according to Mentor Graphics Corp. chairman and CEO Walden Rhines, who said that the shift in the equation will require a new type of EDA technology—embedded software automation (ESA)—as a means to attack the problem.

Rhines warned that IC design costs for many devices are projected to hit the dreaded $100 million level within the next three years. Not long ago (and even today), IC design costs ranged from $20 million to $50 million.

...

What if the supply chain community could emulate the “Internet world” and create a universal, open logistics network that is economically, environmentally, and socially efficient and sustainable? Such a concept exists, and it’s called the Physical Internet. Today the Physical Internet is a vision for an end-to-end global logistic network, but there are plans to turn it into a reality by 2050.

Companies constantly strive to improve the efficiency of the logistics networks that move their goods worldwide. Although performance levels have increased significantly over recent decades, they are far from satisfactory. For example, too many containers and freight vehicles transport empty space or are idle because of operational delays. All too often, disruptions prevent products from reaching consumer markets, adding to the waste that pervades many logistics networks.

The Physical Internet proposes to eliminate these inefficiencies in much the same way that the Internet transformed the flow of information around the globe.

...

Application specific integrated circuits (ASICs) typically conjure up the notion of massively complex logic chips containing tens or hundreds of thousands (even millions) of transistors configured to solve a customer’s unique set of problems. Unlike multi-function standard product ICs, such as a microcontroller that can find its way into a wide variety of applications, ASICs are designed for one specific application and generally for one specific product or product family. To better understand the role and applicability of ASICs, it is important to briefly review their historical origins.

The first integrated circuits from the early ‘60s contained just a few transistors and performed simple digital logic functions such as "and", "or", "nor", etc. These were called SSI devices, meaning small-scale integration. As photolithography techniques improved, more and more transistors could be built on a single sliver of silicon. Soon, chip companies were developing medium-scale integration (MSI) functions like flip-flops, buffers, latches, etc. (10-100 transistors). Large-scale integration or LSI (100-1,000 transistors) and eventually VLSI (up to 100,000 transistors) ICs followed, providing lower system costs and higher levels of performance. Today, of course, we have digital chips in excess of a billion transistors thanks to advanced sub-micron lithography and the low-voltage, high-speed processes upon which they are built.

The first digital ASICs were built using a standard cell library consisting of fixed-height, variable-width ‘tiles’ containing the digital logic functions discussed above. The ability to reuse these blocks over and over saved time and money when designing a custom logic IC. Analog ICs initially consisted of a pair of matched transistors and soon expanded to include rudimentary op amps, voltage regulators, comparators, timers and much more.

Demands of analog
Analog applications typically involve much higher voltages, so these ICs needed their own unique set of manufacturing processes. More recently, market demands for smaller size, higher speeds and lower power consumption have forced a merging of analog and digital functionality on a single silicon chip. Cells consisting of the basic analog building blocks discussed above were created and added to the digital libraries. These analog cells were restricted to the digital fab processes developed for predominantly logic applications.

Today, most ASIC companies offer some degree of analog functionality as part of their services. In many cases, the analog functions are mimicked with digital design techniques. In others, compromises to the analog functionality must be made to facilitate the use of standard library cells that are designed to yield well in the fab processes developed for high-speed, high-density, low-power digital designs. Often, these chips are referred to as mixed-signal ASICs or as big “D”, little “A” ASICs, meaning high digital content and minimal analog content.

Analog ASICs play a critical role in our lives. Without them, none of the portable electronic devices we use every day would exist. Imagine a world without cell phones, MP3 players and navigation systems. Building them with standard products would make them prohibitively expensive and physically impossible to carry in our purses or pockets. Every automobile contains dozens of ASIC chips for everything from climate control to airbag deployment and from suspension control to entertainment systems. ASICs also play important roles in hospital medical equipment, eMeters, home appliances such as washers and dryers, scuba gear, hearing aids, and much more.

Picking an ASIC design partner
The analog ASIC market is huge. In fact, research firm IC Insights reports that almost 60% of the nearly $37B of analog ICs sold in 2010 were ASICs. Yet very few mixed-signal ASIC design houses fully understand the implications of custom analog design and its applicability to analog-centric ASICs. ASICs requiring high analog content should be directed to those design houses that specialize in analog circuit design rather than those who simply select analog IP blocks from a library. Analog ASIC companies have large staffs of competent, experienced, analog engineers with expertise in a wide range of analog functions.

Reviewing an ASIC house’s patent portfolio is a quick guide to the creativity of its engineering team and serves as a first-order measure of its analog expertise.

Clearly, the large analog IC houses (such as ADI, Linear Tech, Maxim, National, TI) have patent portfolios a mile deep. Those that also engage in analog ASIC development set high bars on who can access this capability and impose high minimum order requirements. For example, TI reports that its application-specific analog business focuses on a small number of large customers, such as Seagate, Sony, Samsung, Hitachi Global Storage Technologies, Toshiba and a few others, that require custom application-specific products. Minimum annual unit and/or dollar volumes force the majority of smaller customers to seek out independent analog or mixed-signal ASIC design houses.


Following a series of fatal accidents in the mid-1980s, a formal investigation was conducted into the Therac-25 radiotherapy machine. Led by Nancy Leveson of the University of Washington, the investigation resulted in a set of recommendations on how to create safety-critical software solutions in an objective manner. Since then, industries as disparate as aerospace, automotive and industrial control have encapsulated the practices and processes for creating safety- and/or security-critical systems in an objective manner into industry standards.

Although subtly different in wording and emphasis, the standards across industries follow a similar approach to ensuring the development of safe and/or secure systems. This common approach includes ten phases (a short traceability sketch follows the list):

1. Perform a system safety or security assessment
2. Determine a target system failure rate
3. Use the system target failure rate to determine the appropriate level of development rigor
4. Use a formal requirements capture process
5. Create software that adheres to an appropriate coding standard
6. Trace all code back to its source requirements
7. Develop all software and system test cases based on requirements
8. Trace test cases to requirements
9. Use coverage analysis to assess test completeness against both requirements and code
10. For certification, collect and collate the process artifacts required to demonstrate that an appropriate level of rigor has been maintained.
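To illustrate the requirements-to-test traceability and coverage checks of phases 8 and 9, here is a minimal Python sketch with made-up requirement and test-case IDs: every requirement should be covered by at least one test case, and every test case should trace back to a requirement.

```
# Minimal bidirectional traceability check (hypothetical IDs):
# flag requirements with no covering test and tests with no traced requirement.
requirements = {"REQ-001", "REQ-002", "REQ-003"}

test_traces = {            # test case -> requirements it verifies
    "TC-01": {"REQ-001"},
    "TC-02": {"REQ-001", "REQ-002"},
    "TC-03": set(),        # orphan test: no requirement traced
}

covered = set().union(*test_traces.values())
uncovered_reqs = requirements - covered
orphan_tests = [tc for tc, reqs in test_traces.items() if not reqs]

print("Requirements without tests:", sorted(uncovered_reqs))   # ['REQ-003']
print("Tests without requirements:", orphan_tests)             # ['TC-03']
print(f"Requirements coverage: {100 * len(covered & requirements) / len(requirements):.0f}%")
```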

...

When designing a product that operates off a small battery, low power consumption is critical. A well-designed power system can be a key differentiator for a competitive low-power product. Designing an ultra-low-power system, on the other hand, can be a highly complex undertaking: a design team needs to balance and integrate a variety of low-power design approaches and techniques to achieve its goals. Using a combination of multiple power domains and operating voltages, along with thorough statistical analysis, a low-power product can be designed to provide a competitive advantage.

Any company that develops products that operate off a small battery can benefit from considering well-integrated physical design practices for low-power design targets. Wireless sensors, mesh networks, wearables, Bluetooth devices, Internet of Things (IoT) devices, hearing aids, mobile phone audio/video processing capabilities, tablets and headsets all require an optimized approach to achieve ultra-low power.

So, what is it that these ultra-low-power products need? Often, a minimum clock rate will be used to meet performance goals. Higher threshold voltage (VT) devices may be used to minimize leakage. Perhaps multiple clock domains, multiple power domains and/or multiple operating voltages will be part of the low-power design solution. Lower operating and standby voltages lead to much lower overall power consumption; however, multiple power domains also result in a complex SOC (system on a chip) power structure.
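A rough back-of-the-envelope sketch (with purely hypothetical capacitance, activity and leakage numbers) shows why these knobs matter: dynamic power scales with the square of the supply voltage and with the clock rate, while high-VT devices trim the static term.

```
# Power trade-off sketch (hypothetical parameters):
#   P_total ~ alpha * C * V^2 * f   (dynamic)   +   I_leak * V   (static)
# The high-VT option is modelled simply as a lower leakage current.
def total_power_uw(v, f_mhz, c_eff_pf=20.0, alpha=0.1, i_leak_ua=5.0):
    dynamic = alpha * c_eff_pf * 1e-12 * v**2 * f_mhz * 1e6   # watts
    static = i_leak_ua * 1e-6 * v                             # watts
    return (dynamic + static) * 1e6                           # microwatts

configs = [
    ("1.2 V, 100 MHz, std-VT",  dict(v=1.2, f_mhz=100, i_leak_ua=5.0)),
    ("0.9 V, 100 MHz, std-VT",  dict(v=0.9, f_mhz=100, i_leak_ua=5.0)),
    ("0.9 V,  32 MHz, high-VT", dict(v=0.9, f_mhz=32,  i_leak_ua=0.5)),
]
for name, kw in configs:
    print(f"{name}: {total_power_uw(**kw):.1f} uW")
```

With these assumed numbers, dropping the supply from 1.2 V to 0.9 V and slowing the clock cuts the total from roughly 294 µW to about 52 µW, which is the kind of headroom that multiple power domains and operating voltages are meant to exploit.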

...

Abstract
Simulink models are used as executable specifications in commonly used design flows for mixed-signal ASICs. Based on these specifications, analog and digital components are directly implemented in mixed-signal design environments. This step constitutes a large leap of abstraction. In this work, we address this aspect by showing and discussing an approach for automated transitions from Simulink models representing analog and digital components to HDL descriptions using HDL Coder. On the one hand, we translate analog Simulink components into continuous-value discrete-time HDL descriptions that can serve as reference behavioral models in the mixed-signal design environment. On the other hand, for digital Simulink components, we developed optimizations for Simulink models in order to achieve resource-efficient HDL descriptions. Both solutions in the analog and digital domain were integrated into Simulink Model Advisor. An evaluation of the presented design flow, as applied to an automotive hardware design, is shown.

Introduction
Electronic Control Units (ECUs) in the field of automotive electronics generally interact with the physical environment through sensors and actuators. Mixed-signal ASICs (Application Specific Integrated Circuits) are therefore needed as an interface between microcontrollers and the sensors and actuators. In this work, we focus on ASICs connected to sensors, where the sensor is often enclosed with the ASIC in a system-in-package (SiP). In the most general case, mixed-signal ASICs consist of analog, non-programmable digital and programmable digital components.

The increasing integration density of ASICs allows more and more functionality, which leads to more complex ASIC designs. These require a holistic view at a high abstraction level at the beginning of the design. Therefore, a system-level (SL) design methodology is needed, in which all ASIC components and the associated sensor are modeled in a common SL design environment. In standard flows, the design starts by developing an SL model that serves as an executable functional specification. Based on this specification, the particular ASIC components are designed at implementation level (IL), isolated from the overall system and without any reuse of the design effort performed at SL. This isolation between SL and IL constitutes a gap in the design flow, which leads to redundant implementation effort and consistency problems between SL and IL. Furthermore, the isolation of components from the overall system during implementation leads to lost optimization potential. That is why in (1) we proposed a seamless SL design methodology, which uses automated transitions from SL to IL models in order to reduce the effort of design transfer between SL and IL (see Figure 1).
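As a rough conceptual illustration (not the tool flow from (1) itself), the Python sketch below shows the kind of continuous-value, discrete-time behavioral model that an analog component can be mapped to: a first-order RC low-pass, dV/dt = (vin - v)/(RC), discretized with a backward-Euler update at a fixed sample period Ts. The component values and sample period are arbitrary examples.

```
# Continuous-value, discrete-time reference model of a simple analog block
# (illustrative only; hypothetical R, C and Ts values).
def rc_lowpass_step(v_prev, v_in, ts=1e-6, r=1e3, c=100e-9):
    """One backward-Euler update of the RC low-pass state (values stay continuous)."""
    tau = r * c
    return (v_prev + (ts / tau) * v_in) / (1.0 + ts / tau)

# Drive the model with a unit step and watch the output settle towards 1.0.
v = 0.0
for n in range(5):
    v = rc_lowpass_step(v, v_in=1.0)
    print(f"sample {n}: v = {v:.4f}")
```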

...


Abstract:
This paper aims to emphasize the importance of integrating design for failure analysis into the layout considerations during the IC development process. It gives a brief overview of the importance of failure analysis in the IC development process, followed by a look at the failure analysis methodologies used in the industry. This leads to the different considerations in the layout of ICs that facilitate failure analysis, and to the challenges failure analysis faces as designs approach nano-electronics.


I. INTRODUCTION

The introduction of the System-on-a-Chip (SOC) and the increasing complexity of ASIC designs have made testability and analysis of Integrated Circuits (ICs) more challenging. It has become almost mandatory to perform Design-for-Testability (DFT) and Design-for-Analysis (DFA) work before freezing the ASIC design, as the ability to test and analyze a complex design brings about a shorter product cycle and faster time-to-market. In DFT, testability requirements are addressed at various levels of an SOC. In terms of failure analysis, there are many techniques available to analyze a device for physical defects, from isolating the faulty behavior down to the failing transistor, to capturing images of the localized defect at the substrate level in order to understand the root cause. Unfortunately, even though many failure analysis techniques exist, the analysis may not always be possible. The inevitable trend in IC design is that, in order to achieve a faster time to market, designers often need to embed cores that are untested and not manufactured in-house, and there are frequently inadequate considerations in the IC layout that make an IC difficult or impossible to analyze. Furthermore, smaller technology nodes also bring an increase in the number of metal layers, new materials in wafer manufacturing processes and different IC packages, all of which pose greater challenges to failure analysis. These challenges lead to higher equipment costs, the need for more experienced manpower, longer analysis turnaround times and difficulties in the analyzability of an IC. Even for reused IP that is proven in the design, process shrinking or shifts in back-end process parameters may bring problems such as higher leakage current and reduced reliability. Hence there is an increasing need to integrate failure analysis requirements into IC development at an early phase. Strong interaction between the development team and failure analysts is required to identify possible bottlenecks from the very beginning and to provide solutions with adequate layout considerations.

II. FAILURE ANALYSIS – THE DIFFERENT METHODOLOGIES

Failure analysis for integrated circuits is increasingly important and very much required for today’s complicated packaging and technology strategies. It is a process that requires a combination of analysis experience, leading-edge equipment and techniques, and well-defined failure analysis procedures to achieve a fast turnaround time in finding the failure root cause. Failure analysis methodologies are important to the IC development process because they allow one to effectively identify the root cause of a design or process bug. The challenge of making rapid improvements in IC design and technology also requires the development of relevant failure analysis techniques. There are many analysis techniques, and they can generally be divided into three areas:

1. Chemical or physical preparation of the integrated circuits.
2. Fault localization of integrated circuit’s failing behavior.
3. Fault imaging of integrated circuits.

Chemical or physical preparation of the integrated circuit is the first step in the analysis of failing devices. The package has to be opened up from either the front or the backside to allow further localization techniques to be carried out. Improved packaging techniques like Ball Grid Array (BGA) packaging have made the chemical or physical preparation increasingly challenging for failure analysis. This is an important process in which one has to be careful not to destroy the failing signature and the electrical functions of the device.

Fault localization is the process in which various techniques are used to isolate the defective areas on the die. Techniques like photoemission microscopy use the faint infrared radiation emitted by leakage current to localize the leakage site. Other techniques, such as Thermally Induced Voltage Alteration (TIVA), take an active approach, in which failure sites are located using a scanning ionizing beam, such as a laser beam, to stimulate failures that are sensitive to carrier generation or thermal stimulation. [1] This allows one to reduce the area required for analysis and significantly save on the time required for failure analysis. The localized defect is then characterized with a view to understanding the failure mechanism.


Fig. 1. Top and bottom photoemission setup in fault localization

The final step of failure analysis after fault localization is fault imaging. Failure analysis can only be completed once the root cause of the failure has been identified, so the defect must be imaged to prove the defective behavior. Optical microscopy is the obvious and most basic equipment for this purpose. However, with the complexity of microelectronics today, optical microscopy is of limited use. In order to achieve better imaging resolution, microscopy is available using all kinds of beams: ultrasonic, electromagnetic (from infrared to x-ray), and particles such as electrons and ions, or near-field interactions with a stylus, as in scanning probe microscopy. All these imaging tools are equipped with CAD layout information for better navigation over the die. The FA process can be generally summarized in the following table.


...

Posted in IC Design

Texas Instruments (TI) is offering custom silicon and software intended to streamline the design process for e-books using the same E-Ink display.

TI's chips and software also are intended to allow OEMs to lengthen the battery lifetime of their e-books by 50 percent while shrinking the footprint by eliminating 40 discrete components.

"We are offering a comprehensive e-book development platform to e-book developers that will speed their time to market, lower their BOM, shrink their footprint by 200 square millimeters and increase their battery lifetime by about 50 percent," claimed Gregg Burke, TI's eBook business line manager.

...
