A blog about the latest technology developments and breaking technology news, with Innovotek news, information, and tutorials.


Today, many system-on-chip (SoC) designs depend on Field-Programmable Gate Arrays (FPGAs) to accelerate verification, start software development early, and validate the whole system before committing to silicon, primarily to meet time-to-market demands. Today's FPGAs can hold a large, complex system-level design. In some cases, however, such designs must be partitioned among several FPGAs for validation or prototyping, and splitting a design across several FPGAs can create various partitioning issues, especially for relatively large designs with complex connectivity. Many of these issues can be avoided if certain guidelines are followed. This paper discusses the general partitioning challenges and the guidelines that help get past them.

Need For Partitioning:

As devices prototyped on FPGAs get larger, following good design practices becomes important for all design flows. Adhering to recommended synchronous design practices makes designs more robust and easier to debug. Using an incremental compilation flow adds steps and requirements, but can provide significant gains in design productivity by preserving the performance of critical blocks and reducing compilation time.

Many argue that the focus point (and perhaps the linchpin) of successful supply chain management is inventories and inventory control. So how do food and agribusiness companies manage their inventories? What factors drive inventory costs? When might it make sense to keep larger inventories? Why were food companies quicker to pursue inventory reduction strategies than agribusiness firms?

In 1992, some food manufacturers and grocers formed Efficient Consumer Response to shift their focus from controlling logistical costs to examining supply chains (King & Phumpiu, 1996). Customer service also became a key competitive differentiation point for companies focused on value creation for end consumers. In such an environment, firms hold inventory for two main reasons: to reduce costs and to improve customer service. The motivation for each differs as firms balance the problem of having too much inventory (which can lead to high costs) against having too little inventory (which can lead to lost sales).

A common perception and experience is that supply chain management leads to cost savings, largely through reductions in inventory. Inventory costs have fallen by about 60% since 1982, while transportation costs have fallen by 20% (Wilson, 2004). Such cost savings have led many to pursue inventory-reduction strategies in the supply chain. To develop the most effective logistical strategy, a firm must understand the nature of product demand, inventory costs, and supply chain capabilities.

Firms use one of three general approaches to manage inventory. First, most retailers use an inventory control approach, monitoring inventory levels by item. Second, manufacturers are typically more concerned with production scheduling and use flow management to manage inventories. Third, a number of firms (for the most part those processing raw materials or in extractive industries) do not actively manage inventory.


1 Abstract

This article provides insight into the various approaches followed for Analog and Mixed Signal (AMS) modeling and the associated challenges. The emphasis is on analyzing the various approaches and, finally, providing options that can be used from architectural exploration through implementation with a co-simulation-based approach.

2 Introduction


Posted in IC Design

Implantable medical devices have been around for decades. Early on, most of the established applications for medical devices focused on cardiac rhythm management. Such devices were used to treat irregular heart rhythms, such as bradycardia (beating too slowly) or tachycardia (beating too fast).

Today's implantable circuits, by contrast, provide therapy to treat numerous conditions. New applications in neurological stimulation can be used to treat sleep apnea, pain, Parkinson's disease, epilepsy, bladder control, gastrointestinal disorders, numerous autoimmune diseases, and psychological disorders such as obsessive-compulsive disorder (OCD). Meanwhile, implantable systems can now provide precise dosage and interval delivery of drugs to treat patients while minimizing side effects.

With the ever-increasing clinical need for implantable devices comes a continuous flow of technical challenges. As with commercial portable products, implantable devices share the same need to reduce size, weight, and power (SWaP), so device integration becomes imperative. Creating an implantable medical device entails many challenges.

Joel Spolsky: "The three things I would tell people to learn are economics, writing and C programming."

For new programmers, knowing which languages and skills to learn can be overwhelming.

Just to secure a job interview, developers often have to show they are familiar with the long list of languages and associated technical skills demanded by employers.

While it can be tempting for new developers to dive straight into learning every skill recruiters ask for, those who want to maximise their chances of a successful career would be better served by first getting to grips with three fundamentals, according to Joel Spolsky.


In most cases today, IC power analysis efforts are focused mainly at signoff. Even though some place-and-route (P&R) solutions provide simple checks at the floorplanning stage, there are many opportunities to improve power analysis capabilities during design, and to align and integrate them with the signoff tools and the overall design flow.

Figure 1. A full-flow approach to power analysis during IC physical design

Although every semiconductor company handles power analysis slightly differently, Figure 1 shows an idealized approach in which power analysis is embedded across the entire IC physical design flow. IC designers get their specifications from a cross-functional architectural definition of power constraints, often called the "power budget." This budget, which is typically fixed before any new IC implementation starts, specifies the maximum power permitted for the system, the board, and the IC packages. At this point, the budget is essentially a rough estimate based on the know-how and experience of the IC, package, and board engineers involved in the requirements definition for each new product development project.

Once a high-level budget is defined, it can be used to project the expected current flow, and a power grid (PG) definition can be derived and partitioned into more detailed domains within the IC. The power grid should be able to accommodate the maximum expected current flows without a significant voltage drop, given a design margin of around 10%. This sets the constraints for IC physical design implementation in a conventional P&R flow, which ends in a final power signoff analysis that validates the design against the power budget.

Unfortunately, the process does not always proceed in this idealized, linear fashion. When the budget estimates are off, or the implementation is more difficult and power-hungry than anticipated, "making ends meet" can become painful and lengthy. This article describes in more detail how power constraints are enforced in the design flow and highlights opportunities to improve results, and eliminate surprises, with a more robust power analysis capability.
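The budget-to-grid arithmetic above can be sketched in a few lines of Python. This is a back-of-envelope illustration only: the function names, the grid-resistance figure, and the 10% margin default are assumptions made for the example, not values or APIs from any signoff tool.

```python
def max_grid_current(power_budget_w: float, vdd_v: float) -> float:
    """Worst-case current the power grid must carry for a given budget."""
    return power_budget_w / vdd_v

def ir_drop_ok(i_max_a: float, grid_resistance_ohm: float,
               vdd_v: float, margin: float = 0.10) -> bool:
    """True if the resistive (IR) drop stays within the design margin."""
    return i_max_a * grid_resistance_ohm <= margin * vdd_v

# A 2 W budget on a 1.0 V rail implies up to 2 A through the grid.
i_max = max_grid_current(power_budget_w=2.0, vdd_v=1.0)
print(ir_drop_ok(i_max, grid_resistance_ohm=0.04, vdd_v=1.0))
# True: 2 A * 0.04 ohm = 0.08 V drop, inside the 0.10 V (10%) margin
```

In a real flow the effective grid resistance comes from extraction, and the check is performed per power domain rather than on one lumped value.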

Signoff Power Analysis


Posted in ASIC Design

Today, the ASIC design flow is a very solid and mature process. The overall flow and the various steps within it have proven both practical and robust across millions of ASIC designs to date.

Each step of the ASIC design flow has a dedicated EDA tool that covers all aspects of its specific task. Most importantly, the EDA tools can import and export the various file types involved, making possible a flexible ASIC design flow that uses multiple tools from different vendors.

The ASIC design flow is not exactly a push-button process. To succeed, one must have a robust and silicon-proven flow, a good understanding of the chip specifications and constraints, and complete mastery of the required EDA tools (and their reports!).


Posted in IC Design

Digital circuits are circuits dealing with signals restricted to the extreme limits of zero and some full amount. This stands in contrast to analog circuits, in which signals are free to vary continuously between the limits imposed by power supply voltage and circuit resistances. These circuits find use in “true/false” logical operations and digital computation.

The circuits in this chapter make use of IC, or integrated circuit, components. Such components are actually networks of interconnected components manufactured on a single wafer of semiconducting material. Integrated circuits providing a multitude of pre-engineered functions are available at very low cost, benefitting students, hobbyists and professional circuit designers alike. Most integrated circuits provide the same functionality as “discrete” semiconductor circuits at higher levels of reliability and at a fraction of the cost.

Circuits in this chapter will primarily use CMOS technology, as this form of IC design allows a broad range of power supply voltages while maintaining generally low power consumption. Though CMOS circuitry is susceptible to damage from static electricity (high voltages can puncture the insulating barriers in the MOSFET transistors), modern CMOS ICs are far more tolerant of electrostatic discharge than the CMOS ICs of the past, reducing the risk of chip failure from mishandling. Proper handling of CMOS involves the use of anti-static foam for storage and transport of ICs, and measures to prevent static charge from building up on your body (using a grounding wrist strap, or frequently touching a grounded object).



Today's classrooms, including studios, laboratories, auditoriums, and other indoor environments, have a wide variety of physical structures that support and facilitate student learning. There is no perfect classroom physical design to accommodate all types of academic activities. Because students learn in diverse ways, higher education administrators must realize that classrooms should be designed to promote various ways in which students acquire knowledge (The L-Shaped Classroom, 2007). Well-designed classrooms not only promote teamwork and interest in student learning, but also encourage active class participation (Niemeyer, 2003). Although college classrooms with permanently attached seating and furnishings are beyond the instructors' control, they may partially influence student evaluations of college instructors in terms of the overall teaching effectiveness and performance (Safer et al., 2005).

Students are not the only ones who feel helpless and hopeless when the built classroom environment is beyond their control (Veltri et al., 2006). Faculty also feel helpless and sometimes even fearful (Veltri et al., 2006), though Niemeyer (2001) does not believe faculty should fear their teaching environment. Unfortunately, classrooms are not always places that empower faculty and that are conducive to student learning. Physical settings and factors can motivate or discourage many room occupants (Lackney, 1999). Hence, a classroom's arrangement of visuals, furniture, and equipment should be carefully considered in order to empower both instructors and students (Niemeyer, 2001).


American companies held a 54% share of the total worldwide IC market in 2015, which includes sales from IDMs and fabless IC companies, reports IC Insights.

The total does not include foundry sales.

South Korean companies captured a 20% share of total IC sales and Japanese companies placed third with only an 8% share. Chinese companies accounted for 3% of total IC sales last year.


In many ways 2015 was a momentous year for the supply chain and logistics industry in terms of acquisitions and innovations. Prof John Manners-Bell looks ahead to 2016 to see what should be expected…

1. US drives world economic growth and trade

The United States will drive the global economy in 2016, which means US logistics companies will continue to prosper both at home and abroad. The stronger dollar evident at the end of 2015 will draw in imports from around the world, giving Asian and European exporters a welcome boost and strengthening transpacific shipping volumes in particular. However, emerging markets have been forced to raise their interest rates, which will have a detrimental effect on already struggling economies. China is experiencing a relatively hard landing in terms of falling economic growth, but its exports will be supported by the growth of the US economy, and its e-commerce market has seen staggering growth despite the economic situation. Also helping the global logistics industry will be a recovery in Europe, which is proceeding better than many economists expected.


After 4 months of articles about data modeling, we're getting close to a functional database. I started this data modeling series with "Data Modeling" (April 2000), which discussed how to gather project requirements. I followed up with "Process Modeling" (May 2000), which reviewed process modeling and demonstrated my own variation of data-flow diagramming to illustrate what happens as data moves through the system. Then in "Entity Modeling" (June 2000), I developed a concept model, or entity relationship diagram (ERD), of the database. Last month, in "Logical Modeling" (July 2000), I translated that ERD into a logical model, which is a closer representation of the evolving database. This month, I develop a physical design that takes us one more step toward a working database.

A physical design is a specification for database implementation. At this phase of development, you must know the database platform you're going to use—perhaps SQL Server 7.0 on Windows NT Server 4.0, Microsoft Access on the desktop, Oracle on the mainframe, or some other platform. To create a physical design, you pull together all the specifications and models you've created so far, then modify and optimize them for your target platform. For example, you need to modify all column properties, including data types, so that they're specific to your target environment. You can add extra columns that don't appear in the conceptual or logical models, such as flag columns and timestamp columns that facilitate data processing. You also need to size the database, analyze data volume and use, incorporate any replication or distribution plans, and select candidate columns for indexing. If you're thinking ahead, you'll also determine user roles (groups), logins, and security permissions; requirements for data retention (archiving plans); and failover and backup and recovery plans.
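As a minimal sketch of these steps, the example below uses Python's standard sqlite3 module to realize a tiny piece of a physical design: platform-specific column types, a flag column, a timestamp column with a default, and an index on a candidate column. The Customer table and every name in it are hypothetical, invented for illustration rather than taken from the article's models.

```python
import sqlite3

# Hypothetical table for illustration; names, types, and the index
# choice are assumptions, not the article's actual data model.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE Customer (
        CustomerID  INTEGER PRIMARY KEY,
        LastName    TEXT NOT NULL,
        IsActive    INTEGER DEFAULT 1,              -- flag column
        LastUpdated TEXT DEFAULT CURRENT_TIMESTAMP  -- timestamp column
    )""")
# LastName is a candidate column for indexing (frequent lookups by name).
conn.execute("CREATE INDEX idx_customer_lastname ON Customer (LastName)")
conn.execute("INSERT INTO Customer (LastName) VALUES ('Smith')")
print(conn.execute("SELECT LastName, IsActive FROM Customer").fetchone())
# ('Smith', 1)
```

On a different target platform the same logical column would get a different physical type (for example, a SQL Server bit column rather than an INTEGER flag), which is exactly the kind of platform-specific modification the physical design records.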

Creating a Physical Design

The physical design is a composite of models that, taken together, form a complete or near-complete specification for implementing a database. So what does a physical design look like? Part of it, the piece we ordinarily call the "physical model," looks like an expanded ERD, as Figure 1, page 62, shows. Other parts, such as the Data Volume Analysis model and the Security Matrix, also look familiar.




Whether the finished product is a smartphone, a shirt or a sapphire ring, tracing component parts back to their original sources long has proved an elusive quest.


LUSTENAU, Austria & OLNEY, England—(BUSINESS WIRE)—November 4, 2008— Wipro NewLogic and IN2FAB Technology today announced the launch of a new facility to provide design porting services for analog mixed signal and custom IC designs between foundry processes and geometries. The co-operation enables IC designs and IP to be ported to a manufacturing standard in just a few weeks, typically offering up to 10X reductions in cycle time and engineering costs, as well as freeing up customers' engineers to focus on other, potentially higher-value-added activities.

Known as Port-on-Demand, this service line will reside within Wipro NewLogic's Product Engineering Services division, based in Bangalore, India, and Lustenau, Austria.

Wipro NewLogic has assembled world-class semiconductor design and engineering operations in India and Europe, offering full IC design services capability with competitive cost and time-to-market benefits. Over several years, IN2FAB has established a strong track record of porting silicon successfully with its migration tools and methodologies, covering all CMOS geometries including, most recently, the 45nm node. IN2FAB will provide its migration tools, methodologies and infrastructure to the Port-on-Demand facility.


Abstract

Hard IP cores, as hard modules, must be pre-placed or placed inside soft modules as the first step in VLSI physical design. An automatic placer normally cannot obtain good results because several aspects, such as power, connectivity to the standard cells, halos, and pin locations, are not considered by the placement algorithm; this is especially true for complex designs containing many IP cores of different sizes. This paper proposes a hard IP core placement method that applies manual adjustment steps, such as rotation, overlap removal, and substantial relocation, to the results of the automatic placer. An AVS HDTV decoder chip and a small test chip were implemented with this method. The results show that the method is effective and practical in VLSI physical design.


1 Introduction

With increasing design complexity and the popular adoption of IP cores, module placement, one of the main tasks of floorplanning at the early stage of VLSI physical design, has become more important than before. Modules are classified into two types: hard modules and soft modules. Hard IP cores, which usually have fixed area and shape, are regarded as hard modules at the placement stage.

There are several placement algorithms [1][2][3], and most automatic placers are built on them. Placement constraints, such as the clustering constraint [4], boundary constraint [5] and abutment constraint [6], give users efficient ways to restrict the positions of certain modules and to improve placer performance. These placers can handle the placement of both hard modules and soft modules consisting only of standard cells.

However, these automatic placers still have disadvantages when placing hard IP cores. First, the tools handle parameters such as area, routability and timing, but the critical issue of power cannot be considered. In addition, the connections of the pre-placed hard IP cores to standard cells, the pin locations and the halos are not taken into account by the placement algorithms. Furthermore, hard IP cores vary in size, and the number of hard IP cores in VLSI designs has increased markedly. All of these factors limit the usefulness of many automatic placers for large, complex VLSI designs in practice.

This paper proposes a hard IP core placement method that takes the issues mentioned above into consideration. The method combines automatic placement with manual adjustments in order to optimize the hard IP core placement. The manual adjustments include rotation and overlap removal, both of which change locations only slightly, and substantial relocation of hard IP cores after the later steps in physical design. The method was applied to a small test chip and an AVS HDTV decoder chip, and the results show that it is useful and practical.

The remainder of the paper is organized as follows: Section 2 describes the hard IP core placement method, Section 3 presents experimental results, and Section 4 concludes.


2 The Hard IP Core Placement Method

For an actual VLSI design, which at the module placement stage is viewed as a set of hard IP cores and soft modules according to the hierarchy, an automatic placer may not produce good results, for several reasons. The main reason is that the locations of hard IP cores usually take priority: these modules are pre-placed or placed inside soft modules, and the standard cells are then placed in the remaining area of the respective soft modules. As a result, the connectivity between standard cells and hard IP cores is not taken into consideration unless the standard cells are pre-placed. At the same time, power routing becomes more complicated because the power grid is broken by the presence of these modules, making IR drop more serious. In addition, the shapes and pin locations of hard IP cores vary with the foundry, and overlaps between halos are difficult to handle because halo widths differ.

To solve the problems mentioned above, this paper proposes a hard IP core placement method that applies efficient manual adjustments to the initial automatic placement of the hard IP cores, as Figure 1 shows. For clarity, only the steps related to hard IP core placement are drawn and connected in the figure. The manual adjustments fall into two styles: one changes locations slightly, through rotation and overlap removal; the other changes locations substantially after the subsequent physical design steps, such as power routing and standard cell placement and routing.

Figure 1. The hard IP core placement method

This hard IP core placement method is general, and its steps may change slightly according to the shapes and pin locations of the IP cores. The IP cores have two main shapes: square and rectangular. Rotating a rectangular IP core may cause overlap, whereas rotating a square one does not. Furthermore, the pins of hard IP cores are located differently: in a corner, on one side, or on two sides. The manual adjustment steps are presented as follows.

2.1 Rotation

The initially auto-placed hard IP cores need to be rotated under the following conditions. First, rotating the hard IP cores according to their pin locations can reduce the wire length between the IP cores themselves. Second, the pin locations also directly affect the connections to the related standard cells: rotation can reduce congestion, improve routability between hard IP cores and standard cells, and avoid DRC violations such as spacing and shorts. However, the need for this rotation only becomes apparent after the standard cells are placed or routing is finished, so for a complex design the rotation should balance all the factors mentioned above. Furthermore, because the power grid is cut off wherever it meets a hard IP core, rectangular IP cores should be rotated to decrease or avoid the impact on power routing.

A 90° or 270° rotation of rectangular hard IP cores may cause their halos, or the cores themselves, to overlap if the distances between them are not large enough. This overlap must be resolved later.
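The bookkeeping behind such a rotation can be sketched as follows. This is an illustrative Python fragment under assumed conventions (a local origin at the macro's lower-left corner, pins as name-to-coordinate pairs); it is not part of any placer's actual API.

```python
def rotate_ccw_90(width, height, pins):
    """Rotate a rectangular macro 90 degrees counter-clockwise.

    pins maps pin name -> (x, y) in the macro's local frame, origin at
    the lower-left corner. Under CCW rotation (x, y) -> (-y, x); adding
    height to x keeps coordinates non-negative, so (x, y) -> (height - y, x).
    Width and height swap, which is why rotating a rectangular core can
    create new overlaps while rotating a square one cannot.
    """
    new_pins = {name: (height - y, x) for name, (x, y) in pins.items()}
    return height, width, new_pins

# A 200x100 macro with one pin on the left edge and one in a corner:
print(rotate_ccw_90(200, 100, {"VDD": (0, 50), "OUT": (200, 0)}))
# (100, 200, {'VDD': (50, 0), 'OUT': (100, 200)})
```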

2.2 Removing overlap

Removing overlap is the next step in this method. Most overlaps involve halos, which are common and useful in current VLSI design. A halo, the area around a hard IP core within which standard cell placement is prevented, is added to provide additional routing space and reduce congestion. Halo size is estimated from the design itself, and the halo widths on the four sides may differ. However, halos are not considered by the placement algorithm: overlaps between the hard IP cores themselves are avoided, but the halos may still overlap. Furthermore, rotation may cause the halos, or the cores themselves, to overlap if the hard IP cores are not square.

An overlap between hard IP cores causes manufacturing errors, while an overlap between halos can cause placement errors. Either overlap means the distance between the hard IP cores is too small to route through or to insert standard cells. The approach to removing overlap is to manually move the overlapping hard IP cores apart by as little as possible, keeping their general locations unchanged relative to the initial placement.
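The halo-aware overlap test this step depends on reduces to axis-aligned rectangle intersection. A minimal Python sketch, with invented coordinates and halo widths purely for illustration:

```python
def expand(core, halo):
    """Grow a core rectangle (x1, y1, x2, y2) by its halo widths."""
    x1, y1, x2, y2 = core
    left, bottom, right, top = halo
    return (x1 - left, y1 - bottom, x2 + right, y2 + top)

def overlap(a, b):
    """True if two axis-aligned rectangles (x1, y1, x2, y2) intersect."""
    ax1, ay1, ax2, ay2 = a
    bx1, by1, bx2, by2 = b
    return ax1 < bx2 and bx1 < ax2 and ay1 < by2 and by1 < ay2

# Two cores whose bodies are clear of each other but whose halos collide:
core_a = (0, 0, 100, 100)
core_b = (110, 0, 210, 100)
halo = (30, 10, 30, 10)   # left, bottom, right, top widths (um)
print(overlap(core_a, core_b))                               # False
print(overlap(expand(core_a, halo), expand(core_b, halo)))   # True
```

When the halo-expanded test fires, the offending cores are separated by the smallest shift that clears it, as the step above describes.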

2.3 Changing the location substantially

When the slight-change steps are finished, the subsequent steps of a typical physical design flow are carried out. If timing, DRC or IR drop violations caused by the placement of some hard IP cores remain, the locations must be changed substantially. Depending on the practical design, this step includes increasing the distance between hard IP cores as much as needed, exchanging the locations of several hard IP cores, and moving a hard IP core from one corner to another. The distances between the hard IP cores can be estimated from the target utilization and shape of the soft module, the number of hard IP cores placed inside that soft module, the shapes and pin locations of the hard IP cores, and the size of the soft module.

In practice, a substantial relocation can resolve certain violations but may cause others, because of the complicated relationships among the design parameters. In that case, this step must be repeated several times in order to resolve all the violations and reach a tradeoff among the design parameters. The number of iterations depends on the design scale and complexity, the number of hard IP cores, the requirements on the design parameters, and the experience of the designers.


3 Experimental Results

This method was applied to a small test chip (chip 1) and an AVS HDTV decoder chip (chip 2). The RAM and ROM IP cores in these two chips, regarded as hard IP cores, had to be pre-placed and placed inside soft modules of fixed area. Moreover, in chip 2 the IO cells were arranged and the locations of the soft modules were fixed before the hard IP cores were placed. The design parameters of the two chips are listed in Table 1, and their manual adjustments are compared in Table 2. Furthermore, the results of this method are compared with automatic placement in Figure 2 and Figure 3, and the corresponding data are listed in Table 3 and Table 4.

The AVS HDTV decoder chip has been delivered to the EDA center and is now under tape-out.

                          Chip 1                         Chip 2
Hard IP cores (number)    10                             126
Soft modules (number)     1                              15
Standard cells (number)   1000                           40,000 in all
Hard IP core size         varied                         varied
Hard IP core shape        rectangle                      rectangle
Pin locations             4 in a corner, 6 on left side  all on both left and right sides
Halo                      10um on all four sides         top, bottom: 10um; left, right: 30um
Area                      5*5um^2                        11000*11000um^2
Library                   SMIC 180nm                     TSMC 90nm

Table 1. The design parameters of the two chips

Figure 2. Comparison of hard IP core placement results for the test chip.

Figure 3. Comparison of results for the AVS HDTV decoder chip.

From these tables, we can conclude that for these two different chips, this method obtains better results than automatic placement in design parameters such as timing, congestion, routability and IR drop.


4 Conclusion

A hard IP core placement method has been proposed and used in designing a test chip and an AVS HDTV decoder chip. The results indicate that the proposed method is useful and helpful for complex VLSI physical design.

