Showing posts with label Forte. Show all posts

Monday, September 10, 2012

High-Level Synthesis is not just for Hardware Designers, It’s for Verification Engineers, too! - Brett Cline


CynCity

Brett Cline
Brett Cline is vice president of marketing and sales at Forte Design Systems. Before joining Forte in 1999, he was Director of Marketing at Summit Design, where he managed both the verification product line, including HDL Score, and marketing communications. Cline joined Summit through the …

High-Level Synthesis is not just for Hardware Designers, It’s for Verification Engineers, too!

 
September 6th, 2012 by Brett Cline
We’ve seen an uptick in interest in high-level synthesis (HLS) around the world lately. Some of the increased interest comes from designers who have MBOs to investigate HLS in 2012. Some comes from the visibility that Forte’s Cynthesizer and HLS have had this year. And some comes from people who simply do not have enough time to get their projects done with the allocated resources. This is where we can really help.
Cynthesizer automates many of the mundane coding tasks that hardware designers suffer through daily when using Verilog. Through that automation, it allows designers to quickly perform “what-if” analysis on their macro- and micro-architectural decisions without wasting months of effort.

Since the design model will now be in SystemC, a C++ class library, the functional code will be written in C or C++. Technically, it’s all C++ because we are using a C++ compiler, but in practice ANSI C can pretty much be used as-is. The benefit of SystemC and C++ comes from the addition of hierarchy, clock and bit accuracy, and other hardware specifics not available in standard ANSI C. And, since SystemC is an IEEE standard, designers know that they are not being locked into a proprietary language or set of extensions.
We are often asked about the verification benefits of using SystemC models for design, and it’s a great question. While there are numerous others, here are three areas that will benefit from the higher-level approach.
First, hardware design and verification teams will benefit from significantly higher-performance models. SystemC models typically run between 10x and 100x faster than RTL Verilog, and even faster in some cases. This allows verification teams to set up and debug their verification environment faster, as well as run far more cycles through the high-level model. Since the model can be either a transaction-level model (TLM) or a pin-cycle accurate (PCA) model, the verification team can vary the level of interface detail while maintaining high-level functional code.
Second, this model can be used for Virtual System Prototypes (VSPs). These highly abstracted models require far less code to implement (usually 10-20x less code) and are available months before the RTL designs.
Third, the SystemC model with HLS can be quickly targeted to an acceleration or emulation platform, including some of the big emulator “boxes” as well as home-grown FPGA solutions. This flow gives design and verification teams the best of both worlds –– the ability to quickly make changes and get results into hardware, and hardware-accurate simulations at high speed. Since SystemC models for Cynthesizer are technology independent, they can be quickly retargeted from FPGA to ASIC and back, saving time.
Obviously, some of these benefits are hard to quantify. If we simply look at hardware verification benefits, we can quantify some. Working with a design team, we collected data using an ARM bus-based multi-function printer system. The design consisted of several blocks, both control and datapath, entirely in SystemC.
Here’s a look at simulation performance and the huge difference between the TLM behavioral model and the Verilog RTL model –– almost 500x:

Simulation            Runtime (hh:mm:ss)   Ratio (compared to TLM)
TLM Behavior          00:00:11             1
PIN Behavior          00:04:40             25
Verilog RTL           01:28:21             482
Cycle-Accurate RTL*   00:25:51             141

* This is a process by which the Cynthesizer RTL output is converted to cycle accurate SystemC and simulated in SystemC.
We also measured lines of code for each design:
SystemC TLM          3,511 lines
Generated RTL code   59,812 lines (17x)
While generated code tends to be a bit more verbose than handwritten Verilog RTL, it’s not off by much, and it’s easy to see a 10x source-code reduction here. In a paper published at DVCon 2012, a Cynthesizer user claimed a nearly 40x reduction in code. Now that is a productivity improvement!
SystemC and HLS provide a myriad of benefits to designers in the form of better productivity, better quality of results (QoR), and true design reuse through technology-independent design. What has been less clear are the equally substantial benefits for verification.
From high-level models developed much faster to high-speed verification, high-level synthesis really is about delivering a better methodology. It allows designers and verification engineers to spend time on real hardware design problems, not on the mundane tasks required by the 20-plus-year-old Verilog RTL methodology.

EDACafe.com - CynCity - High-Level Synthesis is not just for Hardware Designers, It’s for Verification Engineers, too!


Thursday, March 8, 2012

Magma Assimilation, Forte (news + jobs), DVcon, Specman e, Atrenta, Calypto

If you care at all about EDA and verification, you should definitely get on John Cooley's mailing list.  Good stuff!

Cheers,
Connie "I Was Assimilated by Cadence, But I Got Better" O'Dell
Sr. Verification Specialist
c.odell@co-consulting.net
303-641-5191
_____________________________________________
CO Consulting - Boulder, CO - http://co-consulting.net


---------- Forwarded message ----------
From: John Cooley
Date: Thu, Mar 8, 2012 at 3:32 AM
Subject: TSMC 28 nm, DVcon, Specman e, Atrenta, Calypto, Forte, SKILL

 "I feel bad for the Magma employees being absorbed by the
  Synopsys Borg.  Today they learn that resistance is futile."

      - An EDA Vendor on SNPS-LAVA merger completing

-------------------------------------------------

 HW forums cite rumor that TSMC suddenly halted 28 nm production
    http://www.deepchip.com/items/0500-01.html

 Brett's quickie trip reports on both DVcon'12 and NASCUG'12 confs
    http://www.deepchip.com/items/0500-02.html

 CDNS too afraid of SV sales to support SNPS/MENT Specman "e"
    http://www.deepchip.com/items/0500-03.html

 Three chip designers hands-on evals of Atrenta Spyglass Power
    http://www.deepchip.com/items/0500-04.html

 We cut another 9% of power using the Calypto RTL PowerPro tool
    http://www.deepchip.com/items/0500-05.html

 Shiv compares IC Manage, CVS, and Subversion for digital design
    http://www.deepchip.com/items/0500-06.html

 I designed a USB 3.0 core using SystemC and Forte Cynthesizer
    http://www.deepchip.com/items/0500-07.html

 Huh? SKILL sklint() already built into the Virtuoso environment
    http://www.deepchip.com/items/0500-08.html

-------------------------------------------------

 Docea Power to unveil AceThermalModeler at upcoming DATE'12
    http://www.deepchip.com/look/see120308-01.html

 Jasper CEO Kathryn Kranen in San Jose Mercury News interview
    http://www.deepchip.com/look/see120308-02.html

 San Jose, CA - Forte seeks SystemC Cynthesizer field engineers
     http://www.deepchip.com/jobs/033.html




Monday, February 27, 2012

Designing, Verifying, and Building an Advanced L2 Cache Subsystem Using SystemC

Forte Design Systems to Demonstrate SystemC High-Level Synthesis at DVCon:

Press Release
At DVCon 2012 Booth #404
Forte Design Systems to Demonstrate SystemC High-Level Synthesis at DVCon

Paneve Paper to Outline its Design Successes Using Cynthesizer
SAN JOSE, CALIF. -- February 20, 2012 --
    WHO: Forte Design Systems™, leading provider of software products that enable design at a higher level of abstraction and improve design results
    WHAT: Will demonstrate the latest version of Cynthesizer™ SystemC high-level synthesis at DVCon 2012 in Booth #404
    WHEN: Tuesday, February 28, from 3:30-6:30 p.m., and Wednesday, February 29, from 4:30-7 p.m.
    WHERE: Doubletree Hotel, San Jose, Calif.
Thomas Tessier, vice president of Research and Development at Paneve LLC, will describe Paneve's experiences using Cynthesizer in a paper, "Designing, Verifying, and Building an Advanced L2 Cache Subsystem Using SystemC," at DVCon. It will be presented during Session 3, titled "SystemC and Beyond," to be held Tuesday, February 28, from 9 a.m.-10:30 a.m.
For more details about Forte and Cynthesizer, go to: www.ForteDS.com.
Information on DVCon can be found at: www.dvcon.org.
About Forte Design Systems
Forte Design Systems is a leading provider of software products that enable design at a higher level of abstraction and improve design results. Its innovative synthesis technologies and intellectual property offerings allow design teams creating complex electronic chips and systems to reduce their overall design and verification time. More than half of the top 20 worldwide semiconductor companies use Forte's products in production today for ASIC, SoC and FPGA design. Forte is headquartered in San Jose, Calif., with additional offices in England, Japan, Korea and the United States. For more information, visit www.ForteDS.com.
Forte acknowledges trademarks or registered trademarks of other organizations for their respective products and services.
For more information, contact:

Brett Cline, Forte Design Systems
(978) 206-1855
brett@ForteDS.com
Nanette Collins, Public Relations for Forte Design Systems
(617) 437-1822
nanette@nvc.com
Forte Design Systems to Demonstrate SystemC High-Level Synthesis at DVCon

Thursday, February 16, 2012

SystemC, UVM, TLM, DVCon: Accellera Systems Initiative Day 2012 at DVCon – Monday, February 27

accellera.org

Join Us for Accellera Systems Initiative Day 2012
1st Annual Event Featured at 2012 Design and Verification Conference (DVCon)


Monday, February 27
DoubleTree Hotel, San Jose, California
www.accellera.org

DVCon

Accellera Systems Initiative™ is proud to sponsor The Design & Verification Conference & Exhibition (DVCon™). We are pleased to announce an exciting program of events for the first ever Accellera Systems Initiative Day on Monday, February 27!

Accellera Systems Initiative Day focuses on providing in-depth knowledge for our emerging and established standards to our user community. We are hosting a forum for SystemC users and conducting four tutorials on standards with sessions running concurrently throughout the day. We'll also host an interactive town hall lunch and discuss "What will success for the Accellera Systems Initiative look like?"

Accellera Systems Initiative Day is brought to you by our global sponsors: ARM, Cadence, CircuitSutra, Forte, Mentor Graphics, and Synopsys.

Agenda

8:30am - 12:00pm North American SystemC Users Group
8:30am - 5:00pm Tutorial: UVM: Ready, Set, Deploy!
12:00pm - 1:30pm Sponsored Luncheon: Town Hall Lunch with Accellera Systems Initiative
1:30pm - 5:00pm Tutorial: An Introduction to IEEE 1666-2011, the New SystemC Standard
1:30pm - 3:00pm Tutorial: An Introduction to the Unified Coverage Interoperability Standard
3:30pm - 6:30pm Tutorial: Verification and Automation Improvement Using IP-XACT

View agenda matrix >

North American SystemC User Group (NASCUG) Meeting XVII

NASCUG provides a unique forum for sharing SystemC™ user experiences among industry, research and universities. NASCUG operates independently but works in collaboration with the Accellera Systems Initiative to provide open forums for promoting information exchange. Our goal is to make SystemC end-users more effective through shared knowledge, user interaction and collaboration.

NASCUG topics and user presentations:

  • Accellera Systems Initiative: A New Synergy for Standards
  • What does C++2011 mean to SystemC?
  • Synchronization between a SystemC based off-line restbus simulator and a Hardware-In-the-Loop FlexRay network
  • Extending Fixed Sub-Systems at the TLM Level — Experiences from the FPGA World
  • A Generic Language for Hardware & Software, Are We There Yet? An Explorative Case Study Examining the Usage of SystemC for Multicore Programming

Participation is free. Find out more and register >

Tutorial: UVM (Universal Verification Methodology): Ready, Set, Deploy!

This tutorial will begin with an introduction to UVM™, concepts of structured verification methodology, base classes, resource configuration management, error handling, and report generation. It will continue with the UVM register package, including how to create and manage stimulus and checking at the register level. The morning session will conclude with a review of all of the topics, showing how they fit together in a complex SOC verification environment.

Introduction of these fundamental concepts will be followed by several real-life user experiences including lessons learned in preparing transition to UVM, architecting reusable testbenches, debug techniques and use of TLM 2.0 in real verification environments.

Find out more and register >

Sponsored Luncheon and Town Hall Meeting

Join us at lunch to celebrate the emergence of the Accellera Systems Initiative. This "town hall" meeting will have no presentations, but rather will feature you, the forward-looking front-end standards community, exchanging ideas on the future of the new organization. Accellera Systems Initiative Officers, Board Members and Technical Working Groups Chairs will join this lively, open meeting. The main topic for this discussion will be:

What will success for the Accellera Systems Initiative look like?

There are many facets to this question, such as:

  • New standards that should be pursued
  • Synergies that ought to be exploited between existing or emerging standards
  • Relationships with or expansion into adjacent technology areas, e.g., the Embedded SW world
  • Extension of User Groups activity across all of our standards

Come prepared to discuss these and other factors that will put the Accellera Systems Initiative on a path to success that will eclipse even the stellar achievements of its two predecessors, Accellera™ and OSCI™.

Free with registration to any of the DVCon tutorials or the NASCUG meeting.

Tutorial: An Introduction to IEEE 1666-2011, the New SystemC Standard

The latest version of the IEEE 1666 Standard SystemC Language Reference Manual, published early in 2012, represents the marriage of the SystemC and TLM-2.0 libraries into a single standard, together with some significant improvements to SystemC relevant to both modeling and synthesis. This tutorial will be your first chance to see the new features of SystemC and TLM-2.0 presented in full now that the new standard has been published, including a behind-the-scenes look at the motivation behind the changes. We will also present examples illustrating the new features in action using the latest version of the OSCI proof-of-concept SystemC simulator, which is compliant with the new IEEE standard.

In addition, this tutorial will provide an introduction to the forthcoming draft Configuration Standard which targets the configuration of SystemC models. Key classes in the standard, which include parameters, brokers and accessors, will be described, and the use of the Configuration Standard to perform common tasks such as creating, initializing, updating, monitoring, hiding and locking parameter values will be demonstrated.

Find out more and register >

Tutorial: An Introduction to the Unified Coverage Interoperability Standard (UCIS)

This tutorial provides an overview of UCIS™ and its API and how users plan to enhance their verification flows using it. It provides a survey of many of the coverage metrics commonly used and how they are modeled in UCIS. The information that users will be able to access through UCIS will allow them to write their own applications to analyze, grade, merge and report coverage from one or more databases from one or more tool vendors. XML-based interchange format of UCIS, which provides a path to exchange coverage databases without requiring a common code library between tools and vendors, will also be discussed.

Find out more and register >

Tutorial: Verification and Automation Improvement Using IP-XACT with reception and poster session

This tutorial focuses on providing an opportunity to learn more about IP-XACT™ and how this standard can be used to enhance your IP-based design and verification flow. The tutorial is composed of four main sub-sections and concludes with poster presentations, where you can check out current offerings from EDA companies:

  • Improving Verification efficiency using IP-XACT
  • Use-Case: Verification and Automation Improvement Using IP-XACT
  • IP-XACT and UVM
  • IP-XACT Extensions

Reception from 5:30pm-6:30pm.

Find out more and register >


Thanks to our Global Sponsors

Accellera Systems Initiative, 1370 Trancas Street, #163, Napa, CA 94558




Friday, February 10, 2012

Virtual Platforms And TLMs Going Mainstream, courtesy of Electronic Design

Virtual Platforms And TLMs Going Mainstream

Fig 1. ITRS data shows that SoC complexity is fast outstripping the ability to add enough designers to fill the available gates in a given amount of silicon real estate. (courtesy of Calypto Design Systems)


In 2011, Synopsys made the biggest splash in the EDA pool when it acquired Magma Design Automation. The teaming of these two companies may well result in some interesting doings in 2012 on the RTL-to-GDSII front.

Meanwhile, most eyes are on the front end of the design process as electronic system-level (ESL) tools and methodologies slowly but steadily make their way into the mainstream. As a whole, EDA is beginning to grow once again as a market segment.

Within EDA, the fastest growing segment is ESL, with vendors reporting revenues to the EDA Consortium of about $250 million over the last four quarters. The upswing in revenues points to increasing adoption of ESL tools and methodologies.

ESL Adoption on the upswing

Several factors lie behind the growing interest in ESL among design teams. For one thing, ESL design flows and methodologies have begun to solidify somewhat.

“Historically, most ESL adoption has been in verification,” says Brett Cline, vice president of sales and marketing at Forte Design Systems.

Designers would write transaction-level models (TLMs) of their system hardware for early verification efforts. But that has been a very fragmented market, with little cohesion between models and downstream tools and flows. Models written for one tool might not work with another. Thus, a lot of work would often be put into high-level modeling, only to be lost to the rest of the flow.

This is changing, however. A big reason for that has been the TLM 2.0 standard, which goes a long way toward creation of a standardized interface between models. As a result, models will be better able to communicate with tools and with each other.

Yet the fact remains that there still is no standardized output from virtual modeling to downstream tools. There is also no standardized input to ESL synthesis. One might point to the synthesizable subset of SystemC, but not all tools handle that same subset. That’s in contrast to the latter days of Verilog, when all of the synthesis tools on the market could more or less handle the same inputs.

A Tour Through The ESL Landscape

It is helpful to look at high-level design in a segmented way. After all, it does encompass a number of aspects. There are four major areas. The first is the earliest stages of architectural exploration. The second is the development of hardware blocks. The third is software development, and the fourth is system integration.

According to Frank Schirrmeister, senior director of product marketing in Cadence’s System and Software Realization group and one of Electronic Design’s Contributing Editors, trends are emerging in the first of those four areas, the pre-partitioning phase of system definition (see “The Next Level Of Design Entry—Will 2012 Bring Us There?”).

For one thing, more people are becoming interested in using UML or the MathWorks’ MATLAB language. These techniques can be used to describe functionality at a very high level without tying that functionality explicitly to either hardware or software.

The next step will be to connect these high-level descriptions of functionality either to software implementation or to hardware implementation by generating the shell of a SystemC model. “That’s the next level of high-level synthesis,” says Schirrmeister.

The key to this kind of technology will be the fabric that connects all of the functional blocks, such as ARM’s AMBA fabric or the Open Core Protocol-International Partnership (OCP-IP) fabric. Once you begin connecting elements of the design across the fabric, you can begin analyzing bus traffic using an accurate representation of the fabric. In the future, techniques of this kind will become more critical with the proliferation of multicore architectures and issues surrounding cache coherence.

Implementing Hardware Blocks

A second segment is the implementation of hardware blocks, in which there are two broad trends to consider. One is that intellectual property (IP) reuse continues to rise in importance. No one wants to build functional blocks from scratch if they don’t have to when they can reuse one from a library, whether from within their own organization or from an IP vendor. Thus, there will continue to be issues with integration of reused IP and how to qualify that integration effort.

The other broad trend in hardware implementation is high-level synthesis (HLS), which concerns implementation of new IP blocks. HLS has come a long way in terms of adoption, says Schirrmeister. System-level methodologies have historically been strongest in Europe and Japan, but this also is changing.

“We expect to do a fairly large portion of our business this year in the U.S.,” says Forte’s Cline. Within two years, Forte expects a majority of its business to be done domestically. Korea also reportedly is a fast-growing adopter of ESL tools and methodologies.

Within the U.S., numerous sectors are increasing their adoption of HLS. Cline cites growth in video processing, wireless design, and graphics processing. “The latter covers both datapath and control logic, and ESL’s detractors have always said that ESL doesn’t work well in control logic,” says Cline.

The consumer electronics sector is being drawn toward ESL in a big way, says Shawn McCloud, vice president of marketing at Calypto Design Systems. A prime example is the image processing done in cellular handsets to correct for distortion created by low-cost lens systems.

On the horizon are efforts to obtain feedback from RTL analysis on the HLS tools’ output and then feed that back into the HLS flow for further iteration. “In the future, you might run something through silicon place and route and get early feedback on congestion,” says Cline. “You would put that code back into HLS to tweak it and create a different architecture. That is something that will mature a little more.”

A key advantage of HLS is its ability to standardize RTL coding styles. In the future, this will influence certain aspects of RTL design, especially coding for power efficiency. There are RTL coding styles that are well known to minimize power consumption. HLS tools are built to automatically invoke such best practices in their RTL output, making that code tailor-made for downstream RTL synthesis.

Why High-Level Synthesis?

There are three key drivers behind HLS adoption. First is system-on-a-chip (SoC) complexity, which, according to ITRS data, is rising rapidly (Fig. 1). At 65 nm, gate density was in the neighborhood of 300 kgates/mm². At 32 nm, that figure was up to 1.2 Mgates/mm². That translates into about 60 million gates on a die measuring 50 mm².

The problem is that given existing RTL methodologies, an RTL engineer can generate about 200 kgates/year. So if systems houses want to take advantage of process shrinks, only so much can be gained by hiring more designers. They will need tools that enable each designer to create more gates/year.

The second key driver is power integrity. Historically, systems houses have addressed power consumption through supply scaling. As they move to smaller process geometries, they scale down the power rails. But VDDs are already down to 0.7 V and the physics around leakage, thermal issues, and IR drops pose insurmountable limitations. Below 45 nm, power density scales up in nonlinear fashion.

The industry has hit an inflection point on this issue. HLS tools will be relied upon for efficient power optimization even before RTL is created. And at RTL, power optimization is mandatory to automatically insert better clock gating and to take advantage of the light-sleep modes in memory devices.

“Architectures are increasingly important to differentiate because of power limitations,” says Johannes Stahl, director of product marketing for system-level solutions at Synopsys. Expect even more pressure to optimize for power at the earliest architectural definition levels of the design cycle. Design teams will need to look at the power architecture issues at very high levels of abstraction.

The final key driver is verification, which is becoming exorbitantly expensive. The Wilson Research Group conducted a study on functional verification from 2007 to 2010 and found that the average percentage of total time engineering teams spent on verification jumped from 50% to 56% over that span. There also was a 58% increase in the number of verification engineers. Verification has become a key reason to adopt ESL if only because it can help deliver cleaner RTL to the logic-synthesis flow.

Hardware Meets Software

In typical system design cycles, software development begins long before target hardware exists on which to verify software functionality. This is where transaction-level modeling and virtual platforms (VPs) come in. These technologies will have an increasingly important role in the future of ESL flows.

Software development is obviously being made massively more complex by multicore architectures (Fig. 2). “It’s not trivial to distribute software across cores,” says Synopsys’s Stahl. This is true in many sectors, including consumer, where innovation often comes in the context of architectures.

Likewise, in the automotive market, there is a growing trend toward more complex software. Many safety features are implemented in software, a direct result of the ISO 26262 functional safety standard for vehicles. “This will likely cause a major methodology shift,” says Stahl.

Virtual platforms have two functions. One is software/hardware codesign, where designers optimize and verify their system with software in mind. The other is when the VP serves as a high-level hardware model that’s delivered to software developers before the actual target hardware exists.

The next step will be bringing the TLM platform and prototyping environment together with TLM synthesis, says Calypto’s McCloud. Doing so centers on HLS, but it also involves verification, using automatically performed C-to-RTL equivalence checking to ensure no errors have been introduced in synthesis.

Making Models That Matter

Going forward, there will be very different needs between the models that one uses in transaction-level modeling and the ones that are fed into high-level synthesis. TLMs must execute at 200 to 300 MHz to be able to run software and achieve reasonable coverage. They do not need to model all of the nuances of actual hardware. That’s why they execute faster. But models that are fed into HLS must carry specifics about interfaces and hardware protocols to synthesize properly.

Look for a move to models that execute at the higher speeds required for transaction-level work but also have enough detail to be synthesizable in an HLS flow. Calypto Design has done work in this area using a technology it calls “multi-view I/O,” which is a means of changing a transaction-level interface to a pin-level or HLS interface for implementation.

TLMs and virtual platforms are finding new applications in many areas, according to Bill Neifert, chief technology officer at Carbon Design Systems. “Verification is the number-one area for growth in virtual platforms,” he says.

Early adopters of virtual platforms used them for architectural exploration in the beginning stages of design cycles. Now, the trend is for them to move into later stages such as firmware development. In turn, firmware teams are using VPs to drive verification of their work. In addition, system integrators use the firmware results as part of the verification suite for the overall SoC.

“This is not yet a mainstream use, but leading edge customers who have used VPs for a while are branching out into verification now in a big way,” says Neifert.

Additionally, VPs are now seeing use in definitions of system power requirements. Design teams have begun to realize that making architectural decisions that positively impact power early in the process has huge advantages. An emerging trend is to get software running on a cycle-accurate VP early in the process. Hand-crafted power vectors are notoriously inaccurate, but it’s relatively easy to instrument the VP so that it generates power data on the fly as the software runs.

The result is a more accurate view of power consumption while the design is still in flux. Teams then can use that information to make better decisions about software, hardware, and partitioning.

Hardware/Software Integration

The last area of what can be called ESL is the integration of hardware and software after partitioning decisions are made. The notion of prototyping enters the picture here. It’s also where ESL bumps up firmly against RTL.

“There are four gears to this car, one might say,” says Cadence’s Schirrmeister. One is transaction-level modeling, another is RTL simulation, a third is emulation/acceleration, and the fourth is FPGA-based prototyping. “These are four different ‘gears’ for putting hardware and software together before you have actual silicon,” says Schirrmeister.

The connectedness of these engines is where the future lies and where Cadence and other EDA vendors will concentrate their efforts. The trend in this regard is to optimize hardware execution of parallel blocks in the design.

“Some of it already works, as in RTL simulation being combined with emulation so you have different levels of speed,” says Schirrmeister. For example, Cadence’s Incisive platform can serve as the front end to both RTL simulation on a host processor and the execution of RTL on an emulation platform.

This leads into considerations of how best to choose a prototyping platform. “For multiprocessor designs with a graphics or video engine in parallel, it’s clear that the processor itself can be prototyped best on the host using VPs,” says Schirrmeister.

But blocks such as video decoders or graphics engines are so compute-intensive that they do not map well on the host. “Those items are best kept in hardware in the emulation box or on FPGA-based rapid prototyping boards,” Schirrmeister says.

Thus, a growing trend is for designers to more carefully consider their prototyping and emulation vehicles. Here is where standards such as TLM 2.0 and the Standard Co-Emulation Modeling Interface (SCE-MI) play an important role.

According to Lauro Rizzatti, general manager of EVE-USA and one of Electronic Design’s Contributing Editors, interest is growing in ESL co-emulation, particularly in Asia, and in the U.S. to a lesser degree (see “Social Media And Streaming Video Give EDA Cause For Optimism”).

“Design teams have been asking us to prove that our ZeBu emulation systems can play into the ESL environment by providing performance and cycle accuracy for anything described at RTL level,” says Rizzatti.

Co-emulation, which is the marriage of high-level models with RTL, has paved a path to adoption on a larger scale. In the U.S., the main driver is accelerating software development ahead of silicon. TLM 2.0 will help with hardware debugging using SystemVerilog testbenches (Fig. 3). According to Rizzatti, in the past year six Asian systems houses have asked EVE to integrate ZeBu into their ESL environments using TLM 2.0. This clearly points to the trend toward growth in co-emulation.

Standards on the Move

If Synopsys’s acquisition of Magma Design Automation was the biggest event in EDA last year, the second biggest might have been the merger of Accellera with the Open SystemC Initiative (OSCI). Now known as the Accellera Systems Initiative, the combined organization is in a better position than ever to positively impact the broader adoption of ESL through its standards efforts.

According to Accellera Systems Initiative chairman Shishpal Rawat, it is very important that interoperability of flows is based on industry-standard information. “This way, users can define the flow that best fits their need,” Rawat says.

Rawat sees a continued move from best-in-class point tools to full flows. Thus, it’s critical for future standards efforts that the EDA vendors monitor users’ needs, while users reciprocate by making vendors aware of their concerns.

Together, vendors and users bring these observations back into the Accellera Systems Initiative, which discusses, forms, and ratifies standards. This embodies a trend toward a common platform on which system design standards, IP standards, and chip standards are formed.

Within Accellera’s verification IP technical steering committee, there has already been work that accounts for OSCI’s TLM 2.0 standard and leverages that in the development of some of the Universal Verification Methodology’s verification IP. Look for this kind of synergy to be nurtured going forward.

“I think we will ensure that they collaborate on the next generation of the UVM,” says Rawat.

