Towards a Streamlined Distributed Design Environment
ECE678 Integrated Telecommunications Networks
Dr. Ralph Martinez
University of Arizona
Matthew C Scott Dan Nicolaescu
This paper presents a feasibility and implementation study of a novel methodology for streamlining the execution of components in Electronic Design Automation systems. Using the new paradigm of executable workflows, a Java-based distributed object computing environment is proposed with a CORBA backplane and autonomous, and sometimes mobile, agents as mediators of transactions. A thorough assessment is developed with consideration of the requirements and limitations of an EDA environment within a heterogeneous networked computing environment, with many loosely coupled programs and multimedia data objects in the mix. A focus will be given to the implementation of a Distributed Simulation Environment for transistor-level simulation and the DEVS discrete event simulator. Given that the proposed system is a global design cycle management system, much emphasis has been placed upon reviewing all of the alternatives and the involved costs and benefits. Just as there is a plethora of EDA tools on the market, there is likewise a multitude of means of developing distributed multimedia management systems that need to be considered.
Table of Contents
1. Introduction: The Curse of Rampant Creativity
2. Project Objectives
3. Solution: BB2000!
4. Recon: Overview of Design Flows, Trends
5. Strategy I: CORBA/Java Middleware
6. Strategy II: Executable Workflows
7. Strategy III: Mobile Agents and CORBA
8. Strategy IV: MultiMedia Systems in EDA Modalities
9. Tactics I: Development Schedule
10. Tactics II: A Distributed Simulation Environment
11. Assessment: Cost/Benefit Predictions
12. Conclusion, Next Steps, Over the Edge ...
13. References, related http links
1. Introduction: The Curse of Rampant Creativity
Workflow management systems for Electronic Design Automation have recently become an attractive route to achieving the business goals of semiconductor design corporations. Besides the research presented in this paper, the (recently discovered) work of the WELD (Web-based Electronic Design) project at Berkeley is encouraging for the benefits and feasibility of this project. What the BB2000 project presents beyond the WELD project is the feasibility of using the CORBA distributed computing architecture and Mobile Agents as proxy agents for transaction mediation.
This paper is structured with the interests of the Burr-Brown Design Council in mind, and in that capacity is a little more thorough than usual with respect to the business goals of the company. At most semiconductor design corporations, the goals for increasing profitability and competitiveness are summarized by three co-dependent factors:
This is a generalization designed to shield the finance committee from the gory mess of competing design groups, free-for-all library and tools development, cut-throat markets and wicked technologies whose bleeding edges are more like guillotines for engineers' heads.
The rank-and-file Design Engineers see the above directives and think: I need a faster machine, a faster network connection, better EDA tools and a heck of a lot of time to figure out how they all work, or I'm going to kill the EDA engineer! The EDA engineers see this directive and think: Oh boy! More tools, more machines, more network stuff and a gloriously gory mess to drown in! The problem here is that unleashing all of these fancy tools on the design environment actually causes a net increase in design cycle time if they are not highly integrated and easily employed by the design engineer. This is a classic case of growing pains.
A primary cause of this situation is that electronic circuit design has recently enjoyed a phenomenal growth rate in the number of design tools and related technologies developed to accelerate the design process and improve design capacity. The tools are applied to every phase of the design cycle including: architecture and topology specification, RTL (register transfer level) coding and behavioral simulation, logic synthesis, timing analysis, spice simulations, layout placement and digital and analog level routing, DRC (design rule check) and LVS (layout vs schematic) checking, LPE (layout parasitics extraction), co-simulation, and logic testing such as DFT (design for test), BIST (built in self test) and ATPG (automatic test pattern generation). And, as evidenced by the increasing size of the annual DAC (Design Automation Conference), the number of vendors producing these large and very complex tools has also mushroomed.
With such a rapidly expanding diversity and complexity of tools, and the desire by most engineers to find the diamond in the rough that is going to save their design, the typical design environment is now inundated by a plethora of incompatible tools and an equal mess of environment disorganization due to lack of methodology and standardization on the use of even one single tool. To exacerbate this condition, a typical semiconductor design house will simultaneously develop on 10, 20 or more process technologies (Bipolar, CMOS, BiCMOS). Moreover, the reductions in feature size have rendered many tools obsolete, and the increases in transistor counts have choked many programs and brought compute servers to their knees. To finish it off, the advances in programming languages, operating systems, network architectures and design optimization theory are regularly flushing the knowledge base of Design and EDA engineers.
The contributing factors to the increasing need for a streamlined design environment can be summarized as follows:
2. Project Objectives
This project has the primary objectives of satisfying as well as possible the EDA Group's business goals and EDA Mission while still leveraging the strengths of a highly custom design environment.
The above listed bullets on increasing productivity are targeted in this paper through the development of a workflow management system which creates an integrated, distributed, standardized and simplified EDA environment. In particular, the following sub-targets of these business goals will be addressed:
A. Reduce Design Cycle Time by integrating and simplifying the use of tools,
B. Reduce Costs by standardizing the environment and harnessing unused CPU cycles,
C. Increase Design Quality by improving analysis and simulation capabilities.
Although the above directives seem rather expansive and cut across all realms of the design environment, coordinating their development is possible in that they all fall within the jurisdiction of the EDA Group. The directives of the EDA Group's Mission are stated as follows: "Create, improve, and maintain the design environment for use with all active design processes including schematic entry, simulation tools, models, symbols, cells, and layout tools."
To be politically and socially acceptable, any deployed system must consider the habits, culture, attitudes and diverse priorities of its users in order to maximize engineering flexibility and creativity while enabling efficiency gains through standardization. The following points are critical to its success:
3. Solution: BB2000!
Although it may sound like a herculean task, the proposed Distributed Design Environment will provide an integrated workflow driven design interface with the following properties:
As mentioned, the expected solution will be characterized by an architecture of 'Design Cycle Executable Workflows'. It will be the task of the remainder of this paper to evaluate the requirements of such a system and to look at as many as possible of the viable competing options for implementation, including: distributed computing environments, distributed computing languages and system control mechanisms.
4. Recon: Overview of Design Flows, Trends
Figure 1: A typical Design Tools Hierarchical layout.
As can be seen in Figure 1 above, the typical design tools hierarchy employs a gamut of tools and systems. This has generally been the only means by which an engineer could see the environment. Although it is a good layout to describe which tools are available, it does nothing to help engineers find tools, use them, and employ interfaces between them. It will be our goal to give life and intelligence to this graph in order that it, or rather, its successors may satisfy all those bullets stated above for the BB2000 environment.
Figure 2. Network Topology
In a similar vein, the above graph depicts a generalized topology of compute systems and network architectures for a typical design environment. As one can imagine, mapping the above tools hierarchy to this computing topology can be rather complex.
As with many design houses, the Design Engineering compute environment has grown very rapidly and without much in the way of centralized guidance. Only 8 years ago (1990), the system consisted of DEC VT100 terminals with RS232 lines to a Micom box and then to a dinosaur VAX machine. Only four years ago there existed no standard model libraries, no statistical process control system and no device/cell/symbol standard libraries. The best practice then was that each user copied and modified at will any libraries available. It was infeasible to maintain a centralized repository at that time because of the network bandwidth and the dominance of standalone PC apps over Unix shared apps. Likewise, there was very little tool integration and a similarly dire paucity of documentation. With so many independent design systems and an anarchy of design libraries, there could be no means to automate processes.
Figure 3. Standard Directory System
In 1995, an effort was begun to attempt to bring order to this mayhem. This initiative was inspired by the need to have standard reference libraries in order to be able to perform a provably correct Layout vs. Schematic (LVS) check. A Standard Directory System was created with the intent of centralizing the libraries and enabling globally accessible documentation. This system was placed on a large Auspex file server running SunOS. Using the automounting capabilities of NFS, the system is transparently available to all workstations. Likewise, all of the Win31/Win95/WinNT PCs could mount the system via SAMBA or FTPSOFT's client interface. With such a centralized standard directory system and an open directory system, it became much more reasonable to integrate design tools and automate the processes common to a design environment.
The Standard Directory System does in fact ease the management of the libraries, documentation and glue programs, but, because of its static nature, engineers are not highly motivated to stick to this system. For them, it is still much easier to copy libraries to their local system, reducing lag time and enabling quick modification of symbols, devices and cells.
Figure 4. Actual Library System
Given the increasing quality of designs expected from engineers, this has been an acceptable practice, as it was often the case that a required component did not exist in the libraries, and there were no EDA engineers to develop large, exhaustive libraries. But, on the other hand, it could be seen that a lot of effort was being duplicated or lost since these new components were not being added back into the standard libraries.
Current Design Engineering Intranet
As the Standard Directories were visible to all machines, it was a logical step to create a web server which mapped an Intranet over these directories. Thus, from anywhere within the corporate network, anyone with a web browser could connect to the web server and instantly have access to all of the documentation embedded in the standard directories (specifically, in the 'html' subdirectories). Currently, this intranet system provides the following:
But, as mentioned, the system is static in that it does not provide design flow interaction. For an engineer to find documentation related to their specific design or process, they must wade through a ton of stuff that will only deter their future visits. Also, after an engineer has found and read a certain document, they are not likely to read it again, which is detrimental in that the programs and libraries may have gone through a major revision change in the interim.
5. Strategy I: CORBA/Java Middleware
This section will discuss the programming languages, distributed computing systems and middleware frameworks considered with respect to implementing the executable design flows and Distributed Design Environment.
The Internet/Intranet paradigm has provided a new medium over which business may be organized and transacted. It is seen that this medium coupled with some new computing technologies may provide a means of distributing and coordinating the functions found in a typical design environment. It has traditionally been the case that distributed computing languages were too complex and rigid to deal with a dynamic and heterogeneous network environment. Likewise, the distributed computing architectures were focused on specific SIMD or MIMD architectures and required that the programs be similarly partitioned for vector execution. In general, this is still the case today, but the compute speeds and network bandwidths have made it increasingly attractive to try to harness the excess desktop CPU resources and, through multimedia systems, reduce the actual physical travelling required for simple concurrent and collaborative design. With this enabling phase transition in mind, we will be examining the software technologies that may be co-employed to support distributed, asynchronous design systems. In brief, some of the primary issues include:
Distributed Design Environment Issues
Legacy software, interfaces
Management and Control, Version, Node inserts
Inter-process communications (message passing, RPC)
Control Flow: Multiple Processes, Threads, Load Balancing
Addressing and Naming Services
Directory Services, distributed files
Security: Authentication, Authorization, Encryption
There are many distributed computing languages and environments which address these issues in varying degrees. But many of them are rigid in their isolation to certain platforms or in their inability to interface to legacy systems. The following languages have been employed in varying degrees within the design environment, and their relative strengths and weaknesses are itemized.
Design Flow GUI languages
It should be apparent by now which is our preferred development language. Java is a mobile object system, and platform independent due to the JVM interpretation of bytecodes. It is blessed with many APIs: URL classes, TCP and UDP classes, RMI, JDBC, AWT. There are several vendors supporting Java Development Environments, but here we will bring attention to the Inprise (Borland) JBuilder, since Borland has recently acquired Visigenic (which will be discussed further below).
One of the primary advantages of Java is its built-in security features. As Java code is compiled into 'bytecodes', a Java Virtual Machine (JVM) may be written for nearly every machine in existence. The JVM then executes these bytecodes within the confines of a secure sandbox.
Figure 5. JDK1.1 and JBuilder 1.2
Distributed Computing Frameworks
There are many distributed computing environments and specifications, but the five listed below are the more popular and well developed. A brief review of these will be given in order to clarify the reasons for going with the CORBA framework.
Linda
The Linda system uses a distributed shared memory approach to interprocess communication. Communications are transacted through a shared 'tuple space'. Each message has a label attached to it; to send a message, a function is called: message("mess1", VALUE). The message is copied into the tuple space, which is seen by all the machines, and the receiver has to look in the tuple space for a message containing that label: MESSAGE_VALUE = get_message("mess1").
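The tuple-space exchange sketched above can be illustrated in Java. This is a toy, single-process sketch of Linda's model; the `TupleSpace` class and its method names are our own invention, not a real Linda API:

```java
import java.util.*;

// A minimal, illustrative tuple space: messages are posted under a label,
// and any participant (here: any thread) can retrieve them by that label.
// This is a sketch of Linda's model, not a real Linda runtime.
public class TupleSpace {
    private final Map<String, List<Object>> space = new HashMap<>();

    // message("mess1", VALUE) in the text above
    public synchronized void message(String label, Object value) {
        space.computeIfAbsent(label, k -> new ArrayList<>()).add(value);
        notifyAll(); // wake any receivers waiting on the tuple space
    }

    // MESSAGE_VALUE = get_message("mess1"): blocks until a tuple appears
    public synchronized Object getMessage(String label) {
        List<Object> tuples;
        while ((tuples = space.get(label)) == null || tuples.isEmpty()) {
            try {
                wait();
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return null;
            }
        }
        return tuples.remove(0);
    }

    public static void main(String[] args) {
        TupleSpace ts = new TupleSpace();
        ts.message("mess1", 42);
        System.out.println(ts.getMessage("mess1")); // prints 42
    }
}
```

The blocking receive mirrors Linda's behavior of a reader waiting until a matching tuple is deposited.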
PVM (Parallel Virtual Machine)
The PVM architecture supports machines on a heterogeneous network. Communication is executed through a specialized library of RPC network calls, and servers can be started/stopped remotely. One of its primary benefits is that it emulates a multiprocessor machine very well with a set of discrete machines. Although this is a very well developed and free system, it is not at all amenable to integrating legacy systems. Likewise, its reliance on a library of calls precludes the ability to integrate objects from various languages like C++ and Java.
The OSF (Open Software Foundation) DCE (Distributed Computing Environment) supports threads, remote procedure calls (RPC), a directory service, a distributed time service (DTS), a security service and a distributed file service (DFS). Without going into extensive details, we will point out that the disadvantages of DCE as compared to CORBA are that it is not inherently object oriented, does not have a dynamic invocation interface (all programs must be available at compile time), and does not have a GIOP.
The Common Object Request Broker Architecture is a specification developed by the Object Management Group (OMG). CORBA provides 3-tier DCE-like services and extends Java with a distributed object infrastructure, workflow facilities, agent persistence and various services. As most of this project relies on the CORBA framework, we will not expound further on its virtues here, only noting that it is the new wave in Distributed Object Computing and is making a big splash in the Java-Internet ocean. As will be seen, CORBA provides for all of the necessities required for our executable workflow Distributed Design Environment. But, as will also be seen, not all of these facilities are fully supported (especially in the realm of mobile object systems).
CORBA has enjoyed rapidly increasing popularity recently. And with this popularity has come a good showing of vendor-supported CORBA-compliant distributed computing systems. Some of the vendors looked at, and a few of their case studies, are reviewed here:
SunSoft's Java IDL (formerly JOE)
Sun was one of the six founding members of the OMG and has been a strong advocate of complete platform independence. In this light they developed Java IDL with strong ties to Java, IIOP client and server functions, a CORBA-compliant naming service and a development environment in JDK 1.2. It is a minimalist ORB, but it is also free. One of its main drawbacks is that it (in its beta version) does not have an interface repository, so it cannot execute dynamic invocations. The greatest drawback is that it does not support the CORBA facility add-on services such as load balancing, persistence and security.
The obvious problem with DCOM is its reliance on Microsoft platforms. But, apparently Microsoft is beginning to focus heavily on COM for Unix, and there are a few COM-CORBA interface programs available. Although we can expect DCOM to dominate in the future if Microsoft continues its monopolistic trend, we will ignore it here for the more open standards of CORBA.
IONA Technologies' OrbixWeb
Iona is the leading CORBA provider and has quite a few success stories on its web site. Its ORB runs on over 20 operating systems. The ORBs support both IIOP and Iona's proprietary Orbix protocols. The most recent version (3.0) supports a fully CORBA-compliant IDL-to-Java mapping. Also, a new OrbixCOMet Desktop acts as a communications bridge between COM clients and CORBA servers, thus enabling object migration.
Visigenic's Visibroker 3.2
Visigenic was acquired by Borland, which subsequently changed its name to "Inprise Corp." The Visibroker ORB surpassed 30 million licenses in 1997, making it the world's most widely distributed ORB (according to its web pages). Visibroker uses JDK 1.1 on all major platforms, making it just as platform independent as Java. The product documentation can be found at: http://www.inprise.com/techpubs/visibroker
VisiBroker for Java is embedded into Netscape Communicator client software, allowing Java applets to interoperate with object servers. Both VisiBroker for Java and VisiBroker for C++ have been embedded in Netscape Enterprise Server 3.0, the cornerstone of Netscape SuiteSpot, Netscape's integrated suite of server software. Visigenic's Visibroker is written entirely in Java, is integrated into all of Netscape's products, boasts the OSAgent directory naming service, and, finally, also has the Caffeine Java-to-Java object interface, which obviates the need for an IDL in Java client-server applications.
There are, as on the Iona website, many success stories supporting the use of Visibroker. A couple of the more interesting include Oracle and Silicon Graphics. Oracle Corporation uses VisiBroker for Java ORB technology to provide developers with a rich development environment for client communication within the Network Computing Architecture (NCA). Silicon Graphics is integrating VisiBroker for Java and VisiBroker for C++ ORB technology into IRIX, the Silicon Graphics® multi-threaded UNIX® system-based 64-bit operating environment. SGI is the first major hardware manufacturer to provide integrated operating system support for IIOP and CORBA. Other major VisiBroker licensees include: Novera, Business Objects, Bluestone, Cincom, Gemstone, Actra (acquired by Netscape in November, 1997), BBN Technologies, Trilogy, Hummingbird and Scopus.
CORBA Architecture Overview
Figure 6. CORBA Architecture
The Common Object Request Broker Architecture is a proposal developed by the Object Management Group (OMG), which consists of over 800 member companies. What CORBA does is enable legacy systems and databases to interoperate over a common object bus. The way it does this is by defining an Interface Definition Language (IDL) through which all apps and objects may talk to each other. The actual ORBs also provide services for creating and deleting objects, naming services, a means of storing persistent state and externalizing states, and means of creating various inter-object relationships. With an ORB for managing objects, you are given the flexibility to create objects which are transactional, secure, lockable and state persistent. With CORBA's directory services, your objects may reside anywhere within the network. And, finally, when you throw Java into the mix, you get natural platform independence and extensibility to the entire Internet via the Internet Inter-ORB Protocol (IIOP).
Some of the benefits of CORBA are presented below.
IDL, The Interface Definition Language
The IDL is a central part of the CORBA specification. IDL is used to express the contracts between objects. It is not a full-blown programming language: it does not include support for iterators or flow control.
The IDL compiler takes the IDL description and generates stubs for it in the desired programming language. This is similar to what the RPCGEN program does, except that the IDL compiler can generate stubs for multiple languages.
The IDL permits the definition of the following:
GIOP, The General Inter-ORB Protocol
Interoperability between ORBs is a very important issue when building very large scale CORBA applications.
Interoperability has many aspects one of them being network interoperability.
In order for two ORBs to communicate, they need to be able to recognize each other's messages.
The GIOP provides a framework for this by establishing a message layout and specifying message types.
GIOP defines 7 message types and layouts: Request, Reply, Cancel Request, Locate Request, Locate Reply, Close Connection, Message Error.
Each message starts with a header containing the message type. Each message type is defined using an IDL description; because of the Common Data Representation (CDR) standard, this gives the layout of the messages.
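As an illustration, the fixed GIOP 1.0 header described above (magic "GIOP", protocol version, byte-order flag, message type, body size) can be sketched as a simple 12-byte encoder. This is a simplified sketch: a real ORB would also honor the byte-order flag when encoding the size field, which we ignore here.

```java
import java.nio.ByteBuffer;

// Illustrative sketch of a GIOP 1.0 message header (12 bytes):
// 4-byte magic "GIOP", version major/minor, byte-order flag,
// message type, and the size of the message body that follows.
public class GiopHeader {
    // The seven GIOP 1.0 message types, in their on-the-wire order.
    enum MsgType { REQUEST, REPLY, CANCEL_REQUEST, LOCATE_REQUEST,
                   LOCATE_REPLY, CLOSE_CONNECTION, MESSAGE_ERROR }

    static byte[] encode(MsgType type, int bodySize, boolean littleEndian) {
        ByteBuffer buf = ByteBuffer.allocate(12);
        buf.put(new byte[] {'G', 'I', 'O', 'P'}); // magic
        buf.put((byte) 1).put((byte) 0);          // GIOP version 1.0
        buf.put((byte) (littleEndian ? 1 : 0));   // byte-order flag
        buf.put((byte) type.ordinal());           // message type
        buf.putInt(bodySize);                     // body size (big-endian here)
        return buf.array();
    }

    public static void main(String[] args) {
        byte[] h = encode(MsgType.REQUEST, 128, false);
        System.out.println(h.length + " bytes, type=" + h[7]); // 12 bytes, type=0
    }
}
```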
IIOP, The Internet Inter-ORB Protocol
The IIOP maps the GIOP onto a TCP/IP session.
It contains host and port keys that are the ones used by TCP/IP for addressing, and an object key that extends the addressing level down to an object.
Static and dynamic method invocations (DII)
The client stubs are generated by the IDL compiler from the IDL description. Those stubs are linked into the client program, and a static method invocation looks just like a normal function call in the program.
With the Dynamic Invocation Interface, by contrast, requests are constructed at run time. This enables using new interfaces that were not present at compile time (not possible with static invocation).
There are a few steps involved in a dynamic method invocation: get an interface, find out the operations supported by the object, create a request object for the required operation, create the arguments for the invocation, and finally perform the invocation.
The Interface Repository is used to keep the interfaces in the system, so that objects can discover servers at run time and perform dynamic method invocations on them.
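These run-time steps have a close analogue in Java's reflection API, which we use below purely to illustrate the idea of building an invocation at run time. This is not CORBA's DII API, and the `Clock` class is a made-up stand-in for a server object whose interface is discovered dynamically:

```java
import java.lang.reflect.Method;

// Illustration of dynamic invocation: "Clock" stands in for a server
// object whose interface was not known at compile time.
public class DynamicInvoke {
    public static class Clock {
        public String tick(String who) { return "tick for " + who; }
    }

    public static void main(String[] args) throws Exception {
        Object server = new Clock();                    // 1. obtain an object
        Method op = server.getClass()                   // 2. discover the operation
                          .getMethod("tick", String.class);
        Object[] requestArgs = { "bb2000" };            // 3. build the arguments
        Object result = op.invoke(server, requestArgs); // 4. perform the invocation
        System.out.println(result); // prints: tick for bb2000
    }
}
```

In CORBA the role played here by `getClass()`/`getMethod()` is played by the Interface Repository, which the client queries to construct a Request object.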
Built-in security and transactions
CORBA services exist to enable transparent transaction processing and security services.
Coexistence with Legacy Systems
CORBA can coexist with legacy systems; moreover, by defining an IDL interface to them, they can be used as CORBA objects.
Avoids the CGI bottleneck
With the Perl/CGI method, a call to the HTTPD must start a new CGI process every time a call is made, and it must run through the same common gateway interface. With the ORB, an already instantiated object may handle the call independently, or spawn off a new object.
Scalable server-to-server infrastructure
Given the IIOP and GIOP, objects may be distributed easily across the network. It is the concept of distributed object design which is inherently scalable.
Figure 7. CORBA Evolution
As can be seen from the table above, CORBA is an evolving specification which is increasingly headed towards business process re-engineering services, mobile agents and workflow support. Although version 3.0 is not yet implemented by any of the above ORB vendors, we are continuing on the assumption that it will be available before 2000, or that alternative means may be found to provide equivalent functionality.
Visigenic Visibroker and Caffeine
As mentioned above, one of the primary attractions of Visibroker is its implementation of Caffeine. The following graph depicts the development stages for implementing an object into the ORB infrastructure.
Figure 8. Caffeine vs. IDL
The Caffeine development process starts with a Java interface that you declare to be remote by extending it, either directly or indirectly, from CORBA.Object. You must compile your interfaces using javac and then run the output through Java2IIOP, a bytecode postprocessor that generates the CORBA stubs and skeletons. With Caffeine, a Java programmer never has to look at the CORBA Interface Definition Language (IDL). Java remote method invocation (RMI) uses a process similar to Caffeine to define remote interfaces. As part of the great marriage, Caffeine and RMI may soon adopt the same APIs as well as a common approach for generating CORBA stubs and skeletons from within Java.
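The first step of this flow might look as follows. Note this is only a sketch: `SpiceJob` and `SpiceJobImpl` are hypothetical names, and `CorbaObject` is a local stand-in for `org.omg.CORBA.Object` so that the fragment is self-contained; in a real Visibroker project you would extend the CORBA interface directly and post-process the bytecode with Java2IIOP.

```java
// Sketch of the Caffeine development flow described above.
public class CaffeineSketch {
    interface CorbaObject { } // local stand-in for org.omg.CORBA.Object

    // Declaring the interface "remote" by extension -- no IDL file needed.
    interface SpiceJob extends CorbaObject {
        String submit(String netlist);
    }

    // A server-side implementation; Java2IIOP would generate the IIOP
    // stubs and skeletons from the compiled bytecode of SpiceJob.
    static class SpiceJobImpl implements SpiceJob {
        public String submit(String netlist) {
            return "queued: " + netlist;
        }
    }

    public static void main(String[] args) {
        SpiceJob job = new SpiceJobImpl();
        System.out.println(job.submit("opamp.cir")); // prints: queued: opamp.cir
    }
}
```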
6. Strategy II: Executable Workflows
The executable workflows concept is to create a set of dynamic, interactive design flows which are integrated in BB2000! As will be seen later, CORBA provides various facilities to implement such workflows, and Java provides the platform independence to make global access trivial. As a restatement of the project goals, the executable flows will satisfy the following bullets:
Executable Workflows, A Paradigm for Collaborative Design on the Internet (reviewed paper)
As supporting evidence of the feasibility and importance of executable workflows, this paper by Hemang Lavana, Amit Khetawat, Franc Brglez and Krzsztof Kosminsky is reviewed. The paper is a summary and status report on the Vela Project, which may be found at http://www.cbl.ncsu.edu/vela/
Figure 9. The Vela Project Constellation (borrowed)
The Vela project is a 21-month collaborative effort to develop a multimedia chip. It merges several projects including the WELD project from UC Berkeley and the Ruben Project of North Carolina State University (Raleigh). The project has a wide range of propositions. UC Berkeley has signed up to provide an updated version of WELD technology with a Java-based specification charts editor, Java-based tool design visualization and collaboration, and a Java-based interface to an object-management tool. North Carolina State University will provide a user-configurable, globally distributed collaborative workflow design environment based on Ruben, limited-time licensing access to commercial EDA tools, and peer-reviewed benchmarking tests. The people running the Vela project state that:
"Since August 1, 1997, our lab is a Principal Investigator in a 21-month $1,317,623 DARPA-sponsored national-level collaborative project, "Vela Project: Globally Distributed Microsystem Design -Proof-of-Concept"
Basically, the Ruben Project implements a workflow as an executable directed hypergraph that is converted into a canonical executable Petri net (program nodes -> transitions). The system uses Tcl/Tk, probably because the idea's inception predates the advent of Java. The stated goal of Vela, as described at the project's World Wide Web site, is proof of concept for a globally distributed processor design. Tools, libraries and personnel will be located across the United States. The specification of Ruben permits workflow composition and reconfiguration while accessing and executing programs, data, and computing resources across the Internet. It allows real-time communications among team members, and permits interaction between any team member and any object in the workflow.
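The executable-hypergraph idea can be sketched in Java as a toy dependency graph in which a node fires once all of its inputs have completed, loosely mirroring Petri net transitions. Node names here are hypothetical, and the real Ruben system is Tcl/Tk-based:

```java
import java.util.*;

// Toy sketch of an executable workflow as a directed graph: a node
// ("transition") fires once all of its input nodes have completed.
// Assumes an acyclic graph; a real system would also run the tool
// attached to each node rather than just record the firing order.
public class Workflow {
    private final Map<String, List<String>> deps = new LinkedHashMap<>();

    void node(String name, String... inputs) {
        deps.put(name, Arrays.asList(inputs));
    }

    // Fire nodes in dependency order and return the execution trace.
    List<String> run() {
        List<String> done = new ArrayList<>();
        while (done.size() < deps.size()) {
            for (Map.Entry<String, List<String>> e : deps.entrySet()) {
                if (!done.contains(e.getKey()) && done.containsAll(e.getValue())) {
                    done.add(e.getKey()); // transition fires
                }
            }
        }
        return done;
    }

    public static void main(String[] args) {
        Workflow w = new Workflow();
        w.node("rtl");
        w.node("synthesis", "rtl");
        w.node("spice", "rtl");
        w.node("layout", "synthesis", "spice");
        System.out.println(w.run()); // prints: [rtl, synthesis, spice, layout]
    }
}
```

A Petri-net formulation would additionally model tokens on the arcs, which is what lets Ruben express cyclical, re-entrant design flows.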
The WELD Project's Distributed Tool Flow Manager is a Java-based application that allows users to flexibly choose network tools, design workflow and configure servers to meet application-specific needs. This capability is of great value, especially to EDA applications and processes.
The overall Vela Project has three main objectives which are very much in line with BB2000:
1. Demonstrate, using the most up-to-date Internet-based technologies, the feasibility of a large-scale collaborative and distributed design effort for electronic systems with tools, libraries and personnel residing across the USA.
2. Identify challenges and opportunities for the next generation of design technology, given the rapid pace of advances in semiconductor technologies and the emergence of Internet-II.
3. Provide an effective technology transfer paradigm for DoD missions that will increasingly rely on COTS technologies in a distributed design environment.
The following text is included verbatim, as its specification of the architecture of the WELD environment is so similar to what we have independently conceived that we feel it is very important that the reader be aware of the consensus. And, besides, we are not so much interested in writing the paper as we are in educating the engineers as to the viability of this project.
"At the system level,
the Distributed Tool Flow Manager ties
together the whole WELD environment by
Package, Client-Server Communication Protocol, Client-Database
that are encapsulated by the Server Wrapper register with the Registry
Design Flow Components
As mentioned in section 1, the design flow of a typical DSP project traverses many stages, many of them cyclical, and nearly all of them co-dependent for convergence to optimal solutions. For example, in the list below, Timing is dependent on the logic synthesis, placement and routing stages (and all stages to some degree).
Primary Design Flows
As can be seen from the cursory list below, there are quite a few design processes and flows in the Burr-Brown repertoire. Many of these overlap in their use of various tools and verification cycles. But each is critically dependent on disparate libraries and process documentation. Thus, a few generic design flows may be developed, with simple configurations of the executable design flow to point to the correct libraries and documentation...assuming a consistent and controlled standard library system is maintained.
General CMOS DSP/ADC/DAC Design Flow
TSMC Design Flow
OKI2UM Design Flow
MOSIS Design Flow
General BiCMOS Design Flow
CBC10 Design Flow
ATT/CBIC Design Flow
General Bipolar Design Flow
P43X, P44X, P45X Design Flow
CDC Timing Driven Design Flow
LVS/LPE/DRC Design Verification Flow
EDA Live Hooks to Design Flows
In order to provide a human interface to these executable workflows, the following phases are proposed for interaction of the EDA engineers with the project engineers. All of these stages will need to be supported by the BB2000 environment.
Phase I: CAE/CAD at Topology
Provide Library Control List for relevant process
Provide and demonstrate Executable Design Flow for Tools/Libraries available for process
Provide License Table and Tool Status Table for above
Provide Library Catalog (Cell/Device, Models, Symbols...)
Brief Engineer on availability of above and relevant docs on CAE-Web
Phase II: CAE/CAD at
Design Review (Pre-Layout)
Confer with Layout/Design Engineer on LVS/LPE/DRC programs available, use, limitations.
Review new Devices/Cells, symbols, models introduced by Engineer.
Phase III: CAE/CAD
Post-Layout Wrap Up
Review Tools/Design Flow.
Update Libs with new Devices/Cells/Symbols/Mods
Peer-check (limited) the revised libraries according to the Lib Check-Off list.
7. Strategy III: Mobile Agents and CORBA
The concept of Intelligent Agents comes from the definition of an Agent as any object with internalized state and state persistence. The attractions of Mobile Agents include: they enable easy integration of various control levels (hardware load balancing, software allocation), they are amenable to network routing and QoS, and they naturally implement Neural Network and Genetic Algorithm optimization paradigms. A few of the papers which discuss these applications will be reviewed in this section. We will also take a look at some of the Java and CORBA facilities provided for enabling mobile agents.
We are looking at agents as a means of optimizing the DDE environment on many interdependent levels. At the lowest level, we wish to use agents for network and CPU quality of service negotiation. On the next level we may employ agents in load distribution and balancing. Similarly, the distributed simulation environment may employ agents in a market-based economy of selling CPU agents and bidding job-request agents. But, in particular, we see a natural affinity between the agent paradigm and the concept of dynamic executable workflows. This concept of layering agents is somewhat in line with the OSI concept of a network protocol stack, wherein functionality is abstracted into various layers, making the management of any particular layer independent of the other layers. A hypothetical layered agent protocol stack might be constructed as follows:
Layered Agents Protocol Stack
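Such a stack might be organized as in the hypothetical Java sketch below: each layer either handles a request or delegates it downward, mirroring the protocol-stack separation of concerns described above. The layer names and the request-tagging convention are our own illustrative assumptions, not part of any existing agent framework.

```java
// Hypothetical layered-agent stack: each layer handles what it can and
// delegates the rest downward. All names here are illustrative.
abstract class AgentLayer {
    private final AgentLayer lower;          // next layer down, or null at the bottom
    AgentLayer(AgentLayer lower) { this.lower = lower; }

    // Returns a human-readable disposition of the request.
    String handle(String request) {
        if (canHandle(request)) return name() + " handled '" + request + "'";
        if (lower != null) return lower.handle(request);
        return "unhandled: " + request;
    }
    abstract boolean canHandle(String request);
    abstract String name();
}

class QosLayer extends AgentLayer {
    QosLayer(AgentLayer lower) { super(lower); }
    boolean canHandle(String r) { return r.startsWith("qos:"); }
    String name() { return "QoS negotiation layer"; }
}

class LoadBalanceLayer extends AgentLayer {
    LoadBalanceLayer(AgentLayer lower) { super(lower); }
    boolean canHandle(String r) { return r.startsWith("job:"); }
    String name() { return "load-balancing layer"; }
}

class WorkflowLayer extends AgentLayer {
    WorkflowLayer(AgentLayer lower) { super(lower); }
    boolean canHandle(String r) { return r.startsWith("flow:"); }
    String name() { return "workflow layer"; }
}

class AgentStack {
    // Builds workflow -> load-balancing -> QoS, top to bottom.
    static AgentLayer build() {
        return new WorkflowLayer(new LoadBalanceLayer(new QosLayer(null)));
    }
}
```

The point of the delegation chain is the one made in the text: the workflow layer never needs to know how QoS negotiation is implemented beneath it.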
Mobile Agents for Workflows
Given that the design environment may be highly distributed over the Internet, and that there may be numerous nodes with varying degrees of redundancy in the services they provide, allowing mobile agents engenders a natural partitioning of the network space in which agents can dynamically find the nearest and least used resources. This concept is based upon the behaviours of competitive market systems and ecosystems. It is suggested that if the agents may adapt to their environment, and must compete for resources, then a decentralized, emergent optimization of the global use of the resources will occur. This paradigm will also allow quick adaptation to the dynamic environments typically found on networks and will provide a robust, fault-tolerant system not available in more rigid design environments. There are a few caveats, one being that the system may display unpredictable behavior in untested conditions, which is only to be expected, since one cannot test for every unexpected condition. Examples of unpredictable behavior are feedback oscillations and perhaps deadlocks. But these conditions may be provided for by building into the agents certain behaviors which minimize them.
Figure 10. Agent Migrations in a Distributed Simulation Environment
Another argument for employing mobile agents is depicted by the figure above. On the left we have an arbitrary design flow, and on the right are various hosts which harbor the data objects and executables required in the traversal of the flow. As can be seen, there is likely to be some redundancy in objects between the hosts, and it is unlikely that all of the objects will reside on one host. Thus, either the executable flow must have built into it some complexity for finding the best hosts and objects, or we can delegate this work to a mobile agent system. In the right panel we see a possible path a mobile agent may take in traversing the flow.
Challenger: A Multi-Agent system for Distributed Resource Allocation
This paper, by Anthony Chavez, Alexandros Moukas and Pattie Maes of the Autonomous Agents Group, MIT Media Laboratory, provides a strong example of how agents may be employed in a market-based system for resource management. In this application, agents individually manage local resources by acting as buyers and sellers in a marketplace. Through this model, they may negotiate to minimize mean flow time, maximize CPU use and minimize response time. There are several types of agents, including Job Agents, which submit jobs to the market for bidding, and CPU Agents, which, if idle, submit bids for jobs in the marketplace. After a set bidding period, a job is given to the agent with the lowest estimated completion time. Although the paper does not mention it, it is feasible to allow the agents to adapt their predictions by rewarding or fining agents according to their performance on the bid. The system is purported to be robust and highly adaptive. The decentralized nature allows local interactions to induce coherent global behaviors.
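The bidding mechanic just described can be sketched in a few lines of Java. This is an illustrative toy, not the Challenger implementation: CPU agents respond to a posted job with estimated completion times, and after the bidding window the lowest estimate wins.

```java
import java.util.*;

// Toy market sketch of the Challenger mechanic: idle CPU agents bid an
// estimated completion time for a job; the lowest estimate is awarded the job.
class CpuBid {
    final String host;
    final double estimatedSeconds;
    CpuBid(String host, double estimatedSeconds) {
        this.host = host;
        this.estimatedSeconds = estimatedSeconds;
    }
}

class JobMarket {
    private final List<CpuBid> bids = new ArrayList<>();

    void submitBid(CpuBid bid) { bids.add(bid); }

    // After the bidding period, awards the job to the bidder with the
    // lowest estimated completion time (null if nobody bid).
    String award() {
        CpuBid best = null;
        for (CpuBid b : bids)
            if (best == null || b.estimatedSeconds < best.estimatedSeconds) best = b;
        return best == null ? null : best.host;
    }
}
```

The reward/fine adaptation suggested above would amount to each CPU agent adjusting the bias of its estimate after comparing actual to bid completion time.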
An Agent-Based Approach for Quality of Service Negotiation and Management in Distributed Multimedia Systems
Another paper reviewed, by Luiz A. G. Oliveira, Paulo C. Oliveira and Eleri Cardozo of the State University of Campinas, Brazil, deals with the employment of mobile agents for QoS management. The interest in this model stems from the concept that the QoS management of network loads may be mapped similarly to the QoS provided by CPUs and applications (by applying a graph-dual transformation of the network and its nodes). This system is rather complex in its employment of multiple classes of fixed and mobile agents. There are User Agents which negotiate for resolution, distortion, synchronization and interactivity levels. Likewise, the System Agents negotiate transmit time, delay jitter, BER and PER. In this system there is a fixed-agent Agency placed at each node which processes mobile agents and holds contracts between agents. The various fixed agents include:
Applications Agent: Acts as a proxy between applications and the Agency
QoS Mapper Agent: Maps user parameters to system QoS parameters (uses AI techniques)
Resource Estimator Agent: Maps system parameters to actual resources
Resource Manager Agent: Manages resources at its node (OS)
Local Negotiator Agent: Works between the above two agents
QoS Monitor Agent: Verifies commitment to contracts
QoS Adaptor Agent: Renegotiates when a parameter defaults
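To make the Mapper and Estimator roles concrete, the following hedged Java sketch maps a user-level video request to a raw bandwidth figure and checks it against link capacity. The bits-per-pixel model is a deliberate back-of-envelope simplification, not the AI technique the paper alludes to.

```java
// Toy illustration of the QoS Mapper / Resource Estimator division of labor:
// user-level parameters (resolution, depth, frame rate) are mapped to a
// system-level bandwidth estimate, which is then checked against capacity.
class QosMapper {
    // Required raw bandwidth in bits per second for uncompressed video:
    // width * height * bitsPerPixel * framesPerSecond.
    static long requiredBps(int width, int height, int bitsPerPixel, int fps) {
        return (long) width * height * bitsPerPixel * fps;
    }

    // A Resource Estimator / Local Negotiator pair would then decide
    // admissibility against the capacity its node actually has.
    static boolean admissible(long requiredBps, long linkCapacityBps) {
        return requiredBps <= linkCapacityBps;
    }
}
```

A QoS Adaptor agent would re-enter this calculation with degraded user parameters (lower resolution or frame rate) whenever the contract defaults.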
The Mobile Agents basically act as contract (re)negotiators and monitors.
The stated attractions of Agent-Based Systems for this project include:
Mobile Agents: Service Location Protocol (SLP), RFC 2165
An article in the May 1998 issue of BYTE magazine touts a new protocol developed by James Kempf and Charles Perkins of Sun Microsystems' Technology Development Group. SLP is a TCP/IP-based protocol which assists in the discovery of web servers, printers, fax machines and mail servers by employing User Agents, which search the network for resources as needed, and Service Agents, which a device emits to declare its existence on the network. There are also Directory Agents which serve as intermediaries (traders) between the User and Service Agents. To enable accounting and the non-sharing of devices between departments, the concept of Scopes is implemented, segmenting the network into classes. An example of an SLP address of a printer might be:
There are also Proxy Agents which discover Directory Agents on startup via static configuration, DHCP option 78 or a multicast service request. Security is provided by the IP Encapsulating Security Payload (ESP), RFC 1827.
The interest in this article is that this basic SLP protocol may also be used to find programs on the Internet. Instead of limiting the devices to printers and such, it may be expanded to include various installations of applications such as logic synthesizers or layout generators. This protocol will need further study to compare its merits and interoperability with respect to the CORBA Dynamic Invocation Interface.
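To illustrate the general shape of such service addresses, the sketch below parses a URL of the form `service:<type>://<address>`, following RFC 2165 conventions. The concrete printer URL in the test is illustrative, not taken from the RFC.

```java
// Minimal sketch of splitting an SLP-style service URL,
// "service:<type>://<address>", into its service type and address parts.
class SlpUrl {
    final String serviceType;
    final String address;

    SlpUrl(String serviceType, String address) {
        this.serviceType = serviceType;
        this.address = address;
    }

    static SlpUrl parse(String url) {
        if (!url.startsWith("service:"))
            throw new IllegalArgumentException("not a service URL: " + url);
        String rest = url.substring("service:".length());
        int sep = rest.indexOf("://");
        if (sep < 0)
            throw new IllegalArgumentException("missing address part: " + url);
        return new SlpUrl(rest.substring(0, sep), rest.substring(sep + 3));
    }
}
```

An EDA adaptation would simply register service types like a logic synthesizer or layout generator instead of a printer, which is the extension suggested above.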
Finding a persistable, migrating agent environment
The following sections take a look at a primary concern with Mobile Agents, which is the means by which mobility and data persistence are supported. Of interest is the CORBA Persistent Object Service (POS), which provides a means of storing an object's state indefinitely; primarily, it specifies the interface to an RDBMS. Also, the CORBA Externalization Service provides a means of externalizing the state of an object. There are several CORBA facilities we will look at here, including the User, Task and Agent facilities. We will look a little at the IBM Aglets SDK and its Tahiti architecture, and finally an overview will be given of the Mobile Agents Service Location Protocol. It is of some interest that Francis Chan of the WELD team is proposing a Java Client Persistent Object Management infrastructure "that allows not just dynamic interaction, but one that utilizes a data-backend, such as a file server or a database, to support persistent objects. The ability to manage objects across the network (WAN or Internet) is of paramount importance especially in applications such as web-based electronic design." Given their experience, this quotation is very heartening evidence that we are on the same track.
Mobile Agents & Java
Mobile Agents here are defined as encapsulations of code, data, and execution context that are able to migrate autonomously and purposefully within computer networks during their execution. Some properties of concern are:
Java Bean Persistence
Java Beans are in essence agents, but they are crippled without an agent run-time environment to accept them on each server visited. Some points to consider in the use of Java Beans include:
But altogether, it is seen that Java Beans are just beans; the means by which mobility and persistence are provided are the same as those provided by the CORBA facilities and agent run-time environments.
CORBA Persistent Object Service (POS)
CORBA provides a specification for a Persistent Object Service with the following properties:
It is seen that this will provide a good foundation for the mobile object manager. But it needs much more development and consideration to fully employ.
CORBA Externalization Service
The CORBA Externalization Service enables one to stream out an object's state. This is basically the same as Java object externalization. There are several interfaces: Stream, Stream Factory, and File Stream Factory. The Compound Services Interface includes services for controlling nodes, relationships, roles, and a propagation criteria factory.
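For comparison, the analogous mechanism in standard Java is the java.io.Externalizable interface, in which an object streams its own state out and reads it back. The DesignCell class below is a made-up example used only to show the round trip.

```java
import java.io.*;

// DesignCell is a hypothetical example class; only the Externalizable
// mechanics are the point: the object writes its own fields to a stream
// and reconstructs itself from that stream later.
class DesignCell implements Externalizable {
    String name;
    int pinCount;

    public DesignCell() {}                     // required no-arg constructor
    DesignCell(String name, int pinCount) { this.name = name; this.pinCount = pinCount; }

    public void writeExternal(ObjectOutput out) throws IOException {
        out.writeUTF(name);
        out.writeInt(pinCount);
    }

    public void readExternal(ObjectInput in) throws IOException {
        name = in.readUTF();
        pinCount = in.readInt();
    }

    // Streams the cell out to bytes and reads it back into a fresh object.
    static DesignCell roundTrip(DesignCell cell) {
        try {
            ByteArrayOutputStream bytes = new ByteArrayOutputStream();
            ObjectOutputStream out = new ObjectOutputStream(bytes);
            cell.writeExternal(out);
            out.flush();
            ObjectInputStream in = new ObjectInputStream(
                    new ByteArrayInputStream(bytes.toByteArray()));
            DesignCell copy = new DesignCell();
            copy.readExternal(in);
            return copy;
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }
}
```

A mobile-agent runtime does essentially this round trip over a network socket rather than a byte array.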
CORBA provides multiple facilities which may be classified as Vertical (application specific) and Horizontal (application generic). The facilities we are interested in include:
User Interface Facility
The User Interface Facility provides means for managing the rendering and appearance of outputs. It provides for compound presentation management, including:
Task Management Facility
The Task Management Facility is basically a workflow definition (with means for hierarchical, flat, and inheritable tasks). This includes operations for Agents, which may be either static or mobile. Various Function, Rule, Task, and Information Objects are defined, which are the target objects of tasks.
The agent facility provides means for:
But it does not seem that the ORB vendors fully support this facility yet... probably because it is a 3.0 specification.
IBM Aglets SDK, ATP
The primary agent development environment reviewed is IBM's Aglets Software Development Kit. This system relies on the Tahiti aglet server, which provides an agletsd daemon and multiple preferences on the daemon's behavior. It provides a scripting system for the control of agents called the Agent Transfer Protocol (ATP). ATP messages are as follows:
Figure 11. The Aglets runtime environment
When an aglet is dispatched, cloned or deactivated, it is marshaled into a bit-stream, then unmarshaled later. Aglets uses Java Object Serialization to marshal and unmarshal the state of agents, and tries to marshal all the objects that are reachable from the aglet object.
The IBM system is free, well developed and bound to find much acceptance. But it is not well tied into any CORBA environment, as far as we can tell; it will take some hacking to make it co-exist with the ORB.
A more viable option is provided by ObjectSpace's Voyager http://www.objectspace.com/voyager/ .
Voyager is a 100% Java agent-enhanced Object Request Broker which combines the power of mobile autonomous agents with remote method invocation and complete CORBA support, and comes complete with distributed services such as directory, persistence, and publish/subscribe multicast.
Voyager claims to have been designed from the start to allow full object mobility. It provides a regular message syntax to construct remote objects, send them messages, and move them between programs. The API enables one to take any existing Java class, even without access to the source code, and use it as a CORBA server, automatically creating CORBA IDL files from Java. One can also process existing IDL for CORBA services automatically to create Java interfaces, and then use these interfaces. The version 2 beta 1 includes CORBA support, Out and Inout parameters and various other services. Best of all, one can communicate with CORBA services using natural Java language syntax. The CORBA integration works with leading C++ and Java CORBA ORBs such as Visigenic's VisiBroker.
8. Strategy IV: MultiMedia Systems in EDA Modalities
In researching the Multimedia requirements for collaborative design, the following topics were considered. These topics are primarily included as they were the key points discussed in our ECE678 Integrated Telecommunications Networks course, and the information is derived primarily from the "Networked Multimedia Systems" book.
Java Agents for MultiMedia Systems in EDA (JAMMS-EDA)
Currently, collaborative design is done in person, through email and phone conversations. This section investigates the physiological, technical and philosophical aspects of optimally employing multimedia systems to facilitate Collaborative Electronic Design Automation. The emphasis of this investigation will be placed upon discovering possibilities for employing object oriented design over a distributed environment which is managed by CORBA middleware and with Intelligent Agents as the mediators of Quality of Service, network flow optimization and load distribution. The physiological aspect considers the human-computer interface requirements. The technological view surveys the hardware and network theory which tend to push the multimedia paradigm. The philosophical aspect addresses the question of whether such things as teleconferencing and telecommuting actually improve overall productivity and creativity.
Design Automation has recently become an attractive route of achieving the business goals of design- and production-oriented corporations.
To alleviate the situation the JAMMS-EDA sub-system to BB2000 is proposed and the technical foundations of multimedia systems required to implement it are reviewed in this section. Specifically, the JAMMS-EDA system is expected to provide a Java-based workflow management system which creates an integrated, distributed, standardized and simplified EDA environment with the goal to reduce design cycle times, cut costs and improve design capacity, as delineated above.
With the lofty goals listed in the "Project Objectives" section in mind, the following conducts a survey of some of the highlights of the physiological, technical and philosophical requirements and implications one might wish to be aware of before committing to such a task.
To ask "What is multimedia?" is tantamount to asking "What is the experience of life?". Multimedia is simply the multiplicity of means of transferring information from a source and presenting it to humans in an optimal format for assimilation. That is, we are attempting to map real-world data through a digital computer medium and into a wetware analog medium, which is nothing less than that lump of grey matter between our ears.
Thus, since we are attempting to pump a maximal amount of information into this wetware, we would like to employ as many of its sensory inputs as possible. Moreover, we must take into account the architecture and behaviour of the brain and mind in order to present the data in a form which induces a maximal learning and processing rate. In this vein, the following are some things to consider in the physiology of mind-matter transfer:
With these fundamental physiological properties in mind, one can design multimedia systems which acquire, process, transmit and present multimedia data in a format targeted for maximal immersion of the wetware recipients. As one can see, it is not necessary to transmit 100% of the data, as the human sensory modalities are actually rather dull compared with electrosensory devices. In fact, given the pattern-completion nature of human associative neural networks, we know that not even all of the humanly detectable data need be available for the subject to acquire the (usually context sensitive) meaning in the data. In general, each medium will require a different level of data compression or extrapolation, and each recipient subject will desire varying levels of QoS (Quality of Service). This will likewise depend on the network transmission medium and the ability of the software system to handle all of the various possibilities for data presentation and management.
As will be seen below, there is a multiplicity of network protocols and computing platforms over which multimedia data is transmitted. Also, in the typical CAD environment, there will be many end-users simultaneously executing programs on remote machines and transferring data to each other over the network. In this complex and turbulent environment, to simultaneously provide optimal QoS and optimal resource allocation will require a highly adaptive and intelligent management paradigm. It is in this light that we will investigate the use of Intelligent Agents over a CORBA managed Distributed Design Environment.
Data Networks and Multimedia Systems
As anyone who has surfed the net is well aware, there are some very tangible limitations on network capacity, and this performance varies quite erratically. The following annotates some of the more popular parameters and relations which govern the transmission of multimedia over data networks, and which must be taken into consideration when planning a large, distributed multimedia system.
Data Acquisition Systems
Data are acquired through analog devices such as microphones, cameras, video cameras and any other devices which sense analog physical properties and generate a related electrical signal. The point here is that such analog sensing equipment requires extremely complex opamps (operational amplifiers), ADCs (analog-to-digital converters), and other signal conditioning and filtering circuits. One of the world leaders in this realm of design is Burr-Brown Corporation, located in Tucson. A few of the more notable chipsets include:
Delta-Sigma ADC: This is a very popular method of quantizing an analog signal by oversampling (sampling at a rate many times the Nyquist rate). The digitization of the analog signal from its sampling (time or space discretization) is followed by its quantization (value discretization) and coding.
xDSL: This technology promises to provide a full-duplex connection of 768 Kbps using POTS (the plain old telephone system). Currently, Burr-Brown Corporation is the world's leading seller of the chips used in this technology.
CCDs: Charge-Coupled Devices, which generate a small current when a photon strikes the photodiode. These devices are used in digital cameras and video recorders.
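The sampling, quantization and coding chain described above can be illustrated with a toy uniform quantizer. This is plain quantization only; a real delta-sigma converter adds the noise-shaping feedback loop that this sketch omits.

```java
// Toy uniform quantizer illustrating the sampling -> quantization -> coding
// chain, plus the Nyquist/oversampling relation. Illustrative only.
class Quantizer {
    // Maps a sample in [-1.0, 1.0] to a signed integer code with the given
    // number of bits, by rounding to the nearest quantization level.
    static int quantize(double sample, int bits) {
        int levels = 1 << (bits - 1);            // e.g. 8 bits -> 128 positive steps
        double clamped = Math.max(-1.0, Math.min(1.0, sample));
        return (int) Math.round(clamped * (levels - 1));
    }

    // Nyquist: the sample rate must be at least twice the signal bandwidth;
    // oversampling by a factor k samples at k times that minimum.
    static double oversampledRate(double bandwidthHz, int factor) {
        return 2.0 * bandwidthHz * factor;
    }
}
```

For a 20 kHz audio band oversampled 4x, `oversampledRate(20000, 4)` gives 160 kHz, which is the sense in which an oversampling converter trades sample rate for resolution.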
This self-referencing, bootstrapping system is particularly intriguing in that Burr-Brown Corp. is developing the DSP chips which will enable the remote and distributed design environment, which in turn will accelerate its ability to build even faster DSP chips... ad infinitum.
The structure of the data employed in multimedia systems not only dictates the necessary protocols for network transmission, but also the compute hardware requirements and data management systems. The following briefly reviews the primary formats encountered and their significant properties:
Data Transmission Networks and Protocols
When combining all of these data formats in a multimedia application, synchronization becomes paramount in defining the network protocol and required QoS.
The synchronization modes are categorized as follows:
Network transmission protocols can be divided into two modes: packet switched and circuit switched.
Packet switching protocols include: Ethernet, Token Ring, FDDI, X.25, Frame Relay and IP.
Circuit Switched networks include POTS (Plain Old Telephone Service), N-ISDN, B-ISDN and ATM.
For multimedia systems, it is widely accepted that ATM provides the optimal medium. However, the lack of deployment of ATM, due to the complexity of its routing protocols, will push its adoption out well into the next millennium. It is in this light that the author will investigate Intelligent Agent based QoS management over IP networks. The assumption will be that network capacities will increase along their current exponential curve and that compression and data integration algorithms will continue to be discovered which provide ATM-equivalent service over the Internet.
For multimedia systems, one is generally inclined to buy the best hardware on the market, and in practice this hardware is practically obsolete before it even hits the desktop. But we will summarize the components here anyway, so that a survey of one's existing hardware may be analyzed with respect to the requirements of the desired multimedia application. In this case, consideration is given to the typical setup on the desk of a design engineer who is looking to upgrade it to the minimal requirements for collaborative design.
x86 CPU: Currently we have a majority of P60s and P100s. A few are equipped with 233 MHz processors.
RAM: The standard configuration has 32 megabytes of DRAM on each machine.
Disks: 2G drives are the standard.
CD-ROM: 24X enables DVD playback.
Displays: Standard SVGA
Needed: Video Recorders, Audio Recorders, Mouse/Pen Recorders
The Marriage of CORBA and Java
Besides this desktop PC configuration, there will also be a need for a client-server architecture, as most of the data being accessed resides on various servers, and the objects that provide the suite of multimedia modalities will also tend to be distributed across servers. In the EDA framework, there will be a mix of dedicated compute servers and semi-open desktop workstations (the person at the desk has highest priority). There will be several large file servers with automatic data mirroring and data striping. In the case of video conferencing, consideration must be given to an MCU (Multipoint Control Unit), which enables multiple video streams to be muxed and demuxed. And, as an aside, the conference rooms should be enhanced with electronic white-boards and computer screen projection.
With such a complex and distributed environment, and if one takes account of the fact that most of these multimedia systems are developed in an object-oriented fashion, it becomes apparent that this whole architecture is in need of a middleware object mediator. Of course, this is my chance to insert the big CORBA acronym which gives the whole project its zest and relevancy!
Some Philosophical/Sociological Considerations
One of the primary objectives of multimedia is to allow the end-user to customize the presentation of the data. Like democracy, a perfect freedom to choose anything and everything tends to encourage individualism. This is perfectly fine except when one is trying to develop a focus in a group or team to solve a specific problem and come up with a cohesive solution. For telecommuters and teleconferencing, multimedia-based project management tools will need to be built which can focus and direct the energies of the end-users.
Human isolationism is another primary factor to consider in assessing the impact of multimedia on society. Although it is apparent that the telephone brings people closer in that they are able to communicate more often, it is also apparent that they tend to visit each other much less often. This situation is likewise exacerbated by the brainrot of the T.V. genre. An interesting convergence of the telephone and computer is the 'chat' sites. Who would have guessed that thousands of people would become addicted to discordant, typically ignorant and rude conversations with people who are totally anonymous? In the basal behaviours of anonymous entities in a memory-free system, we are witnessing a live experiment in the "Prisoner's Dilemma" problem. This is evidenced not only by the behavior of people in chat rooms, but also with email and (most notably) in modern traffic etiquette. In summary, without responsibility for their actions, people tend to become more rude, and their perception of other people's rudeness tends to cause them to reject society.
On a more metaphysical level, one can look at the blurring of the boundaries of reality brought on by multimedia systems. As mentioned in the beginning of this section, humans already rely on illusions of reality. When virtual reality and 3D games come into the equation, and children's brains become highly acclimated to their architectures and behaviors, one can see how a child might grow to be more comfortable in a well structured, visually appealing and predictable 3D/VR world than in the ugly and chaotic worlds of many of our cities.
Further thought needs to be directed to the questions of how multimedia systems affect our personal and global breadth of consciousness, how computer mediated Mind Control, Hypnosis and behavior modification may be employed through such things as frequency coupling to Phase-Locked Loops in Neural Systems, Strange Attractor detection, and the recent seizures brought on by some Japanimations.
Although these allusions to the philosophical ramifications of multimedia may seem tangential to our DDE project, they are merited in light of the fact that we cannot predict much beyond 2000 as to what computing and multimedia capabilities will bring. Perhaps, if we could prophesy these trends, we might opt to embrace a more isolationist environment.
9. Tactics I: Development Schedule
Strategy summary: Putting them all together
The Vela project is a proof-of-concept of very large distributed design workflows and encapsulates all of the same objectives as BB2000. The Challenger project described the agent-based marketplace for load distribution optimization which we will use for our distributed simulation environment. The QoS Agents provide an application management framework which may be integrated with the executable flow agents to guarantee certain levels of response or accuracy. And, as will be further discussed, the CORBA/Java combination provides the fusing technology.
Java tools Development Schedule
The following pages briefly display the various applications which will be integrated into the BB2000 environment. The list below depicts an implementation strategy by which the development of object-oriented sub-components are completed first in order to enable a later mix in workflow engines.
Java EDIF System
Figure 12. Java EDIF System
The design above depicts the basic process through which a design runs, with an emphasis on the verification steps (DRC, LVS, LPE, PRE). The brick wall emphasizes the lack of interoperability between the PC-based tools and the workstation-based tools. EDIF (the Electronic Design Interchange Format) provides a means of transferring schematics between the two realms. Currently, this is accomplished by a stand-alone Java program. The point of this graph is to show that if this flow were turned into an executable flow, the user would be much more likely to traverse it quickly and successfully.
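EDIF netlists are LISP-like nested parenthesized forms; before translating one, a transfer utility would at minimum check that the file is well formed. The toy checker below (illustrative only, and not the stand-alone program mentioned above) verifies balanced parentheses and extracts the top-level keyword.

```java
// Toy sanity checker for EDIF-style S-expression netlists: verifies
// balanced parentheses and pulls out the top-level keyword.
class EdifSniffer {
    // Returns true when every '(' has a matching ')'.
    static boolean balanced(String edif) {
        int depth = 0;
        for (char c : edif.toCharArray()) {
            if (c == '(') depth++;
            else if (c == ')' && --depth < 0) return false;
        }
        return depth == 0;
    }

    // Returns the keyword following the first '(' (e.g. "edif"), or null.
    static String topKeyword(String edif) {
        int open = edif.indexOf('(');
        if (open < 0) return null;
        int i = open + 1;
        while (i < edif.length() && Character.isWhitespace(edif.charAt(i))) i++;
        int start = i;
        while (i < edif.length() && !Character.isWhitespace(edif.charAt(i))
               && edif.charAt(i) != '(' && edif.charAt(i) != ')') i++;
        return start == i ? null : edif.substring(start, i);
    }
}
```

A real translator would of course go on to build the full parse tree, but these two checks are the cheap gate that catches most damaged transfers.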
Java System Configuration Util.
A Java System Configuration Utility will need to be generated with the following functions:
Java Library Browser
Figure 13. Java Library Browser
A Java Library Browser is specified which will enable any engineer inside the firewall to browse any process library and view all of its components online. The above figure depicts how the page will be generated from multiple sources, including text from model files, graphics from layout and schematics, and possibly real-time executables for analysis. Currently a scaled-down version of this is provided as a Perl/CGI program, but it is very limited in its capability and is custom coded. We would like the system to be told where the data is, and then generate the pages on the fly. Such a system could provide considerable utility to engineers in letting them know of all the possible components they have at their disposal.
Java LVS Workflow
Figure 14. Java LVS Workflow
Currently, this flow is posted on the web pages with some rather oblique documentation explaining what each of the nodes above relates to. It is the goal of this project to make this into a dynamic executable flow, tailored to each process technology, so that the engineer need only click on the various stages to get the correct interface programs and related documentation. As can be seen above (in green), there are many nested cycles in this process which may possibly be automated through an agent scripting system.
Figure 15. Java DICE
The Java DICE is an extension of the distributed computing environment (explained below). Basically, the concept here is to extract design information from the layout, and then have the Dice Roller engine generate hundreds of simulation runs to test all the interesting possibilities and corners. This system will provide an unprecedented accuracy in that the statistical models are generated directly from Keithley process data.
Library Management System
In the current system each engineer copies libs, mutates various cells or models, and then after design is done, all good additions get lost. In the planned environment all libraries are objectified. New items may be easily added to object libraries via posting them to 'incoming' directories. The Workflow + Lib Manager will also ensure the designer is using the correct versions. The Lib-Mgr controlled libraries will make it hard for engineers to tamper with standards, but will enable them to create new devices that may quickly be appended to the standards.
Design Archiving System
In the current system, the engineer finishes the design, and then deletes brain cells, and possibly the design too. Or (in 1 case of 20) asks the EDA engineer to copy the design to archives. The planned system will not only archive all of the data needed to reconstruct the design (symbols, schematics, models, stimuli, layout, devices, etc.), but will also be able to provide a record of the steps the design engineer took in the executable workflow system, when and where files were created, and possibly even some documentation.
Java Work-Order Tracking Sys
The current situation consists of: walk to the EDA engineer's cubicle, get in line. The future system: start Java-WOTS, see the graphical queue of requests, get in queue.
By entering queries online, various FAQs may be created and preventive measures taken.
By having the EDA engineer log support requests online, it will be shown that we actually do work, and it will become possible to find the bottleneck problems instead of continually putting out fires.
Java Timing Driven Design Flow
Figure 16. Java Timing Driven Design Flow
The above flow depicts some of the interacting components found in the Central Delay Calculator. Basically, this system enables one to simulate for timing with Verilog, then synthesize the code into logic with Synopsys, then run a Place-and-Route tool, and then extract the incurred parasitics. The parasitics are run through the CDC program, which generates an SDF (standard delay file) that may be imported into the Verilog system or the Synopsys system for enhanced timing analysis. Currently none of this is done because basically no engineers are aware of the many paths they can take through the timing flow.
CMOS Design Flow
Figure 17. CMOS DSP Design Flow
A standard CMOS digital signal processing development flow. This flow will provide the basis for all of the CMOS processes. Each independent flow will have a table of links to corresponding libraries, programs and documentation. Much of the documentation will be the same across different flows, but, to the engineers, it is totally customized and guarantees that they are using the correct library and have the latest documentation, etc.
Worldwide BB2000 EDAORB
Figure 18. The Global EDA ORB
With the IIOP and GIOP capabilities of CORBA, these design flows may be executed by any design house within the boundaries of the Intranet. Currently, each site has its own versions of the libraries, and much of the same work is repeated at each site due to lack of coordination. The only caveat here is that we must have fairly wide network bandwidth to support transfer of some of the libraries. Another problem of concern is that some of the licenses are supposedly constrained to within a 50-mile radius. But this may be circumvented by providing floating licenses at each site. The agents will know to get licenses from the closest available site.
10. Tactics II: A Distributed Simulation Environment
Classification of simulations:
This section considers two methods of simulation, the DEVS and the standard Spice environments.
· Homogeneous simulation: many objects of the same kind, such as finite state machines (for example, PSpice simulations).
· Heterogeneous simulation: complex and heterogeneous simulation engines.
A Java/CORBA and agent-based environment for BB2000
This is an extremely complex and difficult to manage environment which continues to grow.
Figure 19. Distributed Simulation Environment
In general, we are looking at employing an ORB to organize and direct the interaction of the components used in a typical simulation session. In the above graph, the green objects represent simulation packages which contain objects defining 'who' owns the simulation and which program is to be used; the netlist is included, the operating points may be included, and the stimulus data and the actual process device models may be included or referenced. The returned object (brown) contains the same information plus the CSDF results data. As can be seen above, the system is very distributed and can be rather intriguing to organize.
Figure 20. An Agent Architecture for DSE
Using the concepts presented above, we can consider a market-based agent system to coordinate the execution of simulation jobs. In the above figure, the fixed agents include:
· Simulation Server: makes bids for jobs when it has CPU capacity to allocate.
· Simulation Manager: when it has a job to run, it sends a request agent to the trader (Dice Roller).
· Dice Roller: does the actual matching of bids and requests; it also creates multiple corners and distributes them.
· Contract Server: manages the QoS contracts made by the three other fixed agents.
· Factory: creates agents and has knowledge of the implementation repository.
· Run-Time: provides the environment for mobile agents to hop between machines.
Figure 21. A CORBA Architecture for DSE
The above architecture depicts the basic interrelation of the components in the DDE, which are further alluded to below.
The Repository Manager is the central data authority. We need a repository manager to be able to present a uniform interface to the objects in the system.
We can exemplify some of the functions that the Repository Manager needs to fulfill. These functions are not always closely related, but they have interconnections and interdependencies.
· The project archive. The need for a central archive for projects emerges both from the need for the people working on the same project to share the parts of the project they are working on, and from the need to be able to access old projects at any time to extract useful ideas from them, and to be able to resurrect an old project and produce an old product again.
The need to support multiple developers results in a need for configuration management/version control, process management, and problem management software.
· Traditional version control operations refer to check-in, check-out, merging, and branching.
· Process Management is the control of the software development activities.
For example, it might check to ensure that a change request existed and had been approved for fixing, and that the associated design, documentation, and review activities have been completed before allowing the code to be "checked in" again.
While process management and control are necessary for a repeatable, optimized development process, a solid configuration management foundation for that process is essential.
· Problem management may include call tracking, problem tracking, and change management.
It is necessary that all the tools used for a project are kept in the repository, because at some point an old project might be revived, and all the tools that have been used in its design (simulators, layout, etc.) should be available. It is best to have a relation between a project and the tools used to develop it, in order to consistently distribute and use it.
A central goal of the design of the Repository manager is to provide an interface that is independent of the back-ends used to store the data.
The characteristics enumerated for all the components above have been chosen in such a way that they denote the interface for the respective component. For example, the version control system should have the interface: checkin, checkout, merge, and branch.
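As a hedged sketch of this idea (all class and method names here are illustrative, not part of any existing API), the version-control interface could be expressed in Java, with a toy in-memory back-end standing in for CVS or any other tool hidden behind it:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: the Repository Manager's version-control interface
// (checkin, checkout, merge, branch) with a toy in-memory back-end,
// illustrating how CVS or another tool could be hidden behind it.
public class VersionControlSketch {
    public interface VersionControl {
        String checkin(String path, String contents);   // returns revision id
        String checkout(String path, String revision);  // returns contents
        boolean merge(String fromBranch, String toBranch);
        void branch(String name, String fromRevision);
    }

    // Toy back-end: one revision counter, contents keyed by path + revision.
    public static class InMemoryVC implements VersionControl {
        private final Map<String, String> store = new HashMap<>();
        private int rev = 0;

        public String checkin(String path, String contents) {
            String id = "r" + (++rev);
            store.put(path + "@" + id, contents);
            return id;
        }
        public String checkout(String path, String revision) {
            return store.get(path + "@" + revision);
        }
        public boolean merge(String fromBranch, String toBranch) { return true; }
        public void branch(String name, String fromRevision) { }
    }
}
```

Because only the interface is visible to the rest of the environment, the back-end can later be swapped for a real tool without touching the callers.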
Taking this into account, we define a general interface to all the components of the Repository Manager. The objects that implement these systems should implement the corresponding interfaces. The advantage of this approach is that the underlying tools are abstracted, so at any point they could be changed without any change in the design environment (as long as the ones that are plugged back in implement the same interface).
· The Back-end implementation for the Repository Manager
The Repository Manager is an extremely complex object, and its functionality is critical to the whole system, therefore its design should be done extremely carefully. The components need a great degree of interoperability and integration.
We present here a couple of components to be used for this system.
CVS allows multiple developers to checkout and work on the same version of the project and it provides support for merging conflicting changes (in the case when the developers change exactly the same portion of a file).
It has a client-server architecture and can function over very low-bandwidth network connections such as modem or ISDN lines.
A Relational or Object Oriented Database Management System is needed in order to maintain all the disparate data that constitute a project.
The library system for the different tools and the project data files need to be shared by multiple users. The easiest way to share them would be to put them on a file system. A good candidate for this is the CODA distributed file system.
One of its most interesting features is the disconnected operation enabled by its persistent caching. This allows an engineer to start his work at the office on his laptop, use all the tools and libraries he needs, then unplug the laptop, take it home, and continue using it there without noticing that he is not connected to the company's network. His files are automatically synchronized with the ones on the network whenever he reconnects.
The CODA filesystem is available on multiple operating systems, and work is underway to port it to others.
The Project Manager object manages a single project.
It interfaces with the Repository Manager, using the provided checkin/checkout/merge services.
The Project Manager can determine at any point the exact status of each part of a project, and it can direct execution of any part of the project by selecting the appropriate Flow Control object.
The DICE Roller is used to generate subprojects, partitioning the project in logical units taking into consideration a wide array of factors.
Flow Control objects
The Flow Control objects are actual implementations of executable workflows.
Each Flow Control object denotes a certain design process. As presented before, there are many processes in use; they are extremely complex and have very little similarity between them.
The Flow Control objects are the main interface to the user; they are the way the user interacts with the project. A graphical display is presented to the user at any point in the development of the project, giving choices for what he can work on at that point.
The Flow Control objects run the appropriate programs at each phase of the design process.
It will also launch the agents needed for that particular phase.
The Flow Control objects will also monitor the execution status of the programs launched on the local or remote workstations and the status of the agents used.
The implementation of the Flow Control object will be based on a Finite State Machine. Each phase that the design has to go through is represented as a state. The state transitions are the transitions in the workflow diagram; no other transitions are allowed.
This implementation is straightforward and satisfies all the needs of the Flow Control objects. It has been chosen over other implementations, like Petri nets or DEVS objects, because of its simplicity.
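A minimal sketch of this choice (class and phase names are invented for illustration): a Flow Control object as a finite state machine whose only legal transitions are the arcs of the workflow diagram.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Sketch of a Flow Control object as a finite state machine: design phases
// are states, and only the workflow-diagram arcs are legal transitions.
// The phase names used in tests are illustrative, not a real BB2000 flow.
public class FlowControl {
    private final Map<String, Set<String>> arcs = new HashMap<>();
    private String state;

    public FlowControl(String initialPhase) { this.state = initialPhase; }

    /** Declare one arc of the workflow diagram. */
    public void addTransition(String from, String to) {
        arcs.computeIfAbsent(from, k -> new HashSet<>()).add(to);
    }

    /** Advance to the next phase; reject transitions not in the diagram. */
    public void advance(String next) {
        Set<String> legal = arcs.get(state);
        if (legal == null || !legal.contains(next))
            throw new IllegalStateException(state + " -> " + next + " is not in the workflow");
        state = next;
    }

    public String currentPhase() { return state; }
}
```

The point of the FSM restriction is visible in the last line of the usage: an attempt to jump outside the workflow diagram is rejected rather than silently executed.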
The LEGO objects are the general interface to all the tools used during the design phase. They are named like this because they are assembled to form the design workflow executor objects.
Each LEGO object is an interface to one of the components of the design environment: VHDL/Verilog simulator, Pspice, HSpice, LVS, layout engine, DRC, etc.
Each LEGO object knows how to run the component it is responsible for; it can transmit parameters to it, send it the input files, and retrieve the output files.
The Translator Objects are associated with the LEGO objects. They are used to connect the LEGO objects with the environment they run in.
For example, a VHDL simulator running on a PC will find its library path and all libraries in a certain place. The same tool running on a workstation will have its libraries placed elsewhere, and their names might be different, but the content identical. The translator objects are configured so that the tools find the right libraries.
Translator objects are also used to interface components. For example in the EDA design world there are many netlist formats, and different tools use different netlist formats. A translator object needs to be interposed between them to do the necessary file conversions.
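As a sketch of the first use (environment mapping), a Translator object can be little more than a per-platform lookup table. The class name, library name, and paths below are invented for illustration:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of a Translator object that maps a logical library name to the
// platform-specific location a tool expects. All paths are invented.
public class LibraryTranslator {
    private final Map<String, String> table = new HashMap<>();

    public LibraryTranslator(String platform) {
        if (platform.equals("pc")) {
            table.put("cmos_std", "C:\\libs\\cmos_std");
        } else { // assume a Unix workstation otherwise
            table.put("cmos_std", "/usr/local/lib/cmos_std");
        }
    }

    /** Resolve a logical library name to the path the tool should use. */
    public String resolve(String logicalName) {
        String path = table.get(logicalName);
        if (path == null)
            throw new IllegalArgumentException("unknown library " + logicalName);
        return path;
    }
}
```

A netlist-format translator would follow the same shape, with a conversion step instead of a table lookup.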
Resource Monitors are objects that run on each machine in the network and monitor the available resources on that machine.
There is exactly one Resource Monitor object running on each machine. The Resource Monitor object knows what LEGO objects are present on that machine, and it can find out and estimate the execution status of the objects that are executed there.
This way the available resources for that machine are known at any time, so they can be given as answers to queries from the agents or from the Project Manager/DICE Roller.
The user can specify the maximum available resources on his machine; for example, a very busy designer would not want any other processes running on his machine besides the ones he is running, but a person who does technical writing does not need the full power of the machine at all times, so that machine could be used to run some other tasks.
A system administrator can override all these settings.
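The policy above can be sketched as follows (field names and the simple job-count cap are invented for illustration): a Resource Monitor refuses external work beyond a user-set cap, which an administrator may override.

```java
// Sketch of a per-machine Resource Monitor: the user caps how many external
// jobs may run on his machine, and an administrator can override the cap.
// The single-integer "cap" is a deliberate simplification.
public class ResourceMonitor {
    private int userCap;       // max external jobs the user allows
    private int running = 0;   // external jobs currently running

    public ResourceMonitor(int userCap) { this.userCap = userCap; }

    /** Answer a query from an agent or from the Project Manager/DICE Roller. */
    public boolean canAccept() { return running < userCap; }

    public void start() {
        if (!canAccept()) throw new IllegalStateException("machine is busy");
        running++;
    }

    public void finish() { running--; }

    /** Administrator override of the user's setting. */
    public void adminSetCap(int cap) { this.userCap = cap; }
}
```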
Naming Services provide bindings between objects and names in a certain naming context. They are extremely important because they provide an easy way to identify objects, and a lot of other services rely on names supplied by the Naming Service. Interfaces are provided for communication with other legacy systems: Sun NIS, OSF DCE, Internet LDAP.
Life Cycle Services are used for creating, copying, moving, and deleting components in the CORBA environment.
The basic Life Cycle Service comprises:
Factories are objects that can create other objects of different types.
The Factory Finder supports a single operation, find_factories, which takes as a parameter a CORBA Naming Service name and returns the factories able to create the corresponding object.
The Generic Factory offers a primitive Factory interface from which more complex factories can be created.
The Generic Factory supports an operation that returns true if it can create the desired object, and another operation that creates an object.
· Life Cycle Object: it copies, moves, and removes objects.
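A hedged sketch of the Generic Factory shape described above (the real CORBA CosLifeCycle IDL differs in detail; these Java names and the supplier registry are illustrative):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

// Sketch of a Generic Factory: one operation says whether it can create a
// given kind of object, another actually creates it. A registry of
// suppliers stands in for the implementation repository.
public class GenericFactory {
    private final Map<String, Supplier<Object>> registry = new HashMap<>();

    public void register(String kind, Supplier<Object> maker) {
        registry.put(kind, maker);
    }

    /** True if this factory can create the desired object. */
    public boolean supports(String kind) { return registry.containsKey(kind); }

    /** Create an object of the given kind. */
    public Object create(String kind) {
        if (!supports(kind))
            throw new IllegalArgumentException("cannot create " + kind);
        return registry.get(kind).get();
    }
}
```

More specialized factories can then be composed from this primitive interface, as the text describes.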
The Life Cycle Services are vital for manipulating agents, because they provide the infrastructure needed by the agents' mobility. They are also used to make the connections to the LEGO objects.
The Persistent Object Service is a generalized set of service interfaces that provide a variety of different types of persistence services to objects.
It includes a Persistent Object interface, a Persistent Object Manager to manage persistent objects, and a Persistent Data Service that is in general an interface to an OODBMS or RDBMS.
The Persistent Object Service is heavily used by the Repository Manager; all the Repository Manager's services are defined in terms of Persistent Object Services.
Event Services provide the means of communication between two or more entities about the occurrence of a state transition.
Event Services form the basis for transmitting the user's requests to the system; they are used in the implementation of the Flow Control objects.
A user action is transmitted to the Flow Control objects as an event; the termination of the execution of a certain LEGO object, or an error condition, are also transmitted as events.
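The event path described above can be sketched as a simple push-style channel (the real CORBA Event Service is defined in IDL; this plain Java observer stands in for it, and the event strings are invented):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of a push-style event channel: a user action or a LEGO-object
// completion is pushed as an event and delivered to every subscriber,
// such as a Flow Control object. Plain Java stands in for the CORBA
// Event Service; event strings are illustrative.
public class EventChannel {
    public interface Consumer { void push(String event); }

    private final List<Consumer> consumers = new ArrayList<>();

    public void subscribe(Consumer c) { consumers.add(c); }

    /** Deliver one event to all subscribed consumers. */
    public void push(String event) {
        for (Consumer c : consumers) c.push(event);
    }
}
```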
Concurrency Control Service mediates the access of multiple clients to a resource.
Some concurrent operations on the same object might conflict, so access to the object needs to be serialized to avoid this.
The Repository Manager needs these services in order to be able to keep its data consistent.
Transaction Services are used to ensure that a series of operations applied to a certain object leaves it in a consistent state even if an operation fails.
The transaction is transparent to the client; the client only needs to state when the transaction begins and when it ends.
A transaction has some properties:
· Atomic - occurs as a single unit; in case of failure, all its effects are undone
· Consistent - it always leaves the data in a consistent state
· Isolated - the intermediate states are not visible to other transactions
· Durable - the effects are persistent.
The Transaction Services are essential for the interaction with the Repository Manager, especially when very complex operations are done on it, such as merging the work of multiple developers on the same project.
The Relationship Service creates dynamic associations between components.
Externalization Services are used for component data I/O. They are an essential part of the migration of agents.
The Object Query Service provides predicate-based query capabilities for ORBs; it is like a superset of SQL, but for objects. It provides the means to manage the LEGO objects.
The Collection Service provides interfaces to generically create and manage collections of objects. The Flow Control object uses this service to manage the collection of LEGO objects it uses for its tasks.
The Licensing Service provides a method for metering and monitoring component use. Most of the simulation and design tools need licenses to run, and these licenses are limited in number, so developers have to compete for them. The Licensing Service provides the framework to manage these licenses and implement policies for using them.
Security Services are used for authentication, access control lists, and encryption. In our environment the most security-sensitive components are the agents, which have their own authentication and security framework.
Trader Services are a kind of yellow pages where objects publish their services and bid for jobs. The agents use these services in order to be able to run their jobs.
DEVS, the Discrete Event Simulator is a formalism for describing discrete event systems.
Simulating DEVS models is computationally intensive, and this becomes more of a problem as the sizes of the models grow. One way to cope with this problem is to use multiple collaborating computers.
In order to run a DEVS simulation on multiple computers, the models need to be partitioned into parts that run on each computer.
The goals when partitioning are to balance the computational load across the machines and to minimize the communication between them.
The DEVS Architecture:
· States - the set of states that the model can be in at a certain point.
· Inputs - variables whose values are determined outside the model; the model is fed these inputs through an external interface.
· Outputs - variables whose values are determined by the model; the model generates these outputs through an external interface.
· External transition function - a function defined on the cross product of the set of inputs and the set of states; this function is called when an input arrives, and it determines a change of the model state.
· Output function - a function that is called whenever an output is produced.
· Internal transition function - a function that is called after the model has been in a certain state for a period of time; it determines a change of the model state.
Figure 22. DEVS Atomic models
DEVS Atomic models implement the DEVS formalism.
Figure 23. DEVS Coupled Models
DEVS Coupled Models are built by connecting atomic models, with the inputs of some models connected to the outputs of other models. Feedback connections are not allowed.
Figure 24. DEVS Model
Figure 25. DEVS Distributed Simulation System
Figure 26. Using CORBA for DEVS Distributed Simulation System
Issues when using CORBA in the DEVS Distributed Simulation System:
· DEVS is implicitly object-oriented, so it maps very well on the CORBA object model
· Easy to manage
· models don't need any modifications, only the simulator needs to be modified, and the modifications are not very hard to do.
· CORBA has problems transferring huge amounts of data.
PSpice and HSpice simulation engines are based on finite state machines.
A finite state machine has a set of states, a set of inputs, a set of outputs and a set of transition functions that are called when a state transition happens. A state transition happens when a certain input arrives.
A CORBA interface for a finite state machine object is very straightforward to define: it is just a set of inputs and a set of outputs.
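A hedged sketch of that narrow interface in Java (in a real system this would be IDL compiled to stubs; the class name and the two-state toggle machine are purely illustrative): the object exposes only an input operation and an output query.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of a finite state machine behind the kind of narrow interface a
// CORBA IDL definition would describe: feed it inputs, read its outputs.
// The two-state toggle machine is purely illustrative.
public class FsmObject {
    private String state = "OFF";
    private final List<String> outputs = new ArrayList<>();

    /** The single input operation the interface exposes. */
    public void input(String symbol) {
        if (symbol.equals("toggle")) {
            state = state.equals("OFF") ? "ON" : "OFF";
            outputs.add(state); // an output is produced on each transition
        }
    }

    /** The single output operation the interface exposes. */
    public List<String> outputs() { return outputs; }
}
```

Because nothing but inputs and outputs cross the interface, such objects can be placed on any machine behind the ORB, which is what makes the partitioning described below straightforward.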
PSpice simulations for big systems tend to run for a long time and consume a lot of computing power.
If the simulation engine is changed to support CORBA objects, the simulation could easily be partitioned and run on multiple machines in parallel.
11. Assessment: Cost/Benefit Predictions
The following sections will attempt to provide some financial considerations for the software purchases and time expenditures which will be necessary to sustain a concerted effort towards developing BB2000! (Note: the Vela project has a $1.3 million grant!) The actual analysis is not included here, as this is typically considered sensitive information. But it is very relevant to the whole concept of creating an organized and streamlined design environment.
There are, of course, many intangibles which cannot be metered, or at least, not until "BB2000!" is up and running. Likewise, since there is no pre-BB2000! reference point, it will be hard to prove where the design speed-ups and cost reductions come from. In all likelihood, people will absorb the benefits (like we do salary raises), and will only notice the improvements in design environment organization.
Note: the Cadence quote was $0.5 million just for an "assessment" and support! Some of the items to consider in the cost analysis include:
Time to Market Vs Expenditure on EDA
Growth rate of Design Starts vs. Engineers, EDA personnel
Size/Complexity of Designs vs. Compute Power
Industry Competitive Analysis
In order to determine a reasonable rate of EDA expenditure, we might consider our development rates as compared to our primary competitors:
Burr-Brown, Maxim, Linear Tech, ADI
Growth rates comparisons
Product, Technology Forecasts, Comparison
EDA Infrastructure of Burr-Brown, ADI, Linear Tech, Maxim
Cost Projection, ROI
And, in the details, one must consider the tangibles such as licenses and personnel requirements, and the intangibles such as depreciation and obsolescence of the tools.
Depreciation of Tools
Best Case, Worst Case analysis to implement BB2000! environment
12. Conclusion, Next Steps, Over the Edge
The key benefits to be realized by the implementation of BB2000! are summarized below.
With the phenomenal growth of network bandwidth, desktop compute power and multimedia presentation systems, it has recently become feasible to plan and deploy a 'BB2000!' collaborative EDA framework of executable design flows over an Internet/Intranet distributed design environment. With consideration of the possibilities for optimization with respect to the physiological, technical and EDA framework components, it has been suggested that an Intelligent Agent based architecture over a CORBA mediated distributed object computing environment be employed. This "BB2000!" system will also enable dynamic distributed optimization of QoS, network flow optimization and load balancing of remote job submissions. Further, this system will be dynamically configurable and highly scalable due to its fine-grained object paradigm. And finally, the author foresees this architecture as being applicable to all distributed multimedia integrated collaborative workflow environments, which, with so many trendy words, is guaranteed to be a big hit in the EDA industry. Thus it will become a standard methodology with all its advantages of economies of scale.
Future work: CORBA, Agents, GAs, Distributed Load Optimization
When a reasonable architecture is in place for BB2000!, we can try some interesting experiments: in particular, the use of Genetic Algorithms for process distribution optimization over the ORB-managed distributed processing environment. In this case, the GAs will be adaptive, mobile agents.
JavaNoginn: Agents, GAs, NNs for routing and load balancing
As with the load optimization scheme above, the same paradigm of GA encoded agents will be employed for testing Internet routing algorithms based on the natural min-energy seeking behavior of ecosystems. Although this has no bearing on improving design cycle time, it may eventually lead to novel means of distributed circuit simulation.
Object Oriented Agent Based Analog Layout Eco-System
With the CORBA/Java architecture, a distributed analog layout system may be implemented. The system will be similar to the Shape-Based IC-Craftsman tool in algorithmic behavior. In particular, there will be one Agent per device; it will be self-adapting, will know its own DRC constraints, and will mutate itself to satisfy the device electrical requirements and space constraints. Similarly, it is proposed that there be one Agent per Steiner trace, which finds the optimal path between two points by checking for DRC violations at each square encountered and adapting accordingly. The trace agents may spawn and perform an ant-like competitive search for the best path. In this proposed system, all Agents compete simultaneously in an Eco-System environment, and local minima are avoided through Simulated-Annealing techniques. In general, the multi-agent system is very similar in concept to the proposed JavaNoginn TCP router.
In conclusion, the next step in the research and development of BB2000 will entail attendance at the presentation of the WELD system in the CAD Frameworks session (session 8.3) of the Design Automation Conference in June 1998. Specifically: "they will be researching how to implement software systems that make efficient use of complex distributed software components that provide a dynamic set of transactional capabilities built upon networks. We plan to expand the above-mentioned studies of CPU utilization, latency, bandwidth, and protocol overhead in the analysis of existing systems such as CORBA, DCOM, ActiveX controls, and Java Beans. The deliverables of this project will include measurements of these existing systems and an analysis of what set of infrastructure is required for a CAD system that incorporates distributed components, and the creation of generic components that provide common services. The general idea is to create a common set of functionality, i.e. the 'TCP/IP' of distributed systems, that allows easy tool and service integration and mobility." (from WELD)