Category : Science and technology

VHDL simulator supports 100% of the DOD standard

Silicon Compiler Systems (SCS) introduces Explorer VHDLsim, a simulator compliant with the IEEE 1076-1987 VHDL standard. VHDLsim is integrated into SCS’s Explorer Lsim mixed-signal, multilevel simulation environment, giving engineers transparent access to resources including the Mach 1000 accelerator and Spice simulation. For engineers who choose to use analog and digital behavioral modeling, VHDLsim provides both the M and VHDL languages. Mixing analog and digital models in the same simulation enables the inclusion of VHDL in mixed-signal designs. Explorer VHDLsim will be available in the second quarter of 1990 at a cost of $30,000 if purchased as an option to Explorer Lsim. A standalone version will be priced at $42,000.

Because of its heavy backing by the U.S. government and the IEEE, VHDL should make its mark on the CAE industry during the next few years. That’s made clear by the flurry of VHDL activity among simulation vendors. The latest to announce VHDL simulation support is Silicon Compiler Systems (SCS).

SCS supports the IEEE’s 1076-1987 VHDL standard with Explorer VHDLsim, a simulator that’s integrated into the company’s Explorer Lsim mixed-signal, multilevel simulation environment. The new simulator’s most important feature is its 100% support of the language, including behavioral, structural, and data-flow levels. Moreover, users can simulate VHDL models from any source.

Another key feature is the simulator’s meshing with the larger Lsim environment. That means, for example, that users have transparent access to resources such as the Mach 1000 accelerator and Spice simulation. Because the Explorer Lsim environment can simulate from the architectural level all the way down to the Spice level, the top-down design process need not stop at the gate level. Structural VHDL models may be simulated at the gate, switch, and circuit levels in Explorer VHDLsim, so users won’t need to change simulation environments to get their designs into shape.

For designers who choose to do analog and digital behavioral modeling, the simulator offers both the M and VHDL languages. M is a superset of the C language and is used for both digital and analog behavioral modeling. Because M is a full programming language, proprietary simulation algorithms may also be added. Translations from M into VHDL will be supplied where possible for users who have extensive libraries written in M and need to document those models in VHDL.

By mixing analog and digital models in the same simulation, designers can include VHDL in mixed-signal designs as well as in pure digital designs. In addition, with VHDLsim there’s no performance penalty for using the VHDL language.

For accurate timing simulation, users will be able to run mixed-signal simulations using device-level representations where that level of accuracy is needed, and VHDL behavioral or gate-level models everywhere else. This includes mixed-signal simulation using HSpice and VHDL by means of the Lsim/HSpice interface that SCS has developed with MetaSoftware Inc., Campbell, Calif.

SCS’s Explorer Lsim environment comes with an interactive, source-level symbolic debugger that will be used for both VHDL models and models written in the M hardware-description language (see the figure). Other benefits of the tight integration with SCS’s design-automation products include schematic capture, synthesis, and other front- and back-end tools. “Integration is something that SCS can bring to the party that other standalone VHDL suppliers can’t,” claims James Griffeth, director of product marketing for SCS.

As if 100% support of the standard wasn’t enough to make VHDLsim stand out, it’s also distinguished by its performance. The simulator uses SCS’s direct-compilation technology, which is much faster than simulators using interpretive-language technology. This is because the VHDL code is compiled directly into C rather than a proprietary hardware-description language.

As a result of its alliance with CAD Language Systems Inc. (CLSI), Rockville, Md., VHDLsim includes CLSI’s VHDL Tool Integration Platform (VTIP), a set of front-end design-capture tools. As far as users are concerned, a simple pushbutton operation compiles source code into an executable circuit model. First, the code comes in through the CLSI VHDL parser, which translates it into an intermediate format. After that, the code is automatically accessed and optimized by VHDLsim, which compiles it directly into C. The resulting C object files are then further optimized by the C compilers native to the designer’s workstation.

Explorer VHDLsim supports incremental compilation and dynamic loading. During a given simulation, designers may wish to modify one of the behavioral models in the hierarchy. An incremental compilation recompiles just the model that was changed, which is much faster than recompiling or reinterpreting the entire environment. The recompiled model can then be dynamically loaded without leaving the simulator, which saves a great deal of time: users avoid the simulation start-up time as well as the overall compile or reinterpretation time.
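
To picture the mechanics behind dynamic loading, consider the sketch below. It is not SCS code; it simply illustrates, in C on a Unix workstation, how a simulator process might pull a freshly recompiled behavioral model into itself through the standard dlopen interface. The shared-object name and the evaluate() entry point are hypothetical.

```c
#include <dlfcn.h>
#include <stdio.h>

/* Hypothetical entry point of a compiled behavioral model:
   evaluate the model at the given simulation time. */
typedef void (*model_eval_fn)(double sim_time);

int main(void)
{
    /* The recompiled model is assumed to live in a shared object
       built from the C code generated from the VHDL (or M) source. */
    void *handle = dlopen("./counter_model.so", RTLD_NOW);
    if (!handle) {
        fprintf(stderr, "load failed: %s\n", dlerror());
        return 1;
    }

    /* Bind to the model's entry point and call it without ever
       leaving the running simulator process. */
    model_eval_fn eval = (model_eval_fn)dlsym(handle, "evaluate");
    if (!eval) {
        fprintf(stderr, "symbol not found: %s\n", dlerror());
        dlclose(handle);
        return 1;
    }
    eval(0.0);

    dlclose(handle);
    return 0;
}
```

The point of the sketch is simply that swapping in one recompiled model is a load-and-bind operation, not a rebuild of the whole simulation.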

According to Griffeth, SCS achieved 100% support of the VHDL language so quickly because of its four years of active participation in the IEEE’s VHDL standardization movement. As a result, the company had access to many of the leading-edge VHDL concepts as they took shape and heeded their architectural implications.


Surface mounting will sweep leaded components from market

The 1990s will see a continuation of the changeover from leaded to surface-mounted devices, which save money: their small size saves space on circuit boards, and they offer greater reliability and lower handling costs. Currently, surface-mounted devices account for 50 percent of the passive components used in Japan. That figure is 13 percent in North America and 10 percent in Europe. Leadless passive-component use in North America will pass 50 percent over the next five years, and Europe will not be far behind. Surface-mount devices will continue to improve, offering tighter tolerances, greater capacitance-voltage product, better stability, and operation at higher frequencies.

The most important component trend in the 1990s will continue to be the worldwide changeover from leaded to surface-mounted devices. As component users get smarter, they inevitably switch from leaded components to surface-mounted devices, wherever possible, for the simple reason that they save money. Because of their small size, surface-mounted devices save space on pc boards, which cuts costs. In addition, they’re more reliable and they cost less to handle. I also expect them to become less expensive than leaded components.

Not surprisingly, Japan is the leader in the use of surface-mounted devices. Whereas only 10% of the passive components used in Europe are surface-mounted devices, over 50% of them are in Japan. In North America, the figure is about 13%. Over the next five years, I expect the use of leadless passive components in North America to rise past 50%, with Europe to follow suit. The overwhelming trend is toward surface mounting, and anyone who hasn’t made the transition should plan on starting soon.

For potential users of surface-mounted devices who aren’t sure how to start, there are assembly operations that do subcontracting work. Many small and medium-sized companies go to such assemblers to get their designs established, and later they bring the work inside. Large companies also use outside contractors to smooth out the peaks and valleys in their production schedules. High-speed equipment for placing surface-mounted devices is somewhat expensive, and not even a giant corporation wants to have this gear just sitting around.

As time goes on, I expect these devices to get better and to become more popular and less expensive. For the components that I’m most familiar with–tantalum capacitors–“better” takes on many definitions. It means more reliable parts with tighter tolerances and better stability; packing more CV (capacitance-voltage product) into a given volume; lowering the cost per CV; and the ability to work well at ever higher frequencies.

When I went to school, the basic electrolytic capacitor was an aluminum device with a tolerance of -20% and +50%, or even +100%. Today, we’re routinely making tantalum units with tolerances of ±5%, and even ±2% can be done on special order. Tighter tolerances can be expected in the near future. These capacitors can be used for timing and waveform-shaping applications, not just for power-supply filtering and decoupling. That was simply unthinkable for an electrolytic capacitor in the past.

We’re also making specials in which the variation of capacitance with temperature and frequency is tightly controlled. That capability is opening up new applications for tantalum capacitors–both in displacing other capacitor types and in areas that never existed before, such as fiber optic communications. We can expect other types of electrolytic capacitors to also open up even more applications.

The latest tantalum electrolytics are also maintaining their capacitive nature at increasingly high frequencies. They present a lower equivalent series resistance (ESR) and remain reactive at frequencies up to about 1 MHz, a figure that will undoubtedly increase during the next decade. Therefore, they can better serve their intended function–decoupling.

A major function of the electrolytics is to filter out power-supply noise. With today’s switching power supplies running at 100 to 500 kHz, these capacitors are often operated near the limits of their capabilities, but they continue to deliver their intended performance.

Working at the edge, however, can be tricky. Applying electrolytic capacitors at high frequencies requires expert knowledge, which brings us to another important element for the next decade–service. Component makers not only need to make better components, they need to deliver them precisely when the customer wants them. They also must supply all of the required technical support. To help with the latter requirement, we’re writing and compiling a series of simple capacitor application programs that can run on a PC. Our intention is to make those programs available to users along with our catalog.

Then designers, at their leisure, can choose a part from the catalog, key in their operating parameters, and get, for example, a plot of ESR vs. frequency. The whole objective is to make it easier for people to use and understand the components.
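
As a minimal sketch of the kind of calculation such a program would make, the fragment below uses the common series model of an electrolytic capacitor, in which the impedance magnitude is the root-sum-square of the ESR and the capacitive reactance 1/(2πfC). The part values are assumptions chosen for illustration, not figures from any catalog.

```c
#include <math.h>
#include <stdio.h>

int main(void)
{
    const double PI  = 3.14159265358979;
    const double C   = 10e-6;  /* assumed capacitance: 10 uF                 */
    const double ESR = 0.5;    /* assumed equivalent series resistance, ohms */

    /* Sweep from 1 kHz to 1 MHz, one decade per step. */
    for (double f = 1e3; f <= 1e6; f *= 10.0) {
        double xc = 1.0 / (2.0 * PI * f * C);   /* capacitive reactance, ohms */
        double z  = sqrt(ESR * ESR + xc * xc);  /* series-model impedance     */
        printf("f = %9.0f Hz   |Z| = %7.3f ohms\n", f, z);
    }
    return 0;
}
```

Above the frequency where the reactance falls below the ESR, the part no longer looks capacitive, which is exactly the limit described above.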


Semi device technology will drive connections at all levels

Advances in semiconductor technology will drive interconnect technology developments at all levels, from device to network. Performance improvements in systems require high-speed connections to be treated as transmission lines. At the device level, the trend toward the small, dense interconnects of the multichip module will continue. At board level, transmission-line techniques will be used for interconnection of high-speed logic cards. The backplane will survive, but connectors will be microstrip, stripline and coaxial structures that reduce noise and speed signal flow.

Surface-mounting will be the dominant component mounting technology. Transmission-line techniques will also be used at the subsystem level to respond to increased IC and system speeds. Plastic optical fiber will be popular for applications requiring increased bandwidth, and optical cable and multiplexing will play an important part in tomorrow’s smart home, which will be bus-wired, microprocessor-controlled, and have a wide variety of programmable electronic devices.

Advances in semiconductor technology will form the driving force for all of the important interconnect technology developments from the device level to the network level. At the device level, for example, the multichip module, with its small, dense interconnects between chips, is playing a major role in systems, a trend that will continue during the 1990s. Designers will perform trade-off analyses in deciding between an ASIC and a multichip module for circuits of the same complexity. By the end of the 1990s, ASICs packaged in multichip modules to form specialized systems on a substrate will be commonplace.

At AMP Inc., we have a system called the Microinterposer under design for connecting planar substrates with a pressure surface-mounted technique. We no longer think of these links as simply rows of ohmic contacts. The performance improvements in systems dictate that all of the high-speed connections be treated as transmission lines.

At the board-to-board level, today’s backplane will continue to survive. But the multiple-pin connectors can no longer be simply pins in a plastic housing. Transmission-line techniques must be used to interconnect high-speed logic cards. Connectors will be stripline, microstrip, and coaxial structures designed to speed signal flow and reduce noise. Today’s pin fields aren’t the most efficient way of moving signals, power, and ground around a system.
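
To give a feel for what treating a connector path as a transmission line implies, one widely used first-order approximation for the characteristic impedance of a surface microstrip (trace width w and thickness t, at height h above a ground plane in a dielectric of relative permittivity εr) is shown below. It is a general rule of thumb, not a formula drawn from AMP’s designs.

```latex
Z_0 \;\approx\; \frac{87}{\sqrt{\varepsilon_r + 1.41}}\,
\ln\!\left(\frac{5.98\,h}{0.8\,w + t}\right)\ \Omega
```

Holding Z0 near a single target value through the connector as well as along the board trace is what keeps reflections and crosstalk down as edge rates climb; a bare pin field offers no such control.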

There’s no doubt that surface mounting will become the primary technology for component mounting. For many connections, efforts are already underway to eliminate through-holes. Smaller centerlines will lead to surface-mounted backplane connectors that use pressure connections rather than solder. Similar techniques will be used for other connector types. On the other hand, some connectors for some applications will stubbornly retain their strong links to the older through-hole technology well into the ’90s.

At the subsystem level (for example, connecting a disk drive to its controller card or a power supply to its loads), advanced technology will play a major role. Here again, transmission-line techniques will be needed to keep pace with advances in IC and system speeds.

Plastic optical fiber for short links within system enclosures will gain in popularity for applications that require increased bandwidth. Inexpensive plastic optical cable will be a candidate to replace copper cable in a wide variety of subsystem (wire-harness) interconnect applications. Today, however, it’s difficult to consider optical fiber for short links in a high-speed system. One reason is that a time-delay penalty must be paid going between the optical and electrical domains. Considering propagation delay only, an all-copper wiring system can actually be faster than an optical system, even though the optical medium has much greater bandwidth.
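
The delay argument can be made concrete with rough numbers; the velocity factors and conversion delays below are illustrative assumptions, not measured figures. Signals travel at roughly 0.6c on a copper board trace and roughly 0.67c in a fiber, so the media themselves are comparable; what the optical link adds is the electrical-to-optical and optical-to-electrical conversion time at each end:

```latex
t_{\mathrm{copper}} \approx \frac{L}{0.6\,c}, \qquad
t_{\mathrm{optical}} \approx \frac{L}{0.67\,c} + t_{E/O} + t_{O/E}
```

For a link of a meter or less, the propagation terms amount to only a few nanoseconds apiece, so even a few nanoseconds of conversion delay can make the all-copper path the faster one.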

Automobiles are an application made to order for optical cable. In fact, cars may have to use optical wiring if the electronics continues to proliferate at its current pace. Eventually, a car will contain numerous internal control networks. Engine control is now fairly common, and it will be joined by a ride-control network that adjusts the suspension to road conditions, braking and traction-control systems, and a passenger convenience and comfort network that controls temperature, entertainment, and so on. Optical cable is much smaller and lighter than a conventional wiring harness, and the optical medium is almost immune to the harsh EMI/RFI noise environment of the automobile. Optical cable may also eventually fit in with the concept of multiplexed control, in which a microprocessor divides its time among various functions within a control group.

Optical cable and multiplexing will also play a role in future office equipment and “smart homes.” A smart home will be microprocessor controlled, essentially bus-wired, and contain electronics that make it possible for the owners to program a wide variety of functions.

Though copper cable is virtually universal in local-area networks, that may change in the next few years. Optical glass fiber will certainly be required for high-speed local networks, such as the upcoming fiber distributed data interface. However, plastic fiber, with its relative ease of termination and continued performance improvements, may appear more and more attractive during the decade as a cost-effective means of interconnecting small departments within large organizations.


Resistors can offer creative solutions to design problems

Application-specific resistors (ASRs) are one of the most significant of a wealth of developments resulting from improvements in resistor materials and processing techniques. ASRs can offer lower cost, higher performance, and smaller size than off-the-shelf resistors. They will be applied to a wide variety of design problems, including reducing size, cost, and noise and compensating for temperature coefficients. The 1990s will see a continuation of the trend toward tighter tolerances, as well as continued size reduction, in many cases without a corresponding reduction in power rating. Surface-mounted resistors will be the dominant technology, but leaded resistors will continue to be used for special requirements.

Resistors are generally perceived as the most mundane of electronic components. The typical design engineer thinks of them as simple devices that have remained unchanged since he was in school. However, there are continuing significant developments in resistors and other resistive components that deliver cost-effective solutions to new and existing circuit demands.

Improvements in resistor materials and processing techniques have led to a host of developments. I think the most exciting development is the application-specific resistor (ASR). The resistance range has also expanded at both ends of the scale–gigaohms at one end and fractional ohms at the other–with each incremental extension opening up new application areas.

Specific resistor types with higher performance, lower cost, and smaller size have replaced earlier styles or more costly resistor types. In the past, for example, a close-tolerance power resistor had to be wirewound. Now, for power ratings up to 10 W, film devices are every bit as stable. During the ’90s, that 10-W figure will undoubtedly go higher.

I expect to see the continuation of tighter tolerances. Tolerances of 5% and 2% will fade away as production processes improve. The 1% tolerance will become the leading figure for film resistors.

Size reductions will continue, often with no reduction in power rating. I expect the 1/8-W size to displace the 1/4-W size as the volume leader during the next decade. Surface-mounted resistors will continue to get smaller and command an increasingly large share of the market. The continued downsizing will increase the use of passive networks. The parts are becoming so small that even the best automated equipment has great difficulty handling them individually. As a result, the passive network will become the form factor of choice for mounting these units onto circuit boards. Instead of handling 15 or 20 things that resemble little specks of pepper, users will handle 2 or 3 networks that carry those specks. The networks will also get smaller, but not small enough to present a problem.

Although surface-mounted resistors will dominate the industry, leaded resistors will continue to be in demand for special requirements, such as high power, fusibility, ultra-precision, and customized impedance.

The ASR will emerge as an extremely cost-effective solution to many knotty design problems. In the past, design engineers didn’t usually think of ordering custom resistor products and resistor makers didn’t push them very much. The reason was that we didn’t have much to offer. Thanks to various developments in ceramics, filming techniques and materials, and coating techniques and materials, we can now offer inexpensive ways out of numerous design difficulties.

One case that exemplifies this point concerns an engineer who built a prototype of a piece of high-frequency communications equipment with a standard off-the-shelf resistor at a critical point in the circuit. When it came to volume production, similar resistors caused the circuit to malfunction. Although it was apparently identical, the production resistor had a slightly different capacitance, and hence a slightly different RF impedance. We could adjust the geometry of our standard product to give that designer the impedance his design needed.
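
The sensitivity at work in that anecdote is easy to state. At radio frequencies a resistor behaves roughly as its nominal resistance R in parallel with a small parasitic capacitance C, so

```latex
Z(\omega) \;=\; \frac{R}{1 + j\omega R C},
\qquad |Z| \;=\; \frac{R}{\sqrt{1 + (\omega R C)^2}}
```

Once ωRC approaches unity, a shift of a fraction of a picofarad in C moves |Z| appreciably, which is how two “identical” resistors can behave quite differently at a critical point in an RF circuit.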

Design engineers are constantly pressured to reduce the cost and size of their circuits. ASR components, where appropriate, are an effective means to achieve both objectives.

We’ve built resistors with a dab of nickel-bearing epoxy painted over the regular coating to form a resistor with a bypass capacitor connected to one lead. That’s an extremely cost-effective way to reduce noise in a sensitive high-gain amplifier. Another ASR was built with a particular temperature coefficient (TC) to compensate for a capacitor’s TC. The RC combination then maintained a very stable time constant over temperature. The solution was elegant and much cheaper than the brute-force approach of trying to get both TCs down to zero.
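
The arithmetic behind that compensation is worth spelling out. Because the time constant is the product τ = RC, its fractional drift is, to first order, the sum of the resistor’s and capacitor’s fractional drifts; matching the resistor’s temperature coefficient to the negative of the capacitor’s (say -200 ppm/°C against +200 ppm/°C, numbers chosen only for illustration) drives the net drift toward zero:

```latex
\tau = RC \;\Rightarrow\; \frac{\Delta\tau}{\tau} \approx \frac{\Delta R}{R} + \frac{\Delta C}{C}
\;\Rightarrow\; \mathrm{TC}_{\tau} \approx \mathrm{TC}_{R} + \mathrm{TC}_{C}
```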

There’ll be more such innovation, whether it’s a special resistor in a spark plug or a lamp ballast. Engineers must be aware of the possibility of solving problems with a custom resistor. Most of them won’t think to discuss their problems with resistor application engineers. And that’s too bad, because these products won’t be found in any catalog.


Advanced ADCs in digital oscilloscopes will enhance design awareness

The continuing trend toward faster, cheaper, more accurate analog-to-digital converters will top the list of advances in instrumentation in the 1990s. Digitizing oscilloscopes may be the most significant product, combining the speed of analog oscilloscopes with the ability to simultaneously examine high-speed transient events on multiple channels. By enabling engineers to get a clearer picture of what is going on in high-speed digital systems, digitizing oscilloscopes will speed design implementation. A side effect of the oscilloscope changeover will be the gradual disappearance of logic analyzers, which are essentially one-bit digitizing oscilloscopes. Advances in a-d converters will also spur development of the instrument-on-a-card concept, which envisions medium-performance instrumentation in the form of cards that fit into a VXI-bus-based mainframe.

The most exciting technological advance in instrumentation in the ’90s will be the continuing improvement in analog-to-digital converters, as they continue to get faster, less expensive, and more precise. As a result, a whole host of changes in instrumentation may be anticipated. Probably the most important change will be the changeover in oscilloscopes from analog to digital in all but the lowest-cost, lowest-performance areas.

The new breed of digitizing oscilloscopes will have all of the speed that was formerly available only from analog instruments. But they’ll also offer a capability that average designers could never access before: the ability to examine high-speed transient events on multiple channels simultaneously.

Not surprisingly, that capability is exactly what’s needed to analyze the behavior of the types of circuits that average designers will be building. I say “not surprisingly” because those new circuits will be based on the same advances in semiconductors that will make the new oscilloscopes possible. To be specific, those circuits will be very fast digital designs, such as systems based on 50-MHz microprocessors. These systems will have sub-nanosecond rise times and will require timing resolutions of tens of picoseconds.
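
A rough rule of thumb shows why such systems are hard to observe. For a single-pole response, bandwidth and 10-90% rise time are related by BW ≈ 0.35/tr, so a sub-nanosecond edge implies several hundred megahertz of signal bandwidth, and the instrument must have still more; the 0.7-ns figure below is an illustrative assumption:

```latex
BW \;\approx\; \frac{0.35}{t_r}, \qquad
t_r = 0.7\ \mathrm{ns} \;\Rightarrow\; BW \approx 500\ \mathrm{MHz}
```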

Today, if engineers have to troubleshoot a design of that type, much of what they do will consist of guesswork. They use their knowledge and experience to guess what the problem might be, fix it based on that assumption, and check to see whether the problem went away. Eventually they fix the problem, but rarely do they know whether the problem was what they originally thought. The high-speed single-shot events that cause the problems simply can’t be seen on today’s conventional instruments.

The new breed of reasonably-priced oscilloscopes will give average engineers the ability to really understand what’s happening with their high-speed digital systems. I can’t say precisely what effect that capability will have on design methodologies, but it’s sure to be considerable. It will certainly enable engineers to implement designs more quickly. In other words, it’s a productivity tool.

It can also be a learning aid. When you truly understand what went wrong and why your fix worked, you may have learned something that will give you a hint of what to avoid in the future.

Another interesting outcome of the oscilloscope changeover will be the disappearance of the logic analyzer as a separate piece of instrumentation. A logic analyzer, after all, is merely a one-bit digitizing oscilloscope. As the price of a-d converters continues to drop, a point will be reached where it makes sense to build, say, 80-channel digitizing oscilloscopes. With such instruments, there’d be no need for a simple logic analyzer.

As a result of advances in oscilloscopes, I expect substantial changes in microwave engineering design methodologies. Today, most microwave design work is done in the frequency domain because the dominant measurement tools available to microwave designers–spectrum analyzers and network analyzers–work in this domain.

But given a choice, a majority of engineers would prefer to work mostly in the time domain. At lower frequencies, where there has long been a choice, both design and analysis are done in the time domain because it’s easier to spot most problems there. For example, if your amplifier is clipping a sine wave, it can easily be spotted on an oscilloscope. In the frequency domain, however, all you’d see are some second and third harmonics, which may also be caused by crossover distortion or some other nonlinearity. With a spectrum analyzer, you know there’s a problem and you know when it’s been solved, but you don’t necessarily know what you did to fix it.

Aside from fomenting what amounts to a revolution in oscilloscopes, advances in a-d converters will also give a powerful boost to the instrument-on-a-card concept. I expect that much of the medium-performance instrumentation produced toward the end of the decade will be in the form of cards that will fit into an instrumentation mainframe based on the VXI bus. This type of packaging, however, will be of more interest to manufacturing and test engineers–to whom size and configurability are very important–than to designers. But wherever it’s applied, the instrumentation card cage will offer lots of very neat solutions.

 


Diffusion welding at low temperatures suits the fine lines of power MOSFETs

Siemens AG, Munich, West Germany, has developed a new technique for bonding silicon power devices to molybdenum substrates. The three-step process uses relatively low temperatures, avoiding the stresses of conventional alloying processes while achieving high thermal-cycling stability and electrically stable contacts with negligible resistance.

The high temperatures used in alloying can cause warping of device surfaces during cooling because of the different thermal expansion coefficients of molybdenum and silicon, making the process unsuitable for finely wired devices such as power MOSFETs. The new process places less mechanical strain on the devices because the lower temperatures eliminate the danger of warping during cool-down. And the process is highly reproducible, yielding the same results consistently, making it ideal for production line use.

A diffusion welding technique developed at Siemens AG, Munich, West Germany, welds silicon power devices to molybdenum substrates at comparatively low temperatures. Consequently, the technique avoids the drawbacks of alloying processes, and achieves high thermal-cycling stability and electrically stable contacts that have negligible resistance.

[Figure: MOSFET showing gate (G), body (B), source (S), and drain (D) terminals; the gate is separated from the body by an insulating layer.]

Power devices generally call for a low-cost current supply, good heat-radiation characteristics, and high mechanical stability. To achieve these properties, the silicon chips are firmly attached to a substrate. What’s usually done in this chip-to-substrate attachment operation is to insert an aluminum foil between the chip and a molybdenum disk and then, in an alloying process, affix the device to a 2- to 3-mm-thick molybdenum substrate.

The high temperature, up to 700°C, encountered in this process imposes limits on its use, however. For one, different thermal expansion coefficients of molybdenum and silicon can warp device surfaces as they cool. For that reason, the alloying process can’t be applied to finely structured devices like power MOSFETs.

The new technique, developed at Siemens’ Munich-based research laboratories, takes a different tack. In the first of three steps, the silicon and molybdenum surfaces to be joined are supplied with a sinterable layer of a metal such as silver. In the second step, silver particles with a diameter of about 10 µm, suspended in a solvent, are deposited on the molybdenum. The solvent is evaporated by momentary heating, leaving a silver layer behind.

Finally, after the silicon chip is put on top of the silver layer, the silicon-silver-molybdenum sandwich is sintered for two minutes at a relatively low temperature, about 240°C, and a pressure of about 4,000 N/cm².

According to Reinhold Kuhnert, head of the development team, the new technique produces a porous silicon-molybdenum connecting layer with high thermal and electrical stability, as well as thermal and electrical conductivity values comparable to those obtained with conventional high-temperature alloying processes.

Kuhnert points out that one advantage of the new diffusion-welding technique is that it puts low mechanical stress on the device, because the device doesn’t warp after cooling. Another advantage is that it’s highly reproducible, which means that the results are consistent time after time–a prerequisite for use on any production line. Moreover, careful control of the silver-deposition process compensates for several microns of substrate unevenness that may occur.

Further, in contrast to alloying processes, no silicon is consumed in diffusion welding. Also, the silicon doping level remains unaffected. Because the technique is based on solid-state reactions–there’s no liquid phase involved–the temperature stability of the chip-to-substrate connection is far higher than the temperature encountered in device fabrication. As a result, although the device is made at low process temperatures, it can withstand the high temperatures developed under surge-current loads.

The technique, which may soon go to work on the production line, isn’t limited to attaching discrete devices to substrates. In a single operation, many small elements (such as all those on a 4-in. wafer) can be bonded to their substrates.

In addition, by using suitable multilayer substrates made of materials with thermal conductivity higher than that of molybdenum, heat can be removed faster, according to Kuhnert.


Communication among processors sustains fast, massively parallel computer

Designers at MasPar Computer Corp. in Sunnyvale, Calif., have developed two chips intended to break the communications bottleneck that often occurs in highly parallel computers. One chip simplifies and accelerates global interprocessor communications. The second chip carries 32 highly interconnected processing elements (PEs). A fully configured MasPar system can yield performance rates of 10,000 MIPS and 1,000 MFLOPS. Two independent communication schemes are built into the PE chips. A neighborhood mesh connects each PE to its eight nearest neighbors, while a multistage crossbar hierarchy allows each PE to connect to any other PE in the array. The 32 processors on each PE chip are formed from 500,000 transistors. A RISC-style load-store architecture with local cache memory keeps each PE as small as possible.

Highly parallel computers employ hundreds, even thousands, of small processors to achieve astounding computational rates. Execution rates can reach billions of operations per second. At the same time, however, the interprocessor communication needed for sending commands and transferring data can become a bottleneck when so many processors run simultaneously.

To break that bottleneck, designers at MasPar Computer Corp., Sunnyvale, Calif., developed two custom chips: One simplifies and accelerates global interprocessor communications, and the other supplies 32 highly interconnected processing elements (PEs).

The MasPar system can harness from 1024 to 16,384 processing elements and, when fully configured, deliver 10,000 MIPS (million instructions per second) and 1000 MFLOPS (million floating-point operations per second). The system employs a single-instruction-stream, multiple-data-stream (SIMD) architecture controlled by an array-control unit (ACU).


The ACU fetches and decodes instructions and issues control signals to all PEs. All PEs execute the same instruction stream, but each can also have local autonomy over execution, allowing for localized calculations.

To achieve the high bandwidth needed for thousands of PEs to communicate among themselves, MasPar designers built two independent communication schemes into the system. One is a neighborhood mesh, or X-net, a local interconnection that ties each PE to its eight nearest neighbors. The other is a multistage crossbar hierarchy that lets each PE connect to any other PE in the array. The X-net forms a 2D grid that wraps around East to West and North to South to form a torus-like pattern (see the figure).
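
As a rough sketch of what the X-net’s wraparound implies for addressing, the fragment below, an illustration rather than MasPar code, lists the grid coordinates of the eight nearest neighbors of a PE on a W-by-H torus; the grid dimensions and the corner starting point are assumptions chosen so that the East-West and North-South wrap is visible.

```c
#include <stdio.h>

#define W 128   /* assumed PEs per row    */
#define H 128   /* assumed PEs per column */

/* Wrap a coordinate back onto the torus. */
static int wrap(int v, int limit) { return (v + limit) % limit; }

int main(void)
{
    int x = 0, y = 0;   /* a corner PE, so the wraparound shows up */

    /* Visit the eight X-net neighbors: NW, N, NE, W, E, SW, S, SE. */
    for (int dy = -1; dy <= 1; dy++) {
        for (int dx = -1; dx <= 1; dx++) {
            if (dx == 0 && dy == 0)
                continue;
            printf("neighbor at (%3d, %3d)\n",
                   wrap(x + dx, W), wrap(y + dy, H));
        }
    }
    return 0;
}
```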

Within each PE chip are packed 500,000 transistors that form 32 processors interconnected in a 2D grid. Multiple PE chips are interconnected in the same way that processors are within a chip.

According to Jeff Kalb, MasPar Computer’s founder and president, each PE is kept as small as possible by using a RISC-style, load-store architecture with local cache memory for each PE.

What’s more, only four paths per PE are needed to communicate in eight directions. That’s because the X-shaped crossing points of the communication paths are three-state nodes that switch the data to one of two paths. All of the interprocessor X-net connections use bit-serial communications, so just 24 pins per PE chip are required for the X-net interface.

Some computational problems don’t map well onto an X-net topology, however, and require arbitrary interconnections among PEs. To tackle that problem, a team headed by Tom Blank, director of architecture and application development, designed custom router chips that can be combined to form a multistage interconnection network, which is somewhat like a hierarchy of crossbar switches. Each router chip has 64 datapath inputs and 64 datapath outputs, and can switch any input to any output.

When a PE chip sends data to a router, it first multiplexes the output from 16 PEs onto one outgoing router path and one incoming router path. Router paths are bit serial, so only four pins are needed on each PE chip to multiplex 32 PEs.

Once a connection is established, the router’s bidirectional data paths send or fetch data. In a 16,384-PE system, up to 1024 simultaneous connections can be established, thus giving an aggregate data rate of over 1 Gbyte/s.
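
Working backward from those figures gives a feel for the per-path rate; the division below uses only the numbers quoted above:

```latex
\frac{1\ \mathrm{Gbyte/s}}{1024\ \mathrm{connections}} \;\approx\; 1\ \mathrm{Mbyte/s}
\;\approx\; 8\ \mathrm{Mbits/s}\ \text{per bit-serial router path}
```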

The multistage interconnection network is well matched to SIMD architectures because all connections occur with the same timing and sequence in each PE. The common timing and sequencing greatly simplifies the logic compared to a hypercube-type architecture.

In a hypercube, different path lengths can cause messages to arrive at each PE at different times, raising hard-to-solve timing considerations. In contrast, the router network in MasPar’s system keeps the message delays identical, eliminating timing problems. In addition, the router network tracks parity for all data and addresses to ensure highly reliable transfers–a critical task when thousands of processors are involved. Furthermore, each router chip includes diagnostic logic that detects open wires in the network.