Semiconductor device technology will drive connections at all levels

Advances in semiconductor technology will drive interconnect technology developments at all levels, from device to network. Performance improvements in systems require high-speed connections to be treated as transmission lines. At the device level, the trend toward the small, dense interconnects of the multichip module will continue. At board level, transmission-line techniques will be used for interconnection of high-speed logic cards. The backplane will survive, but connectors will be microstrip, stripline and coaxial structures that reduce noise and speed signal flow.

Surface-mounting will be the dominant component mounting technology. Transmission-line techniques will also be used at the subsystem level to respond to increased IC and system speeds. Plastic optical fiber will be popular for applications requiring increased bandwidth, and optical cable and multiplexing will play an important part in tomorrow’s smart home, which will be bus-wired, microprocessor-controlled, and have a wide variety of programmable electronic devices.

Advances in semiconductor technology will form the driving force for all of the important interconnect technology developments, from the device level to the network level. At the device level, for example, the multichip module, with its small, dense interconnects between chips, is playing a major role in systems, a trend that will continue during the 1990s. Designers will perform trade-off analyses when deciding between an ASIC and a multichip module for circuits of the same complexity. By the end of the 1990s, ASICs packaged in multichip modules to form specialized systems on a substrate will be commonplace.

At AMP Inc., we have a system called the Microinterposer under design for connecting planar substrates with a pressure surface-mounted technique. We no longer think of these links as simply rows of ohmic contacts. The performance improvements in systems dictate that all of the high-speed connections be treated as transmission lines.

At the board-to-board level, today’s backplane will continue to survive. But the multiple-pin connectors can no longer be simply pins in a plastic housing. Transmission-line techniques must be used to interconnect high-speed logic cards. Connectors will be stripline, microstrip, and coaxial structures designed to speed signal flow and reduce noise. Today’s pin fields aren’t the most efficient way to move signals, power, and ground around a system.

There’s no doubt that surface mounting will become the primary technology for component mounting. For many connections, efforts are already underway to eliminate through-holes. Smaller center lines will lead to surface-mounted backplane connectors that use pressure connections rather than solder. Similar techniques will be used for other connector types. On the other hand, connectors for some applications will stubbornly retain their strong links to the older through-hole technology well into the ’90s.

At the subsystem level (for example, connecting a disk drive to its controller card or a power supply to its loads), advanced technology will play a major role. Here again, transmission-line techniques will be needed to keep pace with advances in IC and system speeds.

Plastic optical fiber for short links within system enclosures will gain in popularity for applications that require increased bandwidth. Inexpensive plastic optical cable will be a candidate to replace copper cable in a wide variety of subsystem (wire-harness) interconnect applications. Today, it’s difficult to consider optical fiber for short links in a high-speed system. One reason is that a time-delay penalty must be paid going between the optical and electrical domains. Considering propagation delay only, an all-copper wiring system can actually be faster than an optical system, even though the optical medium has much greater bandwidth.
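A back-of-the-envelope comparison helps make the point; the numbers below are my own assumptions rather than figures from the article. Signals in copper traces and cable propagate at roughly two-thirds the speed of light, and light in a fiber core of refractive index around 1.5 travels at a similar fraction, so the per-meter propagation delays are comparable. On a short link, the fixed electro-optical conversion delay at each end is therefore what tips the balance toward an all-copper path.

```latex
\[
t_{\text{copper}} \approx \frac{1\,\text{m}}{0.66\,c} \approx 5\,\text{ns},
\qquad
t_{\text{fiber}} \approx \frac{n \cdot 1\,\text{m}}{c}
  = \frac{1.5 \times 1\,\text{m}}{3\times 10^{8}\,\text{m/s}} \approx 5\,\text{ns}.
\]
```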

Automobiles are an application made to order for optical cable. In fact, cars may have to use optical wiring if the electronics continues to proliferate at its current pace. Eventually, a car will contain numerous internal control networks. Engine control is now fairly common, and it will be joined by a ride-control network that adjusts the suspension to road conditions, braking and traction-control systems, and a passenger convenience and comfort network that controls temperature, entertainment, and so on. Optical cable is much smaller and lighter than a conventional wiring harness, and the optical medium is almost immune to the critical emi/rfi noise environment of the auto. Optical cable also may eventually fit in with the concept of multiplexed control, where a microprocessor divides its time among various functions within a control group.

Optical cable and multiplexing will also play a role in future office equipment and “smart homes.” A smart home will be microprocessor controlled, essentially bus-wired, and contain electronics that make it possible for the owners to program a wide variety of functions.

Though copper cable is virtually universal in local-area networks, that may change in the next few years. Optical glass fiber will certainly be required for high-speed local networks, such as the upcoming fiber distributed data interface. However, plastic fiber, with its relative ease of termination and continued performance improvements, may appear more and more attractive during the decade as a cost-effective means to interconnect small departments within large organizations.


Resistors can offer creative solutions to design problems

Application-specific resistors (ASRs) are one of the most significant of a wealth of developments resulting from improvements in resistor materials and processing techniques. Specific resistor types can offer lower cost, higher performance, and smaller size than off-the-shelf resistors. ASRs will be applied to a wide variety of design problems, including size, cost, and noise reduction, as well as temperature-coefficient compensation. The 1990s will see a continuation of the trend toward tighter tolerances, as well as continued size reduction, in many cases without a corresponding reduction in power rating. Surface-mounted resistors will be the dominant technology, but leaded resistors will continue to be used for special requirements.

Resistors are generally perceived as the most mundane of electronic components. The typical design engineer thinks of them as simple devices that have remained unchanged since he was in school. However, there are continuing significant developments in resistors and other resistive components that deliver cost-effective solutions to new and existing circuit demands.

Improvements in resistor materials and processing techniques have led to a host of developments. I think the most exciting development is the application-specific resistor (ASR). The resistance range has also been extended at both ends of the scale, to gigaohms at one end and fractional ohms at the other, with each incremental extension opening up new application areas.

Specific resistor types with higher performance, lower cost, and smaller size have replaced earlier styles or more costly resistor types. In the past, for example, a close-tolerance power resistor had to be wirewound. Now, for power ratings up to 10 W, film devices are every bit as stable. During the ’90s, that 10-W figure will undoubtedly go higher.

I expect to see the continuation of tighter tolerances. Tolerances of 5% and 2% will fade away as production processes improve. The 1% tolerance will become the leading figure for film resistors.

Size reductions will continue, often with no reduction in power rating. I expect the 1/8-W size to displace the 1/4-W size as the volume leader during the next decade. Surface-mounted resistors will continue to get smaller and command an increasingly large share of the market. The continued downsizing will increase the use of passive networks. The parts are becoming so small that even the best automated equipment has great difficulty handling them individually. As a result, the passive network will become the form factor of choice in mounting these units onto circuit boards. Instead of users handling 15 or 20 things that resemble little specks of pepper, they’ll handle 2 or 3 networks that have those specks on them. The networks will also get smaller, but not small enough to present a problem.

Although surface-mounted resistors will dominate the industry, leaded resistors will continue to be in demand for special requirements, such as high power, fusibility, ultra-precision, and customized impedance.

The ASR will emerge as an extremely cost-effective solution to many knotty design problems. In the past, design engineers didn’t usually think of ordering custom resistor products and resistor makers didn’t push them very much. The reason was that we didn’t have much to offer. Thanks to various developments in ceramics, filming techniques and materials, and coating techniques and materials, we can now offer inexpensive ways out of numerous design difficulties.

One case that exemplifies this point concerns an engineer who built a prototype of a piece of high-frequency communications equipment with a standard off-the-shelf resistor at a critical point in the circuit. When it came to volume production, similar resistors caused the circuit to malfunction. Although it apparently was identical, the production resistor had a slightly different capacitance, and hence a slightly different rf impedance. We could adjust the geometry of our standard product to give that designer the impedance his design needed.

Design engineers are constantly pressured to reduce the cost and size of their circuits. ASR components, where appropriate, are an effective means to achieve both objectives.

We’ve built resistors with a dab of nickel-bearing epoxy painted over the regular coating to form a resistor with a bypass capacitor connected to one lead. That’s an extremely cost-effective way to reduce noise in a sensitive high-gain amplifier. Another ASR was built with a particular temperature coefficient (TC) to compensate for a capacitor’s TC. The RC combination then maintained a very stable time constant over temperature. The solution was elegant and much cheaper than the brute-force approach of trying to get both TCs down to zero.
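As a hedged sketch of the compensation arithmetic (my notation, not the author's): the time constant of the RC pair is the product of the two values, so its fractional drift with temperature is simply the sum of the two temperature coefficients.

```latex
\[
\tau = RC
\quad\Longrightarrow\quad
\frac{1}{\tau}\frac{d\tau}{dT}
  = \frac{1}{R}\frac{dR}{dT} + \frac{1}{C}\frac{dC}{dT}
  = \mathrm{TC}_R + \mathrm{TC}_C .
\]
```

Choosing a resistor whose TC is roughly equal and opposite to the capacitor's therefore holds the time constant nearly flat over temperature without forcing either coefficient toward zero.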

There’ll be more such innovation, whether it’s a special resistor in a spark plug or a lamp ballast. Engineers must be aware of the possibility of solving problems with a custom resistor. Most of them don’t think to discuss their problems with resistor application engineers. And that’s too bad, because these products won’t be found in any catalog.


Resonant technology will spur power-supply design developments

Power supplies for the 1990s will require a new type of technology to become faster and smaller, like the VLSI devices they support. A new power-conversion concept, resonant technology, has twin goals of increasing the power density of supplies and improving performance by operating at higher frequencies than current pulse-width modulation (PWM) supplies can handle.

The trend is also away from large centralized power supplies to distributed supplies designed as modular or board-mounted types. Board-mounted supplies will need to have the same low profile as ICs, necessitating improvements in component technology.

For most of the 1970s and 80s, switching power-supply design was based mainly on pulse-width modulation (PWM) circuits. But the 1990s will require a new type of technology. Because VLSI advances make electronic systems faster and smaller, future power supplies must follow suit.

VLSI technology shrinks everything except the demand for power. Over the past five years, a new power-conversion idea called resonant technology has been under development at private companies and universities. Its aim is smaller and more efficient power supplies than are possible with PWM methods. In the long run, perhaps by the end of this decade, resonant technology and new component and circuit fabrication techniques could reduce certain types of power supplies to commodity products, much like ICs.

All resonant designs share common goals. First is to increase the power density of supplies. A second goal is to improve performance by operating at higher frequencies than is possible under PWM technology. With PWM techniques, 100-200 kHz is about the upper frequency limit. Resonant converters will operate at several MHz and above. A commercial 1-MHz resonant converter is now available.

The major trend is the move away from large centralized supplies to distributed supplies. Distributed supplies are more suitable for the smaller, faster systems resulting from the rapid advancement of VLSI technology. They’re designed as either modular supplies or board-mounted types. Increasingly decentralized supplies will be built with resonant technology.

Resonant technology and distributed power aren’t new ideas, but rather ideas whose time has come because we’re now gaining the technology to implement them. In addition to work at the Virginia Power Electronics Center at Virginia Polytechnic Institute and State University, MIT, AT&T Bell Laboratories, General Electric, and Unisys are working on resonant converters.

Whereas conventional PWM supplies for computer systems have power densities of 1 or 2 W/in.³, experimental resonant technologies are at least an order of magnitude higher. A design target at GE and at VPI’s Center is 50 W/in.³ initially, and 100 W/in.³ before the end of the decade. Bell Labs has demonstrated a resonant supply that can deliver 50 W and runs at 20 MHz. But most 50-W output, 50-W/in.³ power converters operate from 2 to 4 MHz. Some of these types of supplies will be practical by the early 1990s.

Much of the impetus for developing high-density, 50-W supplies comes from the military. VHSIC requirements call for compact distributed supplies that mount directly on a logic board and deliver up to 50 W of 5-V power, with transient response good enough to power logic running at 100 MHz and beyond. These converters must have a power density of 50 W/in.³ and operate at several MHz. The long-term objective–toward the end of the decade–is to build supplies in chip form and mount them on pc boards.

It should be no surprise that units with power densities of 50 W/in.³ must be highly efficient to reduce the amount of heat produced. A 1-MHz resonant converter should be able to operate at 90% efficiency, considerably higher than the 80% figure of today’s PWM supplies. It’s almost an axiom in the design of high-frequency resonant supplies that you don’t have a technology until you can deal with the heat.
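A rough illustration of why those efficiency points matter so much (my arithmetic, not figures from the article): the heat a converter must shed for a given output power and efficiency is

```latex
\[
P_{\text{diss}} = P_{\text{out}}\left(\frac{1}{\eta} - 1\right),
\qquad
50\,\text{W}\left(\frac{1}{0.9}-1\right) \approx 5.6\,\text{W},
\qquad
50\,\text{W}\left(\frac{1}{0.8}-1\right) = 12.5\,\text{W}.
\]
```

So moving from 80% to 90% efficiency cuts the waste heat of a 50-W supply by more than half, which is what makes the higher power densities thermally plausible.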

In addition to high power density, the coming generation of board-mounted supplies will have to have extremely low profiles, much like ICs. This means that improvements in component technology are necessary to flatten out the package. Magnetic elements in particular are receiving lots of attention in an attempt to reduce their size. A thin magnetic plate or substrate can have windings printed on it, and then be covered with a low-profile core to make a transformer. Multilayer pc-board techniques are already being applied to inductors and transformers. Capacitor technology is also advancing. It’s now possible to integrate many capacitors on one chip in a dual-in-line package.

Another reason to reduce magnetics to an IC-like technology is to take most of the labor out of the manufacturing process. If a transformer or inductor can be built like an IC, the process can be automated. In fact, this is the ultimate goal: manufacture a power supply just as you make an IC, and reduce it to a commodity product.


IC packaging must undergo a facelift to meet user needs

Chip technology will continue to drive packaging technology in the 1990s. The development of multichip modules will be one of the most significant changes in packaging in the next decade. Changes in chip technology, including more I/O leads on a single chip, larger chip dimensions (more mils per chip side), higher clock rates, and faster signal-edge rates, will contribute to the push toward multichip modules.

These modules could be built by semiconductor houses, by end users, or by third-party vendors, and all three approaches are already in use. Use of multichip modules will affect board and end-equipment manufacturers by reducing the number of interconnection levels, connectors, cables, and sockets, and by increasing the thermal-management requirements of devices.

Chip technology has been the driving force behind packaging technology for a long time. Increases in chip size, input/output pin counts, power dissipation, and speed have continually put new pressure on package developers to keep pace. This pressure will continue to grow in the future.

We have computer houses, for example, telling us that they can no longer afford signal delays that go out of a package, through a trace on a board, and back into another package. When it’s technologically and economically practical to integrate the functions onto one chip, this is the preferable route. Otherwise, the best alternative is to put the discrete chips into the same package.

Some customers are also concerned about the stresses that occur on large chips as they’re mounted in a package. For these reasons, I see the development of multichip modules as one of the biggest changes in packaging during the next decade.

Increasing the I/O on a given chip to over a thousand leads is one of the things that will drive us into a multichip module–getting the I/O onto a chip-to-chip level rather than a coming-to-the-outside-world level. Multichip modules are going to become increasingly important to performance for the systems user.

But who will produce these multichip modules, which will be largely custom or semicustom? A semiconductor house could make the module and sell it to end users, or users could buy chips and make the module themselves. Or end users could hire a third party to produce the module.

Currently, all three cases are happening. A few highly upward-integrated companies (for example, some Japanese companies and IBM) are already making modules. As subsystems shrink from pc boards to multichip modules, pc-board houses will follow that market. Moreover, because semiconductor houses can make the silicon and can interconnect chips at very fine levels, I think they’re going to be much more involved in the module business.

When the interconnection is driven back inside the package, it’s done at a level that the semiconductor house is accustomed to operating with. We’re accustomed to operating with wire bonding and tape-automated bonding (TAB), and with very fine leads and lines on a wafer. So there are a number of those pieces of technology that we’re very well equipped to handle.

Use of multichip modules will affect pc-board and/or end-equipment manufacturers in several ways. The number of interconnection levels that they have in a system is likely to decrease. They may have fewer connectors, cables, and sockets. However, use of these modules could require additional thermal management, because now we’re putting a lot more power into a more confined area. Packages dissipating up to 90 W will significantly affect the thermal aspect of their system.

By driving interconnections inside the package, you can partition the system to reduce the I/O count of the multichip module. We’ve already heard from some customers who have partitioned their system, reducing the number of I/O pins for the module to less than that used by one of the chips in the module.

Some major changes in chip technology are expected by the mid-90s: Chips will go from 380 mils to around 800 mils on a side. The maximum number of I/O leads will go from about 360 to about 1250. High-power chips that handle 30 W will triple that number. Clock rates will go from 30 MHz to 300 MHz, and signal-edge rates from the 600-ps range to 300 ps. All of these modifications will place big demands on package design, which will lead to the widespread use of multichip modules.

To increase the I/O while keeping the size of the package to a minimum, leads must get closer together. The standard pitch of 100 mils will soon go to 50 mils, and by the mid-90s, it could drop to 30 mils.

There will be an increase in lead arrays, either pin-grid or pad-grid, for surface mounting. The pad-grid array carrier’s spacing is currently as low as 50 mils, and will fall to 30 mils in the future.

At 30 mils, techniques must be developed to solder pad arrays. The proprietary technique that we’re presently using at 50 mils may not work as well at 30 mils. If it does, it’s going to take an additional degree of sophistication. Routing into these fine spaces will also require additional levels in the pc boards. But with surface-mounting, these levels would be connected with vias, rather than through-holes, which require much more area.


Advanced ADCs in digital oscilloscopes will enhance design awareness

The continuing trend toward faster, cheaper, more accurate analog-to-digital converters will top the list of advances in instrumentation in the 1990s. Digitizing oscilloscopes may be the most significant product, combining the speed of analog oscilloscopes with the ability to simultaneously examine high-speed transient events on multiple channels. By enabling engineers to get a clearer picture of what is going on in high-speed digital systems, digitizing oscilloscopes will speed design implementation. A side effect of the oscilloscope changeover will be the gradual disappearance of logic analyzers, which are essentially one-bit digitizing oscilloscopes. Advances in a-d converters will also spur development of the instrument-on-a-card concept, which envisions medium-performance instrumentation in the form of cards that fit in a VXI bus-based mainframe.

The most exciting technological advance in instrumentation in the ’90s will be the continuing improvement in analog-to-digital converters, as they continue to get faster, less expensive, and more precise. As a result, a whole host of changes in instrumentation may be anticipated. Probably the most important change will be the changeover in oscilloscopes from analog to digital in all but the lowest-cost, lowest-performance areas.

The new breed of digitizing oscilloscopes will have all of the speed that was formerly available only from analog instruments. But they’ll also offer a capability that average designers could never access before: the ability to examine high-speed transient events on multiple channels simultaneously.

Not surprisingly, that capability is exactly what’s needed to analyze the behavior of the types of circuits that average designers will be building. I say “not surprisingly” because those new circuits will be based on the same advances in semiconductors that will make the new oscilloscopes possible. To be specific, those circuits will be very fast digital designs, such as systems based on 50-MHz microprocessors. These systems will have sub-nanosecond rise times and will require timing resolutions of tens of picoseconds.
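As a rough rule-of-thumb check (my arithmetic, not figures from the article), the analog bandwidth needed to preserve a rise time t_r is roughly 0.35/t_r, so sub-nanosecond edges already demand several hundred megahertz of bandwidth, and resolving single-shot timing to tens of picoseconds pushes the required real-time sample rate well into the gigasample-per-second range.

```latex
\[
BW \approx \frac{0.35}{t_r},
\qquad
t_r = 1\,\text{ns} \;\Rightarrow\; BW \approx 350\,\text{MHz}.
\]
```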

Today, if engineers have to troubleshoot a design of that type, much of what they do will consist of guesswork. They use their knowledge and experience to guess what the problem might be, fix it based on that assumption, and check to see whether the problem went away. Eventually they fix the problem, but rarely is it known whether the problem was what was originally thought. The high-speed single-shot events that cause the problems simply can’t be seen on today’s conventional instruments.

The new breed of reasonably priced oscilloscopes will give average engineers the ability to really understand what’s happening with their high-speed digital systems. I can’t say precisely what effect that capability will have on design methodologies, but it’s sure to be considerable. It will certainly enable engineers to implement designs more quickly. In other words, it’s a productivity tool.

It can also be a learning aid. When you truly understand what went wrong and why your fix worked, you may have learned something that will give you a hint of what to avoid in the future.

Another interesting outcome of the oscilloscope changeover will be the disappearance of the logic analyzer as a separate piece of instrumentation. A logic analyzer, after all, is merely a one-bit digitizing oscilloscope. As the price of a-d converters continues to drop, a point will be reached where it makes sense to build, say, 80-channel digitizing oscilloscopes. With such instruments, there’d be no need for a simple logic analyzer.

As a result of advances in oscilloscopes, I expect substantial changes in microwave engineering design methodologies. Today, most microwave design work is done in the frequency domain because the dominant measurement tools available to microwave designers–spectrum analyzers and network analyzers–work in this domain.

But given a choice, a majority of engineers would prefer to work mostly in the time domain. At lower frequencies, where there has long been a choice, both design and analysis are done in the time domain because it’s easier to spot most problems there. For example, if your amplifier is clipping a sine wave, it can easily be spotted on an oscilloscope. In the frequency domain, however, all you’d see are some second and third harmonics, which may also be caused by crossover distortion or some other nonlinearity. With a spectrum analyzer, you know there’s a problem and you know when it’s been solved, but you don’t necessarily know what you did to fix it.
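A minimal sketch of the ambiguity described above, assuming nothing beyond a generic clipped test tone (the sample rate, tone frequency, and clipping level below are arbitrary illustrations, not values from the article): in the time domain the flattened peaks are obvious at a glance, while the spectrum shows only that harmonics have appeared.

```python
# A minimal sketch: a sine wave whose positive peaks are clipped, as an
# amplifier driven near one supply rail might produce.  The time-domain
# samples show the flattened tops directly; the spectrum shows only the
# added harmonics, which by themselves don't identify the cause.
import numpy as np

fs = 100_000                          # sample rate, Hz (arbitrary)
f0 = 1_000                            # test-tone frequency, Hz (arbitrary)
t = np.arange(0, 0.1, 1 / fs)
clean = np.sin(2 * np.pi * f0 * t)
clipped = np.clip(clean, -1.0, 0.7)   # clip only the positive peaks

spectrum = np.abs(np.fft.rfft(clipped)) / len(t)
freqs = np.fft.rfftfreq(len(t), 1 / fs)

# Print the level at the fundamental and its first few harmonics.
for harmonic in range(1, 5):
    idx = np.argmin(np.abs(freqs - harmonic * f0))
    print(f"{harmonic * f0:6.0f} Hz  amplitude {spectrum[idx]:.4f}")
```

An asymmetric clip like this one adds both even and odd harmonics, which is consistent with the second and third harmonics mentioned above.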

Aside from fomenting what amounts to a revolution in oscilloscopes, advances in a-d converters will also give a powerful boost to the instrument-on-a-card concept. I expect that much of the medium-performance instrumentation produced toward the end of the decade will be in the form of cards that will fit into an instrumentation mainframe based on the VXI bus. This type of packaging, however, will be of more interest to manufacturing and test engineers–to whom size and configurability are very important–than to designers. But wherever it’s applied, the instrumentation card cage will offer lots of very neat solutions.

 


Diffusion welding at low temperatures suits the fine lines of power MOSFETs

Siemens AG, Munich, West Germany, has developed a new technique for bonding silicon power devices to molybdenum substrates. The three-step process uses relatively low temperatures, avoiding the stresses of conventional alloying processes while achieving high thermal-cycling stability and electrically stable contacts with negligible resistance.

The high temperatures used in alloying can cause warping of device surfaces during cooling because of the different thermal expansion coefficients of molybdenum and silicon, making the process unsuitable for finely wired devices such as power MOSFETs. The new process places less mechanical strain on the devices because the lower temperatures eliminate the danger of warping during cool-down. And the process is highly reproducible, yielding the same results consistently, making it ideal for production line use.

A diffusion welding technique developed at Siemens AG, Munich, West Germany, welds silicon power devices to molybdenum substrates at comparatively low temperatures. Consequently, the technique avoids the drawbacks of alloying processes, and achieves high thermal-cycling stability and electrically stable contacts that have negligible resistance.

[Figure: a power MOSFET, showing the gate (G), body (B), source (S), and drain (D) terminals; the gate is separated from the body by an insulating layer.]

Power devices generally call for a low-cost current supply, good heat-radiation characteristics, and high mechanical stability. To achieve these properties, the silicon chips are firmly attached to a substrate. What’s usually done in this chip-to-substrate attachment operation is to insert an aluminum foil between the chip and a molybdenum disk and then, in an alloying process, affix the device to a 2- to 3-mm-thick molybdenum substrate.

The high temperature, up to 700°C, encountered in this process imposes limits on its use, however. For one, different thermal expansion coefficients of molybdenum and silicon can warp device surfaces as they cool. For that reason, the alloying process can’t be applied to finely structured devices like power MOSFETs.

The new technique, developed at Siemens’ Munich-based research laboratories, takes a different tack. In the first of three steps, the silicon and molybdenum surfaces to be joined are supplied with a sinterable layer of silver, for example. In the second step, silver particles about 10 µm in diameter, suspended in a solvent, are deposited on the molybdenum. The solvent is evaporated by momentary heating, leaving a silver layer behind.

Finally, after the silicon chip is put on top of the silver layer, the silicon-silver-molybdenum sandwich is sintered for two minutes at a relatively low temperature, about 240°C, and a pressure of about 4000 N/cm².

According to Reinhold Kuhnert, head of the development team, the new technique produces a porous silicon-molybdenum connecting layer with high thermal and electrical stability, as well as thermal and electrical conductivity values comparable to those obtained with conventional high-temperature alloying processes.

Kuhnert points out that one advantage of the new diffusion-welding technique is that it puts low mechanical stress on the device, because the device doesn’t warp after cooling. Another advantage is that it’s highly reproducible, which means that the results are consistent time after time–a prerequisite for use on any production line. Moreover, careful control of the silver-deposition process compensates for several microns of substrate unevenness that may occur.

Further, in contrast to alloying processes, no silicon is consumed in diffusion welding. Also, the silicon doping level remains unaffected. Because the technique is based on solid-state reactions–there’s no liquid phase involved–the temperature stability of the chip-to-substrate connection is far higher than the temperature encountered in device fabrication. As a result, although the device is made at low process temperatures, it can withstand the high temperatures developed under surge-current loads.

The technique, which may soon go to work on the production line, isn’t limited to attaching discrete devices to substrates. In a single operation, many small elements (such as all those on a 4-in. wafer) can be contacted to their substrates.

In addition, by using suitable multilayer substrates made of materials with thermal conductivity higher than that of molybdenum, heat can be removed faster, according to Kuhnert.


“Infinite” programmable gate array makes incremental changes on the fly

Ken Austin of Pilkington Microelectronics Ltd. (PMeL), Cheshire, UK, has designed the Dynamically Programmable Logic Device (DPLD), a gate array in which both the mode of operation of its 10,000 physical gates and the layout of the interconnections among them can be changed on the fly by software. This allows the chip to emulate virtually any logic circuit, while it can also be partitioned to perform several different tasks simultaneously. The basis of the DPLD is a cell containing two circuits in a master-slave relationship.

Each of the circuits can be configured as a NAND gate, a register, or a latch. PMeL will make a specially designed CAD software package available with the chip, which it will license to semiconductor manufacturers. The chip can be produced by any CMOS process, making chip size and performance dependent on the capabilities of the semiconductor maker. Plessey Semiconductors Ltd. has already licensed the DPLD for redesign as an ASIC cell it calls an electronically reprogrammable array (ERA), which should be available in February 1990.

“An infinite number of gates on one chip” is how Ken Austin of Pilkington Microelectronics Ltd. (PMeL), Cheshire, U.K., describes his latest uncommitted logic architecture. In reality, the device he designed has just 10,000 physical gates, but how the gates operate and the layout of interconnections among them can be changed on the fly by software. As a result, the chip can emulate virtually any logic circuit.

Further, it can be partitioned to perform several different tasks at once. One of those tasks can even be to reprogram and reconfigure parts of the chip itself. Either the whole chip or just a small section of it can be altered, and when only a small section is reconfigured, the rest of the chip can remain in operation. Austin says that the ability to partition and reprogram small sections of the chip while it’s still running is what distinguishes the Dynamically Programmable Logic Device (DPLD) from other PLDs. It also substantiates his claim that it can appear to contain an infinite number of gates.

Austin expects the DPLD to wind up in a broad range of applications, from systems that operate in inaccessible or remote locations–like satellites, where it’s impractical to physically make repairs or updates–to artificial-intelligence machines. He expects most users to first apply it to prototyping conventional ASIC devices, using the chip as a way to verify–or change–their designs, because it allows for continuous revision and updating during design and operation.

The basic element of the DPLD is a cell containing two circuits that can each be configured as a 2-input NAND gate, a register, or a latch. One of the gate circuits is defined as a master, the other as a slave. The master acts as a storage register or as a routing switch to determine how the slave gate interconnects with surrounding cells.
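A purely conceptual model may help picture the cell; the sketch below is my own illustration of the idea just described, not PMeL code, and the class and mode names are invented for the example. The slave circuit behaves as a 2-input NAND gate, a register, or a latch, and reconfiguring it amounts to rewriting a small piece of configuration state, which is the software analogue of the master loading new configuration data.

```python
# Conceptual model of a DPLD-style cell (illustration only, not PMeL code).
from dataclasses import dataclass

@dataclass
class Cell:
    mode: str = "nand"   # "nand", "register", or "latch"; set from config data
    state: int = 0       # storage used in register/latch modes

    def evaluate(self, a: int, b: int = 0, clock: int = 0, enable: int = 0) -> int:
        if self.mode == "nand":              # 2-input NAND gate
            return 1 - (a & b)
        if self.mode == "register":          # capture input a on a clock pulse
            if clock:
                self.state = a
            return self.state
        if self.mode == "latch":             # transparent while enable is high
            if enable:
                self.state = a
            return self.state
        raise ValueError(f"unknown mode {self.mode!r}")

# Reconfiguring the cell on the fly is just a matter of changing its mode.
cell = Cell(mode="nand")
print(cell.evaluate(1, 1))             # -> 0
cell.mode = "register"
print(cell.evaluate(1, clock=1))       # -> 1
```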

A series of predefined tracks on the chip “visit” each cell location, and among them allow for the definition of a hierarchy of routes. The arrangement of possible connections is similar to a telephone network, in which local links connect users to switches, which in turn are connected to other switches, and all are held together with a backbone of long-distance trunk interconnections.

The basic level of connection is what Austin calls a local interconnection. It makes it possible to connect about 30 or 40 adjacent gates, configuring them into a Boolean function. Other tracks define near connections between groups of locally interconnected cells, and finally, long-distance global buses carry instructions and data to all parts of the chip (see the figure).

Each interconnection can be turned on or off using a configuration data file. As a result, the actual layout of the connections among cells varies depending on the function implemented at any given time. This variation in cell arrangements occurs in a manner that’s analogous to brain cells, says Austin.

Several methods are available for holding the controlling software. The simplest is to have the DPLD read configuration data directly from an external ROM, with the DPLD supplying the necessary addressing and control signals. This technique is useful for bootstrapping the device to an initial configuration. More flexibility can be achieved, though, by connecting an external counter to generate ROM addresses.

A more sophisticated method is to assign the job of configuration control to an external microprocessor, which selects the start address for a configuration held in ROM. Subsequent selections can implement any new functions or subfunctions as conditions and applications demand.

One interesting approach that Austin describes is to use the DPLD to change itself on the fly. For example, a portion of the chip could be configured on startup as an RS-232-C serial interface, which could then load the program data used to change other sections of the chip for specific applications. Yet another option would be to copy initial configuration data files from ROM into RAM. Then the application could modify itself as it receives data from a process being controlled.

Writing the software could be a chore, but Austin feels he has that problem covered. PMeL will have a specially designed CAD software package available. The software will take designers from capturing a schematic through to generating device-configuration data.

The software will be compatible with standard, high-level ASIC development languages, such as VHDL and EDIF 2, according to Austin, and will allow simulation that verifies functionality and timing. Using the CAD package, designers will be able to build a library of software macros that can control the DPLD as a standalone chip, or as a megacell on a semicustom device.

PMeL’s parent company, Pilkington Group plc, St. Helens, Cheshire, U.K., while the world’s largest manufacturer of glass, doesn’t plan to make the silicon wafers. As a result, Austin is negotiating with about six international semiconductor companies who want to license the DPLD. He says that the chip can be made with any CMOS process, so chip size and performance will depend on the capabilities of the semiconductor makers.

So far, only one licensee has come forward: Plessey Semiconductors Ltd., Swindon, U.K., says that it intends to adapt the DPLD design as an ASIC cell that it calls an electronically reprogrammable array (ERA). Doug Dunn, the company’s managing director, sees it as “reusable silicon” and says that he will have it ready for sale around February. Built with Plessey’s 1.2-µm CMOS process, it should clock at 40 to 50 MHz, allowing reconfiguration to be carried out in about 1 ms.


Communication among processors sustains fast, massively parallel computer

Designers at MasPar Computer Corp. in Sunnyvale, Calif., have developed two chips intended to break the communications bottleneck that often occurs in highly parallel computers. One chip simplifies and accelerates global interprocessor communications. The second chip carries 32 highly interconnected processing elements (PEs). A fully configured MasPar system can yield performance rates of 10,000 MIPS and 1,000 MFLOPS. Two independent communication schemes are built into the PE chips. A neighborhood mesh connects each PE to its eight nearest neighbors, while a multistage crossbar hierarchy allows each PE to connect to any other PE in the array. The 32 processors on each PE chip are formed from 500,000 transistors. A RISC-style load-store architecture with local cache memory keeps each PE as small as possible.

Highly parallel computers employ hundreds, even thousands, of small processors to achieve astounding computational rates. Execution rates can reach billions of operations per second. At the same time, however, the interprocessor communication needed for sending commands and transferring data can become a bottleneck when so many processors run simultaneously.

To break that bottleneck, designers at MasPar Computer Corp., Sunnyvale, Calif., developed two custom chips: One simplifies and accelerates global interprocessor communications, and the other supplies 32 highly interconnected processing elements (PEs).

The MasPar system can harness from 1024 to 16,384 processing elements and, when fully configured, deliver 10,000 MIPS (millions of instructions per second) and 1000 MFLOPS (millions of floating-point operations per second). The system employs a single-instruction, multiple-data (SIMD) architecture controlled by an array-control unit (ACU).

The ACU fetches and decodes instructions and issues control signals to all PEs. All PEs execute the same instruction stream, but each can also have local autonomy over execution, allowing for localized calculations.

To achieve the high bandwidth needed for thousands of PEs to communicate among themselves, MasPar designers built two independent communication schemes into the system. One is a neighborhood mesh, or X-net, a local interconnection that ties each PE to its eight nearest neighbors. The other is a multistage crossbar hierarchy that lets each PE connect to any other PE in the array. The X-net forms a 2D grid that wraps around east to west and north to south to form a torus-like pattern (see the figure below).
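As a small illustration of the X-net addressing idea (my own sketch, not MasPar code; the function name and grid size are invented for the example), every PE on the wrapped 2D grid has exactly eight nearest neighbors, even at the edges of the array:

```python
# Illustration of 8-neighbor addressing on a 2D torus (not MasPar code).
def xnet_neighbors(row: int, col: int, rows: int, cols: int):
    """Return the coordinates of the eight torus neighbors of (row, col)."""
    return [((row + dr) % rows, (col + dc) % cols)
            for dr in (-1, 0, 1)
            for dc in (-1, 0, 1)
            if (dr, dc) != (0, 0)]

# A corner PE of a 4 x 4 array still has eight neighbors because the grid
# wraps east-west and north-south.
print(xnet_neighbors(0, 0, rows=4, cols=4))
```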

Within each PE chip are packed 500,000 transistors that form 32 processors interconnected in a 2D grid. Multiple PE chips are interconnected in the same way that processors are within a chip.

According to Jeff Kalb, MasPar Computer’s founder and president, each PE is kept as small as possible by using a RISC-style, load-store architecture with local cache memory for each PE.

What’s more, only four paths per PE are needed to communicate in eight directions. That’s because the X-shaped crossing points of the communication paths are three-state nodes that switch the data to one of two paths. All of the interprocessor X-net connections use bit-serial communications, so just 24 pins per PE chip are required for the X-net interface.

Some computational problems don’t map well onto an X-net topology, however, and require arbitrary interconnections among PEs. To tackle that problem, a team headed by Tom Blank, director of architecture and application development, designed the multiple custom router chips so they could form a multistage interconnection network, which is somewhat like a hierarchy of crossbar switches. Each router chip has 64 datapath inputs and 64 datapath outputs, and can switch any input to any output.

When a PE chip sends data to a router, it first multiplexes the output from 16 PEs onto one outgoing router path and one incoming router path. Router paths are bit serial, so only four pins are needed on each PE chip to multiplex 32 PEs.

Once a connection is established, the router’s bidirectional data paths send or fetch data. In a 16,384-PE system, up to 1024 simultaneous connections can be established, thus giving an aggregate data rate of over 1 Gbyte/s.
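The aggregate figure is consistent with simple arithmetic (my breakdown, not MasPar's): 16,384 PEs multiplexed 16-to-1 give the 1024 router paths, so the quoted aggregate of roughly 1 Gbyte/s works out to about 1 Mbyte/s, on the order of 8 Mbits/s, per bit-serial connection once a path is established.

```latex
\[
\frac{16{,}384\ \text{PEs}}{16\ \text{PEs per path}} = 1024\ \text{paths},
\qquad
\frac{1\,\text{Gbyte/s}}{1024\ \text{paths}} \approx 1\,\text{Mbyte/s} \approx 8\,\text{Mbits/s per path}.
\]
```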

The multistage interconnection network is well matched to SIMD architectures because all connections occur with the same timing and sequence in each PE. The common timing and sequencing greatly simplifies the logic compared to a hypercube-type architecture.

In a hypercube, different path lengths can cause messages to arrive at each PE at different times, raising hard-to-solve timing considerations. In contrast, the router network in MasPar’s system keeps the message delays identical, eliminating timing problems. In addition, the router network tracks parity for all data and addresses to ensure highly reliable transfers–a critical task when thousands of processors are involved. Furthermore, each router chip includes diagnostic logic that detects open wires in the network.


Wanted: world-class designers

Any consideration of the direction of design engineering over the next ten years must start with a look back; to know where we are going, we must know where we have been. Science and technology were once regarded as the solution to all problems.

Events such as Hiroshima and Nagasaki, Three Mile Island and Chernobyl have led people not merely to doubt this, but in many cases to believe that technology is actually the cause of all our problems. The truth of the matter is that technology is an integral part of everyday life. Changes wrought by technology are more extreme and happening more quickly every day. Such massive change calls for a special class of design engineer. To cope with the rapid changes, today’s design engineer must make innovation standard practice. Further, to be successful, the design engineer must be able to read the market, to find a need and fill it. And, above all, the new breed of design engineer must be a team player, for teamwork will be essential to meet the challenges of the next decade and beyond.

The last decade of the century has begun. Instead of addressing a specific technology, which is the norm for this column, we take a longer, broader focus to consider the new demands that designers will face in the coming years. It’s unwise, though, to speculate about the next decade–facing the next millennium as we are–without looking back. In making predictions, perspective is everything–to know where you’re going, you must know where you are. It also helps to know how you got there. Such perspective is as important for design engineering as it is for hiking on trails or navigating the seas.

Where have we come from? Much of our professional heritage flows from the philosophy of Positivism, a nineteenth-century school of thought that tended to put science and technology on a pedestal. In that mode of popular thinking, all problems were subordinate to the power of reason, the scientific process, and the application of technology. In Positivism, science and technology were held up as the saviors of humanity. Religion was snubbed as a problem solver; God and mysticism were out. Scientists became the new priests; their laboratories, the new shrines. With the splitting of the atom, however, that thinking began to change.

From the moment we exploded the atom bomb, doubts have crept into our certainty about the absolute benefit of science. Consider the U.S. nuclear-energy industry after Three Mile Island and Chernobyl, a shambles with almost no public support; or the way that sophisticated medical technology has backfired into soaring health costs; or how the environment–our very source of life–faces irreversible damage under the strain of what’s called progress. Worst of all, consider the fact that, as satellites and the Space Shuttle circle the sky, forty thousand people–most of them children–die each day from chronic hunger and malnutrition. And in New York City, one of the largest cities in the world’s richest nation, 40% of the children live in poverty. Clearly technology isn’t the solution to all our problems. Indeed, some would say–albeit naively–that it’s the source.

At the same time, we can’t live without technology. It is solidly a part of our everyday life. More than that, as each day passes, technology is transforming our world, linking our lives as never before. Indeed, technology is one of the driving forces creating a global economy. Telecommunications and computers are changing business on a scale not seen since–and likely greater than–the Industrial Revolution.

The changes are radical and happening fast. Such changes, and the demands they bring, call for a new breed of design engineer: a world-class designer. Innovation, once an occasional part of the design process, is now a necessary ingredient for success. The increasing pace of innovation has shrunk the design time and life cycle of our industry’s output. As a result, to create tomorrow’s successful products, designers must develop the art of innovation, not as a sometimes thing, but as an everyday practice. Companies that don’t build the best technology into their products, as well as into their business practices, will be surpassed by those that do.

Tomorrow’s successful designers will also need to be students of the marketplace. They must help find what is missing in the market and produce it. Technical competence alone won’t be enough. The force of emerging and unexpected opportunities in a global arena, coupled with increasingly complex technology, means one thing at least: Teamwork will be essential. No individual, acting alone, will be able to decide the what, when, and how of a successful product. Only an innovative and skilled team, with each member–world-class designers included–familiar with the others’ disciplines, can even hope to ride the wave of tomorrow.