Category: Electrical and electronics industries

Models squeeze the most speed from submicron cells

Accurate circuit simulation is necessary to get the most speed from submicron chip technology. This is also true for NCR Corp’s newest cell library, which includes submicron-feature devices that rise and fall 40 percent faster than larger-feature devices. Accurate modeling techniques are needed to simulate the gate delays of the new application-specific integrated circuits (ASICs). The new VS700 line of cells is also 40 percent denser than NCR’s VS1500 cell library. Designers can simulate gate delays caused by rise and fall times down to 0.1 ns with the precise modeling technology.

Accurate circuit simulation is essential to pulling the highest possible speed from submicron chip technology. This is no less true for the latest cell library from NCR Corp., Fort Collins, Colo., which boasts submicron-feature devices that have much faster rise and fall times than their large-feature counterparts. And because gate-delay times vary with the rise and fall time, precise model technology is needed to accurately simulate the gate delays of these new ASICs.

Specifically, NCR’s new VS700 family of cells is 40% faster and denser than the company’s VS1500 cell library. These increases promise small, fast, and powerful end products. The submicron process produces features that are 0.95 µm drawn and have a 0.7-µm effective-channel length.

Simulation becomes difficult with submicron technology because second-order effects in a large-feature process assume first-order proportions at the submicron level. With this library, workstation models calculate each cell’s delay time as a function of rise and fall time and load capacitance. Accounting for load capacitance is a feature that makes the library especially valuable for improving critical-path performance.

With the precise modeling technology, designers can simulate gate delays caused by rise and fall times down to 0.1 ns (see the figure). In contrast, competing libraries model delays only down to 1- to 1.5-ns edge times–a significant difference for submicron chips.
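To make the idea concrete, here is a minimal sketch of an edge-rate- and load-dependent delay model. The linear form and every coefficient in it are illustrative assumptions, not NCR’s characterization data; a real library would use curves fitted to silicon.

def cell_delay_ns(t_edge_ns, c_load_pf,
                  t0_ns=0.15, k_edge=0.25, k_load_ns_per_pf=0.80):
    """Estimate a cell's propagation delay (ns) from its input edge rate
    and output load. All coefficients here are hypothetical."""
    return t0_ns + k_edge * t_edge_ns + k_load_ns_per_pf * c_load_pf

# A lightly loaded gate on a critical path with a fast 0.1-ns input edge...
print(cell_delay_ns(t_edge_ns=0.1, c_load_pf=0.05))
# ...versus the same gate driven by a 1.5-ns edge into a heavier load.
print(cell_delay_ns(t_edge_ns=1.5, c_load_pf=0.30))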

Lightly loaded critical paths have fast rise and fall times, and precise models accurately represent the faster delay times that result. Models are accurate enough to unmask circuit race conditions, giving designers a chance to correct them.

Another helpful feature of the new cell family’s design system is its set of high-level macros and compilers. Memory compilers, for example, are so flexible that designers can select the word width as well as the number of words. As a result, designers can configure a RAM compiler to an exact bit width instead of settling for predefined blocks that may be larger and consume more die area than needed.

Customized symbols and models for simulation are automatically generated from the designer’s workstation input; time to market is faster because there’s no waiting for the vendor to configure high-level functions. Both post-layout and prelayout routing capacitances can be simulated. As a result, chip real estate is assigned only for the amount of memory actually needed.


Automated assembly line builds power modules singly or by the hundreds

Vicor Corp, Andover, MA, uses robotic assembly and overall automation to produce power-converter modules on printed circuit boards. Each module consists of a printed circuit board with small passive, active, and magnetic components. An automatically assembled metal baseplate holds the TO-220-housed FETs and diodes. Some stations along the assembly line still require off-line mechanized handling, including coil-winding and insertion of the chokes and transformers. Automation of these functions will come later, increasing the number of robots to nine. When the automation is completed, the line will produce approximately two modules per minute.

Robotic assembly and overall automation have set one computer-controlled production line apart from others in the power-supply industry. Over the past few months, this line–which produces power-converter modules–was phased into production at Vicor Corp., Andover, Mass. It handles any mix of Vicor’s modular units, from lots of hundreds to “batches of one,” according to Patrizio Vinciarelli, founder, president, and CEO of the fast-growing firm.

Each of Vicor’s modular converters consists of a pc board that carries small active, passive, and magnetic components, plus a metal baseplate that holds the TO-220-housed FETs and diodes. The baseplate is automatically assembled in a final assembly cell, where one of the line’s six robots joins it to the pc board.

Some stations along the way still require off-line mechanized handling, including coil-winding and insertion of the chokes and transformers that form the module’s magnetic components. Automating the production and insertion of those components will come late this year. For that, the number of robots on the line will increase to nine.

When completed, the line will churn out most variations of the company’s products at a rate of two modules/min. A duplicate line, to be installed late this year, will boost the rate to four modules/min. (The first line, which stretches 155 ft. across the back of one building at the Andover headquarters, underwent extensive shakedown tests at a systems-integration firm before being installed.)

A dedicated HP 9000 minicomputer controls the line, but takes its cues from an HP 3000 business computer. Orders are entered on the business computer, which tells the HP 9000 how many units of which models to produce, and what components to select for the various lots.

Vinciarelli says that Vicor’s eventual goal is to manufacture all of its modules on the automated line with little or no operator intervention. Once each step is automated, he expects the line to process orders from start to finish in about four hours. By comparison, a manual assembly line that’s now running in tandem with the automated setup takes up to a week. The manual line will eventually be phased out.

Vicor’s wide product mix and the need to turn out high volumes precluded offshore assembly. For example, the VI-200 power-conversion module alone comes in about 300 variations.

“Our long-term strategy of high-volume and high-mix lots lends itself to automation,” says Vinciarelli. That’s why he committed $5 million to implementing the line, an investment that will roughly double with the second setup.

The first station in the line swages 40- or 80-mil pins through the small circuit board. The pins supply an electrical path when the finished module is in use. A robot positions the pc board and the pins (see the figure), then transfers the pinned board to a second station, where the board is placed on a pallet that will convey it (pins facing up) along a moving belt until the pc board is mated to its baseplate.

A solder-dispensing station is the next stop on the line. Using a video inspection system, the station checks the board’s alignment against fiducial marks. Then it dispenses solder paste onto the pads where chip components are to be placed, inspecting the dispensed solder using gray-scale techniques to assure that no bubbles are present.
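In spirit, the gray-scale check boils down to looking for out-of-range pixels over each solder deposit. The toy routine below shows the idea only; the threshold, the sampled values, and the reject criterion are all hypothetical, and the station’s actual algorithm isn’t described here.

def void_fraction(gray_pixels, dark_threshold=80):
    """Fraction of pixels over one solder deposit darker than the threshold
    (0-255 gray scale); a high fraction suggests a bubble or void."""
    dark = sum(1 for p in gray_pixels if p < dark_threshold)
    return dark / len(gray_pixels)

pad_samples = [150, 148, 152, 40, 35, 149, 151, 147]   # hypothetical readings
if void_fraction(pad_samples) > 0.05:
    print("reject: possible bubble in dispensed solder")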

Next on the line, at the surface-mounting device (SMD) station, a robot takes active and passive components from movable vehicles. The vehicles get the components from as many as 300 feeders. Each component is tested and those that pass are placed on the solder-pasted pads, as directed by the computer, using five vacuum SMD pipettes for placement. Two such SMD stations work side by side.

A second visual inspection assures that all of the components are properly placed before the pallet-borne printed-circuit module travels to the reflow-soldering station, where the solder paste is heated to the reflow temperature to complete the component-bonding process.

After the magnetic assemblies are placed on the board, the loaded assembly passes through a cleaning station before final assembly. In the final assembly cell, the separately traveling metal baseplate is automatically fitted with alumina pads. These pads serve as thermal conductors for the TO-220-housed power devices that are also mounted to the baseplate.

The final assembly cell also trims and forms the TO-220 leads. In addition, the alumina pads are cemented to the baseplate. Finally, the pc board is carefully mated to the baseplate so that the TO-220 leads feed through the pc board. A reflow-solder step follows to secure the pc board to the baseplate.

The finished module is then ready for electrical testing, epoxy encapsulation, and final testing. These three phases are accomplished automatically in the test, encapsulation, and test (TET) machine. All of the steps before the TET machine take just 15 min., including testing the TO-220-housed power FETs and diodes, and checking the entire assembly for characteristic outputs compared to what the lot order calls for.

Most of the TET machine’s processing time is for oven-curing the encapsulation material. A second TET machine will be installed in the second quarter of this year, with each machine drawing assemblies from upstream in the line as needed. A final station prints bar-code and other identification information for the module.


Too soon for deep-UV steppers

Because Perkin-Elmer Corp may be bought by Japanese interests, the situation for VLSI integrated circuit production technology in Japan and the US is becoming more interesting. Japanese manufacturers have bet on deep-UV technology for manufacturing VLSI integrated circuits, and have pursued development of deep-UV steppers, photoresists and lasers.

Meanwhile, US manufacturers’ gradual transition to I-line for submicron technology has proved unexpectedly successful. The Japanese had maintained that I-line was not practical to use. The key for US companies is to push I-line technology as far as possible before the Japanese make deep-UV equipment ready and available. Perkin-Elmer Corp, which may be bought by a Japanese firm, is a US company developing a deep-UV stepper.

Now that Perkin-Elmer Corp., Menomonee Falls, Wisc., is for sale and may be bought by Japanese interests, it makes for an interesting situation regarding VLSI IC production technology in the U.S. and Japan. While equipment manufacturers in both countries press on toward the next technology level, Japanese chip makers may have waited too long to pursue an interim step that many U.S. companies took. The result: U.S. chip makers may find themselves, for the meantime, with an unexpected technology edge.

As recently as 1985, 1-µm design rules were considered state of the art for advanced VLSI production. Optical equations predicted that 0.5 µm was the resolution limit. Today, however, manufacturing is shifting from G-line lithography, with its 436-nm exposure source, to I-line, which sports a 365-nm source.

These new frontiers in line widths for VLSI circuits are the result of polymer chemistry as well as the associated optics. Polymer chemists are now placing the limit at 0.25 µm. They’re making no promises, though, because theoretically, line widths can’t shrink below the wavelength of the exposure source, which for I-line is 365 nm, or 0.36 µm. The next level is a deep-UV exposure source at 254 nm. In theory, this could take lines and spaces down to about 0.25 µm. But it’s probably a year or two from viability.
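The arithmetic behind those numbers can be sketched with the standard Rayleigh scaling, resolution ≈ k1 x wavelength / NA. The k1 and numerical-aperture values below are assumptions chosen to make resolution track wavelength roughly one-for-one, as the text describes; they aren’t figures from any particular stepper.

def resolution_um(wavelength_nm, k1=0.6, numerical_aperture=0.6):
    """Approximate printable line width from the Rayleigh criterion."""
    return k1 * wavelength_nm / numerical_aperture / 1000.0

for name, wl in (("G-line", 436), ("I-line", 365), ("deep UV", 254)):
    print(f"{name:8s} {wl} nm -> about {resolution_um(wl):.2f} um")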

Apparently, Japanese manufacturers of VLSI ICs are looking to 0.25 µm for next-generation VLSI devices, a factor-of-four density increase compared with 0.5-µm I-line. Those companies have bet heavily on deep-UV and have hotly pursued development of deep-UV steppers, lasers, and photoresists. Meanwhile, I-line has become unexpectedly successful and gained a new life. Essentially, it was written off too soon, especially by the Japanese chip makers, who maintained that I-line was impractical because of its 0.5-µm limit. At the same time, deep-UV proved more difficult to develop than was believed.

Concurrent with their drive toward deep-UV, the Japanese tried holding their technology edge by pushing G-line lithography to its limit by improving the optics. Indeed, they took G-line farther than anyone thought possible and achieved much tighter production parameters than U.S. companies. But because U.S. companies weren’t able to push G-line as far as Japan, they promoted the more gradual transition to I-line as a viable strategy for submicron lithography. As a result, U.S. companies gained a small technology edge. The key, now, will be to get as far ahead as possible with I-line before deep-UV is ready and viable, and before the Japanese shift gears and catch up in I-line.

But Japanese manufacturers are already on the move. Nikon, San Bruno, Calif., is one Japanese-based company that got to market with an I-line stepper, recently introducing its NSR-1755i7A. Moreover, Nikon claims to have the largest exposure area available in an I-line stepper (17.5 mm square) as well as a production resolution of 0.58 µm. European companies are also joining the I-line parade. ASM Lithography, based in Tempe, Ariz., but owned by Philips, the Netherlands, rolled out its PAS 5000/50 I-line stepper and claims 0.50-µm production resolution and 0.35-µm resolution for research work. ASM also has a deep-UV machine in progress.

Among U.S. manufacturers, Perkin-Elmer is developing a deep-UV stepper. GCA, Andover, Mass., is marketing its AutoStep 200 I-line stepper and is working on a deep-UV machine. Sematech, the Austin, Texas-based consortium, is concentrating on I-line lithography, but is committed to transferring a well-characterized 0.5-µm process to its member companies by 1991.

KTI Chemicals, Sunnyvale, Calif., (a Union Carbide subsidiary) already offers its 895i I-line photoresist product. It’s betting that I-line will be the way to go because deep-UV will require a massive equipment investment. But just in case, KTI has a deep-UV photoresist in the works.


Surface mounting will sweep leaded components from market

The 1990s will see a continuation of the changeover from leaded to surface-mounted devices, which save money: their small size saves space on computer boards, and they offer greater reliability and lower handling costs. Currently, surface-mounted devices account for 50 percent of the passive components used in Japan. That figure is 13 percent in North America and 10 percent in Europe. Leadless passive-component use in North America will pass 50 percent over the next five years, and Europe will not be far behind. Surface-mount devices will continue to improve, offering ever-better tolerances, capacitance-voltage product, stability, and operation at higher frequencies.

The most important component trend in the 1990s will continue to be the worldwide changeover from leaded to surface-mounted devices. As component users get smarter, they inevitably switch from leaded components to surface-mounted devices, wherever possible, for the simple reason that they save money. Because of their small size, surface-mounted devices save space on pc boards, which cuts costs. In addition, they’re more reliable and they cost less to handle. I also expect them to become less expensive than leaded components.

Not surprisingly, Japan is the leader in the use of surface-mounted devices. Whereas only 10% of the passive components used in Europe are surface-mounted devices, over 50% of those used in Japan are. In North America, the figure is about 13%. Over the next five years, I expect that the use of leadless passive components in North America will rise past 50%, with Europe to follow suit. The overwhelming trend is toward surface mounting, and anyone who hasn’t made the transition should plan on starting soon.

For potential users of surface-mounted devices who aren’t sure how to start, there are assembly operations that do subcontracting work. Many small and medium-sized companies go to such assemblers to get their designs established, and later they bring the work inside. Large companies also use outside contractors to smooth out the peaks and valleys in their production schedules. High-speed equipment for placing surface-mounted devices is somewhat expensive, and not even a giant corporation wants to have this gear just sitting around.

As time goes on, I expect these devices to get better and to become more popular and less expensive. For the components that I’m most familiar with–tantalum capacitors–“better” takes on many definitions. It means more reliable parts with tighter tolerances and better stability; packing more CV (capacitance-voltage product) into a given volume; lowering the cost per CV; and the ability to work well at ever higher frequencies.

When I went to school, the basic electrolytic capacitor was an aluminum device with a tolerance of -20% and +50%, or even +100%. Today, we’re routinely making tantalum units with tolerances of ±5%, and ±2% can be done on special order. Tighter tolerances can be expected in the near future. These capacitors can be used for timing and waveform-shaping applications, not just for power-supply filtering and decoupling. That was simply unthinkable for an electrolytic capacitor in the past.

We’re also making specials in which the variation of capacitance with temperature and frequency is tightly controlled. That capability is opening up new applications for tantalum capacitors–both in displacing other capacitor types and in areas that never existed before, such as fiber optic communications. We can expect other types of electrolytic capacitors to also open up even more applications.

The latest tantalum electrolytics are also maintaining their capacitive nature at increasingly high frequencies. They present a lower equivalent series resistance (ESR) and remain reactive at frequencies up to about 1 MHz, a figure that will undoubtedly increase during the next decade. Therefore, they can better serve their intended function–decoupling.

A major function of the electrolytics is to filter out power-supply noise. With today’s switchers running at 100 to 500 kHz, the capacitors are often operated near the limits of their capabilities, but they continue to deliver their intended outputs.
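A simple series R-L-C model shows why ESR and frequency matter so much in this service. The capacitance, ESR, and equivalent series inductance below are purely illustrative values, not data for any particular part.

import math

def capacitor_impedance_ohms(freq_hz, c_farads, esr_ohms, esl_henries):
    """Impedance magnitude of a real capacitor modeled as a series R-L-C."""
    x_c = 1.0 / (2 * math.pi * freq_hz * c_farads)   # capacitive reactance
    x_l = 2 * math.pi * freq_hz * esl_henries        # inductive reactance
    return math.sqrt(esr_ohms**2 + (x_l - x_c)**2)

# Hypothetical 22-uF tantalum with 0.1-ohm ESR and 2-nH series inductance.
for f_hz in (100e3, 500e3, 1e6, 5e6):
    z = capacitor_impedance_ohms(f_hz, 22e-6, 0.1, 2e-9)
    print(f"{f_hz/1e3:6.0f} kHz : {z:.3f} ohm")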

Working at the edge, however, can be tricky. Applying electrolytic capacitors at high frequencies requires expert knowledge, which brings us to another important element for the next decade–service. Component makers not only need to make better components, they need to deliver them precisely when the customer wants them. They also must supply all of the required technical support. To help with the latter requirement, we’re writing and compiling a series of simple capacitor application programs that can run on a PC. Our intention is to make those programs available to users along with our catalog.

Then designers, at their leisure, can choose a part from the catalog, key in their operating parameters, and get, for example, a plot of ESR vs. frequency. The whole objective is to make it easier for people to use and understand the components.


Digital technology will govern power control

Digital technology will be the focus for power control in the 1990s. Digital control is nearly twice as efficient as linear control and offers savings in size and weight of end products. Transformers required for linear control are practically eliminated in digitally controlled off-line switchmode supplies.

Digital control of brushless motors is much more efficient than linear control. Integration of a control IC and a MOS power transistor on a single chip, called ‘smart power’, could lead to the development of a complete switchmode power supply on one chip by the end of the decade. High-voltage ICs will bring about major changes in components such as motors, relays and sensors that have remained unchanged for years. The 1990s will see the development of smarter electrical products, such as electrical outlet boxes in homes that can be programmed from a central computer, and household products with built-in temperature sensors to prevent dangerous heat levels.

Power control in the 1990s will see design emphasis shift decidedly from linear to digital technology. One reason is that digital control of power devices is about twice as efficient as linear. Second, digital techniques allow for a tremendous reduction in the size and weight of the end product. By the end of the decade, digital control will dominate–just a few isolated linear applications will remain.

Digital technology impacts power-supply design and load control. For example, the big, heavy step-down transformers required by linear supplies can be virtually eliminated in off-line switchmode supplies under digital control. A 100-W supply that fits into a desktop PC would be impossible to build using linear technology. Beyond switchmode supplies, IC technology can power chips directly from the ac line.

Control of brushless motors is made more efficient by digitally switching the motor voltage on and off rather than varying it linearly. Digital control can also produce a variable dc level or a stepwise approximation of a sine wave. Varying the power delivered to a load this way is more efficient than applying a steady ac voltage. Substantial savings in system power consumption can be achieved by applying one level of power to actuate a relay or solenoid, and then reducing the power to hold it. And because the holding power is low, the voltage applied to the device can be raised above its rating during actuation, enabling faster operation. This can increase the productivity of machinery without adding capital investment.
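The actuate-then-hold scheme can be expressed in a few lines of control code. The duty cycles and the pick time below are invented for illustration, and drive_coil stands in for whatever PWM peripheral a real controller would program.

import time

def drive_coil(duty):
    """Stand-in for programming a PWM output that drives the relay coil."""
    print(f"coil duty cycle set to {duty:.0%}")

def actuate_then_hold(pick_duty=1.0, hold_duty=0.3, pick_time_s=0.02):
    drive_coil(pick_duty)       # full power to pull the armature in quickly
    time.sleep(pick_time_s)     # just long enough for the contacts to close
    drive_coil(hold_duty)       # then back off to a low holding level

actuate_then_hold()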

An idea that’s taking hold is the integration of a control IC and MOS power transistor on one chip, a concept now called “smart power.” By the end of the 1990s, smart power will probably lead to a complete switchmode power supply contained in one IC.

At low voltage–up to about 60 V–a smart-power IC is more cost effective than an IC plus a discrete power transistor. A perfect example in bipolar technology is one of the original smart-power devices called a three-terminal regulator.

To operate at higher voltages (200 V and more), you need a mixed process–an MOS output with either a bipolar or CMOS input. Up to now, it wasn’t easy to build these smart-power structures. You need large silicon areas to isolate high- and low-voltage devices and you must use IC processing, which is more expensive than discrete processing. Moreover, you can’t use some of the processing tricks with ICs that can be employed in discrete processing.

At Power Integrations, we’ve developed a high-voltage processing technology that overcomes the aforementioned problems. We can build a high-voltage smart-power IC using the same processing steps that are utilized in a conventional IC, and the integrated power transistor is not larger than an equivalent discrete. This process finally makes high-voltage ICs for power control practical.

With this advanced high-voltage process, designers can control power directly from the ac line without the need to step down the voltage in the power section. One such application is in stepper-motor drives, which currently require a 40-V power supply that’s larger than the motor and its controller combined. The process eliminates that supply by allowing the stepper to be powered straight from the ac line voltage. Consider also a PC that controls a burglar-alarm system: designers can now get an IC that links directly from the wall outlet’s 120-V ac to the computer’s 5-V dc control logic.

Historically, new markets are created by each new innovation in semiconductors: transistors were responsible for compact radios and audio products, and microprocessors brought about the age of computing. High-voltage ICs will revolutionize the way we think about the electro-mechanical world. Such components as motors, relays, and sensors have remained the same over the years, but applying modern electronics to them will create major changes.

As we proceed into the 1990s, we’ll see smarter electrical products. For example, electrical outlet boxes in future smart homes will be programmable from a central computer. Not only will they turn on appliances and lighting fixtures, but they’ll have built-in intelligence to protect themselves against unsafe power drains from appliances. Temperature sensors will be built into common products, such as cooking utensils, to protect them from dangerous heat levels that could cause a fire. Using digital techniques to control power will lead to many innovations that we haven’t yet thought about.


Increased complexity in computers will breed advanced equipment

Widely available computing power will both create and solve problems in the 1990s. Testing problems will be created by the increasingly complex chip and board designs that computer power makes possible, but new types of computer-based test equipment will be developed to solve these problems. Test equipment for both software and hardware will see great advances due to increased computing power.

The trend toward modular designs for electronic components will be accelerated as improvements in test equipment ease the burden of testing modular components. Current ASIC verification equipment that compares results achieved in simulation with results achieved in tests of the real device and highlights differences will serve as a model for new test systems for assembled circuit boards. This equipment will allow engineers to design massively complex systems that were inconceivable before because of the impossibility of testing them. The test environment in the 1990s will see built-in self-test and external test equipment working in conjunction to improve the overall performance of electronic components.

The next decade will see the influence of the computer reach new plateaus: Every aspect of electronic engineering, from the first stages of design, through manufacturing, final test, and field service, will be affected. The widespread availability of computing power will both create and solve difficult problems. It will create testing problems through the increased complexity of the chips and boards that can be designed. Consequently, new types of application-focused test equipment will arrive to analyze those chips and boards, using computing power and communications to combat these obstacles.

More computing power will also forge great improvements in software test equipment. The equipment will include instruments to functionally verify software, measure software performance, and determine whether it was fully and correctly executed.

This software test equipment will be closely coupled to the new test equipment for hardware. Single measurement platforms will analyze any characteristic of an electronic system, from the analog characteristics to microprocessor and software analysis. This will be a true “test station,” which will be used in conjunction with the “workstations” of the future to supply a consistent design and test environment.

One consequence of these improvements will be an acceleration of the design modularity trend for almost all electronic systems. Modular design is attractive because engineers can quickly develop extremely well-focused products. But it puts a heavy burden on test engineers to ensure that software and hardware modules work properly in all of their intended environments and combinations. For example, if your product is a plug-in board for the IBM PC/AT, you want to ensure that it will work in every PC/AT that IBM has shipped and in the boxes they will produce in the future. Improvements in test equipment will ease that testing job and thus encourage this focus on modularity.

Current ASIC verification test equipment gives us a glimpse of what the future holds in this area. Such equipment is linked to a design system, from which it gains access to the test vectors that were applied to the ASIC during the simulation phase of the design process. It applies those vectors to the real device during test, compares the actual results with the simulated results, and highlights any differences.
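Stripped of the hardware, the verification step is a vector-by-vector comparison. The sketch below assumes the vectors are simple bit strings; real testers use their own formats and interfaces, which aren’t detailed here.

def compare_vectors(simulated, measured):
    """Return (vector index, differing bit positions) for every mismatch
    between simulated and captured output vectors."""
    mismatches = []
    for i, (expect, actual) in enumerate(zip(simulated, measured)):
        if expect != actual:
            bad_bits = [b for b, (e, a) in enumerate(zip(expect, actual)) if e != a]
            mismatches.append((i, bad_bits))
    return mismatches

# Hypothetical 4-bit output vectors from simulation and from the device.
sim = ["0110", "1011", "0001"]
dut = ["0110", "1001", "0001"]
print(compare_vectors(sim, dut))   # -> [(1, [2])]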

In the ’90s, that same approach will be applied to assembled circuit boards, which are even more complex than ASICs because they can hold from 6 to 20 complex ICs. Design equipment capable of full board-level simulations will be widely available, and links will be supplied between board testers and design. As a result, engineers will be free to design systems of enormous complexity–systems that previously were unthinkable because they would have been impossible to test.

Most electronic products will have some built-in self-test circuitry that can verify the equipment’s functionality but can’t make reliable time-domain measurements. External test instrumentation will always be needed to ensure that the equipment meets system specifications.

In fact, built-in self test and instrumentation will work together in the next decade. Diagnostic techniques, such as boundary scan, will give rise to test equipment that can clock the preset values into a circuit and record the outputs of the scan circuitry. This will enable rapid scan-test development, and make it possible for circuit defects to be quickly isolated.

This same instrumentation will also measure the timing margin of the circuit under test to ensure that an electronic module will work in all of the intended environments. In future manufacturing facilities, continuous tracking of the timing margins will offer an early warning of system problems, which could shut down a production line. If the system timing margins are deteriorating, individual component performance can be checked and the cause determined, while the systems are still meeting specifications.

Meeting all specifications is one measure of quality that American electronics companies are striving to achieve. Tektronix and its customers are trying to make sure that our verification of performance is rigorous–that every function in a complex IC works, that every branch of a program executes correctly, and that plug-in boards function at every clock speed they may encounter.


ASIC and FET innovations will dominate power device technology

Innovations in power devices in the 1990s will be in the areas of power application-specific integrated circuits (ASICs) and cost-effective, high-voltage field effect transistors (FETs) with low on-state resistance. Power ASICs will make it possible to integrate solid state relays (SSRs) with temperature sensors, or to put special timing or decision-making logic on an SSR.

FETs will be used in devices where standard SSRs cannot be used because of a lack of room for a heat sink or other cooling device. They will also be used wherever power dissipation is a major concern. The 1990s will also see the development of closer relationships between designers and device suppliers, opening a way to solving a number of long-standing performance problems.

As a manufacturer of solid-state and time-delay relays, I expect the most important technological innovations of the ’90s to be in the areas of ASICs and FETs. Innovations in ASICs will enable us to do things we couldn’t do before, and FET innovations will make it possible for our devices to fit into areas that were previously closed to solid-state power switches.

Equally important will be the benefits that designers will realize from the closer relationships they must develop with their device suppliers. This trend can be seen throughout the industry as electronic technology becomes more complex–and more potent. Unless designers develop these relationships, much of the available technology may not translate into products that help solve their design problems.

As power ASICs become a reality–that is, as it becomes possible to develop semiconductor devices for specific power applications at acceptable cost–manufacturers like us will be able to offer functions that were never considered before. For example, if a maker of heat pumps typically uses solid-state relays (SSRs) in combination with temperature sensors, we could possibly combine the two into one device, integrating a temperature sensor into an SSR. Or we could put special timing or decision-making logic into an SSR, if that seems sensible.

The question is: How are we going to know whether that’s a sensible thing to do? And if it is, how are we to know exactly what type of timing or decision-making circuitry to include?

There’s only one source for that information–the design engineers who use our products. Unfortunately, those designers aren’t in the habit of discussing the details of their designs with component manufacturers. The traditional way of buying power switching devices, such as SSRs, is to choose them from a catalog. Rarely do designers sit down with a relay manufacturer to discuss how, where, and why they’re using that manufacturer’s products. Yet, unless designers start doing that, the full benefits of power ASIC technology will go unrealized.

What sort of device improvements might come out of such improved communication? One improvement is a means for dealing with a common objection to the use of SSRs–when they fail, they fail closed. Users would prefer that they fail open and leave their loads unenergized.

It’s possible to build circuitry into an SSR to detect when the output switch has failed. Such circuitry would compare the state of the output with the state of the input. Then, if there were an output without an input, the circuitry would give a failure indication and also open a one-shot electromechanical device–say, a fuse–to shut the system down. The details are less important at this point than the fact that until now, such devices would have been too large and too costly to be worth serious consideration. Now we know that they can be made small enough, and we’re fairly sure they’ll be economical in the next few years.
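The detection logic itself is trivial; the hard part historically was the size and cost. A sketch of the comparison, with the fuse-firing hook left as a hypothetical callback:

def check_ssr(input_commanded_on, output_conducting, trip_one_shot):
    """Flag a failed-closed output: conduction with no input command.
    trip_one_shot is a hypothetical hook that fires the fuse-like device."""
    if output_conducting and not input_commanded_on:
        trip_one_shot()
        return "failed closed"
    return "ok"

status = check_ssr(input_commanded_on=False, output_conducting=True,
                   trip_one_shot=lambda: print("one-shot opened, load de-energized"))
print(status)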

I look forward to the advent of the cost-effective, high-voltage FET with a low on-state resistance. Presently, typical SSRs that switch ac loads utilize thyristors as their switching elements. Those devices, whether they’re back-to-back SCRs or triacs, typically have an on-state drop of 1.0 to 1.5 V, which means they dissipate some 20 to 30 W with even a moderate 20-A load.

A thermal load of 20 to 30 W can’t be ignored and requires either a heat sink or some form of cooling. The need to dissipate significant power militates against using SSRs in many applications where there simply isn’t enough room for a heat sink or other cooling apparatus.

Today, high blocking voltage and low on-state resistance are almost a contradiction in terms for FETs–at least at affordable prices. Nevertheless, I believe that high-voltage FETs with very low on-state resistances–a few milliohms–will become available at acceptable prices during the ’90s, making it possible to build high-voltage, high-current SSRs that dissipate very little power.
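The arithmetic behind that expectation is straightforward. At the 20-A load cited above, a thyristor’s roughly constant on-state drop dissipates V x I, while a FET’s conduction loss is I² x R; the on-resistance values below stand in for the hypothetical “few milliohms” anticipated here.

load_current_a = 20.0

# Thyristor-based SSR: on-state drop of 1.0 to 1.5 V gives 20 to 30 W.
for v_drop in (1.0, 1.5):
    print(f"thyristor, {v_drop:.1f}-V drop: {v_drop * load_current_a:.0f} W")

# Hypothetical high-voltage FET with a few milliohms of on-resistance.
for r_on_ohms in (0.005, 0.010):
    print(f"FET, {r_on_ohms*1000:.0f}-mohm on-resistance: "
          f"{load_current_a**2 * r_on_ohms:.1f} W")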

Those relays will be used in close quarters where SSRs can’t be used today. In addition, designers will prefer them over today’s devices even in applications where there’s sufficient room for proper cooling, simply because it’s always a good idea to minimize power dissipation.

But, in my opinion, the interesting effects of technological change in my area of interest in the ’90s will be on “how” we do business with each other, rather than on the details of “what” we do.


Resonant technology will spur power-supply design developments

Power supplies for the 1990s will require a new type of technology to become faster and smaller, like the VLSI devices they support. A new power-conversion concept, resonant technology, has twin goals: increasing the power density of supplies and improving performance by operating at higher frequencies than current pulse-width-modulation (PWM) supplies can handle.

The trend is also away from large centralized power supplies toward distributed supplies designed as modular or board-mounted types. Board-mounted supplies will need to have the same low profile as ICs, necessitating improvements in component technology.

For most of the 1970s and 80s, switching power-supply design was based mainly on pulse-width modulation (PWM) circuits. But the 1990s will require a new type of technology. Because VLSI advances make electronic systems faster and smaller, future power supplies must follow suit.

VLSI technology shrinks everything except the demand for power. Over the past five years, a new power-conversion idea called resonant technology has been under development at private companies and universities. Its aim is power supplies that are smaller and more efficient than what’s possible with PWM methods. In the long run, perhaps by the end of this decade, resonant technology and new component and circuit fabrication techniques could reduce certain types of power supplies to commodity products, much like ICs.

All resonant designs share common goals. First is to increase the power density of supplies. A second goal is to improve performance by operating at higher frequencies than is possible under PWM technology. With PWM techniques, 100-200 kHz is about the upper frequency limit. Resonant converters will operate at several MHz and above. A commercial 1-MHz resonant converter is now available.

The major trend is the move away from large centralized supplies to distributed supplies. Distributed supplies are more suitable for the smaller, faster systems resulting from the rapid advancement of VLSI technology. They’re designed as either modular supplies or board-mounted types. Increasingly decentralized supplies will be built with resonant technology.

Resonant technology and distributed power aren’t new ideas, but rather ideas whose time has come because we’re now gaining the technology to implement them. In addition to work at the Virginia Power Electronics Center at Virginia Polytechnic Institute and State University, MIT, AT&T Bell Laboratories, General Electric, and Unisys are working on resonant converters.

Whereas conventional PWM supplies for computer systems have power densities of 1 or 2 W/in.³, experimental resonant technologies are at least an order of magnitude higher. A design target at GE and at VPI’s Center is 50 W/in.³ initially, and 100 W/in.³ before the end of the decade. Bell Labs has demonstrated a resonant supply that can deliver 50 W and runs at 20 MHz. But most 50-W-output, 50-W/in.³ power converters operate from 2 to 4 MHz. Some of these types of supplies will be practical by the early 1990s.

Much of the impetus for developing high-density, 50-W supplies comes from the military. VHSIC requirements call for compact distributed supplies that mount directly on a logic board, deliver up to 50 W of 5-V power, and have the transient response needed to power logic running at 100 MHz and above. These converters must have a power density of 50 W/in.³ and operate at several MHz. The long-term objective–toward the end of the decade–is to build supplies in chip form and mount them on pc boards.

It should be no surprise that units with power densities of 50 W/in.³ must be highly efficient to reduce the amount of heat produced. A 1-MHz resonant converter should be able to operate at 90% efficiency, considerably higher than the 80% figure of today’s PWM supplies. It’s almost an axiom in the design of high-frequency resonant supplies that you don’t have a technology until you can deal with the heat.
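The heat penalty is easy to quantify: delivering a given output power at efficiency n means dissipating P_out x (1/n - 1). The comparison below uses the 50-W output level discussed above and the two efficiency figures just cited.

def dissipation_w(p_out_w, efficiency):
    """Heat a converter must shed to deliver p_out_w at a given efficiency."""
    return p_out_w * (1.0 / efficiency - 1.0)

p_out_w = 50.0
for eff in (0.80, 0.90):
    print(f"{eff:.0%} efficient: {dissipation_w(p_out_w, eff):.1f} W of heat")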

In addition to high power density, the coming generation of board-mounted supplies will have to have extremely low profiles, much like ICs. This means that improvements in component technology are necessary to flatten out the package. Magnetic elements in particular are receiving lots of attention in an attempt to reduce their size. A thin magnetic plate or substrate can have windings printed on it, and then be covered with a low-profile core to make a transformer. Multilayer pc-board techniques are already being applied to inductors and transformers. Capacitor technology is also advancing. It’s now possible to integrate many capacitors on one chip in a dual-in-line package.

Another reason to reduce magnetics to an IC-like technology is to take most of the labor out of the manufacturing process. If a transformer or inductor can be built like an IC, the process can be automated. In fact, this is the ultimate goal: manufacture a power supply just as you would make an IC, and reduce it to a commodity product.


IC packaging must undergo a facelift to meet user needs

Chip technology will continue to drive packaging technology in the 1990s. The development of multichip modules will be one of the most significant changes in packaging in the next decade. Changes in chip technology–more I/O leads on a single chip, larger die sizes, higher clock rates, and faster signal-edge rates–will contribute to the push toward multichip modules.

These modules could be built by semiconductor houses, by end users, or by third-party vendors, and all three approaches are already in use. The use of multichip modules will affect board and end-equipment manufacturers by reducing the number of interconnection levels, connectors, cables, and sockets, and by increasing the thermal-management requirements of devices.

Chip technology has been the driving force behind packaging technology for a long time. Increases in chip size, input/output pin counts, power dissipation, and speed have continually put new pressure on package developers to keep pace. This pressure will continue to grow in the future.

We have computer houses, for example, telling us that they can no longer afford signal delays that go out of a package, through a trace on a board, and back into another package. When it’s technologically and economically practical to integrate the functions onto one chip, this is the preferable route. Otherwise, the best alternative is to put the discrete chips into the same package.

There’s also concern by some customers about stresses that occur on large chips as they’re mounted in a package. For these reasons, I see the development of multichip modules to be one of the biggest changes in packaging during the next decade.

Increasing the I/O on a given chip to over a thousand leads is one of the things that will drive us into a multichip module–keeping the I/O at a chip-to-chip level rather than a coming-to-the-outside-world level. Multichip modules are going to become increasingly important to system-level performance.

But who will produce these multichip modules, which will be largely custom or semicustom? A semiconductor house could make the module and sell it to end users, or users could buy chips and make the module themselves. Or end users could hire a third party to produce the module.

Currently, all three cases are happening. A few highly upward-integrated companies (for example, some Japanese companies and IBM) are already making modules. As subsystems shrink from pc boards to multichip modules, pc-board houses will follow that market. Moreover, because semiconductor houses can make the silicon and can interconnect chips at very fine levels, I think they’re going to be much more involved in the module business.

When the interconnection is driven back inside the package, it’s done at a level that the semiconductor house is accustomed to operating with. We’re accustomed to operating with wire bonding and tape-automated bonding (TAB), and with very fine leads and lines on a wafer. So there are a number of those pieces of technology that we’re very well equipped to handle.

Use of multichip modules will affect pc-board and/or end-equipment manufacturers in several ways. The number of interconnection levels that they have in a system is likely to decrease. They may have fewer connectors, cables, and sockets. However, use of these modules could require additional thermal management, because now we’re putting a lot more power into a more confined area. Packages dissipating up to 90 W will significantly affect the thermal aspect of their system.

By driving interconnections inside the package, you can partition the system to reduce the I/O count of the multichip module. We’ve already heard from some customers who have partitioned their system, reducing the number of I/O pins for the module to less than that used by one of the chips in the module.

Some major changes in chip technology are expected by the mid-90s: Chips will go from 380 mils to around 800 mils on a side. The maximum number of I/O leads will go from about 360 to about 1250. High-power chips that handle 30 W will triple that number. Clock rates will go from 30 MHz to 300 MHz, and signal-edge rates from the 600-ps range to 300 ps. All of these modifications will place big demands on package design, which will lead to the widespread use of multichip modules.

To increase the I/O while keeping the size of the package to a minimum, leads must get closer together. The standard pitch of 100 mils will soon go to 50 mils, and by the mid-90s, it could drop to 30 mils.

There will be an increase in lead arrays, either pin-grid or pad-grid, for surface mounting. The pad-grid array carrier’s spacing is currently as low as 50 mils, and will fall to 30 mils in the future.

At 30 mils, techniques must be developed to solder pad arrays. The proprietary technique that we’re presently using at 50 mils may not work as well at 30 mils. If it does, it’s going to take an additional degree of sophistication. Routing into these fine spaces will also require additional levels in the pc boards. But with surface-mounting, these levels would be connected with vias, rather than through-holes, which require much more area.


Advanced ADCs in digital oscilloscopes will enhance design awareness

The continuing trend toward faster, cheaper, more accurate analog-to-digital converters will top the list of advances in instrumentation in the 1990s. Digitizing oscilloscopes may be the most significant product, combining the speed of analog oscilloscopes with the ability to simultaneously examine high-speed transient events on multiple channels. By enabling engineers to get a clearer picture of what is going on in high-speed digital systems, digitizing oscilloscopes will speed design implementation. A side effect of the oscilloscope changeover will be the gradual disappearance of logic analyzers, which are essentially one-bit digitizing oscilloscopes. Advances in a-d converters will also spur development of the instrument-on-a-card concept, which envisions medium-performance instrumentation in the form of cards that fit in a VXI-bus-based mainframe.

The most exciting technological advance in instrumentation in the ’90s will be the continuing improvement in analog-to-digital converters, as they continue to get faster, less expensive, and more precise. As a result, a whole host of changes in instrumentation may be anticipated. Probably the most important change will be the changeover in oscilloscopes from analog to digital in all but the lowest-cost, lowest-performance areas.

The new breed of digitizing oscilloscopes will have all of the speed that was formerly available only from analog instruments. But they’ll also offer a capability that average designers could never access before: the ability to examine high-speed transient events on multiple channels simultaneously.

Not surprisingly, that capability is exactly what’s needed to analyze the behavior of the types of circuits that average designers will be building. I say “not surprisingly” because those new circuits will be based on the same advances in semiconductors that will make the new oscilloscopes possible. To be specific, those circuits will be very fast digital designs, such as systems based on 50-MHz microprocessors. These systems will have sub-nanosecond rise times and will require timing resolutions of tens of picoseconds.

Today, if engineers have to troubleshoot a design of that type, much of what they do consists of guesswork. They use their knowledge and experience to guess what the problem might be, fix it based on that assumption, and check to see whether the problem went away. Eventually they fix the problem, but rarely do they know whether the problem was what was originally thought. The high-speed single-shot events that cause the problems simply can’t be seen on today’s conventional instruments.

The new breed of reasonably-priced oscilloscopes will give average engineers the ability to really understand what’s happening with their high-speed digital systems. I can’t say precisely what effect that capability will have on design methodologies, but it’s sure to be considerable. It will certainly enable engineers to implement designs more quickly. In other words, it’s a productivity tool.

It can also be a learning aid. When you truly understand what went wrong and why your fix worked, you may have learned something that will give you a hint of what to avoid in the future.

Another interesting outcome of the oscilloscope changeover will be the disappearance of the logic analyzer as a separate piece of instrumentation. A logic analyzer, after all, is merely a one-bit digitizing oscilloscope. As the price of a-d converters continues to drop, a point will be reached where it makes sense to build, say, 80-channel digitizing oscilloscopes. With such instruments, there’d be no need for a simple logic analyzer.

As a result of advances in oscilloscopes, I expect substantial changes in microwave engineering design methodologies. Today, most microwave design work is done in the frequency domain because the dominant measurement tools available to microwave designers–spectrum analyzers and network analyzers–work in this domain.

But given a choice, a majority of engineers would prefer to work mostly in the time domain. At lower frequencies, where there has long been a choice, both design and analysis are done in the time domain because it’s easier to spot most problems there. For example, if your amplifier is clipping a sine wave, the clipping can easily be spotted on an oscilloscope. In the frequency domain, however, all you’d see are some second and third harmonics, which may also be caused by crossover distortion or some other nonlinearity. With a spectrum analyzer, you know there’s a problem and you know when it’s been solved, but you don’t necessarily know what you did to fix it.
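A quick numerical experiment makes the point. The snippet below clips the positive peaks of a sine wave and measures the first few harmonics with a naive single-bin DFT; the clipping level is arbitrary, and the code is only meant to show how the distortion surfaces as low-level second and third harmonics rather than as an obvious flattened peak.

import math

def harmonic_amplitude(samples, k):
    """Amplitude of the k-th harmonic of a one-cycle record (single DFT bin)."""
    n = len(samples)
    re = sum(s * math.cos(2 * math.pi * k * i / n) for i, s in enumerate(samples))
    im = sum(s * math.sin(2 * math.pi * k * i / n) for i, s in enumerate(samples))
    return 2.0 * math.hypot(re, im) / n

n = 1024
clean = [math.sin(2 * math.pi * i / n) for i in range(n)]
clipped = [min(s, 0.7) for s in clean]          # amplifier clips the positive peaks

for k in (1, 2, 3):
    print(f"harmonic {k}: clean {harmonic_amplitude(clean, k):.3f}, "
          f"clipped {harmonic_amplitude(clipped, k):.3f}")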

Aside from fomenting what amounts to a revolution in oscilloscopes, advances in a-d converters will also give a powerful boost to the instrument-on-a-card concept. I expect that much of the medium-performance instrumentation produced toward the end of the decade will be in the form of cards that will fit into an instrumentation mainframe based on the VXI bus. This type of packaging, however, will be of more interest to manufacturing and test engineers–to whom size and configurability are very important–than to designers. But wherever it’s applied, the instrumentation card cage will offer lots of very neat solutions.