
Digital Product Development

September 6, 2008

By Mark A. Burgess

We’ve come a long way since the days of the paper drawing. Advances in computing power are multiplying the capabilities of design engineers. The world of digital product development is changing, and changing fast. Just look at the engineering workplace today and compare it to, say, 20 years ago. It’s quite different, and in 20 years, it will be quite different from today.

Probably the most significant driver of change in the engineering world is the rapid advance in information technology. The pace of IT advances in all fields has been staggering. In my mind, the most amazing example of this is seen in greeting cards, of all things. A greeting card that has flashing lights or plays a catchy melody contains a small microprocessor. In fact, the greeting card computer is significantly more powerful than some of the first room-size vacuum tube machines. The most astonishing thing is that the microprocessor in the greeting card never breaks, and it basically runs until we tire of the melody and throw it away. Disposable computers are now commonplace.

So how do these advances in IT manifest themselves in the engineering world? Consider mechanical design. Twenty years ago, the standard medium for mechanical design was the paper engineering drawing. It fundamentally consists of a picture, usually depicting an object in three views, and some callouts describing critical or unique features of the design. Anyone who has tried to build a part from a three-view drawing knows there can be multiple ways to interpret the information. These different interpretations and ambiguities can lead to engineering errors.

As design systems developed, a major breakthrough came in the form of 3-D wire-frames. Wire-frames gave a precise geometric location of a few discrete points on the surface of a part. These advances represented a paradigm shift in capability. Instead of using a picture and callouts to represent the designer’s ideas, the wire-frames modeled the part itself, albeit in a crude way. Although the modeling allowed for greater accuracy and reduced ambiguity, there were still differing ways to interpret the designer’s intent. Interpretive errors became fewer, but still persisted.

Information technology has advanced at a tremendous pace. Developers of design systems have exploited this capability with sophisticated mathematics and today’s systems are capable of producing very complex designs in much higher definition than ever before.

In today’s world, 3-D parametric modeling of solids is commonplace. While a wire-frame provides a precise mathematical definition of a limited number of points on the surface of a part, a 3-D solid precisely and uniquely identifies every point that makes up a part.

The difference is quite profound. Solids provide an absolute and complete description of the designer’s concept. Since every point on a part is modeled, the level of ambiguity, and hence of interpretive error, has been driven to zero: not merely reduced, but eliminated.

Of course, one could argue that designers are human and they will make mistakes. That is true, but the cause of error is not the ambiguity of the rendering.

THREE MODELS

A useful approach in examining trends in digital product development is to look at three different categories: how technologies are modeled, how geometry is modeled, and how processes are modeled. Technology modeling considers the scientific principles on which a design is based. Geometry modeling represents the physical product, and process modeling examines the manufacture or use of a product.

There was a time when these different kinds of modeling, if they were done at all, were carried out by hand on paper and progressed through trial-and-error refinement of designs. Computers made more calculations possible, but for the most part, technology, geometry, and processes were modeled separately.

As computer-based systems grew in sophistication and availability, single-discipline analysis gave way to multidisciplinary analysis. In the aerospace industry, for example, computational fluid dynamics are now coupled with guidance, navigation, and control analyses, or with structural analyses. In satellite design, orbit mechanics are coupled with radiation thermal analysis.

Each advance in IT provided more computing power, more memory, and more throughput. It was exploited by engineers and computer scientists with more advanced numerical algorithms to provide more grid points in CFD, more finite elements in structural analysis, and more realistic physics in analysis and design.

Advances in geometric modeling have made it possible to represent 3-D solids in minute detail. In aerospace, vehicles containing millions of parts are reviewed in digital form as complete integrated products. Process modeling, which began with the study of a single manufacturing process, eventually gave way to complete factory flow simulations.

Advances in IT enabled crossing the boundaries among technology, geometry, and process modeling with integrated computer-aided engineering, computer-aided design, and process planning. Current trends have now extended process modeling throughout the integrated supply chain and the extended enterprise.

Historically, engineering has developed through a single discipline by focusing on:

* Accurate representation of the physical product, or

* Accurate representation of the physics involved, or

* Accurate representation of the product’s functional characteristics.

As computing power has advanced, engineers have been able to consider more-realistic representations of a design’s geometry, physics, and function. The state of the art has proceeded from one of considering simplified interactions to one in which engineers can achieve simulations of considerably high fidelity, in general, so long as they consider only two of the three representations at a time.

In a notional sense, we can graphically show how engineering has developed in these spaces. For example, consider a two-axis chart in which the x axis indicates increasingly accurate detail in the dominant physics of a particular design, while the y axis represents the fidelity in which the physical product is modeled. The lower left-hand corner would represent simplified approximations both of the dominating physics and of the geometry. Moving up the y axis on the chart indicates higher fidelity in the representation of the physical product, while movement along the horizontal axis graphs a truer representation of the physics.

In the field of aerodynamics, for example, one would place simple parametric methods applied to conic sections in the lower left-hand corner. One doesn’t have to search the archives too deeply to find many examples of this form of aerodynamics analysis. In fact, until the 1970s or 1980s, parametric analyses of simplified geometry represented by conic sections were the prevalent design techniques throughout the aerospace industry.

As one moves to the right along the horizontal axis, one would find closed-form solutions of the Navier-Stokes equations in the lower right-hand corner. Closed-form solutions of the Navier-Stokes equations provide highly accurate representation of the physics of aerodynamics, but exist for only a very few simple geometries.

Taking the Navier-Stokes equations and assuming no viscosity and irrotational flow results in the linear partial-differential equations known as the potential flow equations. This simplified physics lends itself well to relatively simple solutions of complex geometries through finite-element numerical methods. Analyzing sophisticated, realistic geometry with potential flow, linear approximations to the Navier-Stokes equations, would be placed in the upper left-hand section of our notional plot. Finally, full-viscous 3-D Navier-Stokes solvers (CFD) applied to realistic geometries would place us in the upper right-hand corner.
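The simplification described above can be made concrete. For incompressible flow, the standard textbook chain of assumptions reduces the Navier-Stokes system to a single linear equation for a velocity potential:

```latex
% Irrotationality lets the velocity field be written as the
% gradient of a scalar potential phi:
\nabla \times \mathbf{V} = 0
  \;\Longrightarrow\;
\mathbf{V} = \nabla \phi .

% Substituting into the incompressible continuity equation
% yields Laplace's equation, the (linear) potential flow equation:
\nabla \cdot \mathbf{V} = 0
  \;\Longrightarrow\;
\nabla^{2} \phi = 0 .
```

Because Laplace’s equation is linear, solutions for complex geometries can be built up by superposition, which is what makes the finite-element approach mentioned above tractable.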

Another issue to consider is the function of a product. To do this, let the y axis represent not the physical product, but the product’s functional characteristics, that is, how the product behaves in use.

Consider the design of a hydraulic system. Handbook methods would occupy the lower left-hand corner of the graph. Component bench testing would be in the lower right-hand corner. Integrated design and simplified analyses would place us in the upper left-hand corner. Finally, full-scale functional development and verification rigs would occupy the upper right-hand corner of the plot. An example is the “iron bird,” a common tool in aerospace: a functionally and geometrically accurate, ground-based rig designed to verify the operational characteristics of an aircraft’s systems.

CHOOSE TWO

To date, with few exceptions, mechanical engineering analysis and design methods development have been confined to one of three coordinate planes of this sort. That is, they have focused on either the physics and geometry, the geometry and functionality, or the functionality and physics, with engineering judgment being the primary means to determine the appropriateness of each investigation.

One can begin to see how trends in digital product development may unfold by combining the two 2-D plots into a single 3-D chart. By doing so, we are able to examine simultaneously the dominant physics, the physical product, and its function, all in accurate detail. More than 40 years ago, Gordon Moore, who would go on to become a founder of Intel Corp., wrote a paper in which he predicted that the number of transistors on an integrated circuit would double every 18 months or so. Loosely translated, that means computer power doubles every 18 months. The prediction, which has become known as Moore’s Law, proved accurate. This characteristic growth in computing power is exactly what has enabled engineers and computer scientists to move from the lower left-hand corner of our graphs to the upper right-hand corner.

Now consider, if you will, 20 years from now. If Gordon Moore’s prediction continues to hold true, then the computer capability available to mechanical engineers will be almost 10,000 times what it is today. That’s four orders of magnitude.
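The “four orders of magnitude” figure follows directly from the doubling arithmetic. A quick sanity check (using the article’s 18-month doubling period, which is one common reading of Moore’s Law):

```python
# Growth in computing capability over 20 years,
# assuming one doubling every 18 months (1.5 years).
years = 20
doubling_period = 1.5            # years per doubling
doublings = years / doubling_period   # about 13.3 doublings
growth = 2 ** doublings

print(f"{growth:,.0f}x")  # roughly 10,000x, i.e. four orders of magnitude
```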

With 10,000 times increased computing capability, what will our design and analysis systems be like? It’s really hard to predict. It’s quite possible that mechanical engineering design and analysis will begin to develop methods and techniques that truly embrace simultaneous design in this tri-variant space.

In aircraft design, the use of 3-D full-viscous Navier-Stokes CFD is commonplace. It’s not unusual to perform analysis on complicated geometry such as landing gear. That would be an example of analysis that remains confined to the physics and geometry plane.

A more interesting analysis would follow the landing gear through the execution of its function: deployment. If one were to analyze the landing gear in real time, starting from the clean flow associated with the gear in a stowed position, proceeding to the dynamics of the landing gear door opening and the associated rush of air into the wheel well, and following the rotation of the gear into the full force of the oncoming free stream, then one starts to appreciate the simultaneous analysis of function, geometry, and physics. The coup de grâce would be to accurately predict the vibration and acoustic impact experienced by the person in seat 14D while the gear is deployed.

Is this possible today? Not really. But in 20 years, with four orders of magnitude more computer horsepower and associated advances in numerical methods and the computer sciences, who knows?

If one were to speculate on how the world of mechanical engineering will evolve in the next 20 years, it might be useful to return to the technology, geometry, and process modeling taxonomy. In this author’s opinion, sophisticated, seamless multidisciplinary design and analysis will take new directions, including the embedding of intelligence within individual parts and products.

Early examples of this intelligence can be seen in today’s design systems. Designers routinely use parametric and relational design systems to allow parts and design components to morph into new forms as designs evolve. To do so saves significant engineering labor.
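The relational idea can be sketched in a few lines of code. The sketch below is illustrative only: the class, the dimensions, and the sizing relations are all invented for the example, and no real CAD system’s API is implied. The key mechanism is that dependent dimensions are computed from driving parameters rather than stored, so a change to one driver ripples through the whole part automatically.

```python
# Minimal sketch of parametric/relational design (hypothetical names
# and sizing rules; not any specific CAD system's data model).

class ParametricFin:
    """Toy vertical-fin model whose internal structure is derived
    from its aerodynamic contour parameters."""

    def __init__(self, root_chord_m: float, span_m: float):
        # Driving parameters: the only stored state.
        self.root_chord_m = root_chord_m
        self.span_m = span_m

    # Dependent dimensions are computed, never stored, so they
    # "morph" automatically whenever a driving parameter changes.
    @property
    def spar_depth_m(self) -> float:
        return 0.12 * self.root_chord_m        # assumed 12% thickness rule

    @property
    def skin_thickness_mm(self) -> float:
        return 1.5 + 0.4 * self.span_m         # assumed sizing relation

    @property
    def stringer_count(self) -> int:
        return max(4, round(self.span_m * 2))  # assumed spacing rule


fin = ParametricFin(root_chord_m=5.0, span_m=9.0)
print(fin.spar_depth_m, fin.skin_thickness_mm, fin.stringer_count)

# A design change to the contour ripples through the structure
# with no manual rework of the dependent dimensions:
fin.root_chord_m = 6.0
print(fin.spar_depth_m)
```

In a real relational design system the dependency graph spans thousands of features, but the principle is the same: plan the relations up front, and the computer propagates design changes.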

On the Boeing 787 Dreamliner, for example, designers used sophisticated relational design concepts to connect the aerodynamic contours of the vertical fin to skin thickness, stringer sizes, webs, frames, and spars. By planning how the design components relate to one another, the designers then unleashed the full potential of the computer to assist in design changes. In the end, complex assemblies of components semi-automatically redesigned themselves when faced with evolving requirements. Design updates were accomplished in a matter of weeks with a handful of engineers; those updates, on previous designs, required hundreds of engineers and several months of work.

Although the Boeing 787 example is a powerful demonstration of the capability of today’s design systems, it still required the planning and intervention of thoughtful engineers. From this point, however, it’s only a short reach to predict that as design systems evolve in the future, they will not require this level of human planning or interaction at all. For example, if we have part definitions that adjust, morph, and redesign themselves today, how far off can it be to have self-designing parts? That is, design systems that, given a rough definition, create a part completely autonomously.

Extending the thought process further, one can imagine a time in the future when an engineer merely tells the computer that a load has to be carried between two adjacent parts. Then the design system would define a series of parts capable of transferring such a load, sort through the options, select the best candidate, perform detailed design and analysis, and finally deliver the part to the engineer; hence, self-defined and self-designed parts.

If, in the future, design systems are capable of conceiving and designing parts on their own, why couldn’t they design a whole product? Considering where we’ve been, is it unrealistic to envision a time when design systems will be able to understand fundamental needs, requirements, and problems, then design and manufacture complete products?

For one final thought, the words of the great American philosopher Yogi Berra come to mind: “It’s tough to make predictions, especially about the future.” Whatever the future holds for us, I’m sure it’s going to be quite interesting.

The Boeing 787 Dreamliner factory in Everett, Wash. Complex assemblies semi-automatically redesigned themselves when they were faced with evolving requirements.

The fifth Boeing 787 Dreamliner undergoing final assembly. By planning how the design components relate to one another, designers enlisted the power of computers to assist in design changes.

Mark A. Burgess is chief engineer of Boeing Phantom Works, the central R&D unit of the Boeing Co.

Copyright American Society of Mechanical Engineers Aug 2008

(c) 2008 Mechanical Engineering. Provided by ProQuest LLC. All rights Reserved.
