Finite element (FE) models have proven useful to explore stress distributions generated by various shapes of dental implants [1], the impact energy that vehicles can withstand in rollover events [2], and material stacking arrangements in the structural design of a composite wind turbine blade [3]. As such, optimization techniques have been used to efficiently search for the design with the best performance. Here, "performance" is defined as the design criterion used to evaluate the extent to which the system analyzed meets its requirements.

While it is important for these optimization techniques to identify designs that optimize performance, it has also been acknowledged that the design should be robust to uncertainties encountered both in the operating conditions of the system and in the development of the numerical model. For example, in the structural design of buildings, uncertainties can arise from differences between what is considered in design and what is experienced in service, such as idealized load cases compared to in-service loads and deviations of the manufactured structure from design specifications [4]. The basic motivation for robust design is to improve the quality of a product by ensuring that target performance requirements are met even when the design parameters vary away from their best-case specifications [5].

Most techniques developed for simulation-based robust design optimize input parameters to determine the design that minimizes the standard deviation of performance while producing its most desirable mean value [6]. The most straightforward way to estimate statistics, such as the mean and standard deviation, is through Monte Carlo sampling of the probability distribution law of the input parameters. Clearly, Monte Carlo simulations can be computationally expensive. In addition, these input probability distributions often need to be assumed or approximated. To mitigate these limitations and, specifically, the need for unnecessary assumptions, one alternative is to evaluate the degradation of performance at bounded levels of uncertainty as the design deviates from its nominal setting. This is the approach adopted in this work, proposed in the context of info-gap decision theory (IGDT). IGDT defines a convenient framework, in which the sources of uncertainty can be described probabilistically or not, to explore the trade-offs between performance and robustness [7].

Approaches to robust design can be computationally expensive due to the need to evaluate the model at multiple settings around its nominal definition. For high-dimensional and/or computationally expensive models, it is useful to screen, or remove from the analysis, parameters that have low influence on the model output. A powerful tool is to combine a design-of-experiments (DOE) with a screening analysis and, possibly, surrogate modeling [8]. The size of a DOE often scales exponentially, or faster, as the number of model parameters increases. For example, a full-factorial DOE defines discrete values, or "levels," for each parameter and evaluates all possible combinations of parameter levels. Statistical effect screening, such as analysis of variance, can be used to determine the parameters that most significantly change the numerical predictions. For large DOEs, parallel computing is a convenient resource to decrease the time-to-solution of the entire DOE [9].
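To make the growth of a full-factorial DOE and a simple main-effect screening concrete, the following sketch enumerates all combinations of parameter levels and ranks parameters by the spread of their level-averaged responses. The three-parameter function f, the number of levels, and the bounds are hypothetical placeholders, not quantities taken from this chapter; in a production study f would be replaced by the FE simulation and the runs could be distributed across processors.

```python
import itertools
import numpy as np

# Hypothetical three-parameter model used only to illustrate the workflow;
# in practice f(x) would be a finite element or other numerical simulation.
def f(x1, x2, x3):
    return x1 ** 2 + 0.5 * x1 * x2 + 0.1 * np.sin(x3)

levels = np.linspace(-1.0, 1.0, 5)               # 5 discrete levels per parameter
doe = list(itertools.product(levels, repeat=3))  # full factorial: 5**3 = 125 runs
y = np.array([f(*run) for run in doe])
doe = np.array(doe)

# Simple main-effect screening: for each parameter, compare the spread of the
# level-averaged responses. A larger spread flags a more influential parameter.
for j in range(doe.shape[1]):
    level_means = [y[doe[:, j] == lv].mean() for lv in levels]
    print(f"parameter {j + 1}: main-effect range = {max(level_means) - min(level_means):.3f}")
```

With 5 levels and 3 parameters the DOE already requires 5^3 = 125 runs, and the count grows exponentially with every additional parameter, which motivates the screening and parallel-computing strategies cited above.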
Another alternative is to pursue frameworks that require smaller numbers of model evaluations to understand the predictions. One such method is the Morris one-at-a-time (OAT) sensitivity analysis, which provides an effect screening whose DOE scales linearly as the number of model parameters increases; a minimal sketch of this screening is given at the end of this section. For high-dimensional problems that might depend on several dozen parameters, combining the Morris-based screening with parallel computing offers a real potential for significant computational savings.

One issue with statistical effect screening for robust design is that parameters identified as influential at the nominal design are not guaranteed to remain influential as the uncertain variables are varied away from their nominal settings. As discussed further in Sect. 14.2, this is particularly true in the case of multiple performance criteria, where the compliance of the design pursued is evaluated against several different metrics that must each meet their own requirement. For example, automotive design involves multiple performance criteria, which typically include fuel efficiency, capacity, reliability, cost, maneuverability, vehicle weight, and driving comfort [10]. When screening the input parameters of numerical models, it is especially important to ensure that the parameters found to be most significant are robustly influential, rather than simply influential. Robustness means, here, that the results of the sensitivity analysis should remain unchanged by potential interactions between the set of input parameters analyzed and other algorithms or variables that, while they may be uncertain, are not the direct focus of the analysis.

Another issue is that sensitivity analysis often makes no formal distinction between design parameters and calibration variables. Design parameters are defined, here, as dimensions of the design space controlled by the analyst, while calibration variables arise from environmental conditions or are introduced by modeling choices. For example, the design parameters of an aerodynamic simulation of a flight test include the flow velocity and pitch angle of the object tested. Calibration variables, on the other hand, might include the air temperature and variables such as the artificial viscosity settings of the fluid dynamics solver and the structural damping coefficients of a finite element model. The first one (air temperature) is a calibration variable because its value is highly uncertain in the real-world application. The others (artificial viscosity and structural damping) are calibration variables because they originate from specific modeling choices. While a screening analysis seeks to identify which of the design parameters (flow velocity or pitch angle) most influences the model predictions, it needs to do so in a manner that is robust to our ignorance of the calibration variables. Our point of view is that design-parameter uncertainty should be treated differently from uncertainty originating from calibration variables, despite the fact that interactions between the two sets might change the predictions. A different treatment is justified because the two types of uncertainty are fundamentally different. Design parameters are uncertain because many choices are possible; however, once a design is selected, they become known exactly. Uncertainty originating from calibration variables, on the other hand, remains even after the design is selected.
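As a rough illustration of how a Morris-style one-at-a-time screening keeps the number of runs linear in the number of parameters, the sketch below evaluates random trajectories that perturb a single parameter per step and accumulates the resulting elementary effects. The toy model, step size, and number of trajectories are illustrative assumptions rather than settings used in this chapter.

```python
import numpy as np

rng = np.random.default_rng(0)

def model(x):
    # Hypothetical model for illustration; x holds d parameters in [0, 1].
    return x[0] ** 2 + 2.0 * x[1] + 0.1 * x[2] * x[3]

d, n_trajectories, delta = 4, 10, 0.25
effects = [[] for _ in range(d)]

# Each trajectory starts from a random base point and perturbs one parameter
# at a time, yielding one elementary effect per parameter per trajectory.
for _ in range(n_trajectories):
    x = rng.uniform(0.0, 0.75, size=d)   # leave room for the +delta step
    y0 = model(x)
    for j in rng.permutation(d):
        x_new = x.copy()
        x_new[j] += delta
        y1 = model(x_new)
        effects[j].append((y1 - y0) / delta)
        x, y0 = x_new, y1                # continue the trajectory from the new point

# Morris screening statistics: mu* (mean absolute effect) and sigma.
for j in range(d):
    ee = np.array(effects[j])
    print(f"x{j + 1}: mu* = {np.abs(ee).mean():.3f}, sigma = {ee.std():.3f}")
```

Each trajectory costs d + 1 model evaluations, so screening d parameters with r trajectories requires r(d + 1) runs, in contrast to the exponential growth of a full-factorial DOE.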