Model Validation and Uncertainty Quantification, Volume 3

Conference Proceedings of the Society for Experimental Mechanics Series

Model Validation and Uncertainty Quantification, Volume 3
Proceedings of the 32nd IMAC, A Conference and Exposition on Structural Dynamics, 2014

H. Sezer Atamturktur • Babak Moaveni • Costas Papadimitriou • Tyler Schoenherr (Editors)

River Publishers

Conference Proceedings of the Society for Experimental Mechanics Series
Series Editor: Tom Proulx, Society for Experimental Mechanics, Inc., Bethel, CT, USA


Published, sold and distributed by:
River Publishers, Broagervej 10, 9260 Gistrup, Denmark
www.riverpublishers.com

ISBN 978-87-7004-891-0 (eBook)
Conference Proceedings of the Society for Experimental Mechanics, an imprint of River Publishers
© The Society for Experimental Mechanics, Inc. 2014

This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, or reproduction in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Preface

Model Validation and Uncertainty Quantification, Volume 3 represents one of the eight volumes of technical papers presented at the 32nd IMAC, A Conference and Exposition on Structural Dynamics, 2014, organized by the Society for Experimental Mechanics and held in Orlando, Florida, February 3–6, 2014. The full proceedings also include volumes on Dynamics of Coupled Structures; Nonlinear Dynamics; Dynamics of Civil Structures; Structural Health Monitoring; Special Topics in Structural Dynamics; Topics in Modal Analysis I; and Topics in Modal Analysis II.

Each collection presents early findings from experimental and computational investigations on an important area within structural dynamics. Model Validation and Uncertainty Quantification (MVUQ) is one of these areas. Modeling and simulation are routinely implemented to predict the behavior of complex dynamical systems. These tools powerfully unite theoretical foundations, numerical models, and experimental data, including the associated uncertainties and errors. MVUQ research entails the development of methods, with associated metrics, for the rigorous testing of model prediction accuracy and robustness, considering all relevant sources of uncertainty and error, through systematic comparisons against experimental observations.

The organizers would like to thank the authors, presenters, session organizers, and session chairs for their participation in this track.

H. Sezer Atamturktur, Clemson, SC, USA
Babak Moaveni, Medford, MA, USA
Costas Papadimitriou, Volos, Greece
Tyler Schoenherr, Albuquerque, NM, USA

Contents

1 Calibration of System Parameters Under Model Uncertainty (Ghina N. Absi and Sankaran Mahadevan), p. 1
2 On the Aggregation and Extrapolation of Uncertainty from Component to System Level Models (Angel Urbina, Richard G. Hills, and Adam C. Hetzler), p. 11
3 Validation of Strongly Coupled Models: A Framework for Resource Allocation (S. Atamturktur and G. Stevens), p. 25
4 Fatigue Monitoring in Metallic Structures Using Vibration Measurements (D.C. Papadioti, D. Giagopoulos, and C. Papadimitriou), p. 33
5 Uncertainty Propagation in Experimental Modal Analysis (Bart Peeters, Mahmoud El-Kafafy, Patrick Guillaume, and Herman Van der Auweraer), p. 41
6 Quantification of Prediction Bounds Caused by Model Form Uncertainty (Lindsey M. Gonzales, Thomas M. Hall, Kendra L. Van Buren, Steven R. Anton, and François M. Hemez), p. 53
7 Composite Fuselage Impact Testing and Simulation: A Model Calibration Exercise (Lucas G. Horta, Mercedes C. Reaves, Karen E. Jackson, Martin S. Annett, and Justin D. Littell), p. 67
8 Noise Sensitivity Evaluation of Autoregressive Features Extracted from Structure Vibration (Ruigen Yao and Shamim N. Pakzad), p. 79
9 Uncertainty Quantification and Integration in Multi-level Problems (Chenzhao Li and Sankaran Mahadevan), p. 89
10 Reliability Quantification of High-Speed Naval Vessels Based on SHM Data (Mohamed Soliman and Dan M. Frangopol), p. 99
11 Structural Identification Using Response Measurements Under Base Excitation (Suparno Mukhopadhyay, Raimondo Betti, and Hilmi Luş), p. 107
12 Bayesian FE Model Updating in the Presence of Modeling Errors (Iman Behmanesh and Babak Moaveni), p. 119
13 Maintenance Planning Under Uncertainties Using a Continuous-State POMDP Framework (Roland Schöbi and Eleni Chatzi), p. 135
14 Achieving Robust Design Through Statistical Effect Screening (Kendra L. Van Buren and François M. Hemez), p. 145
15 Automated Modal Parameter Extraction and Statistical Analysis of the New Carquinez Bridge Response to Ambient Excitations (Yilan Zhang, Moritz Häckell, Jerome P. Lynch, and Raimund Rolfes), p. 161
16 Evaluation of a Time Reversal Method with Dynamic Time Warping Matching Function for Human Fall Detection Using Structural Vibrations (Ramin Madarshahian, Juan M. Caicedo, and Diego Arocha Zambrana), p. 171
17 Uncertainty Quantification of Identified Modal Parameters Using the Fisher Information Criterion (Eric M. Hernandez and Néstor R. Polanco), p. 177
18 Modal Parameter Uncertainty Quantification Using PCR Implementation with SMIT (Minwoo Chang and Shamim N. Pakzad), p. 185
19 Excitation Related Uncertainty in Ambient Vibration Testing of Bridges (Kirk A. Grimmelsman, James D. Lindsey, Ryan T. Dufour, and James T. Norris), p. 195
20 Experiment-Based Validation and Uncertainty Quantification of Coupled Multi-Scale Plasticity Models (Garrison Stevens, Sez Atamturktur, Ricardo Lebensohn, and George Kaschner), p. 203
21 Model Calibration and Uncertainty of A600 Wind Turbine Blades (Anders T. Johansson, Andreas Linderholt, and Thomas Abrahamsson), p. 215
22 Validation Assessment for Joint Problem Using an Energy Dissipation Model (Adam Hetzler, Angel Urbina, and Richard Hills), p. 229
23 A Bayesian Damage Prognosis Approach Applied to Bearing Failure (Zhu Mao and Michael Todd), p. 237
24 Sensitivity Analysis of Beams Controlled by Shunted Piezoelectric Transducers (G. Matten, M. Collet, S. Cogan, and E. Sadoulet-Reboul), p. 243
25 A Principal Component Analysis (PCA) Decomposition Based Validation Metric for Use with Full Field Measurement Situations (Randall Allemang, Michael Spottswood, and Thomas Eason), p. 249
26 FEM Calibration with FRF Damping Equalization (Thomas J.S. Abrahamsson and Daniel C. Kammer), p. 265
27 Evaluating Initial Model for Dynamic Model Updating: Criteria and Application (Qingguo Fei and Dong Jiang), p. 279
28 Evaluating Convergence of Reduced Order Models Using Nonlinear Normal Modes (Robert J. Kuether, Matthew R. Brake, and Mathew S. Allen), p. 287
29 Approximate Bayesian Computation for Finite Element Model Updating (F.A. DiazDelaO, H.M. Gomes, and J.E. Mottershead), p. 301
30 An Efficient Method for the Quantification of the Frequency Domain Statistical Properties of Short Response Time Series of Dynamic Systems (M. Brehm and A. Deraemaeker), p. 307
31 Quantifying Uncertainty in Modal Parameters Estimated Using Higher Order Time Domain Algorithms (S. Chauhan), p. 317
32 Testing and Model Correlation of a Plexiplate with a Water Boundary Condition (C.D. Van Karsen, J.P. De Clerck, and S. Dhabe), p. 327
33 Detection of Stress-Stiffening Effect on Automotive Components (Elvio Bonisoli, Gabriele Marcuccio, and Stefano Tornincasa), p. 335
34 Approach to Evaluate Uncertainty in Passive and Active Vibration Reduction (Roland Platz, Serge Ondoua, Georg C. Enss, and Tobias Melz), p. 345
35 Project-Oriented Validation on a Cantilever Beam Under Vibration Active Control (Qintao Guo, Rongmei Chen, and Zhou Jin), p. 353
36 Inferring Structural Variability Using Modal Analysis in a Bayesian Framework (H.M. Gomes, F.A. DiazDelaO, and J.E. Mottershead), p. 363
37 Including SN-Curve Uncertainty in Fatigue Reliability Analyses of Wind Turbines (Jennifer M. Rinker and Henri P. Gavin), p. 375
38 Robust Design of Notching Profiles Under Epistemic Model Uncertainties (Fabien Maugan, Scott Cogan, Emmanuel Foltête, Fabrice Buffe, and Gaëtan Kerschen), p. 383
39 Optimal Selection of Calibration and Validation Test Samples Under Uncertainty (Joshua Mullins, Chenzhao Li, Sankaran Mahadevan, and Angel Urbina), p. 391
40 Uncertainty Quantification in Experimental Structural Dynamics Identification of Composite Material Structures (Marcin Luczak, Bart Peeters, Maciej Kahsin, Simone Manzato, and Kim Branner), p. 403
41 Analysis of Numerical Errors in Strongly Coupled Numerical Models (Ismail Farajpour and Sez Atamturktur), p. 409
42 Robust Expansion of Experimental Mode Shapes Under Epistemic Uncertainties (A. Kuczkowiak, S. Cogan, M. Ouisse, E. Foltête, and M. Corus), p. 419

Chapter 1
Calibration of System Parameters Under Model Uncertainty

Ghina N. Absi and Sankaran Mahadevan

Abstract This paper investigates the quantification of errors and uncertainty in Bayesian calibration of structural dynamics computational models, as affected by choices in model fidelity. Since Bayesian calibration uses an MCMC approach for sampling the updated distributions, using a high-fidelity model for calibration can be prohibitively expensive. On the other hand, use of a low-fidelity model could lead to significant error in calibration and prediction. This paper investigates model parameter calibration with a low-fidelity model corrected using higher fidelity simulations, and the trade-off between accuracy and computational effort. Different fidelity models may have different mesh resolutions, physics assumptions, boundary conditions, etc. The application problem is a curved panel located in the vicinity of a hypersonic aircraft engine, subjected to acoustic, thermal, and aerodynamic loads. Two models are used to calibrate the damping characteristics of the panel, a frequency response analysis and a full time history analysis, and the trade-off between accuracy and computational effort is examined.

Keywords Multi-fidelity • Bayesian calibration • Hypersonic vehicle • Model uncertainty • Surrogate model

1.1 Introduction

Finite element analysis (FEA) is commonly used in the dynamic simulation of engineering structures with complicated geometry under complex loading conditions. However, construction of an FEA model is subjective, affected by the engineer's assumptions. High-fidelity dynamic finite element analysis of complex systems is quite expensive, and considerable research has been done to construct cheaper and simpler surrogate models, equivalent static models, or reduced-order models. However, errors and uncertainties increase with the reduction in model fidelity.
Two principal qualities are desired in a functional finite element model [1]: (1) physical significance: the model should correctly represent how the mass, stiffness, and damping are distributed; and (2) correctness: the model should accurately predict the response observed in dynamic experiments. Mottershead and Friswell [2] group modeling errors into three types: (a) model form errors (due to assumptions regarding the underlying physics of the problem, especially with strongly nonlinear behavior), (b) model parameter uncertainty (due to assumptions regarding boundary conditions, parameter distributions, and other simplifications), and (c) model order errors (arising from the discretization of complex geometry and loading). Computationally efficient models have to be cheap enough to allow multiple repetitions of the simulations, but must also retain the valuable information available from more rigorous, expensive models. Many studies have concentrated on developing reduced-order models (ROMs) to replace full-fidelity dynamic analyses. McEwan [3, 4] proposed the Implicit Condensation (IC) method, which includes the nonlinear terms of the equation of motion but restricts the nonlinear function to cubic stiffness terms and can only predict the displacements covered by the bending modes. Other methods explicitly include additional equations to calculate the membrane displacements in the ROM, such as those by Rizzi et al. [5, 6] and Mignolet et al. [7, 8].

Another direction of research to deal with inaccurate FEA is model updating. Direct updating methods have been proposed that compute closed-form solutions for the global stiffness and mass matrices using the structural equations of motion [9, 10]. The generated matrices are faithful to modal analyses, but do not always maintain structural connectivity, and may not always retain physical significance. Iterative methods study changes in the model parameterization to evaluate the type and location of the erroneous parameters, and try to minimize the difference between the experimental data and the FE model predictions by varying these parameters [11]. In these cases, the mathematical model used in the updating can sometimes be ill-conditioned. Liang and Mahadevan [12] replaced the expensive computational model with a surrogate model using Polynomial Chaos Expansion (PCE), and developed a systematic error quantification methodology. This approach facilitates running inexpensive simulations while taking into account the resulting errors and uncertainties.

This paper considers Bayesian calibration of model parameters with experimental data, using a corrected low-fidelity model. It uses the information available in high-fidelity simulations to adjust the low-fidelity model for better agreement with experimental results. The aim is to reduce the uncertainty in the parameters ahead of the final calibration (especially when only a small number of experimental data points is available), thus providing a stronger prior that takes into account additional high-fidelity information that may be missing in the low-fidelity model (such as nonlinearity, additional variables, etc.). In the same way, information available in the low-fidelity model but missing from the high-fidelity one is retained, such as modeling of the full domain (i.e., the full time history vs. a small segment). The corrected low-fidelity model remains inexpensive, yet becomes more accurate. This is particularly useful when limited experimental data are available and a reliable yet fast model is essential.

G.N. Absi • S. Mahadevan, Department of Civil and Environmental Engineering, Vanderbilt University, Nashville, TN, USA; e-mail: sankaran.mahadevan@vanderbilt.edu

H.S. Atamturktur et al. (eds.), Model Validation and Uncertainty Quantification, Volume 3: Proceedings of the 32nd IMAC, A Conference and Exposition on Structural Dynamics, 2014, Conference Proceedings of the Society for Experimental Mechanics Series, DOI 10.1007/978-3-319-04552-8_1, © The Society for Experimental Mechanics, Inc. 2014
1.2 Multi-fidelity Calibration Method

In this section, the concept of calibration is extended from a simple calibration using experimental data with a single model to a sequential one, combining models of different fidelities. Assume that we have two models G1(X) and G2(X) of different fidelities. In order to achieve computational efficiency in Bayesian calibration, each model is replaced by a fast-running surrogate model, S1(X) and S2(X) respectively. In order to build these surrogate models, the original models need to be evaluated multiple times. Since higher fidelity models run much slower than lower fidelity ones, time constraints will allow only a smaller number of higher fidelity simulations.

1.2.1 Surrogate Model: Polynomial Chaos Expansion

Many surrogate modeling techniques have been developed in the literature, such as linear/quadratic polynomial-based response surfaces [13], artificial neural networks [14], support vector machines (SVM) [15], polynomial chaos expansion (PCE) [16], and Gaussian process (GP) interpolation (or Kriging) [17]. In this paper, a PCE is used to replace the original models for inexpensive sampling in the calibration process. PCE is a regression-based surrogate model that represents the output of a model with a series expansion in terms of standard random variables (SRVs). Consider a model $y = f(\mathbf{x})$, where $\mathbf{x} = \{x_1, x_2, \ldots, x_k\}^T$ is a vector of input random variables. We construct a PCE to replace $f(\mathbf{x})$ using $n$ multi-dimensional Hermite polynomials as basis functions:

$$y = \sum_{j=0}^{n} \theta_j \varphi_j(\boldsymbol{\xi}) = \boldsymbol{\theta}^T \boldsymbol{\varphi}(\boldsymbol{\xi}) + \varepsilon_{\mathrm{surr}} \qquad (1.1)$$

where $\boldsymbol{\xi}$ is a vector of independent standard normal random variables corresponding to the original input $\mathbf{x}$ [18], $\boldsymbol{\varphi}(\cdot) = \{\varphi_0(\cdot), \varphi_1(\cdot), \ldots, \varphi_n(\cdot)\}^T$ are the Hermite polynomial basis functions, and $\boldsymbol{\theta} = \{\theta_0, \theta_1, \ldots, \theta_n\}^T$ are the corresponding coefficients, which can be estimated by the least squares method. A collocation point method can be used to efficiently select training points where the original model is evaluated [19]. Suppose that $m$ training points $(\boldsymbol{\xi}_i, y_i)$, $i = 1, 2, \ldots, m$, are available. Under the Gauss-Markov assumption [20], the surrogate model error $\varepsilon_{\mathrm{surr}}$ asymptotically follows a normal distribution with zero mean and variance given by

$$\operatorname{Var}[\varepsilon_{\mathrm{surr}}] \approx s^2 + s^2\, \boldsymbol{\varphi}(\boldsymbol{\xi})^T \left(\boldsymbol{\Phi}^T \boldsymbol{\Phi}\right)^{-1} \boldsymbol{\varphi}(\boldsymbol{\xi}) \qquad (1.2)$$

where $\boldsymbol{\Phi} = \{\boldsymbol{\varphi}(\boldsymbol{\xi}_1), \boldsymbol{\varphi}(\boldsymbol{\xi}_2), \ldots, \boldsymbol{\varphi}(\boldsymbol{\xi}_m)\}^T$ and $s^2 = \frac{1}{m-n} \sum_{i=1}^{m} \left(y_i - \boldsymbol{\theta}^T \boldsymbol{\varphi}(\boldsymbol{\xi}_i)\right)^2$.
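To make the least-squares PCE construction above concrete, here is a minimal one-dimensional sketch in pure Python. The quadratic stand-in model, the training points, and the degree are hypothetical choices for illustration only; the paper's actual models G1 and G2 are ANSYS simulations.

```python
def hermite_basis(xi, n=2):
    # probabilists' Hermite polynomials He_0..He_n via the recurrence
    # He_{k+1}(xi) = xi * He_k(xi) - k * He_{k-1}(xi)
    he = [1.0, xi]
    for k in range(1, n):
        he.append(xi * he[k] - k * he[k - 1])
    return he[:n + 1]

def solve(A, b):
    # Gaussian elimination with partial pivoting (fine for tiny systems)
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def fit_pce(model, xis, n=2):
    # least-squares estimate of theta in y = theta^T phi(xi) (Eq. 1.1),
    # via the normal equations (Phi^T Phi) theta = Phi^T y
    Phi = [hermite_basis(xi, n) for xi in xis]
    y = [model(xi) for xi in xis]
    A = [[sum(p[a] * p[b] for p in Phi) for b in range(n + 1)] for a in range(n + 1)]
    rhs = [sum(Phi[i][a] * y[i] for i in range(len(xis))) for a in range(n + 1)]
    return solve(A, rhs)

# hypothetical "expensive" model; a quadratic is reproduced exactly by a degree-2 PCE
def model(xi):
    return 1.0 + 2.0 * xi + 0.5 * xi ** 2

theta = fit_pce(model, [-2.0, -1.0, 0.0, 1.0, 2.0])

def surrogate(xi):
    return sum(t * h for t, h in zip(theta, hermite_basis(xi)))
```

Because the stand-in model is quadratic, the degree-2 expansion reproduces it exactly; for the real simulations the least-squares residual is what feeds the error variance of Eq. 1.2.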

Fig. 1.1 Simple implementation of slice sampling

1.2.2 Uncertainty Quantification

The experimental output Y_obs is expressed in terms of the experimental input X, the errors, and the model output as follows:

$$Y_{\mathrm{obs}} + \varepsilon_{\mathrm{obs}} = G(X + \varepsilon_{\mathrm{in}};\, \theta) + \varepsilon_{\mathrm{surr}} + \varepsilon_{\mathrm{mf}} \qquad (1.3)$$

where
– ε_in: input experimental error
– ε_mf: model form error (calculated automatically within the code)
– ε_surr: surrogate model error
– ε_obs: output experimental error

In order to obtain accurate calibration results, these errors need to be included in the calibration [21]. The experimental measurement errors ε_in and ε_obs are represented as random variables with known (or assumed) distributions. The surrogate model error ε_surr reflects the uncertainty introduced by replacing the original model with a response surface model, as shown in Eq. 1.2. The model form error ε_mf is calibrated along with the model parameters using experimental data, following Eq. 1.3.

1.2.3 Bayesian Calibration

Three approaches are available for calibration: least squares, maximum likelihood, and Bayesian calibration. This paper uses Bayesian calibration since it is the most comprehensive approach, allowing uncertainty quantification of the calibration result. Bayesian calibration is based on Bayes' theorem:

$$f_X(x \mid D) \propto f_X(x)\, L(x) \qquad (1.4)$$

where f_X(x | D) is the posterior distribution of the variable X after calibration using the data D, f_X(x) is the prior distribution of X (assumed by the user), and L(x) is the likelihood function (the probability of observing the data D, given a calibration parameter value). Sampling from the posterior is done using a Markov Chain Monte Carlo (MCMC) algorithm. Several algorithms are available for MCMC sampling: Metropolis-Hastings [22], Gibbs sampling [23], slice sampling [24], etc. Slice sampling is used in the numerical example to evaluate Eq. 1.4. It is based on the observation that to sample a random variable, one can sample uniformly from the region under its PDF. The simplest implementation (for a univariate distribution, without the need to reject any points) consists of first sampling a random value y between 0 and the maximum PDF value y_max, and then sampling x from the slice under the PDF, as shown in Fig. 1.1. Slice sampling requires many evaluations of the model used. Low-fidelity models are fast but inaccurate; high-fidelity models are more reliable but time-consuming. Therefore, we propose correcting the low-fidelity model first with high-fidelity simulations, and using the corrected model for calibration with experimental data.
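A minimal sketch of the slice idea just described, for a univariate target on a bounded support. The Gaussian-shaped target and the bounds are illustrative, not the production sampler used with the panel models; note that slice sampling only needs the target up to a normalising constant.

```python
import math
import random

def slice_sampler(pdf, x0, support, n_samples, seed=0):
    # univariate slice sampler: draw y uniformly under pdf(x), then draw x
    # uniformly from the horizontal slice {x : pdf(x) > y} by rejection
    rng = random.Random(seed)
    a, b = support
    x, out = x0, []
    for _ in range(n_samples):
        y = rng.uniform(0.0, pdf(x))   # vertical step: point under the curve
        while True:                    # horizontal step: uniform on the slice
            xp = rng.uniform(a, b)
            if pdf(xp) > y:
                x = xp
                break
        out.append(x)
    return out

# unnormalised standard-normal target, truncated to [-5, 5]
def target(x):
    return math.exp(-0.5 * x * x)

samples = slice_sampler(target, 0.0, (-5.0, 5.0), 5000)
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
```

The sample mean and variance should be close to 0 and 1 respectively, since the truncation at ±5 is negligible for a standard normal.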

Fig. 1.2 Variation of ε_surr and ε_mf with model fidelity

1.2.4 Validation

Model validation is used to determine the degree of agreement of a model prediction with observed experimental data, and to compare multiple models to determine which one is better supported by the data. Many methods have been proposed that include parametric uncertainty [25]. For two models (denoted by M_i and M_j) with prior probabilities of acceptance P(M_i) and P(M_j), the relative posterior probabilities can be computed using Bayes' rule [26, 27]:

$$\frac{P(M_i \mid \mathrm{Observation})}{P(M_j \mid \mathrm{Observation})} = \frac{P(\mathrm{Observation} \mid M_i)}{P(\mathrm{Observation} \mid M_j)} \cdot \frac{P(M_i)}{P(M_j)} \qquad (1.5)$$

The likelihood ratio $P(\mathrm{Observation} \mid M_i) / P(\mathrm{Observation} \mid M_j)$ is referred to as the "Bayes factor", and is used as the metric to assess the data support for model M_i relative to model M_j. If the Bayes factor is greater than 1.0, it can be concluded that model M_i is more supported by the data.

1.2.5 Multi-fidelity Implementation

Let N1 and N2 denote the number of simulations available for the original models G1 and G2 respectively, uniformly distributed over the problem domain. These realizations are used to build the corresponding surrogate models. Because of time and budget constraints, we assume that the higher the fidelity of the model, the lower the number of simulations available, i.e., N2 < N1. This results in a surrogate model error for G2 larger than that for G1. However, since G2 is of higher fidelity than G1, the model form error in G1 is larger than that in G2, i.e., ε_mf(1) > ε_mf(2). Figure 1.2 shows a notional diagram of how the surrogate model error and model form error might vary with the fidelity of the model. The proposed approach avoids the need to build a surrogate model for the high-fidelity simulations, and thus avoids the high surrogate model error that comes with it. It uses the available simulations of the high-fidelity model to correct the low-fidelity surrogate model, and uses the latter in the calibration process. The multi-fidelity calibration algorithm is as follows:

(i) Run the low-fidelity (G1) and high-fidelity (G2) models to obtain N1 and N2 sets of outputs, respectively.
(ii) Build S1, the surrogate model replacing G1. In this step, the variance of S1 is calculated to account for the surrogate model error.
(iii) Define the priors of the calibration parameters and of the discrepancy between the models, D2,1.
(iv) Calibrate the parameters of the low-fidelity model, as well as the discrepancy, with the high-fidelity simulations.

(v) Set the corrected low-fidelity model as LF_corr = LF + D′2,1, where D′2,1 is the posterior of D2,1 obtained from step (iv).
(vi) Assume a prior distribution for the model form error ε_mf.
(vii) Re-calibrate the dynamics model parameters along with ε_mf using the available experimental data, taking the posteriors of the dynamics model parameters from step (iv) as priors (D′2,1 is fixed in this step based on the result of step (v)).

This approach allows the use of information from both fidelity models, with minimal surrogate model error.

1.3 Numerical Example

1.3.1 Problem Description

The application problem is a hypersonic airplane fuselage panel located next to the engine, subjected to dynamic acoustic loading (AL). The panel is curved, as shown in Fig. 1.3, and is modeled using the FEA software ANSYS. The strain at seven different locations on the panel is recorded. The damping, which is to be calibrated, is modeled as frictional damping over a width of 1 in. around the perimeter of the panel, and as material damping over the rest of the panel area. The boundary fixity is also a calibration variable, described by a fixity ratio FR = (length of plate perimeter fixed)/(total boundary length) (see Fig. 1.4). Two models of different fidelities were considered:

– Model 1: a power spectral density analysis, which consists of a linear combination of mode shape effects
– Model 2: a full transient analysis in which the acoustic loading is applied as a dynamic time history input

Model 1 and Model 2 have the same finite element mesh; the materials, boundary conditions, and mesh resolution are also identical. They differ in the application of loads, the analysis method, and the output type. In Model 1, the input acoustic load applied is the Welch power spectral density (PSD) [28] of the experimental 140 dB acoustic load. The PSD describes how the power of a signal or time series is distributed over the different frequencies.

Fig. 1.3 Curved panel dimensions

For a signal X(t), it is calculated as follows:

$$S_{xx}(\omega) = \lim_{T \to \infty} \frac{E\left[\left|F_{X_T}(\omega)\right|^2\right]}{2T} \qquad (1.6)$$

where E[·] is the expected value and $F_{X_T}(\omega) = \int_{-\infty}^{\infty} X_T(t)\, e^{-j\omega t}\, dt$ is the Fourier transform of X(t). The PSD is calculated with the entire 140 dB signal, for the full duration of 60 s. However, it is not unique to the signal it is calculated from, because the phase component is discarded: for different phase relationships, different time-domain signals fit the same PSD. Also, the spectrum analysis in ANSYS is a linear combination of the mode effects on the structure. Finally, even though Model 1 is inexpensive to run (one simulation takes about 9.5 min to complete), it does not allow the user to impose initial conditions on the structure, such as the initial stress resulting from fixing the plate on the test rig (a uniform load has a PSD of 0, and will not contribute to the RMS strain output). Model 2, in contrast, is a full transient dynamic analysis of the panel. The model run is quite time-consuming, allowing only 0.2 s of the input signal (of the full 60 s of data available) to be simulated; each simulation takes 5.5 h (35 times slower than Model 1). However, Model 2 does allow the addition of initial stress to the model, and preliminary test runs have shown that a uniform initial load on the panel is consistent with the experimental observation (see Fig. 1.5).

Fig. 1.4 Example of FR = 1 and FR = 0.5
Fig. 1.5 Model form error posteriors
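Eq. 1.6 is estimated in practice by Welch's method: averaging modified periodograms of overlapping, windowed segments. Below is a pure-Python sketch with a hypothetical 10 Hz test tone; the paper applies the method to the measured 140 dB load, presumably via a library implementation, and a real code would use an FFT rather than the naive DFT shown here.

```python
import cmath
import math

def welch_psd(x, fs, seg_len, overlap=0.5):
    # average of Hann-windowed periodograms over overlapping segments
    step = max(1, int(seg_len * (1.0 - overlap)))
    win = [0.5 - 0.5 * math.cos(2.0 * math.pi * n / (seg_len - 1)) for n in range(seg_len)]
    norm = fs * sum(w * w for w in win)   # density normalisation (one-sided up to a factor of 2)
    n_bins = seg_len // 2 + 1
    psd, n_seg = [0.0] * n_bins, 0
    for start in range(0, len(x) - seg_len + 1, step):
        seg = [x[start + n] * win[n] for n in range(seg_len)]
        for k in range(n_bins):           # naive DFT; an FFT would be used in practice
            X = sum(seg[n] * cmath.exp(-2j * math.pi * k * n / seg_len) for n in range(seg_len))
            psd[k] += abs(X) ** 2 / norm
        n_seg += 1
    freqs = [k * fs / seg_len for k in range(n_bins)]
    return freqs, [p / n_seg for p in psd]

# hypothetical test signal: a 10 Hz sine sampled at 128 Hz; the PSD peaks at the 10 Hz bin
fs, f0 = 128.0, 10.0
signal = [math.sin(2.0 * math.pi * f0 * n / fs) for n in range(512)]
freqs, psd = welch_psd(signal, fs, seg_len=128)
peak_freq = freqs[max(range(len(psd)), key=psd.__getitem__)]
```

Because only |X|² enters the estimate, the phase is discarded, which is exactly why different time histories can share one PSD, as noted above.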

Fig. 1.6 Frictional damping posteriors

The hypothesis of this paper is that initial correction of the low-fidelity model with the higher fidelity model allows the use of information from both in the final calibration of the system parameters: the full length of the signal in the form of the PSD, and the transient behavior as well as the initial conditions incorporated in the full dynamic analysis.

1.3.2 Results

Four calibration results are compared in this section: the low-fidelity model calibrated with experimental data (LF_exp), the high-fidelity model calibrated with experimental data (HF_exp), the low-fidelity model calibrated with the high-fidelity simulations (LF_HF), and the corrected low-fidelity model calibrated with the experimental data (LFcorr_exp). The following priors are assumed for the calibration parameters:

– ε_mf ~ U[−10⁻¹, 10⁻¹]
– FR_DC ~ U[10⁻⁴, 10⁻²] (frictional damping coefficient)
– MT_DC ~ U[5×10⁻⁷, 10⁻⁴] (material damping coefficient)
– FR ~ U[0.7, 1] (fixity ratio)
– ε_obs ~ N(0, σ_obs), where the prior of σ_obs follows a Jeffreys distribution [29] with bounds [10⁻⁵, 10⁻²]: with the mean fixed at zero, the prior of σ_obs is taken as π(σ) ∝ 1/σ

The low-fidelity model was run 40 times, whereas the high-fidelity model was run nine times. Only one set of experimental data is available (one observation at each strain gage location). Four strain gage outputs are used for calibration, and a fifth is used to compute the Bayes factor (likelihood ratio) to compare the results of the different calibration options. The following figures show the posteriors of the model form error (Fig. 1.5), the frictional (Fig. 1.6) and material (Fig. 1.7) damping, and the fixity ratio (Fig. 1.8).
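For a single validation observation with additive Gaussian measurement error and equal prior model probabilities, the Bayes factor of Eq. 1.5 reduces to a ratio of likelihoods. A hypothetical numerical sketch follows; the observation, predictions, and error standard deviation are invented values, not the paper's strain-gage data.

```python
import math

def gaussian_likelihood(y_obs, y_pred, sigma):
    # P(observation | model) under additive N(0, sigma^2) measurement error
    z = (y_obs - y_pred) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2.0 * math.pi))

def bayes_factor(y_obs, pred_i, pred_j, sigma):
    # likelihood ratio of Eq. 1.5; > 1 means the data support model i over model j
    return gaussian_likelihood(y_obs, pred_i, sigma) / gaussian_likelihood(y_obs, pred_j, sigma)

# model i predicts closer to the observation than model j, so the factor exceeds 1
bf = bayes_factor(1.00, pred_i=0.95, pred_j=0.80, sigma=0.1)
```

The Gaussian normalising constants cancel in the ratio, so only the squared standardized residuals matter; here the factor is exp(0.5·(4 − 0.25)) ≈ 6.5 in favour of model i.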
Using the fifth (validation) data set, the likelihood ratios among the three calibration options were computed as

LF : LFcorr : HF = 1 : 1.31 : 1.29

1.3.3 Discussion

The main difference between Models 1 and 2, besides the analysis type, is the initial stress added in Model 2 in the form of a distributed uniform load. Preliminary testing of Model 2 with and without initial stress showed that, without a uniform load added to the panel, the strain amplitude was much lower than the experimental output, as seen in Fig. 1.9.

Fig. 1.7 Material damping posteriors
Fig. 1.8 Fixity ratio posteriors
Fig. 1.9 Preliminary model testing: strain signal for SG1 (experimental output vs. the model with and without initial stress)

The calibration results show the effect of ignoring the initial stress. The posteriors from the calibration with Model 1 (the low-fidelity model) underestimate the damping and the fixity ratio, in an attempt to recover the energy under the PSD curve (the strain RMS), which is directly proportional to the signal amplitude. This loss of accuracy also appears in the model form error, which is much greater in magnitude than for the other models. Not only were the results in favor of using the corrected low-fidelity model in the calibration process, but its convergence was also faster than both the low-fidelity and high-fidelity calibrations with experimental data. Slice sampling was run until 8,000 samples of the parameter posterior distributions were accepted. The corrected low-fidelity model needed 25,284 samples to accept 8,000, whereas using the high-fidelity model alone for calibration needed 40,352 samples, and using the low-fidelity model alone needed 95,327 samples. Furthermore, the likelihood ratio calculated in Sect. 1.3.2 shows that the calibration result of the corrected low-fidelity model is almost as acceptable as that of the high-fidelity model. When this likelihood comparison is considered along with the computational efficiency of the two-step approach, the proposed methodology offers a reasonable trade-off between accuracy and computational effort.

1.4 Conclusion

This paper investigated the use of multi-fidelity models in the calibration of model parameters. Time-consuming high-fidelity simulations were used to correct an inexpensive low-fidelity model, and the corrected low-fidelity model was used for parameter calibration with experimental data. This efficient two-step method allows the fusion of information from both model fidelities, and is particularly useful when only a small number of experimental data points is available.
Future work needs to investigate the efficacy of this approach when more than two models are available, and to find a systematic, quantitative way of using the available information. The calibration becomes more complicated when the damping is load-dependent, and future work needs to account for this dependence. The extension to multiple physics, with the application of thermal and aerodynamic loads, also needs to be studied.

Acknowledgement The research reported in this paper was supported by funds from the Air Force Office of Scientific Research (Project Manager: Dr. Fariba Fahroo) through a subcontract to Vextec Corporation (Investigators: Dr. Robert Tryon, Dr. Animesh Dey). The support is gratefully acknowledged. The computations in the numerical example were partly conducted using the resources at the Advanced Computing Center for Research and Education (ACCRE) at Vanderbilt University.

Chapter 2
On the Aggregation and Extrapolation of Uncertainty from Component to System Level Models

Angel Urbina, Richard G. Hills, and Adam C. Hetzler

Abstract The use of computational models to simulate the behavior of complex mechanical systems is ubiquitous in many high consequence applications such as aerospace systems. Results from these simulations are used, among other things, to inform decisions regarding system reliability and margin assessment. In order to properly support these decisions, uncertainty needs to be accounted for. To this end, it is necessary to identify, quantify and propagate the different sources of uncertainty related to these modeling efforts. Sources of uncertainty include: (1) modeling assumptions and approximations, (2) solution convergence, (3) differences between model predictions and experiments, (4) physical variability, (5) the coupling of various components and (6) unknown unknowns. An additional aspect of the problem is the limited information available at the full system level in the application space. This is offset, in some instances, by information on individual components at testable conditions. In this paper, we focus on the quantification of uncertainty due to differences between model predictions and experiments, and present a technique to aggregate and propagate uncertainty from the component level to the full system in the application space. A numerical example based on a structural dynamics application is used to demonstrate the technique.

Keywords Aggregation • Extrapolation • Uncertainty propagation

Nomenclature
F Force
fn Frequency of interest
m Mass of test article
— Instantaneous damping

2.1 Introduction

The reliability of high consequence systems, such as aerospace components, has traditionally been established by testing individual systems and verifying that their performance is within some acceptable limits.
Full scale testing is currently not feasible for some systems under actual use environments; however, some limited testing is often available for components and subsystems (i.e., groups of components), along with a very limited number of tests of the full system in other use environments. Modeling and simulation attempt to fill the gap left by the lack of full scale testing for the actual use environments. Because component level data are usually cheaper and easier to obtain than system data, it is advantageous to be able to build individual models of the components and/or subsystems using available data and incorporate them into a system level model. This leads to a hierarchical approach to building system level models, and consequently the uncertainty in the system model is a function of the component level data and of the knowledge not captured in the component or subsystem level data.

A. Urbina ( ) • R.G. Hills • A.C. Hetzler
Sandia National Laboratories, P.O. Box 5800, MS 0828, Albuquerque, NM 87185, USA
e-mail: aurbina@sandia.gov

H.S. Atamturktur et al. (eds.), Model Validation and Uncertainty Quantification, Volume 3: Proceedings of the 32nd IMAC, A Conference and Exposition on Structural Dynamics, 2014, Conference Proceedings of the Society for Experimental Mechanics Series, DOI 10.1007/978-3-319-04552-8__2, © The Society for Experimental Mechanics, Inc. 2014 11

12 A. Urbina et al.

Furthermore, because tests cannot be performed for many actual use environments, the model is required to extrapolate beyond the data from which it was developed. To establish confidence in an extrapolated model prediction, sources of uncertainty must be identified, quantified and propagated to the response quantity of interest in the system model. Recently, there has been an emphasis on developing models of components using first principles, calibrating them from simple exploratory experiments, validating them against a different set of experiments and then using them within a more complex model. For example, one could investigate the behavior of a mechanical joint using simple experiments, develop a model that explains some phenomenon of interest, validate its performance in a different environment and use it as part of a larger system. This is what we define as a hierarchical approach to building a system level model: the construction of a complex system model using a building block approach that incorporates simpler component based models and couples them together. This hierarchical model building approach has been described in several published papers [1, 2]. To quantify uncertainty, multiple tests of these components should be available from which an estimate of the variability in the components can be obtained. Adding to the uncertainty is the possibility that the interactions of the various components were never tested, so no information on the coupling of components is available. In addition, interactions of components could have been tested at excitation levels that are not comparable to those of the full system, giving rise to another source of uncertainty. In this paper, we focus on the quantification of uncertainty due to differences between model predictions and experiments, and present a technique to aggregate and propagate these sources from the component level to the application space.
A numerical example based on a structural dynamics application is used to demonstrate the technique.

2.2 Example Problem Description

The example problem has the following features:
• It is a multi-component problem involving a mechanical joint that provides an energy dissipating mechanism.
• It is a multi-level problem where the phenomena observed at the lowest level are assumed to be present at subsequent levels, i.e. damping in the joints is assumed similar at all levels. This might turn out to be an incorrect assumption.
• Experimental data consist of repeated tests on several nominally identical hardware systems. These are intended to quantify the variability inherent in a physical system.
• Simple finite element models are built and calibrated to simulate a particular behavior of the physical hardware. The model parameters have been calibrated from simple, discovery experiments aimed at isolating the particular physical phenomenon that the model is trying to represent. Parametric uncertainty is explored in this paper.

The levels of complexity in this problem are defined as follows:
• Level 1 (dumbbell configuration): 45 joint samples tested with an impulse-type excitation.
• Level 2 (three-leg configuration, wavelet input): 27 joint samples tested using a wavelet-type input excitation.
• Application level (three-leg configuration, shock input): 27 joint samples tested using a shock-type input excitation.

In all levels, acceleration time histories were recorded, and energy dissipation was calculated for each experiment and for each model prediction. The particular details of each level are described in the following sections.

2.2.1 Dumbbell Configuration (Level 1)

This configuration has two 30 lb masses bolted at the ends of a single leg (or joint), creating a "dumbbell"-looking piece of hardware. This is shown in Fig. 2.1.
This configuration is supported by bungee cords to simulate a free-free environment and is subjected to an impulse excitation, provided by an instrumented hammer, on one of the end masses. The acceleration response of the dumbbell on the side opposite to the excitation is recorded and is also shown in Fig. 2.1. From this response, the free decay time history of the response is obtained and used to estimate the energy dissipation of the system at a particular force

2 On the Aggregation and Extrapolation of Uncertainty from Component to System Level Models 13

level. A total of 45 experiments were conducted to characterize unit-to-unit and test-to-test variability. A simplified finite element model of the dumbbell hardware is schematically represented in Fig. 2.2. This includes an energy dissipation model denoted as "Joint". Further details on this model are given later in the paper.

[Fig. 2.1 Level 1 test configuration (left picture) and example transient ring down to hammer input (right figure)]
[Fig. 2.2 Schematic of finite element model for test hardware]

2.2.2 Three-Leg Configuration (Level 2 and Application Space)

The experimental system at these levels is a truncated conic shell supported on legs at three approximately symmetric locations. The support structure beneath the legs is a cylindrical shell, relatively thin at its top and transitioning into a thicker section. The conic shell is attached to the support structure via three screws, each of which passes through a hole in a thin, flat plate at the top of a leg. Three nominally identical replicates of the conic shell were fabricated, along with three nominally identical support structures. A schematic of this is shown in Fig. 2.3. The nine combinations of shells and support structures were tested with two distinct environments: (1) a wavelet type excitation and (2) a shock type excitation. The excitations are also shown in Fig. 2.3. Each shell-base combination was assembled, disassembled and reassembled three times, and tested each time. The average acceleration structural responses at the tops of the shells are shown in Fig. 2.4 for the shock environment. There are 27 time histories: nine structures times three tests each. The finite element model for the physical system used in the current analysis is the lumped-mass representation shown in Fig. 2.3. The energy dissipation mechanisms are noted as J in the figure.
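Energy dissipation at each level is estimated from transient ring-down responses; the paper's exact metric is detailed later in Fig. 2.5 and [6]. As a hedged illustration of extracting a dissipation-related quantity from a free decay, the sketch below uses the textbook logarithmic-decrement method (not necessarily the authors' procedure) to recover a damping ratio from successive response peaks of a synthetic ring-down.

```python
import numpy as np

def damping_from_ringdown(t, x):
    """Estimate the damping ratio of a free-decay signal from the
    logarithmic decrement of successive positive peaks."""
    # Locate local maxima of the sampled response.
    peaks = np.where((x[1:-1] > x[:-2]) & (x[1:-1] > x[2:]))[0] + 1
    amps = x[peaks]
    amps = amps[amps > 0]
    # Mean log decrement over consecutive peaks.
    delta = np.mean(np.log(amps[:-1] / amps[1:]))
    # Convert log decrement to damping ratio.
    return delta / np.sqrt(4 * np.pi**2 + delta**2)

# Synthetic ring-down: 50 Hz oscillator with 1% damping (illustrative values).
t = np.linspace(0.0, 1.0, 20000)
zeta_true, fn = 0.01, 50.0
wn = 2 * np.pi * fn
x = np.exp(-zeta_true * wn * t) * np.cos(wn * np.sqrt(1 - zeta_true**2) * t)
print(damping_from_ringdown(t, x))  # close to the true value of 0.01
```

The same peak envelope underlies an energy-dissipation estimate, since the energy lost per cycle is proportional to the decay of the squared amplitude.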
Predictions, with uncertainty, of the response under a shock environment are the focus of this paper.

2.2.3 Energy Dissipation Model: Iwan Model

The critical element in this study is a nonlinear energy dissipating mechanism (denoted "Joint" in Fig. 2.2 and J in Fig. 2.3), which is modeled using the framework of the so-called Iwan element. The element is described in [3]. The four parameters of

the nonlinear Iwan spring element were identified based on experiments in which individual joint-simulators were excited sinusoidally. Because multiple, nominally identical systems were tested and their behavior is stochastic, the parameters of the Iwan model are described in a probabilistic framework. An approach to estimating the parameters of the Iwan model and their uncertainty is described in [4]. Because the geometry and boundary conditions of the system used to identify the Iwan model parameters differ from those of the three-legged system, a correction stiffness, Kcorr, and an attachment stiffness, Kattachment, needed to be added to the lumped mass model to render its predictions accurate. The attachment stiffness was calibrated by matching the axial frequency of a monolithic structure, assuming that the cone was essentially rigid compared to the rest of the structure. The correction stiffness was calculated and inserted into the lumped-mass model. Analysis of this model was done in Salinas, a primarily linear structural dynamics analysis code written at Sandia [5].

[Fig. 2.3 Schematic representation of the tested system (top figure: lumped mass Mcone, stiffnesses Kcorr and Kattachment, Iwan elements J, and acceleration input) and the wavelet and shock excitations used to excite it (bottom figures, acceleration in g's versus time)]
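The frequency-matching calibration of the attachment stiffness can be sketched with the basic single-degree-of-freedom relation f = (1/2π)√(K/m): treating the cone as a rigid mass on an axial spring, the stiffness is backed out from the target axial frequency. The numbers below are hypothetical, not values from the paper.

```python
import math

def stiffness_from_frequency(mass_kg, target_freq_hz):
    """Back out an axial spring stiffness so that a rigid mass on the
    spring has the target natural frequency: f = (1/2pi) * sqrt(K/m)."""
    omega = 2.0 * math.pi * target_freq_hz
    return mass_kg * omega**2

# Hypothetical values: 10 kg effective cone mass, 800 Hz axial mode.
K = stiffness_from_frequency(10.0, 800.0)
# Sanity check: the calibrated stiffness reproduces the target frequency.
f_check = math.sqrt(K / 10.0) / (2.0 * math.pi)
print(K, f_check)
```

In the paper's workflow, the target frequency would come from the monolithic-structure model or test, and the mass from the lumped-mass idealization.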

[Fig. 2.4 Acceleration responses at the top of the test structures for the shock excitation (acceleration in g versus time)]
[Fig. 2.5 Calculation of the energy dissipation quantity]

2.2.4 Quantity of Interest: Energy Dissipated (Ed)

The energy dissipation from the transient ring down experiment is chosen as the quantity of interest in this study. The process for calculating this metric is shown in Fig. 2.5, and more details can be found in [6].

2.3 Uncertainty Quantification, Aggregation and Propagation

2.3.1 Training Data and Partial Least Squares Regression

The goal of this research is to develop a correction to model based predictions of energy dissipation for the three-leg configuration under shock loading (application level), given the model predictions and experimental results for the dumbbell

(Level 1) and three-leg wavelet test (Level 2) configurations. The Partial Least Squares Regression (PLS) based approach developed by Hills [7] is used to develop a linear relationship between the differences in experiments and model predictions throughout a validation hierarchy, and a correction to a model prediction for the target application. Using the probabilistic model of the four parameters of the Iwan model, we generate 20 sets of the four parameters and use them in the models of the dumbbell and three-leg configurations. Each model is run using Salinas, for a total of 60 model runs. Acceleration time history responses were obtained and subsequently post-processed, using the procedure shown in Fig. 2.5, to obtain an estimate of the energy dissipation for each run. The resulting data are referred to as the training sets and are used to develop the regression. To perform this regression, the training sets are organized as follows:

$$F = \begin{bmatrix} f^{db}_{1,1} & \cdots & f^{db}_{1,n_{db}} & f^{wl}_{1,1} & \cdots & f^{wl}_{1,n_{wl}} \\ \vdots & & \vdots & \vdots & & \vdots \\ f^{db}_{n,1} & \cdots & f^{db}_{n,n_{db}} & f^{wl}_{n,1} & \cdots & f^{wl}_{n,n_{wl}} \end{bmatrix} \tag{2.1}$$

$$G = \begin{bmatrix} g^{bl}_{1,1} & \cdots & g^{bl}_{1,n_{bl}} \\ \vdots & & \vdots \\ g^{bl}_{n,1} & \cdots & g^{bl}_{n,n_{bl}} \end{bmatrix} \tag{2.2}$$

F represents the model generated training sets for the validation data (dumbbell (f^db) and three-leg wavelet (f^wl) energy dissipations). G represents the model generated training sets for the target application (three-leg shock (g^bl) energy dissipations). The numbers of force levels at which energy dissipation is calculated (i.e., the numbers of columns) for the corresponding configurations are denoted n_db, n_wl, and n_bl. For the present case, n_db = n_wl = n_bl = 31. There is no requirement that these three numbers be the same. The number of Iwan parameter sets is denoted n. Each row of F and G provides the energy dissipation for one of these parameter sets, and there must be a one-to-one correspondence within and between the rows of Eqs. 2.1 and 2.2. For the present case, n = 20.
The regression used is of the form

$$[1 \;\; \mathbf{f}] \, \boldsymbol{\beta} = \mathbf{g} \tag{2.3}$$

where β is a (1 + n_db + n_wl) by n_bl matrix of regression coefficients, f is a 1 by (n_db + n_wl) row vector of energy dissipations for the dumbbell and three-leg wavelet configurations, and g is a 1 by n_bl row vector of corrected three-leg shock energy dissipations. The 1 in the leading matrix of Eq. 2.3 allows a constant offset to be estimated between the linear combination of dumbbell and three-leg wavelet dissipations and the predicted three-leg shock dissipations. Equation 2.3 is regressed to the data provided by Eqs. 2.1 and 2.2. Specifically, β is estimated to satisfy Eq. 2.4 in a partial least squares (PLS) regression sense:

$$[\mathbf{1} \;\; F] \, \boldsymbol{\beta} \approx G \tag{2.4}$$

Here 1 corresponds to a column vector of ones with the number of rows equal to the number of rows in F. The MATLAB function plsregress [8] was used to obtain a PLS estimate for β [9]. To address ill-conditioning or singular systems, PLS regression develops an intermediate space defined by F and G in terms of n_l latent components, where n_l is specified by the user. Too few latent components result in a poor representation of F and G, while too many result in over-fitting of G, leading to increased sensitivity of the regression to noise or error when applied to data other than the training sets. Unlike standard least squares regression and principal component analysis, PLS regression is designed to address error in both G and F. Note that since models are approximations, we expect errors (differences from the actual energy dissipations of the systems modeled) in both F and G.

2.3.2 Choice of the Number of Latent Components

For the present analysis, the number of latent components chosen is based on the estimated prediction uncertainty of the regression model due to two sources of errors:
