## Relevant Publications

#### Environmental

**Photocatalytic inhibition of bacteria by TiO2 nanotubes-doped polyethylene composites**

*Yañez, D., Guerrero, S., Lieberwirth, I., Ulloa, M.T., Gomez, T., Rabagliati, F.M., Zapata, P.A. | Applied Catalysis A: General | 2015*

Polyethylene (PE) and polyethylene–octadecene (LLDPE) composites containing titanium dioxide nanotubes were synthesized and applied to the inhibition of selected bacteria. The polymerization rate increased with the incorporation of octadecene compared with bare ethylene, while with modified nanotubes (O–TiO_{2}–Ntbs) the catalytic activity decreased slightly compared with the pure polymer. Regarding physical properties, the melting temperature and crystallinity of PE were higher than those of LLDPE. LLDPE presented lower rigidity than PE and thus a lower Young’s modulus; with the incorporation of nanotubes, Young’s modulus did not change significantly with respect to PE. After 2 h of contact without UVA irradiation, the PE/O–TiO_{2}–Ntbs composite showed a 36.7% reduction of *Escherichia coli*, whereas LLDPE/O–TiO_{2}–Ntbs showed a 63.5% reduction. The photocatalytic reduction (under UVA light) was much higher: after 60 min the LLDPE/O–TiO_{2}–Ntbs composites showed a bacterial reduction of 99.9%, whereas the PE/O–TiO_{2}–Ntbs composites showed 42.6%.

**Sensitivity analysis of biodiesel blends on Benzo[a]pyrene and main emissions using MOVES: A case study in Temuco, Chile**

*Pino-Cortés E., Díaz-Robles L.A., Cubillos, F., Fu, J.S., Vergara-Fernández, A. | Science of The Total Environment | 2015*

Temuco is one of the most heavily wood-smoke-polluted cities in Chile; however, its diesel mobile sources have grown very fast over the past 10 years, and so far very few studies have examined them. The main goal of this research was to develop a 2013 emission inventory of criteria pollutants and Benzo[a]pyrene (BaP) and to evaluate the use of six biodiesel blends of 0%, 1%, 4%, 8%, 12%, and 20% by volume of fuel in diesel motors from the vehicle fleet within the mentioned areas using the Motor Vehicle Emission Simulator (MOVES). Input parameters for the base year 2005 were estimated to implement and adapt the model in Chile, and results for NOx, PM_{10}, PM_{2.5}, NH_{3}, CO_{2}-equivalent and SO_{2} were compared with the Chilean Emission Inventory estimated by the model “Methodology for the Calculation of Vehicle Emissions.” The 2013 emissions decreased with respect to 2005 for the majority of the pollutants analyzed, despite the 47% increase in annual miles traveled. Using biodiesel blends, an emission reduction of up to 15% in particulate matter, BaP, and CO was estimated for 2013, together with a 2% increase in NOx emissions, attributed to the low sulfur content (50 ppm) of the diesel and the age of the vehicle fleet. The results give evidence of the influence of biodiesel use on pollutant emissions and its potential to improve Chilean air quality, providing a strategy for air quality management.

**Effects of Temperature on Steam Explosion Pretreatment of Poplar Hybrids with Different Lignin Contents in Bioethanol Production**

*San Martin-Davison, Jessica; Ballesteros, Mercedes; Manzanares, Paloma; Vergara-Fernández A | International Journal of Green Energy | 2015*

The aim of this study was to evaluate the effect of the lignin content of four hybrid poplars on enhancing ethanol production. The study was conducted using steam explosion at 200 and 220°C for 5 min as a pre-treatment and then a simultaneous saccharification and fermentation (SSF) process with *Saccharomyces cerevisiae*. The composition of raw material, liquid, and solid fraction obtained after pretreatment, enzymatic digestion, and ethanol production under the different experimental conditions was analyzed. The best results for bioethanol production were obtained from steam explosion pre-treatment carried out at 220°C with the hybrid poplar H-29, with cellulose recovery of over 63%, enzymatic hydrolysis yield of approximately 67%, and SSF yield of 70% of the theoretical value. However, the highest enzymatic hydrolysis yield (79%) was obtained for the hybrid poplar H-34, which has the lowest lignin content.
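The “% of theoretical” SSF yields quoted above follow standard stoichiometry; a minimal sketch of the calculation (the conversion factors are textbook values, and the example masses are hypothetical, not the paper’s data):

```python
# Sketch of how an SSF "% of theoretical" ethanol yield is typically computed.
# The factors below are standard stoichiometry, not values from the paper:
# 1.111 converts glucan mass to glucose (hydration of anhydroglucose units,
# 180/162), and 0.511 is the mass fraction of glucose fermented to ethanol
# (2*46/180).

GLUCAN_TO_GLUCOSE = 1.111
GLUCOSE_TO_ETHANOL = 0.511

def ssf_percent_of_theoretical(ethanol_g, glucan_g):
    """Ethanol produced as a percentage of the stoichiometric maximum."""
    theoretical = glucan_g * GLUCAN_TO_GLUCOSE * GLUCOSE_TO_ETHANOL
    return 100.0 * ethanol_g / theoretical

# Hypothetical example: 10 g glucan yielding 3.97 g ethanol is ~70% of theoretical.
print(round(ssf_percent_of_theoretical(3.97, 10.0), 1))
```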

**Assessing Polycyclic Aromatic Hydrocarbons (PAHs) using passive air sampling in the atmosphere of one of the most wood-smoke-polluted cities in Chile: The case study of Temuco**

*Pozo K., Estellano V.H., Harner T., Diaz-Robles L., Cereceda-Baliz F., Etcharren P., Pozo K., Guerrero F., Vergara-Fernández A. | Chemosphere | 2015*

This study addresses human health concerns in the city of Temuco attributed to wood smoke and related pollutants from the wood burning activities prevalent there. Polycyclic Aromatic Hydrocarbons (PAHs) were measured in air across urban and rural sites over three seasons in Temuco using polyurethane foam (PUF) disk passive air samplers (PUF-PAS). Concentrations of ΣPAHs (15 congeners) in air ranged from BDL to ∼70 ng m^{−3} and were highest during the winter season, which is attributed to emissions from residential heating by wood combustion. The results for all three seasons showed that the PAH plume was widespread across all sites including rural sites on the outskirts of Temuco. Some interesting variations were observed between seasons in the composition of PAHs, which were attributed to differences in seasonal point sources. A comparison of the PAH composition in the passive samples with active samples (gas + particle phase) from the same site revealed similar congener profiles. Overall, the study demonstrated that the PUF disk passive air sampler provides a simple approach for measuring PAHs in air and for tracking effectiveness of pollution control measures in urban areas in order to improve public health.

**Effects of urban configuration on the wind energy distribution over a building**

*Herrmann-Priesnitz, Benjamin; Calderon-Munoz, Williams R.; LeBoeuf, R. | Journal of Renewable and Sustainable Energy | 2015*

A numerical study to investigate the wind energy potential for various building configurations is presented. Steady-state incompressible flow simulations were performed using the finite volume method of ANSYS Fluent with the k-ε turbulence model. A simplified city model was used to study the flow behavior over a building rooftop for various configurations of the upwind structure. Results show an increase of up to 29% in the available energy compared to the free stream due to variations in the dimensions of the separation bubble over the rooftop. This study shows the influence of building configuration on the wind resource near buildings and how it can affect the feasibility of a small-scale wind turbine project.
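The 29% energy figure can be put in perspective with the cubic dependence of wind power density on speed; a minimal sketch (air density and the example speed are illustrative assumptions, not values from the study):

```python
# Minimal sketch of the cubic relation between wind speed and available
# wind power density, P = 0.5 * rho * v**3 per unit area.
# Values are illustrative, not from the study.

RHO_AIR = 1.225  # kg/m^3, sea-level air density (assumed)

def power_density(v):
    """Available wind power per unit swept area, W/m^2."""
    return 0.5 * RHO_AIR * v ** 3

# A 29% gain in available energy corresponds to only a ~9% speed-up,
# because power scales with the cube of velocity:
gain = power_density(5.0 * 1.29 ** (1 / 3)) / power_density(5.0)
print(round(gain, 2))  # -> 1.29
```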

**Wind-driven nearshore sediment resuspension in a deep lake during Winter**

*Peardon, K.E., Bombardelli, F.A., Moreno-Casas, P.A., Rueda, F.J., Schladow, S.G. | Water Resources Research | 2014*

Ongoing public concern over declining water quality at Lake Tahoe, California-Nevada (USA) led to an investigation of wind-driven nearshore sediment resuspension that combined field measurements and modeling. Field data included: wind speed and direction, vertical profiles of water temperature and currents, nearbed velocity, lakebed sediment characteristics, and suspended sediment concentration and particle size distribution. Bottom shear stress was computed from ADV-measured nearbed velocity data, adapting a turbulent kinetic energy method to lakes, and partitioned according to its contributions attributed to wind-waves, mean currents, and random motions. When the total shear stress exceeded the critical shear stress, the contribution to overall shear stress was about 80% from wind-waves and 10% each from mean currents and random motions. Therefore, wind-waves were the dominant mechanism resulting in sediment resuspension as corroborated by simultaneous increases in shear stress and total measured sediment concentration. The wind-wave model STWAVE was successfully modified to simulate wind-wave-induced sediment resuspension for viscous-dominated flow typical in lakes. Previous lake applications of STWAVE have been limited to special instances of fully turbulent flow. To address the validity of expressions for sediment resuspension in lakes, sediment entrainment rates were found to be well represented by a modified 1991 García and Parker formula. Last, in situ measurements of suspended sediment concentration and particle size distribution revealed that the predominance of fine particles (by particle count) that most negatively impact clarity was unchanged by wind-related sediment resuspension. Therefore, we cannot assume that wind-driven sediment resuspension contributes to Lake Tahoe’s declining nearshore clarity.
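The turbulent-kinetic-energy route to bottom shear stress mentioned above can be sketched as follows. The proportionality constant C1 = 0.19 is a commonly cited literature value, not necessarily the one in the paper’s adapted method, and the velocity record is hypothetical:

```python
import statistics

# Sketch of a turbulent-kinetic-energy (TKE) estimate of bottom shear
# stress from near-bed ADV velocity samples: tau = C1 * rho * TKE, where
# TKE = 0.5 * (var(u') + var(v') + var(w')). C1 = 0.19 is a commonly used
# literature constant; treat it and the data below as illustrative.

C1 = 0.19
RHO_WATER = 1000.0  # kg/m^3

def bottom_shear_stress(u, v, w):
    """tau (Pa) from three lists of velocity samples (m/s)."""
    tke = 0.5 * (statistics.pvariance(u)
                 + statistics.pvariance(v)
                 + statistics.pvariance(w))
    return C1 * RHO_WATER * tke

# Hypothetical near-bed ADV record (m/s):
u = [0.10, 0.12, 0.08, 0.11, 0.09]
v = [0.02, 0.01, 0.03, 0.02, 0.02]
w = [0.00, 0.01, -0.01, 0.00, 0.00]
print(round(bottom_shear_stress(u, v, w), 4))  # tau in Pa
```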

**Effect of initial substrate/inoculum ratio on cell yield in the removal of hydrophobic VOCs in fungal biofilters**

*Vergara-Fernández A., J. San Martín-Davison, L. Díaz-Robles, O. Soto-Sánchez | Revista Mexicana de Ingeniería Química | 2014*

Different kinetic models have been proposed to describe the elimination of hydrophobic volatile organic compounds (VOCs) by fungal biofiltration. In this process the ratio of the initial substrate concentration (Cpb0) to the initial biomass (X0) has been shown to influence the cell yield. This paper presents a study of the effect of the Cpb0/X0 ratio on the observed cell yield (Yobs) in a fixed-bed batch system (microcosm) using a gaseous carbon source, as an approximation to its application in the fungal biofiltration of hydrophobic VOCs. Assays were carried out in fixed-bed microcosms using the filamentous fungus Fusarium solani as a biological agent and n-pentane as the carbon and energy source. The results indicated that Yobs in the gas phase is inversely proportional to the Cpb0/X0 ratio, with values of 0.9 to 0.35 g_{biomass} g^{−1}_{pentane} obtained when the Cpb0/X0 ratio is changed from 0.1 to 1.0 g_{pentane} g^{−1}_{biomass}. The results indicate that more than 60% of the n-pentane was consumed through energy spilling, and that a strong dissociation of catabolism from anabolism occurred at higher Cpb0/X0 ratios.

**Quantitative goals for large-scale fog collection projects as a sustainable fresh water resource in northern Chile**

*LeBoeuf, R. L., De la Jara, E. | Water International | 2014*

The objective of this study was to determine the quantitative goals for a large-scale fog collection project if it were to be an economically competitive source of freshwater in northern Chile. When the initial costs are factored in, the cost of water from such a project would exceed the market price of the alternatives. However, given current costs, the project could be profitable given an average collection rate of about 10 litres per day per square metre. Investment in site selection and system improvements to reduce costs and improve collection rates are essential for making large-scale fog collection an economically competitive source of freshwater.
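The break-even logic behind the ~10 litres per day per square metre figure can be sketched as a levelized-cost calculation; every number below (capex, lifetime, opex, area) is a hypothetical placeholder, not the study’s data, and only the structure of the computation is the point:

```python
# Back-of-the-envelope levelized cost of fog-collected water.
# All input figures are hypothetical placeholders, not the study's numbers.

def water_cost_usd_per_m3(capex_usd, lifetime_yr, opex_usd_per_yr,
                          mesh_area_m2, rate_l_per_m2_day):
    """Levelized cost of fog water, USD per cubic metre."""
    annual_cost = capex_usd / lifetime_yr + opex_usd_per_yr
    annual_m3 = mesh_area_m2 * rate_l_per_m2_day * 365 / 1000.0
    return annual_cost / annual_m3

# Hypothetical 100 m^2 collector at the ~10 L/day/m^2 rate cited above:
print(round(water_cost_usd_per_m3(5000, 10, 200, 100, 10.0), 2))
```

Doubling the collection rate halves the cost per cubic metre, which is why site selection and collector efficiency dominate the economics.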

**Producción Limpia y Núcleos para la Sustentabilidad Territorial NEST: un modelo Chileno para acuerdos voluntarios**

*Consejo Nacional de Producción Limpia | Consejo Nacional de Producción Limpia, Santiago-Chile. | 2014*

**Encouraging Sustainable Energy in the Developing World**

*Westenenk, Nicolás | Science in the News, Harvard University, available online | 2012*

#### Computing

**A semantic approach for dynamically determining complex composed service behaviour**

*C. Vairetti, R. Alarcon and J. Bellido | Journal of Web Engineering – Rinton Press | 2016*

**Analysis and Improvement of Business Process Models Using Spreadsheets**

*J. Saldivar, C. Vairetti, C. Rodriguez, F. Daniel, F. Casati and R. Alarcon | Information Systems – Elsevier | 2016*

**Kernel Penalized K-means: A feature selection method based on Kernel K-means**

*Maldonado, Sebastian; Carrizosa, Emilio; Weber, Richard | Information Sciences | 2015*

We present an unsupervised method that selects the most relevant features using an embedded strategy while maintaining the cluster structure found with the initial feature set. It is based on the idea of simultaneously minimizing the violation of the initial cluster structure and penalizing the use of features via scaling factors. As the base method we use Kernel K-means which works similarly to K-means, one of the most popular clustering algorithms, but it provides more flexibility due to the use of kernel functions for distance calculation, thus allowing the detection of more complex cluster structures. We present an algorithm to solve the respective minimization problem iteratively, and perform experiments with several data sets demonstrating the superior performance of the proposed method compared to alternative approaches.
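The base method can be sketched in a few lines. This minimal kernel K-means (RBF kernel, toy data) reproduces only the clustering step the paper builds on; the feature-penalization via scaling factors, which is the paper’s contribution, is not included, and the kernel choice and data are illustrative assumptions:

```python
import math

# Minimal kernel K-means sketch: the base method the paper extends.
# The feature-penalization (scaling-factor) step that is the paper's
# contribution is NOT reproduced here; the RBF kernel, gamma and the
# toy data are illustrative assumptions.

def rbf(x, y, gamma=1.0):
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))

def kernel_kmeans(X, k, iters=20):
    n = len(X)
    K = [[rbf(X[i], X[j]) for j in range(n)] for i in range(n)]
    labels = [i % k for i in range(n)]  # deterministic initialization
    for _ in range(iters):
        members = [[i for i in range(n) if labels[i] == c] for c in range(k)]
        new = []
        for i in range(n):
            best_c, best_d = 0, float("inf")
            for c in range(k):
                m = members[c]
                if not m:
                    continue
                # squared distance to the cluster mean in feature space,
                # up to the constant K[i][i] term
                d = (sum(K[p][q] for p in m for q in m) / len(m) ** 2
                     - 2.0 * sum(K[i][j] for j in m) / len(m))
                if d < best_d:
                    best_c, best_d = c, d
            new.append(best_c)
        if new == labels:
            break
        labels = new
    return labels

# Two well-separated blobs end up in different clusters:
X = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1), (5.0, 5.0), (5.1, 5.0), (5.0, 5.1)]
labels = kernel_kmeans(X, 2)
print(labels)  # e.g. [0, 0, 0, 1, 1, 1]
```

The distance trick, expanding ||φ(x) − μ_c||² entirely in terms of kernel evaluations, is what lets the method detect non-spherical cluster structures without ever computing φ explicitly.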

**Robust feature selection for multiclass Support Vector Machines using second-order cone programming**

*Lopez, Julio; Maldonado, Sebastian | Intelligent Data Analysis | 2015*

This work addresses the issue of high dimensionality for linear multiclass Support Vector Machines (SVMs) using second-order cone programming (SOCP) formulations. These formulations provide a robust and efficient framework for classification, while an adequate feature selection process may improve predictive performance. We extend the ideas of SOCP-SVM from binary to multiclass classification, while a sequential backward elimination algorithm is proposed for variable selection, defining a contribution measure to determine the feature relevance. Experimental results with multiclass microarray datasets demonstrate the effectiveness of a low-dimensional data representation in terms of performance.

**A multi-class SVM approach based on the l1-norm minimization of the distances between the reduced convex hulls**

*Carrasco, M., López, J., Maldonado, S. | Pattern Recognition | 2015*

Multi-class classification is an important pattern recognition task that can be addressed accurately and efficiently by Support Vector Machine (SVM). In this work we present a novel SVM-based multi-class classification approach based on the center of the configuration, a point which is equidistant to all classes. The center of the configuration is obtained from the dual formulation by minimizing the distances between the reduced convex hulls using the l1-norm, while the decision functions are subsequently constructed from this point. This work also extends the ideas of Zhou et al. (2002) [37] to multi-class classification. The use of l1-norm provides a single linear programming formulation, which reduces the complexity and confers scalability compared with other multi-class SVM methods based on quadratic programming formulations. Experiments on benchmark datasets demonstrate the virtues of our approach in terms of classification performance and running times compared with various other multi-class SVM methods.

**Computational Intelligence Challenges and Applications on Large-Scale Astronomical Time Series Databases**

*Huijse, P.; Estevez, P.A.; Protopapas, P.; Principe, J.C.; Zegers, P. | IEEE Computational Intelligence Magazine | 2014*

Time-domain astronomy (TDA) is facing a paradigm shift caused by the exponential growth of the sample size, data complexity and data generation rates of new astronomical sky surveys. For example, the Large Synoptic Survey Telescope (LSST), which will begin operations in northern Chile in 2022, will generate a nearly 150 Petabyte imaging dataset of the southern hemisphere sky. The LSST will stream data at rates of 2 Terabytes per hour, effectively capturing an unprecedented movie of the sky. The LSST is expected not only to improve our understanding of time-varying astrophysical objects, but also to reveal a plethora of yet unknown faint and fast-varying phenomena. To cope with a change of paradigm to data-driven astronomy, the fields of astroinformatics and astrostatistics have been created recently. The new data-oriented paradigms for astronomy combine statistics, data mining, knowledge discovery, machine learning and computational intelligence, in order to provide the automated and robust methods needed for the rapid detection and classification of known astrophysical objects as well as the unsupervised characterization of novel phenomena. In this article we present an overview of machine learning and computational intelligence applications to TDA. Future big data challenges and new lines of research in TDA, focusing on the LSST, are identified and discussed from the viewpoint of computational intelligence/machine learning. Interdisciplinary collaboration will be required to cope with the challenges posed by the deluge of astronomical data coming from the LSST.

**Feature Selection for Support Vector Machines via Mixed Integer Linear Programming**

*Maldonado, S., Pérez, J., Labbé, M., Weber, R. | Information Sciences | 2014*

The performance of classification methods, such as Support Vector Machines, depends heavily on the proper choice of the feature set used to construct the classifier. Feature selection is an NP-hard problem that has been studied extensively in the literature. Most strategies propose the elimination of features independently of classifier construction by exploiting statistical properties of each of the variables, or via greedy search. All such strategies are heuristic by nature. In this work we propose two different Mixed Integer Linear Programming formulations based on extensions of Support Vector Machines to overcome these shortcomings. The proposed approaches perform variable selection simultaneously with classifier construction using optimization models. We ran experiments on real-world benchmark datasets, comparing our approaches with well-known feature selection techniques and obtained better predictions with consistently fewer relevant features.
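As context, a generic way to embed feature selection in classifier construction via mixed-integer linear programming couples an L1-type soft-margin objective with binary feature indicators. This is a textbook-style sketch, not necessarily either of the paper’s two formulations:

```latex
\begin{aligned}
\min_{w,\,b,\,\xi,\,z} \quad & \sum_{i=1}^{n} \xi_i \\
\text{s.t.} \quad & y_i\,(w^\top x_i + b) \ge 1 - \xi_i, \qquad \xi_i \ge 0, \quad i = 1,\dots,n, \\
& -M z_j \le w_j \le M z_j, \qquad z_j \in \{0,1\}, \quad j = 1,\dots,m, \\
& \sum_{j=1}^{m} z_j \le B,
\end{aligned}
```

where z_j = 0 forces w_j = 0, the budget B bounds the number of selected features, and M is a big-M constant. Because selection variables and classifier weights appear in one model, the solver trades off margin violations against feature usage globally rather than greedily.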

**Imbalanced data classification using second-order cone programming Support Vector Machines**

*Maldonado, S., López, J. | Pattern Recognition | 2014*

Learning from imbalanced data sets is an important machine learning challenge, especially in Support Vector Machines (SVM), where the assumption of equal cost of errors is made and each object is treated independently. Second-order cone programming SVM (SOCP-SVM) studies each class separately instead, providing quite an interesting formulation for the imbalanced classification task. This work presents a novel second-order cone programming (SOCP) formulation, based on the LP-SVM formulation principle: the bound of the VC dimension is loosened properly using the l∞-norm, and the margin is directly maximized using two margin variables associated with each class. A regularization parameter C is considered in order to control the trade-off between the maximization of these two margin variables. The proposed method has the following advantages: it provides better results, since it is specially designed for imbalanced classification, and it reduces computational complexity, since one conic restriction is eliminated. Experiments on benchmark imbalanced data sets demonstrate that our approach accomplishes the best classification performance, compared with the traditional SOCP-SVM formulation and with cost-sensitive formulations for linear SVM.

**Alternative Second-Order Cone Programming Formulations for Support Vector Classification**

*Maldonado, S., López, J. | Information Sciences | 2014*

This paper presents two novel second-order cone programming (SOCP) formulations that determine a linear predictor using Support Vector Machines (SVMs). Inspired by the soft-margin SVM formulation, our first approach (ξ-SOCP-SVM) proposes a relaxation of the conic constraints via a slack variable, penalizing it in the objective function. The second formulation (r -SOCP-SVM) is based on the LP-SVM formulation principle: the bound of the VC dimension is loosened properly using the l∞-norm, and the margin is directly maximized. The proposed methods have several advantages: The first approach constructs a flexible classifier, extending the benefits of the soft-margin SVM formulation to second-order cones. The second method obtains comparable results to the SOCP-SVM formulation with less computational effort, since one conic restriction is eliminated. Experiments on well-known benchmark datasets from the UCI Repository demonstrate that our approach accomplishes the best classification performance compared to the traditional SOCP-SVM formulation, LP-SVM, and to standard linear SVM.
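For context, the conic constraints that both variants relax or rework come from the standard chance-constrained derivation of SOCP-SVM: with class means μ± and covariances Σ±, the separating hyperplane is required to satisfy (a sketch; the paper’s notation may differ):

```latex
\begin{aligned}
w^\top \mu_{+} - b &\ \ge\ 1 + \kappa\, \sqrt{w^\top \Sigma_{+}\, w}, \\
-\left(w^\top \mu_{-} - b\right) &\ \ge\ 1 + \kappa\, \sqrt{w^\top \Sigma_{-}\, w},
\end{aligned}
```

where \kappa = \sqrt{\eta/(1-\eta)} enforces that each class is classified correctly with probability at least η under any distribution with the given first two moments. The ξ-SOCP-SVM relaxes these constraints with a penalized slack; the r-SOCP-SVM drops one conic restriction, which is the source of the reported computational savings.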

**Feature selection for high-dimensional class-imbalanced data sets using support vector machines**

*Maldonado, S., Weber, R., Famili, F. | Information Sciences | 2014*

Feature selection and classification of imbalanced data sets are two of the most interesting machine learning challenges, attracting a growing attention from both, industry and academia. Feature selection addresses the dimensionality reduction problem by determining a subset of available features to build a good model for classification or prediction, while the class-imbalance problem arises when the class distribution is too skewed. Both issues have been independently studied in the literature, and a plethora of methods to address high dimensionality as well as class-imbalance has been proposed. The aim of this work is to simultaneously explore both issues, proposing a family of methods that select those attributes that are relevant for the identification of the target class in binary classification. We propose a backward elimination approach based on successive holdout steps, whose contribution measure is based on a balanced loss function obtained on an independent subset. Our experiments are based on six highly imbalanced microarray data sets, comparing our methods with well-known feature selection techniques, and obtaining a better prediction with consistently fewer relevant features.
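The backward-elimination loop described above can be sketched generically. In this sketch a nearest-centroid classifier and dyadic toy data stand in for the SVM and microarray data of the paper; only the structure (successive holdout scoring with a balanced measure) mirrors the approach:

```python
# Schematic backward elimination driven by a balanced measure on a
# holdout set. The nearest-centroid classifier and the toy data are
# placeholders for the SVM and microarray data used in the paper.

def centroid(rows):
    return [sum(col) / len(rows) for col in zip(*rows)]

def predict(x, c0, c1, feats):
    d0 = sum((x[f] - c0[f]) ** 2 for f in feats)
    d1 = sum((x[f] - c1[f]) ** 2 for f in feats)
    return 0 if d0 <= d1 else 1

def balanced_accuracy(train, holdout, feats):
    c0 = centroid([x for x, y in train if y == 0])
    c1 = centroid([x for x, y in train if y == 1])
    accs = []
    for cls in (0, 1):
        pts = [(x, y) for x, y in holdout if y == cls]
        hits = sum(predict(x, c0, c1, feats) == y for x, y in pts)
        accs.append(hits / len(pts))
    return sum(accs) / 2  # average of per-class accuracies

def backward_eliminate(train, holdout, n_keep):
    feats = list(range(len(train[0][0])))
    while len(feats) > n_keep:
        # drop the feature whose removal hurts balanced accuracy least
        scored = [(balanced_accuracy(train, holdout,
                                     [f for f in feats if f != g]), g)
                  for g in feats]
        _, worst = max(scored)
        feats.remove(worst)
    return feats

# Toy data: feature 0 separates the classes, feature 1 is uninformative.
train = [([0.0, 0.25], 0), ([0.25, 0.75], 0), ([1.0, 0.5], 1), ([0.75, 0.5], 1)]
holdout = [([0.0, 0.75], 0), ([1.0, 0.25], 1)]
print(backward_eliminate(train, holdout, 1))  # keeps the informative feature
```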

**Exploring the feasibility of web form adaptation to users’ cultural dimension scores**

*Recabarren, M., Nussbaum, M. | User Modeling and User-Adapted Interaction | 2010*

With many daily tasks now performed on the Internet, productivity and efficiency in working with web pages have become transversal necessities for all users. Many of these tasks involve the inputting of user information, obligating the user to interact with a webform. Research has demonstrated that productivity depends largely on users’ personal characteristics, implying that it will vary from user to user. The webform development process must therefore include modeling of its intended users to ensure the interface design is appropriate. Taking all potential users into account is difficult, however, primarily because their identity is unknown, and some may be effectively excluded by the final design. Such discrimination can be avoided by incorporating rules that allow webforms to adapt automatically to the individual user’s characteristics, the principal one being the person’s culture. In this paper we report two studies that validate this option. We begin by determining the relationships between a user’s cultural dimension scores and their behavior when faced with a webform. We then validate the notion that rules based on these relationships can be established for the automatic adaptation of a webform in order to reduce the time taken to complete it. We conclude that automatic webform adaptation to users’ cultural dimensions improves their performance.

#### Electrical

**On the cancellation of OAM beams propagating through convective turbulence**

*Funes, G. and Anguita, J. | Optics Letters | 2017*

**Multi-objective optimization for parameter selection and characterization of optical flow methods**

*Delpiano, Jose; Pizarro, Luis; Verschae, Rodrigo; Ruiz-del-Solar, Javier | Applied Soft Computing | 2016*

**Fisher Information Properties**

*Zegers, P. | Entropy | 2015*

A set of Fisher information properties are presented in order to draw a parallel with similar properties of Shannon differential entropy. Already known properties are presented together with new ones, which include: (i) a generalization of mutual information for Fisher information; (ii) a new proof that Fisher information increases under conditioning; (iii) showing that Fisher information decreases in Markov chains; and (iv) bound estimation error using Fisher information. This last result is especially important, because it completes Fano’s inequality, i.e., a lower bound for estimation error, showing that Fisher information can be used to define an upper bound for this error. In this way, it is shown that Shannon’s differential entropy, which quantifies the behavior of the random variable, and the Fisher information, which quantifies the internal structure of the density function that defines the random variable, can be used to characterize the estimation error.
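For reference, the two classical objects the paper connects are the Fisher information of a parametric density f(X; θ) and the Cramér–Rao lower bound:

```latex
I(\theta) \;=\; \mathbb{E}\!\left[\left(\frac{\partial}{\partial\theta}\,\log f(X;\theta)\right)^{2}\right],
\qquad
\operatorname{Var}\!\left(\hat{\theta}\right) \;\ge\; \frac{1}{n\, I(\theta)},
```

where the bound holds for any unbiased estimator \hat{\theta} built from n i.i.d. samples. The paper’s contribution is to complement this lower bound (and Fano-type results) with an *upper* bound on estimation error expressed through Fisher information.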

**A Novel, Fully Automated Pipeline for Period Estimation in the EROS-2 Data Set**

*Protopapas, Pavlos; Huijse, Pablo; Estevez, Pablo A.; Zegers, Pablo | The Astrophysical Journal Supplement Series | 2015*

We present a new method to discriminate periodic from nonperiodic irregularly sampled light curves. We introduce a periodic kernel and maximize a similarity measure derived from information theory to estimate the periods and a discriminator factor. We tested the method on a data set containing 100,000 synthetic periodic and nonperiodic light curves with various periods, amplitudes, and shapes generated using a multivariate generative model. We correctly identified periodic and nonperiodic light curves with a completeness of ~90% and a precision of ~95%, for light curves with a signal-to-noise ratio (S/N) larger than 0.5. We characterize the efficiency and reliability of the model using these synthetic light curves and apply the method on the EROS-2 data set. A crucial consideration is the speed at which the method can be executed. Using a hierarchical search and some simplification on the parameter search, we were able to analyze 32.8 million light curves in ~18 hr on a cluster of GPGPUs. Using the sensitivity analysis on the synthetic data set, we infer that 0.42% of the sources in the LMC and 0.61% of the sources in the SMC show periodic behavior. The training set, catalogs, and source code are all available at http://timemachine.iic.harvard.edu.
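The paper’s periodicity metric is information-theoretic, but the grid-search structure of period estimation on irregularly sampled data can be illustrated with a much simpler classical stand-in, phase dispersion: folding the light curve at the true period yields a smooth phased curve, so the summed squared difference between phase-adjacent samples is minimized near it. The synthetic light curve and trial grid below are illustrative:

```python
import math

# Phase-dispersion stand-in for period search on an irregularly sampled
# light curve (a simplified classical analogue, NOT the paper's
# information-theoretic kernel method). Folding at a trial period and
# summing squared differences of phase-adjacent samples penalizes
# periods at which the folded curve is not smooth.

def dispersion(times, mags, period):
    order = sorted(range(len(times)), key=lambda i: (times[i] % period) / period)
    m = [mags[i] for i in order]
    return sum((m[i + 1] - m[i]) ** 2 for i in range(len(m) - 1))

def best_period(times, mags, trial_periods):
    return min(trial_periods, key=lambda p: dispersion(times, mags, p))

# Synthetic sinusoid with true period 1.7 (arbitrary units), irregular sampling:
times = [0.13 * k + 0.07 * math.sin(k) for k in range(120)]
mags = [math.sin(2 * math.pi * t / 1.7) for t in times]
trials = [0.5 + 0.01 * j for j in range(300)]  # grid of trial periods
p_best = best_period(times, mags, trials)
print(round(p_best, 2))  # recovers the true period
```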

**Turbulence-induced persistence in laser beam wandering**

*Luciano Zunino, Damián Gulich, Gustavo Funes, and Darío G. Pérez | Optics Letters | 2015*

We have experimentally confirmed the presence of long-memory correlations in the wandering of a thin Gaussian laser beam over a screen after propagating through a turbulent medium. A laboratory-controlled experiment was conducted in which coordinate fluctuations of the laser beam were recorded at a sufficiently high sampling rate for a wide range of turbulent conditions. Horizontal and vertical displacements of the laser beam centroid were subsequently analyzed by implementing *detrended fluctuation analysis*. This is a very well-known and widely used methodology to unveil memory effects from time series. Results obtained from this experimental analysis allow us to confirm that both coordinates behave as highly persistent signals for strong turbulent intensities. This finding is relevant for a better comprehension and modeling of the turbulence effects in free-space optical communication systems and other applications related to propagation of optical signals in the atmosphere.
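Detrended fluctuation analysis, the technique named above, fits in a few dozen lines; this sketch uses illustrative series lengths, seeds and window sizes. The scaling exponent α is the slope of log F(s) versus log s, where F(s) is the RMS of linearly detrended windows of the cumulative-sum profile; α ≈ 0.5 indicates an uncorrelated signal and α > 0.5 persistence:

```python
import math, random

# Compact detrended fluctuation analysis (DFA) sketch. Series lengths,
# seed and window sizes are illustrative choices, not the paper's setup.

def _window_rms(seg):
    """RMS residual of a least-squares line fitted to one window."""
    n = len(seg)
    xm, ym = (n - 1) / 2, sum(seg) / n
    sxx = sum((i - xm) ** 2 for i in range(n))
    slope = sum((i - xm) * (seg[i] - ym) for i in range(n)) / sxx
    res = sum((seg[i] - ym - slope * (i - xm)) ** 2 for i in range(n))
    return math.sqrt(res / n)

def dfa_alpha(series, scales=(8, 16, 32, 64)):
    mean = sum(series) / len(series)
    profile, acc = [], 0.0
    for x in series:                      # cumulative-sum profile
        acc += x - mean
        profile.append(acc)
    logs = []
    for s in scales:
        rms = [_window_rms(profile[i:i + s])
               for i in range(0, len(profile) - s + 1, s)]
        f = math.sqrt(sum(r * r for r in rms) / len(rms))
        logs.append((math.log(s), math.log(f)))
    mx = sum(a for a, _ in logs) / len(logs)
    my = sum(b for _, b in logs) / len(logs)
    num = sum((a - mx) * (b - my) for a, b in logs)
    den = sum((a - mx) ** 2 for a, _ in logs)
    return num / den                      # the scaling exponent alpha

rng = random.Random(42)
white = [rng.gauss(0.0, 1.0) for _ in range(2000)]
brown, acc = [], 0.0
for x in white:                           # integrated noise: persistent
    acc += x
    brown.append(acc)
print(round(dfa_alpha(white), 2))  # ~0.5 for uncorrelated noise
print(round(dfa_alpha(brown), 2))  # ~1.5 for integrated noise
```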

**Orbital-angular-momentum crosstalk and temporal fading in a terrestrial laser link using single-mode fiber coupling**

*Funes, G., Vial, M., Anguita, J.A. | Optics Express | 2015*

Using a mobile experimental testbed, we perform a series of measurements on the detection of laser beams carrying orbital angular momentum (OAM) to evaluate turbulent channel distortions and crosstalk among receive states in an 84-m roofed optical link. We find that a receiver assembly using single-mode fiber coupling serves as a good signal selector in terms of crosstalk rejection. From the recorded temporal channel waveforms, we estimate average crosstalk profiles and propose an appropriate probability density function for the fluctuations of the detected OAM signal. Further measurements of OAM crosstalk are described for a horizontal 400-m link established over our campus.

**Neurofuzzy self-tuning of the dissipation rate gain for model-free force-position exponential tracking of robots**

*Parra-Vega, V., García-Rodríguez, R., Armendariz, J. | Neurocomputing | 2015*

Simultaneous force and position control of robots interacting with a rigid environment has been broadly studied under several contact force models, with the differential algebraic (DAE) model being the most complete one. However, DAE robots show complex and strongly nonlinear couplings that make tracking difficult to achieve when the dynamic model is not available. In this paper, considering the fundamental structural properties of DAE robots, in particular passivity and the orthogonalization of force and velocity vectors, we propose a model-free neurofuzzy-based self-tuning robot controller for exponential tracking, composed of an orthogonalized PID-position term, an I-force (If) control term, and a feed-forward desired force term. The salient feature of this proposal is a novel neurofuzzy self-tuning scheme aimed at tuning the dissipation rate gain (DRG) so as to enforce dissipativity in closed loop, rather than the standard scheme of tuning the feedback control gains or the control structure, which in our case is a simple constant-gain orthogonalized PID+If controller. In fact, by virtue of such orthogonalization, a simple and low-cost parallel structure of the neurofuzzy network emerges that targets solely the DRG, driving the error dynamics to zero at an exponential rate without any knowledge of the robot dynamics or any approximation of the inverse dynamics. Thus, this technique can be applied to other classes of systems and controllers that ensure passivity in closed loop. Simulations show the validity and feasibility of this new approach.

**Effects of urban configuration on the wind energy distribution over a building**

*Herrmann-Priesnitz, Benjamin; Calderon-Munoz, Williams R.; LeBoeuf, R. | Journal of Renewable and Sustainable Energy | 2015*

A numerical study to investigate the wind energy potential for various building configurations is presented. Steady-state incompressible flow simulations were performed using the finite volume method of ANSYS Fluent with the k-ε turbulence model. A simplified city model was used to study the flow behavior over a building rooftop for various configurations of the upwind structure. Results show an increase of up to 29% in the available energy compared to the free stream due to variations in the dimensions of the separation bubble over the rooftop. This study shows the influence of building configuration on the wind resource near buildings and how it can affect the feasibility of a small-scale wind turbine project.

**An economical dual hot-wire liquid water flux probe design**

*LeBoeuf, R. L.; de Dios Rivera, J.; de la Jara, E. | Atmospheric Research | 2015*

The velocity, liquid water content (LWC) and their product, the liquid water flux (LWF), are of interest for research in environmental sciences, fog collection, and free-space communications. This paper provides a design for an economical dual hot-wire LWF probe, which enables the ground-based measurement of velocity, LWC and LWF. The design accounts for the droplet deposition efficiency, prong conduction, saturation and sensitivity. The operating mode and probe configurations are described. Two 125 μm diameter, 5 cm long platinum wires having 2 and 50 °C wire to air temperature offsets would yield measurement uncertainties of about 6% for velocity and from 8 to 22% for the LWC and from 2 to 17% for the LWF given velocities in the range 2 to 8 m/s and LWC in the range 0.2 to 0.8 g/m^{3}. The lower uncertainties correspond to higher LWF, which is of particular interest in fog collection projects. The recurring costs of the instrument’s mechanical and electrical components would be about US $150 per unit. Therefore, the design presented herein is a viable option for large-scale sensor networks.

**Tuning of Power System Stabilizers using Multiobjective Optimization NSGA II**

*Verdejo, H.; Gonzalez, D.; Delpiano, J.; Becker, C. | IEEE LATIN AMERICA TRANSACTIONS | 2015*

**An Implementation of Combined Local-Global Optical Flow**

*Jara-Wilde, J.; Cerda, M.; Delpiano, J.; Hartel, S. | IMAGE PROCESSING ON LINE | 2015*

Optical Flow (OF) approaches for motion estimation calculate vector fields for the apparent velocities of objects in image sequences. In 1981 Horn and Schunck (HS) introduced two basic assumptions, ‘brightness value constancy’ and ‘smooth variation’, to estimate a smooth OF field over the entire image (the global approach). In parallel, Lucas and Kanade (LK) assumed constant motion patterns for image patches, estimating piecewise-homogeneous OF fields (the local approach). Several variations of these approaches exist today. Here we present the combined local-global (CLG) approach by Bruhn et al., which encompasses properties of HS-OF and LK-OF, aiming to improve the OF accuracy for small-scale variations while delivering the dense and smooth fields of HS-OF. A multiscale implementation is provided for 2D images, together with two numerical solvers: Successive Over-Relaxation and the faster Pointwise-Coupled Gauss-Seidel by Bruhn et al. The algorithm works on gray-scale (single channel) images, with color images being converted prior to the OF computation.
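As a rough illustration of the global (HS) component described above, here is a minimal NumPy sketch of the classic Horn-Schunck iteration. This is not the paper's CLG implementation (no multiscale pyramid, no SOR or Pointwise-Coupled Gauss-Seidel solver, and the smoothness weight `alpha` is illustrative): just the brightness-constancy data term plus a smoothness term, relaxed Jacobi-style.

```python
import numpy as np

def horn_schunck(I1, I2, alpha=1.0, n_iter=200):
    """Toy Horn-Schunck optical flow: global smoothness, Jacobi relaxation."""
    I1 = I1.astype(float)
    I2 = I2.astype(float)
    Im = (I1 + I2) / 2.0
    Ix = np.gradient(Im, axis=1)   # spatial derivatives (central differences)
    Iy = np.gradient(Im, axis=0)
    It = I2 - I1                   # temporal derivative
    u = np.zeros_like(Im)
    v = np.zeros_like(Im)
    for _ in range(n_iter):
        # 4-neighbor averages (periodic boundaries, for simplicity)
        u_avg = (np.roll(u, 1, 0) + np.roll(u, -1, 0)
                 + np.roll(u, 1, 1) + np.roll(u, -1, 1)) / 4.0
        v_avg = (np.roll(v, 1, 0) + np.roll(v, -1, 0)
                 + np.roll(v, 1, 1) + np.roll(v, -1, 1)) / 4.0
        t = (Ix * u_avg + Iy * v_avg + It) / (alpha**2 + Ix**2 + Iy**2)
        u = u_avg - Ix * t
        v = v_avg - Iy * t
    return u, v

# Demo: a Gaussian blob shifted one pixel to the right
yy, xx = np.mgrid[0:64, 0:64]
I1 = np.exp(-((xx - 32.0)**2 + (yy - 32.0)**2) / 50.0)
I2 = np.roll(I1, 1, axis=1)
u, v = horn_schunck(I1, I2)
```

For a one-pixel horizontal translation of a smooth pattern, the recovered `u` field points in the direction of motion; the CLG approach additionally averages the data term over local patches, as LK does.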

**On the optical flow model selection through metaheuristics**

*Pereira, DR; Delpiano, J; Papa, JP; | EURASIP JOURNAL ON IMAGE AND VIDEO PROCESSING | 2015*

Optical flow methods are accurate algorithms for estimating the displacement and velocity fields of objects in a wide variety of applications, their performance being dependent on the configuration of a set of parameters. Since there is a lack of research aiming to tune such parameters automatically, in this work we propose an optimization-based framework for this task based on social-spider optimization, harmony search, particle swarm optimization, and the Nelder-Mead algorithm. The proposed framework employed the well-known large displacement optical flow (LDOF) approach as a basis algorithm over the Middlebury and Sintel public datasets, with promising results considering the baseline proposed by the authors of LDOF.
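Of the metaheuristics named above, particle swarm optimization is compact enough to sketch. The toy below tunes a two-parameter vector against a stand-in quadratic error surface (playing the role of the optical flow endpoint error, which would be far more expensive to evaluate); the swarm constants and bounds are illustrative, not those of the paper.

```python
import numpy as np

def pso_minimize(f, bounds, n_particles=20, n_iter=100,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Plain global-best particle swarm minimization over box bounds."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, float).T
    dim = len(bounds)
    x = rng.uniform(lo, hi, (n_particles, dim))   # positions
    v = np.zeros_like(x)                          # velocities
    pbest = x.copy()
    pbest_val = np.array([f(p) for p in x])
    g = pbest[pbest_val.argmin()].copy()          # global best
    gval = pbest_val.min()
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([f(p) for p in x])
        better = vals < pbest_val
        pbest[better] = x[better]
        pbest_val[better] = vals[better]
        if vals.min() < gval:
            gval = vals.min()
            g = x[vals.argmin()].copy()
    return g, gval

# Stand-in objective: pretend p holds two optical-flow parameters
err = lambda p: (p[0] - 2.0)**2 + (p[1] + 1.0)**2
best, best_val = pso_minimize(err, [(-5.0, 5.0), (-5.0, 5.0)])
```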

**Neuro-Fuzzy Self-tuning of PID Control for Semiglobal Exponential Tracking of Robot Arms**

*Armendariz, J., V. Parra-Vega, R. García-Rodríguez, S. Rosales | Applied Soft Computing | 2014*

The PID controller with constant feedback gains has endured as the preferred choice for the control of linear or linearized plants, and under certain conditions for non-linear ones, where the control of robotic arms excels. In this paper a model-free self-tuning PID controller is proposed for tracking tasks. The key idea is to exploit the passivity-based formulation for robotic arms in order to shape the damping injection to enforce dissipativity and to guarantee semiglobal exponential convergence in the sense of Lyapunov. It is shown that a neuro-fuzzy network can be used to tune the dissipation rate gain through a self-tuning policy of a single gain. Experimental studies are presented to confirm the viability of the proposed approach.

**Computational Intelligence Challenges and Applications on Large-Scale Astronomical Time Series Databases**

*Huijse, P. ; Estevez, P.A. ; Protopapas, P. ; Principe, J.C. ; Zegers, P. | IEEE Computational Intelligence Magazine | 2014*

Time-domain astronomy (TDA) is facing a paradigm shift caused by the exponential growth of the sample size, data complexity and data generation rates of new astronomical sky surveys. For example, the Large Synoptic Survey Telescope (LSST), which will begin operations in northern Chile in 2022, will generate a nearly 150 Petabyte imaging dataset of the southern hemisphere sky. The LSST will stream data at rates of 2 Terabytes per hour, effectively capturing an unprecedented movie of the sky. The LSST is expected not only to improve our understanding of time-varying astrophysical objects, but also to reveal a plethora of yet unknown faint and fast-varying phenomena. To cope with a change of paradigm to data-driven astronomy, the fields of astroinformatics and astrostatistics have been created recently. The new data-oriented paradigms for astronomy combine statistics, data mining, knowledge discovery, machine learning and computational intelligence, in order to provide the automated and robust methods needed for the rapid detection and classification of known astrophysical objects as well as the unsupervised characterization of novel phenomena. In this article we present an overview of machine learning and computational intelligence applications to TDA. Future big data challenges and new lines of research in TDA, focusing on the LSST, are identified and discussed from the viewpoint of computational intelligence/machine learning. Interdisciplinary collaboration will be required to cope with the challenges posed by the deluge of astronomical data coming from the LSST.

**Coherent Multimode OAM Superpositions for Multidimensional Modulation**

*Anguita, JA., Herreros, J, Djordjevic, IB. | IEEE Photonics Journal | 2014*

The generation, propagation, and detection of high-quality and coherently superimposed optical vortices, carrying two or more orbital angular momentum (OAM) states, is experimentally demonstrated using an optical arrangement based on spatial light modulators. We compare our results with numerical simulations and show that, in the context of turbulence-free wireless optical communication (indoor or satellite), individual OAM state identification at the receiver of an OAM-modulated system can be achieved with good precision, to accommodate for high-dimensional OAM modulation architectures. We apply our results to the simulation of a communication system using low-density parity-check-coded modulation that considers optimal signal constellation design in a channel that includes OAM crosstalk induced by realistic (imperfect) detection.

**Performance of optical flow techniques for motion analysis of fluorescent point signals in confocal microscopy**

*Delpiano, J.; Ruiz-del-Solar, J.; Jara, J.; Scheer, J.; Ramirez, O.A.; Hartel, S. | MACHINE VISION AND APPLICATIONS | 2012*

Optical flow approaches calculate vector fields which determine the apparent velocities of objects in time-varying image sequences. They have been analyzed extensively in computer science using both natural and synthetic video sequences. In life sciences, there is an increasing need to extract kinetic information from temporal image sequences which reveals the interplay between form and function of microscopic biological structures. In this work, we test different variational optical flow techniques to quantify the displacements of biological objects in 2D fluorescent image sequences. The accuracy of the vector fields is tested for defined displacements of fluorescent point sources in synthetic image series which mimic protein traffic in neuronal dendrites, and for GABA_{B}R1 receptor subunits in dendrites of hippocampal neurons. Our results reveal that optical flow fields predict the movement of fluorescent point sources within an error of 3% for a maximum displacement of 160 nm. Displacement of agglomerated GABA_{B}R1 receptor subunits can be predicted correctly for maximum displacements of 640 nm. Based on these results, we introduce a criterion to derive the optimum parameter combinations for the calculation of the optical flow fields in experimental images. From these results, temporal sampling frequencies for image acquisition can be derived to guarantee correct motion estimation for biological objects.

#### Física

**Arrays of two-state stochastic oscillators: Roles of tail and range of interactions**

*Alexandre Rosas, Daniel Escaff, Italo’Ivo Lima Dias Pinto and Katja Lindenberg | Physical Review E, 95, 032104 | 2017*

**Stochastic thermodynamics for Ising chain and symmetric exclusion process**

*Raúl Toral, C. Van den Broeck, Daniel Escaff, and Katja Lindenberg. | Physical Review E, 95, 032114 | 2017*

**Development of an anisotropic turbulence model for laser propagation in the atmosphere.**

*Gustavo Funes | FAI | 2016*

Nowadays many devices use a laser beam for different purposes, as it has become a common tool in everyday living. In science, the laser has become a top instrument not only in optics but also in metrology. The propagation of laser beams in a turbulent atmosphere has become more and more interesting due to the possibility of using high-data-rate optical transmitters for satellite-communication channels and lasercom systems connecting ground-airborne-space or space-airborne-ground data links. This subject also offers a wide range of possible applications aside from free-space optical telecommunications (FSO), like remote sensing, target pointing and tracking, etc. But all of these applications have one common enemy: Earth's atmosphere causes serious degradation of the reliability of such optical communication channels. Moreover, theoretical results are often insufficient to match experimental data. The main problem is that all theoretical approaches have been based on common assumptions about the atmospheric turbulence through which the optical wave travels. For example, it is usual to suppose that the turbulence is homogeneous and isotropic, with a fixed power law spectrum in the inertial range of scales. Also, there are spectrum cutoffs at small and large scales that are quantitatively represented by numerical values of inner and outer scales.

Based on the previous statements, the aim of this project is to develop a new theoretical model that has a stronger connection with phenomenology than the regular ones. We expect that this new model could provide a good representation of the turbulence under weak conditions and could also be extended to other turbulence regimes, such as anisotropic behavior or non-Kolmogorov turbulence.

Hypothetically, a region of eddies with random variations of the air refractive index could simulate the atmospheric turbulence if certain scaling conditions are applied. This model is called quasi-wavelets and was previously applied to sound propagation and temperature fluctuations with good results. Recently we published an article in which we tested this model for the propagation of a laser beam, and it proved to be theoretically valid under ideal conditions.

As mentioned before, the atmosphere behaves randomly and is clearly not stationary, since there are considerable transient events that perturb laser beam propagation, creating wavefront distortions and large intensity deviations. These are due to lateral winds or to an increment in the energy budget that generates the perturbation.

Since the Quasi-Wavelet model basically builds the region of turbulence, some hypotheses like uniform distribution of fluctuations and isotropy can be changed to more realistic conditions that could be encountered in the real atmosphere. This is the key to creating a new model that can be either calculated analytically or simulated by computers.

**Globally coupled stochastic two-state oscillators: synchronization of infinite and finite arrays**

*Alexandre Rosas, Daniel Escaff, Italo’Ivo Lima Dias Pinto and Katja Lindenberg | Journal of Physics A: Mathematical and Theoretical, 49, 095001 | 2016*

**Synchronization of coupled noisy oscillators: Coarse graining from continuous to discrete phases**

*Daniel Escaff, Alexandre Rosas, Raúl Toral, and Katja Lindenberg | Physical Review E, 94, 052219 | 2016*

**Localized vegetation patterns, fairy circles, and localized patches in arid landscapes**

*Escaff, D.; Fernandez-Oto, C.; Clerc, M. G.; M. Tlidi | Physical Review E | 2015*

**Noisy localized structures induced by large noise**

*Descalzi, Orazio; Cartes, Carlos; Brand, Helmut R | Physical Review E | 2015*

We investigate the influence of large noise on the formation of localized patterns in the framework of the cubic-quintic complex Ginzburg-Landau equation. The interaction of localization and noise can lead to filling in or noisy localized structures for fixed noise strength. To focus on the interaction between noise and localization we cover a region in parameter space, in particular, subcriticality, for which stationary stable deterministic pulses do not exist. Possible experimental tests of the work presented for autocatalytic chemical reactions and bioinspired systems are outlined.

**Turbulence-induced persistence in laser beam wandering**

*Luciano Zunino, Damián Gulich, Gustavo Funes, and Darío G. Pérez | Optics Letters | 2015*

We have experimentally confirmed the presence of long-memory correlations in the wandering of a thin Gaussian laser beam over a screen after propagating through a turbulent medium. A laboratory-controlled experiment was conducted in which coordinate fluctuations of the laser beam were recorded at a sufficiently high sampling rate for a wide range of turbulent conditions. Horizontal and vertical displacements of the laser beam centroid were subsequently analyzed by implementing *detrended fluctuation analysis*. This is a very well-known and widely used methodology to unveil memory effects from time series. Results obtained from this experimental analysis allow us to confirm that both coordinates behave as highly persistent signals for strong turbulent intensities. This finding is relevant for a better comprehension and modeling of the turbulence effects in free-space optical communication systems and other applications related to propagation of optical signals in the atmosphere.
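Detrended fluctuation analysis itself is compact enough to sketch. The toy below (plain NumPy, first-order detrending over non-overlapping windows; real implementations vary in windowing and polynomial order) estimates the scaling exponent of a white-noise series, which should come out near 0.5; persistent signals like the beam-centroid coordinates in the paper would yield exponents above 0.5.

```python
import numpy as np

def dfa(x, scales):
    """First-order detrended fluctuation analysis of a 1D series."""
    y = np.cumsum(x - np.mean(x))            # integrated profile
    F = []
    for s in scales:
        n_seg = len(y) // s
        segs = y[:n_seg * s].reshape(n_seg, s)
        t = np.arange(s)
        ms = []
        for seg in segs:
            coef = np.polyfit(t, seg, 1)     # linear trend per window
            ms.append(np.mean((seg - np.polyval(coef, t))**2))
        F.append(np.sqrt(np.mean(ms)))
    return np.array(F)

# White noise should scale with exponent near 0.5
rng = np.random.default_rng(1)
x = rng.standard_normal(4096)
scales = np.array([16, 32, 64, 128, 256])
F = dfa(x, scales)
alpha = np.polyfit(np.log(scales), np.log(F), 1)[0]
```

The exponent `alpha` is the slope of log F(s) versus log s; values above 0.5 indicate the long-memory persistence reported in the paper.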

**Periodic and Chaotic Exploding Dissipative Solitons**

*Cartes, Carlos; Descalzi, Orazio | Fiber and Integrated Optics | 2015*

This article shows for the first time the existence of periodic exploding dissipative solitons. These non-chaotic explosions appear when higher-order non-linear and dispersive effects are added to the complex cubic–quintic Ginzburg–Landau equation modeling soliton transmission lines. This counter-intuitive phenomenon is the result of period-halving bifurcations leading to order (periodic explosions), followed by period-doubling bifurcations leading to chaos (chaotic explosions). This periodic behavior is persistent even when small amounts of noise are added to the system. Since for ultrashort optical pulses it is necessary to include these higher-order effects, it is conjectured that the predictions can be tested in mode-locked lasers.

**On a quasi-wavelet model of refractive index fluctuations due to atmospheric turbulence**

*Pérez DG, Funes G. | Opt Express | 2015*

When studying light propagation through the atmosphere, it is usual to rely on widely used spectra such as the modified von Kármán or Andrews-Hill. These are relatively tractable models for the fluctuations of the refractive index, and are primarily used because of their mathematical convenience. They correctly describe the fluctuations behaviour in the inertial range yet lack any physical basis outside this range. In recent years, deviations from the Obukhov-Kolmogorov theory (e.g. intermittency, partially developed turbulence, etc.) have been built upon these models through the introduction of arbitrary spectral power laws. Here we introduce a quasi-wavelet model for the refractive index fluctuations which is based on a phenomenological representation of the Richardson cascade. Under this model, the atmospheric refractive index has a correct spectral representation for the inertial range, behaves as expected outside it, and even accounts for non-Kolmogorov behaviour; moreover, it has non-Gaussian statistics. Finally, we are able to produce second order moments under the Rytov approximation for the complex phase; we estimate the angle-of-arrival as an example of application.

**Estimation of Cn² based on scintillation of fixed targets imaged through atmospheric turbulence**

*Gulich D, Funes G, Pérez D, Zunino L. | Opt Lett | 2015*

**Orbital-angular-momentum crosstalk and temporal fading in a terrestrial laser link using single-mode fiber coupling**

*Funes G, Vial M, Anguita JA. | Opt Express | 2015*

Using a mobile experimental testbed, we perform a series of measurements on the detection of laser beams carrying orbital angular momentum (OAM) to evaluate turbulent channel distortions and crosstalk among receive states in an 84-m roofed optical link. We find that a receiver assembly using single-mode fiber coupling serves as a good signal selector in terms of crosstalk rejection. From the recorded temporal channel waveforms, we estimate average crosstalk profiles and propose an appropriate probability density function for the fluctuations of the detected OAM signal. Further measurements of OAM crosstalk are described for a horizontal 400-m link established over our campus.

**Localized plateau beam resulting from strong nonlocal coupling in a cavity filled by metamaterials and liquid-crystal cells**

*M. Tlidi, C. Fernandez-Oto, M. G. Clerc, D. Escaff, and P. Kockaert | Physical Review A, 92, 053838 | 2015*

**Class of compound dissipative solitons as a result of collisions in one and two spatial dimensions**

*Descalzi, O. and Brand, H.R. | Physical Review E | 2014*

We study the interaction of quasi-one-dimensional (quasi-1D) dissipative solitons (DSs). Starting with quasi-1D solutions of the cubic-quintic complex Ginzburg-Landau (CGL) equation in their temporally asymptotic state as the initial condition, we find, as a function of the approach velocity and the real part of the cubic interaction of the two counterpropagating envelopes: interpenetration, one compound state made of both envelopes or two compound states. For the latter class both envelopes show DSs superposed at two different locations. The stability of this class of compound states is traced back to the quasilinear growth rate associated with the coupled system. We show that this mechanism also works for 1D coupled cubic-quintic CGL equations. For quasi-1D states that are not in their asymptotic state before the collision, a breakup along the crest can be observed, leading to nonunique results after the collision of quasi-1D states.

**Localized Structures in Physics and Chemistry**

*Descalzi, O., Rosso, O. A., Larrondo, H.A. | Eur. Phys. J. Special Topics | 2014*

**Exploding dissipative solitons in the cubic-quintic complex Ginzburg-Landau equation in one and two spatial dimensions: A review and a perspective**

*Cartes, C., Descalzi, O., Brand, H. R. | The European Physical Journal Special Topics | 2014*

We review the work on exploding dissipative solitons in one and two spatial dimensions. Features covered include: the transition from modulated to exploding dissipative solitons, the analogue of the Ruelle-Takens scenario for dissipative solitons, inducing exploding dissipative solitons by noise, two classes of exploding dissipative solitons in two spatial dimensions, diffusing asymmetric exploding dissipative solitons as a model for a two-dimensional extended chaotic system. As a perspective we outline the interaction of exploding dissipative solitons with quasi one-dimensional dissipative solitons, breathing quasi one-dimensional solutions and their possible connection with experimental results on convection, and the occurrence of exploding dissipative solitons in reaction-diffusion systems.

**Anomalous diffusion of dissipative solitons in the cubic-quintic complex Ginzburg-Landau equation in two spatial dimensions**

*Jaime Cisternas, Orazio Descalzi, Tony Albers, and Günter Radons | Phys. Rev. Lett. 116, 203901 | 2016*

We demonstrate the occurrence of anomalous diffusion of dissipative solitons in a “simple” and deterministic prototype model: the cubic-quintic complex Ginzburg-Landau equation in two spatial dimensions. The main features of their dynamics, induced by symmetric-asymmetric explosions, can be modeled by a subdiffusive continuous-time random walk, while in the case dominated by only asymmetric explosions, it becomes characterized by normal diffusion.

**Symmetry breaking term effects on explosive localized solitons**

*Cartes, C. and O. Descalzi | The European Physical Journal Special Topics | 2014*

We study the influence of an analog of self-steepening (SST), a term breaking the T → −T symmetry, on explosive localized solutions of the cubic-quintic complex Ginzburg-Landau equation in the anomalous dispersion regime. We find that while this explosive behavior occurs for a wide range of the parameter s characterizing SST, the mean distance between explosions diverges close to a critical value s_{c}. Beyond this value the explosive solution becomes a fixed-shape soliton that moves at constant speed. The transition between explosive and regular behavior is characterized by a transcritical bifurcation controlled by the SST parameter. We also propose a mechanism which explains and predicts the mean distance between explosions as a function of s. We are glad to dedicate this article to Professor Helmut R. Brand on the occasion of his 60th birthday.

**Strong interaction between plants induces circular barren patches: fairy circles**

*Fernandez-Oto, C., Tlidi, M., Escaff, D., Clerc, M. G. | Philosophical Transactions of the Royal Society A | 2014*

Fairy circles consist of isolated or randomly distributed circular areas devoid of any vegetation. They are observed in vast territories in southern Angola, Namibia and South Africa. We report on the formation of fairy circles, and we interpret them as localized structures with a varying plateau size as a function of the aridity. Their stabilization mechanism is attributed to a combined influence of the bistability between the bare state and the uniformly vegetated state, and Lorentzian-like non-local coupling that models the competition between plants. We show how a circular shape is formed, and how the aridity level influences the size of fairy circles. Finally, we show that the proposed mechanism is model-independent.

**Mean field model for synchronization of coupled two-state units and the effect of memory**

*Escaff, D., Lindenberg, K. | The European Physical Journal Special Topics | 2014*

A prototypical model for a mean field second order transition is presented, which is based on an ensemble of coupled two-states units. This system is used as a basic model to study the effect of memory. To wit, we distinguish two types of memories: weak and strong, depending on the feasibility of linearizing the generalized mean field master equation. For weak memory we find static solutions that behave much like those of the memoryless (Markovian) system. The latter exhibits a pitchfork bifurcation as the control parameter is increased, with two stable and one unstable solution. The former exhibits an imperfect pitchfork bifurcation to states with the same behaviors. In both cases, the stability of the static solutions is analyzed via the usual linearization around the equilibrium solution. For strong memories we again find an imperfect pitchfork bifurcation, with two stable and one unstable branch. However, it is no longer possible to analyze these behaviors via the usual linearization, which is local in time, because a strong memory requires knowledge of the system for its entire past. Finally, we are pleased to dedicate this publication to Helmut Brand on the occasion of his 60th birthday.

**Globally coupled stochastic two-state oscillators: Fluctuations due to finite numbers**

*Dias Pinto, Italo’Ivo Lima, Escaff Daniel, Harbola Upendra, Rosas Alexandre, Lindenberg Katja | Physical Review E | 2014*

Infinite arrays of coupled two-state stochastic oscillators exhibit well-defined steady states. We study the fluctuations that occur when the number N of oscillators in the array is finite. We choose a particular form of global coupling that in the infinite array leads to a pitchfork bifurcation from a monostable to a bistable steady state, the latter with two equally probable stationary states. The control parameter for this bifurcation is the coupling strength. In finite arrays these states become metastable: The fluctuations lead to distributions around the most probable states, with one maximum in the monostable regime and two maxima in the bistable regime. In the latter regime, the fluctuations lead to transitions between the two peak regions of the distribution. Also, we find that the fluctuations break the symmetry in the bimodal regime, that is, one metastable state becomes more probable than the other, increasingly so with increasing array size. To arrive at these results, we start from microscopic dynamical evolution equations from which we derive a Langevin equation that exhibits an interesting multiplicative noise structure. We also present a master equation description of the dynamics. Both of these equations lead to the same Fokker-Planck equation, the master equation via a 1/N expansion and the Langevin equation via standard methods of Itô calculus for multiplicative noise. From the Fokker-Planck equation we obtain an effective potential that reflects the transition from the monomodal to the bimodal distribution as a function of a control parameter. We present a variety of numerical and analytic results that illustrate the strong effects of the fluctuations. We also show that the limits N→∞ and t→∞ (t is the time) do not commute. In fact, the two orders of implementation lead to drastically different results.
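A finite-N version of such a system is straightforward to simulate directly. The sketch below uses a hypothetical exponential mean-field rate (illustrative only, not the coupling function of the paper): units aligned with the majority flip rarely, which for strong coupling produces the bimodal, symmetry-broken behavior described above.

```python
import numpy as np

def simulate(N=200, a=2.0, steps=5000, dt=0.01, seed=3):
    """Monte Carlo for N globally coupled two-state units (hypothetical rates)."""
    rng = np.random.default_rng(seed)
    s = rng.choice(np.array([-1, 1]), size=N)   # unit states
    traj = np.empty(steps)
    for k in range(steps):
        m = s.mean()                            # mean field
        rates = np.exp(-a * s * m)              # aligned units flip rarely
        s = np.where(rng.random(N) < rates * dt, -s, s)
        traj[k] = m
    return traj

traj = simulate()
```

With coupling strength `a` above the bifurcation point, the mean field leaves the neighborhood of zero and fluctuates around one of two metastable values; for small N the fluctuations eventually drive transitions between them, as analyzed in the paper.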

**Travelling fronts of the CO oxidation on Pd(111) with coverage-dependent diffusion**

*Cisternas, J., Karpitchka, S. and Wehner, S. | The Journal of Chemical Physics | 2014*

In this work, we study a surface reaction on Pd(111) crystals under ultra-high-vacuum conditions that can be modeled by two coupled reaction-diffusion equations. In the bistable regime, the reaction exhibits travelling fronts that can be observed experimentally using photoelectron emission microscopy. The spatial profile of the fronts reveals a coverage-dependent diffusivity for one of the species. We propose a method to solve the nonlinear eigenvalue problem and compute the direction and the speed of the fronts based on a geometrical construction in phase-space. This method successfully captures the dependence of the speed on control parameters and diffusivities.

**Plant clonal morphologies and spatial patterns as self-organized responses to resource-limited environments**

*Couteron, P., Anthelme, F., Clerc, M., Escaff, D., Fernandez-Oto, C., Tlidi, M. | Philosophical Transactions of the Royal Society A | 2014*

We propose here to interpret and model peculiar plant morphologies (cushions, tussocks) observed in the Andean altiplano as localized structures. Such structures resulting in a patchy, aperiodic aspect of the vegetation cover are hypothesized to self-organize thanks to the interplay between facilitation and competition processes occurring at the scale of basic plant components biologically referred to as ‘ramets’. (Ramets are often of clonal origin.) To verify this interpretation, we applied a simple, fairly generic model (one integro-differential equation) emphasizing via Gaussian kernels non-local facilitative and competitive feedbacks of the vegetation biomass density on its own dynamics. We show that under realistic assumptions and parameter values relating to ramet scale, the model can reproduce some macroscopic features of the observed systems of patches and predict values for the inter-patch distance that match the distances encountered in the reference area (Sajama National Park in Bolivia). Predictions of the model can be confronted in the future with data on vegetation patterns along environmental gradients, so as to anticipate the possible effect of global change on those vegetation systems experiencing constraining environmental conditions.

**Arrays of stochastic oscillators: Nonlocal coupling, clustering, and wave formation**

*Escaff, D., I.L. Dias Pinto, and K. Lindenberg | Physical Review E | 2014*

We consider an array of units each of which can be in one of three states. Unidirectional transitions between these states are governed by Markovian rate processes. The interactions between units occur through a dependence of the transition rates of a unit on the states of the units with which it interacts. This coupling is nonlocal, that is, it is neither an all-to-all interaction (referred to as global coupling), nor is it a nearest neighbor interaction (referred to as local coupling). The coupling is chosen so as to disfavor the crowding of interacting units in the same state. As a result, there is no global synchronization. Instead, the resultant spatiotemporal configuration is one of clusters that move at a constant speed and that can be interpreted as traveling waves. We develop a mean field theory to describe the cluster formation and analyze this model analytically. The predictions of the model are compared favorably with the results obtained by direct numerical simulations.

**Experimental confirmation of long-memory correlations in star-wander data**

*Zunino L, Gulich D, Funes G, Ziad A. | Opt Lett | 2014*

**Estimating the optimal sampling rate using wavelet transform: an application to optical turbulence**

*Funes G, Fernández A, Pérez DG, Zunino L, Serrano E. | Opt Express | 2013*

Sampling rate and frequency content determination for optical quantities related to light propagation through turbulence are paramount experimental topics. Some papers about estimating properties of the optical turbulence seem to use *ad hoc* assumptions to set the sampling frequency used; this chosen sampling rate is assumed good enough to perform a proper measurement. On the other hand, other authors estimate the optimal sampling rate via fast Fourier transform of data series associated with the experiment. When possible, with the help of analytical models, cut-off frequencies or frequency content can be determined; yet these approaches require prior knowledge of the optical turbulence. The aim of this paper is to propose an alternative, practical, experimental method to estimate a proper sampling rate. By means of the discrete wavelet transform, this approach can prevent any loss of information and, at the same time, avoid oversampling. Moreover, it is independent of the statistical model imposed on the turbulence.
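The idea can be illustrated with the simplest wavelet. The sketch below runs a plain Haar decomposition (a toy stand-in; the paper's method is more general) and reports the detail-coefficient energy per level: if the finest level holds a negligible fraction of the energy, the series is oversampled and could be decimated without losing information.

```python
import numpy as np

def haar_level_energies(x, max_level):
    """Energy of Haar detail coefficients at each decomposition level."""
    x = np.asarray(x, float)
    energies = []
    for _ in range(max_level):
        n = (len(x) // 2) * 2
        a = (x[0:n:2] + x[1:n:2]) / np.sqrt(2.0)   # approximation
        d = (x[0:n:2] - x[1:n:2]) / np.sqrt(2.0)   # detail
        energies.append(np.sum(d**2))
        x = a
    return np.array(energies)                      # finest level first

t = np.arange(4096) / 1000.0            # series sampled at 1 kHz
x = np.sin(2 * np.pi * 5.0 * t)         # 5 Hz tone: heavily oversampled
e = haar_level_energies(x, 6)
```

For this oversampled tone almost all of the detail energy sits in the coarse levels, signalling that a much lower sampling rate would suffice.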

**Beam wandering statistics of twin thin laser beam propagation under generalized atmospheric conditions**

*Pérez DG, Funes G. | Opt Express | 2012*

Under the geometric optics approximation it is possible to estimate the covariance between the displacements of two thin beams after they have propagated through a turbulent medium. Previous works have concentrated on long propagation distances to provide models for the wandering statistics. These models are useful when the separation between beams is smaller than the propagation path, regardless of the characteristic scales of the turbulence. In this work we give a complete model for these covariances, introducing absolute limits to the validity of the former approximations. Moreover, these generalizations are established for non-Kolmogorov atmospheric models.

#### Industrial

**Robust Kernel-based Multiclass Support Vector Machines via Second-Order Cone Programming.**

*Maldonado, S., López, J. | Applied Intelligence 46 (4) 983-992 | 2017*

**Group-Penalized Feature Selection and Robust Twin SVM Classification via Second-order Cone Programming**

*López, J., Maldonado, S. | Neurocomputing 235, 112-121. | 2017*

**Embedded Heterogeneous Feature Selection for Conjoint Analysis: a SVM approach using L1 penalty**

*Maldonado, S., Montoya, R., López, J. | Applied Intelligence 46(4). 775-787 | 2017*

**Assessing University Enrollment and Admission Efforts via Hierarchical Classification and Feature Selection**

*Maldonado, S., Armelini, G., Guevara, C.A. | Intelligent Data Analysis 21(4). 945-962. | 2017*

**Simultaneous Preference Estimation and Heterogeneity Control for Choice-based Conjoint via Support Vector Machines**

*López, J., Maldonado, S., Montoya, R. | Journal of the Operational Research Society 68(11). 1323-1334. | 2017*

**Impact on Yard Efficiency of a Truck Appointment System for a Port Terminal**

*Nafarrete-Ramírez, A., Guerra-Olivares, R., Smith, N.R., González-Ramírez, R.G., Voß, S. | Annals of Operations Research, 258(2), 196-216 | 2017*

**Cost-based feature selection for Support Vector Machines – An application in credit scoring**

*Maldonado, S., Pérez, J., Bravo, C. | European Journal of Operational Research 261 (2), 656–665. | 2017*

**A robust formulation for twin multiclass support vector machine**

*López, J., Maldonado, S., Carrasco, M. | Applied Intelligence 47(4), 1031-1043. | 2017*

**Synchronized feature selection for Support Vector Machines with twin hyperplanes**

*Maldonado, S., López, J. | Knowledge-based Systems 132C, 119-128. | 2017*

**Integrated framework for profit-based feature selection and SVM classification in credit scoring**

*Maldonado, S., Bravo, C., Pérez, J., López, J. | Decision Support Systems 104, 113-121. | 2017*

**Dynamic Rough-Fuzzy Support Vector Clustering**

*Saltos, R., Maldonado, S., Weber, R. | IEEE Transactions on Fuzzy Systems 25(6) 1508-1521. | 2017*

**A GRASP-Tabu heuristic approach to territory design for pickup and delivery operations for large scale instances**

*González-Ramírez, R.G., Smith, N.R., Askin, R.G., Camacho-Vallejo, J.F., González-Velarde, J.L. | Mathematical Problems in Engineering, Volume 2017, Article ID 4708135, 13 pages | 2017*

**Reducing Port-Related Empty Trip Truck Emissions: A Mathematical Approach for Truck Appointments with Collaboration**

*Schulte, F., Lalla-Ruiz, E., González-Ramírez, R.G., Voß, S. | Transportation Research Part E, 105, 195-212 | 2017*

**Transaction vs. Switching Costs – Comparison of Three Core Mechanisms for Mobile Markets**

*Basaure, A., Suomi, H., & Hämmäinen, H. | Telecommunications Policy | 2016*

**Pricing and composition of bundles with Constrained Multinomial Logit**

*Juan Pérez, Héctor López-Ospina, Alejandro Cataldo and Juan-Carlos Ferrer | International Journal of Production Research | 2016*

**A time allocation model considering external providers**

*Rosales-Salas, J., and Jara-Díaz, S. | Transportation Research Part B: Methodological | 2016*

**Beyond Transport time: A review of Time Use Modelling**

*Jara-Díaz, S., and Rosales-Salas, J. | Transportation Research Part A: Policy and Practice | 2016*

**A Reconfiguration of Fire Station and Fleet Locations for the Santiago Fire Department**

*Pérez, J., Maldonado, S., Marianov, V. | International Journal of Production Research 54 (11), 3170-3186 | 2016*

**A Novel Multi-class SVM Model Using Second-Order Cone Constraints**

*López, J., Maldonado, S., Carrasco, M. | Applied Intelligence 44(2), 457-469 | 2016*

**Multi-class Second-Order Cone Programming Support Vector Machines**

*López, J., Maldonado, S. | Information Sciences 330, 328-341 | 2016*

**A second-order cone programming formulation for Twin Support Vector Machines**

*Maldonado, S., López, J., Carrasco, M. | Applied Intelligence 45(2), 265-276. | 2016*

**A Second-Order Cone Programming Formulation for Nonparallel Hyperplane Support Vector Machine**

*Carrasco, M., López, J., Maldonado, S. | Expert Systems with Applications 54, 95-104 | 2016*

**A Fleet Management Model for the Santiago Fire Department**

*Pérez, J., Maldonado, S., López, H | Fire Safety Journal 82, 1-11. | 2016*

**The impact of lanes segmentation and booking levels on container terminal gate congestion**

*Gracia, M.D., González-Ramírez, R.G., Mar-Ortiz, J. | Flexible Services and Manufacturing Journal, 29(3-4), 403-432 | 2016*

**Directions for sustainable ports in Latin America and the Caribbean**

*Schulte F., González-Ramírez, R.G., Ascencio, L.M., Voß, S. | International Journal of Transport Economics, 43(3), 315-337. | 2016*

**An online algorithm for the container stacking problem**

*Guerra-Olivares, R., Smith N.R., González-Ramírez R.G. | DYNA, 83 (198), pp. 196-205. | 2016*

**How Do Surficial Lithic Assemblages Weather in Arid Environments? A Case Study from the Atacama Desert, Northern Chile**

*Ugalde, Paula C., Calogero M. Santoro, Eugenia M. Gayo, Claudio Latorre, Sebastián Maldonado, Ricardo Pol‐Holz, and Donald Jackson | Geoarchaeology | 2015*

Archaeological sites composed only of surficial lithics are widespread in arid environments. Numerical dating of such sites is challenging, however, and even establishing a relative chronology can be daunting. One potentially helpful method for assigning relative chronologies is to use lithic weathering, on the assumption that the most weathered artifacts are also the oldest. Yet, few studies have systematically assessed how local environmental processes affect weathering of surficial lithics. Using macroscopic analyses, we compared the weathering of surficial lithic assemblages from seven mid-to-late Holocene archaeological sites sampled from four different microenvironments in the Atacama Desert of northern Chile. Changes in polish, texture, shine, and color were used to establish significant differences in weathering between two kinds of locations: interfluves and canyon sites. Lithics from interfluve sites were moderately to highly weathered by wind and possessed a dark coating, whereas canyon lithics were mildly weathered despite greater exposure to moisture, often lacked indications of eolian abrasion, and lacked dark coatings. Our results show that lithic weathering can be used as a proxy for relative age, but only after considering local environmental factors. The power of such chronologies can be improved by combining archaeological, paleoenvironmental, geomorphological, and taphonomic data.

**New insights on random regret minimization models**

*van Cranenburgh, Sander; Guevara, Cristian Angelo; Chorus, Caspar G | Transportation Research Part A: Policy and Practice | 2015*

This paper develops new methodological insights on Random Regret Minimization (RRM) models. It starts by showing that the classical RRM model is not scale-invariant, and that – as a result – the degree of regret minimization behavior imposed by the classical RRM model depends crucially on the sizes of the estimated taste parameters in combination with the distribution of attribute-values in the data. Motivated by this insight, this paper makes three methodological contributions: (1) it clarifies how the estimated taste parameters and the decision rule are related to one another; (2) it introduces the notion of “profundity of regret”, and presents a formal measure of this concept; and (3) it proposes two new family members of random regret minimization models: the *μ*RRM model, and the Pure-RRM model. These new methodological insights are illustrated by re-analyzing 10 datasets which have been used to compare linear-additive RUM and classical RRM models in recently published papers. Our re-analyses reveal that the degree of regret minimizing behavior imposed by the classical RRM model is generally very limited. This insight explains the small differences in model fit that have previously been reported in the literature between the classical RRM model and the linear-additive RUM model. Furthermore, we find that on 4 out of 10 datasets the *μ*RRM model improves model fit very substantially as compared to the RUM and the classical RRM model.
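The classical RRM model assigns each alternative *i* the regret R_i = Σ_{j≠i} Σ_m ln(1 + exp(β_m (x_jm − x_im))) and derives choice probabilities from a regret-based logit. The scale point made above follows directly: rescaling β changes the exponent, not just the units. A minimal stdlib-only sketch with made-up attribute values and taste parameters:

```python
import math

def classical_rrm_regret(X, beta):
    """Classical RRM regret R_i = sum_{j != i} sum_m ln(1 + exp(beta_m * (x_jm - x_im))).
    X is a list of alternatives, each a list of attribute values."""
    R = []
    for i, xi in enumerate(X):
        r = 0.0
        for j, xj in enumerate(X):
            if j == i:
                continue
            for b, xjm, xim in zip(beta, xj, xi):
                r += math.log1p(math.exp(b * (xjm - xim)))
        R.append(r)
    return R

def choice_probabilities(R):
    """Regret-based logit: P_i proportional to exp(-R_i)."""
    w = [math.exp(-r) for r in R]
    s = sum(w)
    return [v / s for v in w]

# Three alternatives, two attributes; negative beta means lower is better
# (illustrative numbers, not taken from any of the paper's datasets).
X = [[3.0, 2.0], [2.0, 3.0], [1.0, 1.0]]
beta = [-1.0, 1.0]
P = choice_probabilities(classical_rrm_regret(X, beta))
```

Multiplying `beta` by a constant here alters the convexity of each ln(1 + exp(·)) term, which is exactly the non-scale-invariance the paper formalizes via its "profundity of regret" measure.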

**A multi-class SVM approach based on the l1-norm minimization of the distances between the reduced convex hulls**

*Carrasco, M., López, J., Maldonado, S. | Pattern Recognition | 2015*

Multi-class classification is an important pattern recognition task that can be addressed accurately and efficiently by Support Vector Machine (SVM). In this work we present a novel SVM-based multi-class classification approach based on the center of the configuration, a point which is equidistant to all classes. The center of the configuration is obtained from the dual formulation by minimizing the distances between the reduced convex hulls using the l1-norm, while the decision functions are subsequently constructed from this point. This work also extends the ideas of Zhou et al. (2002) [37] to multi-class classification. The use of l1-norm provides a single linear programming formulation, which reduces the complexity and confers scalability compared with other multi-class SVM methods based on quadratic programming formulations. Experiments on benchmark datasets demonstrate the virtues of our approach in terms of classification performance and running times compared with various other multi-class SVM methods.

**Kernel Penalized K-means: A feature selection method based on Kernel K-means**

*Maldonado, Sebastian; Carrizosa, Emilio; Weber, Richard | Information Sciences | 2015*

We present an unsupervised method that selects the most relevant features using an embedded strategy while maintaining the cluster structure found with the initial feature set. It is based on the idea of simultaneously minimizing the violation of the initial cluster structure and penalizing the use of features via scaling factors. As the base method we use Kernel K-means which works similarly to K-means, one of the most popular clustering algorithms, but it provides more flexibility due to the use of kernel functions for distance calculation, thus allowing the detection of more complex cluster structures. We present an algorithm to solve the respective minimization problem iteratively, and perform experiments with several data sets demonstrating the superior performance of the proposed method compared to alternative approaches.

**Fieller Stability Measure: a novel model-dependent backtesting approach**

*Bravo, Cristian; Maldonado, Sebastian | Journal of the Operational Research Society | 2015*

Dataset shift is present in almost all real-world applications, since most of them are constantly dealing with changing environments. Detecting fractures in datasets in time allows recalibrating the models before a significant decrease in the model’s performance is observed. Since small changes are normal in most applications and do not justify the efforts that a model recalibration requires, we are only interested in identifying those changes that are critical for the correct functioning of the model. In this work we propose a model-dependent backtesting strategy designed to identify significant changes in the covariates, relating a confidence zone of the change to a maximal deviance measure obtained from the coefficients of the model. Using logistic regression as a predictive approach, we performed experiments on simulated data and on a real-world credit scoring dataset. The results show that the proposed method has better performance than traditional approaches, consistently identifying major changes in variables while taking into account important characteristics of the problem, such as sample sizes, variances, and uncertainty in the coefficients.

**Profit-based feature selection using support vector machines – General framework and an application for customer retention**

*Maldonado, Sebastian; Flores, Alvaro; Verbraken, Thomas; Bart Baesens, Richard Weber | Applied Soft Computing | 2015*

Churn prediction is an important application of classification models that identify those customers most likely to attrite based on their respective characteristics, described by e.g. socio-demographic and behavioral variables. Since nowadays more and more such features are captured and stored in the respective computational systems, an appropriate handling of the resulting information overload becomes a highly relevant issue when it comes to building customer retention systems based on churn prediction models. As a consequence, feature selection is an important step of the classifier construction process. Most feature selection techniques, however, are based on statistically inspired validation criteria, which do not necessarily lead to models that optimize goals specified by the respective organization. In this paper we propose a profit-driven approach for classifier construction and simultaneous variable selection based on support vector machines. Experimental results show that our models outperform conventional techniques for feature selection, achieving superior performance with respect to business-related goals.

**Identifying next relevant variables for segmentation by using feature selection approaches**

*Seret, Alex; Maldonado, Sebastian; Baesens, Bart | Expert Systems with Applications | 2015*

Data mining techniques are widely used by researchers and companies in order to solve problems in a myriad of domains. While these techniques are being adopted and used in daily activities, new operational challenges are encountered concerning the steps following this adoption. In this paper, the problem of updating and improving an existing clustering model by adding relevant new variables is studied. A relevant variable is here defined as a feature that is highly correlated with the current structure of the data, since our main goal is to improve the model by adding new information to the current segmentation without modifying it significantly. For this purpose, a general framework is proposed and subsequently applied in a real business context involving an event organizer facing this problem. In extensive experiments on real data, the performance of the proposed approach is compared to existing methods using different evaluation metrics, leading to the conclusion that the proposed technique performs better for this specific problem.

**Advanced conjoint analysis using feature selection via support vector machines**

*Maldonado, Sebastian; Montoya, Ricardo; Weber, Richard | European Journal of Operational Research | 2015*

One of the main tasks of conjoint analysis is to identify consumer preferences about potential products or services. Accordingly, different estimation methods have been proposed to determine the corresponding relevant attributes. Most of these approaches rely on the post-processing of the estimated preferences to establish the importance of such variables. This paper presents new techniques that simultaneously identify consumer preferences and the most relevant attributes. The proposed approaches have two appealing characteristics. Firstly, they are grounded on a support vector machine formulation that has proved important predictive ability in operations management and marketing contexts and secondly they obtain a more parsimonious representation of consumer preferences than traditional models. We report the results of an extensive simulation study that shows that unlike existing methods, our approach can accurately recover the model parameters as well as the relevant attributes. Additionally, we use two conjoint choice experiments whose results show that the proposed techniques have better fit and predictive accuracy than traditional methods and that they additionally provide an improved understanding of customer preferences.

**Robust feature selection for multiclass Support Vector Machines using second-order cone programming**

*Lopez, Julio; Maldonado, Sebastian | Intelligent Data Analysis | 2015*

This work addresses the issue of high dimensionality for linear multiclass Support Vector Machines (SVMs) using second-order cone programming (SOCP) formulations. These formulations provide a robust and efficient framework for classification, while an adequate feature selection process may improve predictive performance. We extend the ideas of SOCP-SVM from binary to multiclass classification, while a sequential backward elimination algorithm is proposed for variable selection, defining a contribution measure to determine the feature relevance. Experimental results with multiclass microarray datasets demonstrate the effectiveness of a low-dimensional data representation in terms of performance.

**Churn prediction via support vector classification: An empirical comparison**

*Maldonado, Sebastian | Intelligent Data Analysis | 2015*

An empirical framework for customer churn prediction modeling is presented in this work. This task represents a very interesting business analytics challenge, given its highly class imbalanced nature, and the presence of noisy variables that adversely affect the prediction capabilities of classification models. In this work, two SVM-based techniques are compared: Support Vector Data Description (SVDD), and standard two-class SVMs. The proposed methodology involves the comparison of these two methods under different conditions of class imbalance and using different subsets of variables. Feature ranking is performed via the Fisher Score Criterion, while the class imbalance problem is dealt with through resampling techniques, namely random undersampling and SMOTE oversampling. Experiments on four customer churn prediction datasets show the advantages of SVDD: it outperforms standard SVM in terms of predictive performance, demonstrating the importance of techniques that take the class imbalance problem into account.
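Of the two resampling techniques mentioned, random undersampling is the simplest: discard majority-class rows at random until the classes are balanced. A stdlib-only sketch (a hypothetical helper on toy data, not the paper's code):

```python
import random

def random_undersample(X, y, seed=0):
    """Discard majority-class rows at random until every class has as many
    rows as the smallest class. Returns the balanced (X, y)."""
    rng = random.Random(seed)
    counts = {}
    for label in y:
        counts[label] = counts.get(label, 0) + 1
    n_min = min(counts.values())
    keep = []
    for c in sorted(counts):
        idx = [i for i, label in enumerate(y) if label == c]
        if len(idx) > n_min:
            idx = sorted(rng.sample(idx, n_min))  # subsample the majority class
        keep.extend(idx)
    return [X[i] for i in keep], [y[i] for i in keep]

X = [[float(i), float(i)] for i in range(10)]
y = [0] * 8 + [1] * 2               # 8 majority vs 2 minority examples
Xb, yb = random_undersample(X, y)   # balanced: 2 rows of each class
```

SMOTE, the oversampling counterpart used in the paper, instead synthesizes new minority examples by interpolating between minority neighbors rather than discarding data.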

**Spectrum and license flexibility for 5G networks**

*Basaure, A., & Matinmikko, M., Kliks, A., Holland, O. | IEEE Communications Magazine, 53(7), 42-49 | 2015*

Spectrum sharing is a key solution facilitating availability of the necessary spectrum for 5G wireless networks. This article addresses the problem of flexible spectrum sharing by the application of adaptive licensing among interested stakeholders. In particular, it acts as a proponent of “pluralistic licensing” and verifies it in three simulation scenarios that are of strong interest from the perspective of 5G networks. The concluding analysis offers discussion of the potential benefits offered to spectrum holders and other interested players through the application of the pluralistic licensing concept.

**Adoption of Dynamic Spectrum Access technologies: A System Dynamics approach**

*Basaure, A; Sridhar, V; Hämmäinen, H. | Telecommunication systems | 2015*

The introduction of dynamic spectrum access (DSA) technologies in mobile markets faces technical, economic and regulatory challenges. This paper defines industry openness and spectrum centralization as the two key factors that affect the adoption of DSA technologies. The adoption process is analyzed employing a comprehensive System Dynamics model that considers the network and substitution effects. Two possible scenarios, namely operator-centric and user-centric adoption of DSA technologies are explored in the model. The analysis indicates that operator-centric DSA technologies may be adopted in most countries where spectrum is centralized, while end-user centric DSA technologies may be adopted in countries with decentralized spectrum regime and in niche emerging services. The study highlights the role of standards-based design and concludes by citing case studies that show the practicality of this analysis and associated policy prescriptions.

**A time-hierarchical microeconomic model of activities**

*López-Ospina H., Martínez, F. J., Cortés, C. E. | Transportation | 2015*

The microeconomic approach to explain consumers’ behavior regarding the choice of activities, consumption of goods and use of time is extended in this paper by explicitly including the temporal dimension in the choice-making process. Recognizing that some activities, such as a job and education, involve a long-term commitment and that other activities, such as leisure and shopping, are conducted and modified in the short term, we make these differences explicit in a microeconomic framework. Thus, a hierarchical temporal structure defines the time window or frequency of adjusting the variables of activities (duration, location and consumption of goods) and the magnitude of the resources (time and money) spent. We specify and analyze a stylized microeconomic model with two time scales, the macro and micro level, concluding that preference observations at the micro level, such as transport mode choice, are strongly conditioned by the prevailing choices at the macro scale. This result has strong implications for the current theory of the value and allocation of time, as well as for the location of activities, as illustrated by a numerical example.

**Understanding time use: Daily or weekly data?**

*Jara-Díaz, S., and Rosales-Salas, J. | Transportation Research Part A: Policy and Practice, Vol. 76, pp. 38-57. | 2015*

**Feature Selection for Multiclass Support Vector Machines Using Second-Order Cone Programming**

*López, J., Maldonado, S. | Intelligent Data Analysis 19 (S1), Special Issue in Business Analytics, 117-133 | 2015*

**Editorial. Intelligent Data Analysis 19 (S1)**

*Bravo, C., Davison, M., Maldonado, S., Weber, R. | Special Issue in Business Analytics, 1-2. | 2015*

**An Embedded Feature Selection Approach for Support Vector Classification via Second-order Cone Programming**

*Maldonado, S., López, J. | Intelligent Data Analysis 19 (6), 1233-1257 | 2015*

**A Bi-level Optimization Model for Aid Distribution after the Occurrence of a Disaster**

*Camacho-Vallejo, J.F., González-Rodríguez, E., Almaguer F.J., González-Ramírez, R.G. | Journal of Cleaner Production, 105, 134-145. | 2015*

**Feature selection for high-dimensional class-imbalanced data sets using support vector machines**

*Maldonado, S., Weber, R., Famili, F. | Information Sciences | 2014*

Feature selection and classification of imbalanced data sets are two of the most interesting machine learning challenges, attracting growing attention from both industry and academia. Feature selection addresses the dimensionality reduction problem by determining a subset of available features to build a good model for classification or prediction, while the class-imbalance problem arises when the class distribution is too skewed. Both issues have been studied independently in the literature, and a plethora of methods to address high dimensionality as well as class imbalance has been proposed. The aim of this work is to explore both issues simultaneously, proposing a family of methods that select those attributes that are relevant for the identification of the target class in binary classification. We propose a backward elimination approach based on successive holdout steps, whose contribution measure is based on a balanced loss function obtained on an independent subset. Our experiments are based on six highly imbalanced microarray data sets, comparing our methods with well-known feature selection techniques, and obtaining better predictions with consistently fewer relevant features.
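The flavor of such a procedure can be shown in a stylized backward elimination loop. In this sketch a simple nearest-centroid classifier stands in for the SVM, balanced accuracy plays the role of the balanced loss, and the data are toy values chosen so one feature is informative and the other is noise; none of this is the paper's actual method.

```python
def balanced_accuracy(y_true, y_pred):
    """Mean of per-class accuracies: insensitive to class imbalance."""
    classes = sorted(set(y_true))
    accs = []
    for c in classes:
        idx = [i for i, y in enumerate(y_true) if y == c]
        accs.append(sum(1 for i in idx if y_pred[i] == c) / len(idx))
    return sum(accs) / len(accs)

def nearest_centroid(Xtr, ytr, Xte, feats):
    """Predict by closest class centroid, restricted to features `feats`."""
    classes = sorted(set(ytr))
    cent = {}
    for c in classes:
        rows = [x for x, y in zip(Xtr, ytr) if y == c]
        cent[c] = [sum(r[f] for r in rows) / len(rows) for f in feats]
    return [min(classes,
                key=lambda c: sum((x[f] - m) ** 2
                                  for f, m in zip(feats, cent[c])))
            for x in Xte]

def backward_elimination(Xtr, ytr, Xval, yval, n_keep):
    """Repeatedly drop the feature whose removal hurts the balanced
    loss on the holdout set the least."""
    feats = list(range(len(Xtr[0])))
    while len(feats) > n_keep:
        scores = []
        for f in feats:
            rest = [g for g in feats if g != f]
            scores.append(balanced_accuracy(
                yval, nearest_centroid(Xtr, ytr, Xval, rest)))
        feats.pop(scores.index(max(scores)))
    return feats

# Feature 0 separates the classes; feature 1 is constant noise.
Xtr = [[0.0, 5.0], [0.0, 5.0], [1.0, 5.0], [1.0, 5.0]]
ytr = [0, 0, 1, 1]
selected = backward_elimination(Xtr, ytr, Xtr, ytr, 1)  # keeps feature 0
```

Evaluating the contribution measure on a holdout set, as the paper does, is what keeps the ranking from overfitting the (small, imbalanced) training sample.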

**Alternative Second-Order Cone Programming Formulations for Support Vector Classification**

*Maldonado, S., López, J. | Information Sciences | 2014*

This paper presents two novel second-order cone programming (SOCP) formulations that determine a linear predictor using Support Vector Machines (SVMs). Inspired by the soft-margin SVM formulation, our first approach (ξ-SOCP-SVM) proposes a relaxation of the conic constraints via a slack variable, penalizing it in the objective function. The second formulation (r-SOCP-SVM) is based on the LP-SVM formulation principle: the bound of the VC dimension is loosened properly using the l∞-norm, and the margin is directly maximized. The proposed methods have several advantages: The first approach constructs a flexible classifier, extending the benefits of the soft-margin SVM formulation to second-order cones. The second method obtains comparable results to the SOCP-SVM formulation with less computational effort, since one conic restriction is eliminated. Experiments on well-known benchmark datasets from the UCI Repository demonstrate that our approach accomplishes the best classification performance compared to the traditional SOCP-SVM formulation, LP-SVM, and to standard linear SVM.

**Imbalanced data classification using second-order cone programming Support Vector Machines**

*Maldonado, S., López, J. | Pattern Recognition | 2014*

Learning from imbalanced data sets is an important machine learning challenge, especially in Support Vector Machines (SVM), where the assumption of equal cost of errors is made and each object is treated independently. Second-order cone programming SVM (SOCP-SVM) studies each class separately instead, providing quite an interesting formulation for the imbalanced classification task. This work presents a novel second-order cone programming (SOCP) formulation, based on the LP-SVM formulation principle: the bound of the VC dimension is loosened properly using the l∞-norm, and the margin is directly maximized using two margin variables associated with each class. A regularization parameter C is considered in order to control the trade-off between the maximization of these two margin variables. The proposed method has the following advantages: it provides better results, since it is specially designed for imbalanced classification, and it reduces computational complexity, since one conic restriction is eliminated. Experiments on benchmark imbalanced data sets demonstrate that our approach accomplishes the best classification performance, compared with the traditional SOCP-SVM formulation and with cost-sensitive formulations for linear SVM.

**Feature Selection for Support Vector Machines via Mixed Integer Linear Programming**

*Maldonado, S., Pérez, J., Labbé, M., Weber, R. | Information Sciences | 2014*

The performance of classification methods, such as Support Vector Machines, depends heavily on the proper choice of the feature set used to construct the classifier. Feature selection is an NP-hard problem that has been studied extensively in the literature. Most strategies propose the elimination of features independently of classifier construction by exploiting statistical properties of each of the variables, or via greedy search. All such strategies are heuristic by nature. In this work we propose two different Mixed Integer Linear Programming formulations based on extensions of Support Vector Machines to overcome these shortcomings. The proposed approaches perform variable selection simultaneously with classifier construction using optimization models. We ran experiments on real-world benchmark datasets, comparing our approaches with well-known feature selection techniques and obtained better predictions with consistently fewer relevant features.

**Robust Classification of Imbalanced Data using Ensembles of One-Class and Two-Class SVMs**

*Maldonado, S., Montecinos, C. | Intelligent Data Analysis | 2014*

The class imbalance problem is a relatively new challenge that has attracted growing attention from both industry and academia, since it strongly affects classification performance. Research has also established that class imbalance is not an issue by itself; rather, its relationship with class overlapping and noise has an important impact on prediction performance and stability. This fact has motivated the development of several approaches for the classification of imbalanced data; see, e.g., [29,39]. In this paper, we present credit card customer churn prediction, an important topic in business analytics, using an ensemble of classifiers. Since this problem is considered highly imbalanced, we employ different techniques for classification, such as Support Vector Data Description (SVDD) and two-class SVMs. The main idea is to address both class imbalance and class overlapping by stacking different classification approaches, while evaluating the diversity of the individual classifiers using meta-learning measures. We performed experiments on artificial data sets and one real customer churn prediction problem from a Chilean financial entity, comparing our approach with well-known classification techniques for imbalanced data. The proposed strategy achieves an improvement of 6.1% over the best individual classifier in terms of predictive performance, providing accurate and robust classification models for different levels of balance and noise.

**Implications of dynamic spectrum management for regulation**

*Basaure, A., Marianov, V., & Paredes, R. | Telecommunications Policy. Vol 39 (7), 563–579 | 2014*

The Coase theorem suggests that a regulatory scheme, which clearly defines spectrum property rights and allows transactions between participants, induces an optimal spectrum assignment. This paper argues that the conditions required by Coase are gradually achieved by the introduction of Dynamic Spectrum Management (DSM), which enables a dynamic reassignment of spectrum bands at different times and places. DSM reduces the costs associated with spectrum transactions and thus provides an opportunity to enhance efficiency through voluntary transactions. This study analyzes the factors affecting the benefits of a regulatory scheme allowing transactions, compares and quantifies the potential gains associated with different spectrum regimes by employing agent-based simulations and suggests policy implications for spectrum regulation.

**A method for designing a strategy map using AHP and linear programming**

*Quezada, L.E., López-Ospina, H. | International Journal of Production Economics | 2014*

This paper presents a method to support the identification of the cause-effect relationships among the strategic objectives in the strategy map of a balanced scorecard. A strategy map contains the strategic objectives of an organization, grouped into four perspectives: (a) finances, (b) clients, (c) internal processes and (d) growth and learning, all of them linked through cause-effect relationships. The issue addressed in this paper is the identification of those relationships, a topic on which the existing literature is scarce. A previous work was revisited, which uses the Analytic Hierarchy Process (AHP) to establish the “importance” of the arcs (relationships) of a strategy map. That work then deletes those arcs with an importance lower than a given threshold level defined by the authors. This paper goes further by selecting the arcs using a multi-objective linear programming (LP) model. The model has two objectives: (a) to minimize the number of selected relationships and (b) to maximize the total importance of the selected relationships. It is interesting to see that a trade-off between the two objectives arises, so a control variable is used to incorporate the importance given to each objective by managers. The paper also shows some applications of the method and their analysis.
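The trade-off between the two objectives can be seen in a weighted-sum scalarization. If the binary arc choices were independent (a simplification that ignores the structural constraints of the actual model), maximizing lam·(total importance) − (1 − lam)·(number of arcs) reduces to a per-arc threshold rule; the importances below are made-up AHP-style values.

```python
def select_arcs(importance, lam):
    """Weighted-sum scalarization of the two objectives:
    maximize lam * (total importance) - (1 - lam) * (number of arcs).
    With independent 0/1 choices, arc i enters iff lam * w_i > 1 - lam."""
    return [i for i, w in enumerate(importance) if lam * w > 1 - lam]

w = [0.9, 0.5, 0.2, 0.7]    # illustrative arc importances from an AHP step
# A larger control weight lam favors total importance over parsimony:
few = select_arcs(w, 0.5)   # parsimony dominates: no arc kept
more = select_arcs(w, 0.8)  # arcs 0, 1 and 3 kept
```

Sweeping `lam` from 0 to 1 traces the trade-off curve that the control variable in the paper lets managers position themselves on.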

**A Review of Primary Mine Ventilation System Optimization**

*Acuña, Enrique I., Lowndes, Ian S. | Interfaces | 2014*

Within the mining industry, a safe and economical mine ventilation system is an essential component of all underground mines. In recent years, research scientists and engineers have explored operations research methods to assist in the design and safe operation of primary mine ventilation systems. The main objective of these studies is to develop algorithms to identify the primary mine ventilation systems that minimize the fan power costs, including their working performance. The principal task is to identify the number, location, and duty of fans and regulators for installation within a defined ventilation network to distribute the required fresh airflow at minimum cost. The successful implementation of these methods may produce a computational design tool to aid mine planning and ventilation engineers. This paper presents a review of the results of a series of recent research studies that have explored the use of mathematical methods to determine the optimum design of primary mine ventilation systems relative to fan power costs.

**Feature selection for support vector machines via mixed integer linear programming**

*Maldonado, S.; Pérez, J.; Labbé, M.; Weber, R. | INFORMATION SCIENCES | 2014*

**A Collaborative Framework for a Port Logistics Chain**

*Ascencio, L., González-Ramírez, R.G., Smith, N., Bearzotti, L., Camacho, F. | Journal of Applied Research and Technology, 12(3), 444-458 | 2014*

**Methodologies for Granting and Managing Loans for Micro-Entrepreneurs: New Developments and Practical Experiences**

*Bravo, C., Maldonado, S., Weber, R. | European Journal of Operational Research 227(2), 358-366 | 2013*

**Identification of Non-Technical Competency Gaps of Engineering Graduates in Chile**

*Le Boeuf, R., Pizarro, M., Espinoza R. | International Journal of Engineering Education | 2013*

A study was performed to identify the non-technical competencies needed by engineering graduates in Chile. A list of abilities and knowledge attributes was derived from similar studies and from the expectations expressed by professional organizations. Input was received from managers at 75 different companies across a broad range of industries and sizes, and from 116 engineering graduates, regarding the importance of and preparation for 57 abilities and knowledge attributes in 10 categories. Each competency was given a priority based on one of three criteria: 50% or more of managers reporting it to be of the highest level of importance, an average rating greater than a cutoff, or a weighted measure of priority incorporating both the importance and the gap between graduate preparation and needs. The results suggest that, to managers in Chile, the most important non-technical competencies are in the areas of project control, ethics, communications, teamwork, innovation, and budgeting. The competencies identified as important were similar to those seen in studies in other countries, but with a greater emphasis on ethics and innovation and less emphasis on quality and customer focus. A method for prioritizing the important competencies is also presented. Many initiatives were proposed to improve specific non-technical competencies that graduates need to compete in the Chilean job market. This paper presents the methodology, the findings, the comparisons with results from similar studies in other countries, and the strategies developed as a result of the findings.
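
The three prioritization criteria can be sketched as a small predicate. This is a hedged illustration only: the rating scale, cutoff value, and weighting scheme below are assumptions, since the paper's raw data and exact formulas are not reproduced here.

```python
def is_priority(manager_ratings, grad_preparation, cutoff=4.0):
    """Flags a competency as a priority if any one of three criteria
    holds: (1) at least 50% of managers give the highest rating
    (assumed to be 5 on a 1-5 scale), (2) the average importance
    exceeds a cutoff, or (3) a weighted measure combining importance
    and the preparation gap exceeds the cutoff."""
    n = len(manager_ratings)
    share_top = sum(1 for r in manager_ratings if r == 5) / n
    avg = sum(manager_ratings) / n
    gap = max(avg - grad_preparation, 0.0)   # importance minus preparation
    weighted = avg * (1 + gap / 5)           # assumed weighting scheme
    return share_top >= 0.5 or avg > cutoff or weighted > cutoff

# Invented ratings: a competency rated highest by 3 of 5 managers.
print(is_priority([5, 5, 4, 5, 3], grad_preparation=3.0))  # True
print(is_priority([3, 3, 4, 2, 3], grad_preparation=3.0))  # False
```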

**Insider Trading en Chile: Uso de Información Privilegiada en la bolsa de valores chilena entre 2006 y 2008**

*Rosales, Jorge | Editorial Académica Española. Saarbrücken, Alemania. ISBN 978-3-8473-6036-0. | 2012*

**Rentabilidad Educacional: Medición Econométrica de la influencia de la Educación sobre el nivel de los Ingresos en Chile**

*Rosales, J., Prado, J., Canales, K. | Editorial Académica Española. Saarbrücken, Alemania. ISBN 978-3-659-04742-8 | 2012*

**A branch-and-cluster coordination scheme for selecting prison facility sites under uncertainty**

*Hernández, P., Alonso-Ayuso, A., Bravo, F., Escudero, L., Guignard, M., Marianov, V., Weintraub, A. | Computers & Operations Research, Volume 39 | 2012*

A multi-period stochastic model and an algorithmic approach to the location of prison facilities under uncertainty are presented and applied to the Chilean prison system. The problem consists of finding locations and sizes for a preset number of new jails and determining where and when to increase the capacity of both new and existing facilities over a time horizon, while minimizing the expected costs of the prison system. Constraints include maximum inmate transfer distances, upper and lower bounds on facility capacities, and the scheduling of facility openings and expansions, among others. The uncertainty lies in the future demand for capacity, both because of the long time horizon under study and because of changes in criminal laws, which could strongly modify the historical trends in penal population growth. The effect of penal reform on capacity demand is represented in the model through probabilistic scenarios, and the large-scale model is solved via a heuristic mixture of branch-and-fix coordination and branch-and-bound schemes that satisfies the constraints in all scenarios, the so-called branch-and-cluster coordination scheme. We discuss computational experience and compare the results obtained for the minimum expected cost and average-scenario strategies. Our results demonstrate that the minimum expected cost solution leads to better solutions than the average-scenario approach. Additionally, the results show that the proposed stochastic algorithmic approach outperforms the plain use of a state-of-the-art optimization engine, at least for the three versions of the real-life case that we tested.
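
The contrast between the two strategies compared above can be written schematically in generic two-stage notation (assumed here, not taken from the paper): the stochastic model hedges the first-stage decisions \(x\) against every scenario \(s \in S\) with probability \(p_s\), while the average-scenario model replaces the scenarios by their mean:

```latex
\min_{x,\{y_s\}}\; c^{\top}x + \sum_{s\in S} p_s\, q_s^{\top} y_s
\qquad\text{versus}\qquad
\min_{x,\,y}\; c^{\top}x + \bar{q}^{\top} y,
\quad \bar{q} = \sum_{s\in S} p_s\, q_s .
```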

**A Dantzig-Wolfe decomposition approach for a multi-product capacitated lot sizing problem with pricing**

*González-Ramírez, R.G., Smith, N.R., Askin, R.G. | International Journal of Production Research, 49(4), 1173-1196 | 2011*

**OPF with SVC and UPFC modeling for longitudinal systems**

*R. Palma, L. Vargas, J. Pérez, R. Torres | IEEE TRANSACTIONS ON POWER SYSTEMS | 2004*

**Engineering and Technology Management: Education**

*D. F. Kocaoglu, F. E. Rivera | Directions, INFORMS XXXIII – SINGAPORE, Singapore, June 25-28 | 1995*

**Engineering and Technology Management**

*D. F. Kocaoglu, F. E. Rivera | TIMS/ORSA Joint National Meeting, Detroit, October 23-26 | 1994*

#### Matemática

**Stochastic topology design optimization for continuous elastic materials**

*Carrasco, M., Ivorra, B., Ramos, A.M. | Computer Methods in Applied Mechanics and Engineering | 2015*

In this paper, we develop a stochastic model for topology optimization. We find robust structures that minimize the compliance for a given main load having a stochastic behavior. We propose a model that takes into account the expected value of the compliance and its variance. We show that, as in the case of truss structures, these values can be computed with an equivalent deterministic approach, and that the stochastic model can be transformed into a nonlinear programming problem, reducing the complexity of this kind of problem. First, we obtain an explicit expression (at the continuous level) for the expected compliance and its variance; then we consider a numerical discretization of this expression using a finite element method; and finally we apply an optimization algorithm. This approach makes it possible to solve design problems that include point, surface, or volume loads with dependent or independent perturbations. We check the capacity of our formulation to generate structures that are robust to main loads and their perturbations by considering several 2D and 3D numerical examples. To this end, we analyze the behavior of our model by studying the impact on the optimized solutions of the expected-compliance and variance weight coefficients, the laws used to describe the random loads, the variance of the perturbations, and the dependence/independence of the perturbations. The results are then compared with similar ones found in the literature for a different modeling approach.
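
In generic form (the symbols are assumed here, not quoted from the paper: \(\rho\) the material distribution, \(F\) the random load, \(\alpha\) the weight coefficient studied in the examples), such an objective combines the two statistics of the compliance \(c\) under a volume constraint:

```latex
\min_{\rho}\;\; \alpha\,\mathbb{E}\!\left[c(\rho,F)\right] \;+\; (1-\alpha)\,\operatorname{Var}\!\left[c(\rho,F)\right]
\qquad \text{s.t.} \quad \int_{\Omega}\rho\,d\Omega \le V_{\max},\quad 0 \le \rho \le 1 .
```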

**A multi-class SVM approach based on the l1-norm minimization of the distances between the reduced convex hulls**

*Carrasco, M., López, J., Maldonado, S. | Pattern Recognition | 2015*

Multi-class classification is an important pattern recognition task that can be addressed accurately and efficiently by Support Vector Machines (SVMs). In this work we present a novel SVM-based multi-class classification approach based on the center of the configuration, a point which is equidistant from all classes. The center of the configuration is obtained from the dual formulation by minimizing the distances between the reduced convex hulls using the l1-norm, and the decision functions are subsequently constructed from this point. This work also extends the ideas of Zhou et al. (2002) [37] to multi-class classification. The use of the l1-norm yields a single linear programming formulation, which reduces complexity and confers scalability compared with other multi-class SVM methods based on quadratic programming formulations. Experiments on benchmark datasets demonstrate the virtues of our approach in terms of classification performance and running times compared with various other multi-class SVM methods.

#### Movilidad Urbana

**Exploring the effect of boarding and alighting ratio on passengers’ behaviour at metro stations by laboratory experiments**

*Seriani, S., Fernandez, R., Luangboriboon, N., Fujiyama, T. | Journal of Advanced Transportation, Article ID 6530897, DOI: 10.1155/2019/6530897 | 2019*

**Experimental study for estimating the passenger space at metro stations with platform edge doors**

*Seriani, S, Fujiyama, T | Transportation Research Record: Journal of the Transportation Research Board | 2019*

#### Obras Civiles

**Biofiltration of benzo[α]pyrene, toluene and formaldehyde in air by a consortium of Rhodococcus erythropolis and Fusarium solani: Effect of inlet loads, gas flow and temperature**

*Vergara-Fernández, A., Yánez, D., Morales, P., Scott, F., Aroca, G., Diaz-Robles, L., & Moreno-Casas, P. | Chemical Engineering Journal, 332, 702-710 | 2018*

**Bayesian nonlinear structural FE model and seismic input identification for damage assessment of civil structures**

*Astroza, R., Ebrahimian, H., Li, Y., and Conte, J.P. | Mechanical Systems and Signal Processing, 93, 661–687 | 2017*

**Site response analysis using one-dimensional equivalent-linear analysis and Bayesian filtering**

*Astroza, R., Pastén, C., and Ochoa-Cornejo, F. | Computers and Geotechnics, 89, 43–54. | 2017*

**Predominant period and equivalent viscous damping ratio identification on a full-scale building shake table test**

*Chen, M.C., Astroza, R., Restrepo, J.I., Conte, J.P., Hutchinson, T.C., and Bock, Y. | Earthquake Engineering & Structural Dynamics, 46(14), 2459–2477 | 2017*

**Nonlinear FE model updating and reconstruction of the response of an instrumented seismic isolated bridge to the 2010 Maule Chile earthquake**

*Li, Y., Astroza, R., and Conte, J.P. | Earthquake Engineering & Structural Dynamics, 46(15), 2699–2716 | 2017*

**A nonlinear model inversion method for joint system parameter, noise, and input identification of civil structures**

*Ebrahimian, H., Astroza, R., Conte, J.P., and Papadimitriou, C. | Procedia Engineering, 199, 924–929 | 2017*

**Nonlinear FE model updating of seismic isolated bridge instrumented during the 2010 Mw 8.8 Maule-Chile Earthquake**

*Li, Y., Astroza, R., Conte, J.P., and Soto, P. | Procedia Engineering, 199, 3003–3008 | 2017*

**Time-variant modal parameters and response behavior of a base-isolated building tested on a shake table**

*Astroza, R., Gutierrez, G., Repenning, C., and Hernández, F. | Earthquake Spectra, 34(1), 121-143 | 2018*

**Insights into Delayed Ettringite Formation Damage Through Acoustic Nonlinearity**

*Rashidi, M., Paul, A., Kim, J.-Y., Jacobs, L., Kurtis, K. | Cement and Concrete Research, V 95, pp 1–8. | 2017*

**Transfer and Development Length of High-Strength Duplex Stainless Steel Strand in Prestressed Concrete Piles**

*Paul, A., Gleich, L., Kahn, L. | PCI Journal, V 62, No. 3, pp 59–71. | 2017*

**Structural Performance of Prestressed Concrete Bridge Piles Using Duplex Stainless Steel Strands**

*Paul, A., Gleich, L., Kahn, L. | Journal of Structural Engineering, V 143, No. 17, pp 04017042-1– 04017042-9. | 2017*

**Traffic Flow Analysis**

*Fernandez, R., Yousaf, M.H., Ellis, T., Chen, Z. and Velastin, S.A. | Computer Vision in Intelligent Transportation Systems. Eds: R. Loce, M. Trivedi, and R. Bala. Wiley. 131- 163. | 2017*

**Computation of the Basset force: recent advances and environmental flow applications**

*Moreno-Casas, P. A., & Bombardelli, F. A. | Environmental Fluid Mechanics, 16(1), 193-208. | 2016*

**Elucidating the key role of the fungal mycelium on the biodegradation of n-pentane as a model hydrophobic VOC**

*Vergara-Fernández, A., Scott, F., Moreno-Casas, P., Díaz-Robles, L., & Muñoz, R. | Chemosphere, 157, 89-96. | 2016*

**System identification of a full-scale five-story reinforced concrete building tested on the NEES-UCSD shake table**

*Astroza, R., Ebrahimian, H., Conte, J.P., Restrepo, J.I., and Hutchinson, T.C. | Structural Control and Health Monitoring, 23(3), 535–559. | 2016*

This paper presents the identification of the modal properties of a full-scale five-story reinforced concrete building, fully outfitted with nonstructural components and systems (NCSs), tested on the NEES-UCSD shake table. The fixed-base building is subjected to a sequence of earthquake motions selected to progressively damage the structure and NCSs. Between seismic tests, ambient vibration response is recorded. Additionally, low-amplitude white noise (WN) base excitation tests are conducted during the test protocol. Using the recorded vibration data, five state-of-the-art system identification (SID) methods are employed, including three output-only and two input-output methods. These methods are used to estimate the modal properties of an equivalent viscously damped, linear elastic, time-invariant model of the building at different levels of damage, and their results are compared. The results show that the modal properties identified by the different methods are in good agreement and that the estimated modal parameters are affected by the amplitude of excitation and by structural/nonstructural damage. Detailed visual inspections of damage performed between the seismic tests permit correlation of the identified modal parameters with the actual damage. The identified natural frequencies are used to determine the progressive loss of apparent global stiffness of the building, and the state-space models identified using WN test data are employed to investigate the relative modal contributions to the measured building response at different damage states. This research provides a unique opportunity to investigate the performance of different SID methods when applied to vibration data recorded in a real building subjected to progressive damage induced by a realistic source of dynamic excitation.

**Influence of the construction process and nonstructural components on the modal properties of a five-story building**

*Astroza, R., Ebrahimian, H., Conte, J.P., Restrepo, J.I., and Hutchinson, T.C. | Earthquake Engineering & Structural Dynamics, 45(7), 1063–1084. | 2016*

A full-scale five-story reinforced concrete building was built and tested on the NEES-UCSD shake table during the period from May 2011 to May 2012. The purpose of this test program was to study the response of the structure and nonstructural components and systems (NCSs), and their dynamic interaction, during seismic base excitation of different intensities. The building specimen was tested first under a base-isolated condition and then under a fixed-base condition. As the building was being erected, an accelerometer array was deployed on the specimen to study the evolution of its modal parameters during the construction process and the placement of major NCSs. A sequence of dynamic tests, including daily ambient vibration, shock (free vibration), and forced vibration tests (low-amplitude white noise and seismic base excitations), was performed on the building at different stages of construction. Different state-of-the-art system identification methods, including three output-only and two input-output methods, were used to estimate the modal properties of the building. The obtained results make it possible to investigate in detail the effects of the construction process and NCSs on the dynamic parameters of this building system and to compare the modal properties obtained from the different methods, as well as the performance of these methods.

**Full-scale structural and nonstructural building system performance during earthquakes: Part I – Specimen description, test protocol, and structural response**

*Chen, M.C., Pantoli, E., Wang, X., Astroza, R., Ebrahimian, H., Hutchinson, T.C., Conte, J.P., Restrepo, J.I., Marin, C., Walsh, K., Bachman, R., Hoehler, M., Englekirk, R., and Faghihi, M. | Earthquake Spectra | 2016*

A landmark experimental program was conducted to advance the understanding of nonstructural system performance during earthquakes. The centerpiece of this effort involved shake table testing a full-scale five-story reinforced concrete building furnished with a broad variety of nonstructural components and systems (NCSs) including complete and operable egress, mechanical and electrical systems, façades, and architectural layouts. The building-NCS system was subjected to a suite of earthquake motions of increasing intensity, while base isolated and then fixed at its base. In this paper, the major components of the test specimen, including the structure and its NCSs, the monitoring systems, and the seismic test protocol are described in detail. Important response and damage characteristics of the structure are also presented. A companion paper describes the damage observed for the various NCSs and correlates these observations with the structure’s response.

**Full-scale structural and nonstructural building system performance during earthquakes: Part II – NCS damage states**

*Pantoli, E., Chen, M.C., Wang, X., Astroza, R., Ebrahimian, H., Hutchinson, T.C., Conte, J.P., Restrepo, J.I., Marin, C., Walsh, K., Bachman, R., Hoehler, M., Englekirk, R., and Faghihi, M. | Earthquake Spectra | 2016*

Nonstructural components and systems (NCSs) provide little to no load bearing capacity to a building; however, they are essential to support its operability. As a result, 75–85% of the initial building financial investment is associated with these elements. The vulnerability of NCSs even during low intensity earthquakes is repeatedly exposed, resulting in large economic losses, disruption of building functionality, and concerns for life safety. This paper describes and classifies damage to NCSs observed during landmark shake table tests of a full-scale five-story reinforced concrete building furnished with a broad variety of NCSs. This system-level test program provides a unique dataset due to the completeness and complexity of the investigated NCSs. Results highlight that the interactions between disparate nonstructural systems, in particular displacement compatibility, as well as the interactions between the NCSs and the building structure often govern their seismic performance.

**Landmark dataset from the building nonstructural components and systems (BNCS) project.**

*Pantoli, E., Chen, M.C., Hutchinson, T.C., Astroza, R., Conte, J.P., Ebrahimian, H., Restrepo, J.I., and Wang, X. | Earthquake Spectra | 2016*

A full-scale five-story reinforced concrete building fully equipped with nonstructural components and systems (NCSs) was tested to a near collapse condition on the large outdoor shake table at the University of California, San Diego in 2012. This landmark test program was intended to advance the understanding of the seismic behavior of NCSs installed in buildings, and for this reason it was coined the “Building Nonstructural Components and Systems” (BNCS) project. The BNCS test specimen was monitored with digital still cameras, more than 80 video cameras, 500 analog sensors, and a global positioning system (GPS) and subjected to a suite of earthquake input motions of increasing intensity while in a base isolated and fixed-base configuration. The resulting high-quality dataset is now publicly available within the NEES repository (NEEShub). The goal of this paper is to outline the types of data available and provide a roadmap for navigating through it in an effort to support its future use by the community.

**Nonlinear finite element model updating for damage identification of civil structures using batch Bayesian estimation**

*Ebrahimian, H., Astroza, R., Conte, J.P., and de Callafon, R.A. | Mechanical Systems and Signal Processing. | 2016*

This paper presents a framework for structural health monitoring (SHM) and damage identification of civil structures. This framework integrates advanced mechanics-based nonlinear finite element (FE) modeling and analysis techniques with a batch Bayesian estimation approach to estimate time-invariant model parameters used in the FE model of the structure of interest. The framework uses the input excitation and dynamic response of the structure and updates a nonlinear FE model of the structure to minimize the discrepancies between predicted and measured response time histories. The updated FE model can then be interrogated to detect, localize, classify, and quantify the state of damage and predict the remaining useful life of the structure. As opposed to recursive estimation methods, the batch Bayesian estimation approach uses the entire time history of the input excitation and output response of the structure as a batch of data to estimate the FE model parameters through a number of iterations. In the case of a non-informative prior, the batch Bayesian method leads to an extended maximum likelihood (ML) estimation method that jointly estimates the time-invariant model parameters and the measurement noise amplitude. The extended ML estimation problem is solved efficiently using a gradient-based interior-point optimization algorithm. Gradient-based optimization algorithms require the FE response sensitivities with respect to the model parameters to be identified. The FE response sensitivities are computed accurately and efficiently using the direct differentiation method (DDM). The estimation uncertainties are evaluated based on the Cramer–Rao lower bound (CRLB) theorem by computing the exact Fisher information matrix using the FE response sensitivities with respect to the model parameters. The accuracy of the proposed uncertainty quantification approach is verified using a sampling approach based on the unscented transformation.
Two validation studies, based on realistic structural FE models of a bridge pier and a moment-resisting steel frame, are performed to validate the performance and accuracy of the presented nonlinear FE model updating approach and demonstrate its application to SHM. These validation studies show the excellent performance of the proposed framework for SHM and damage identification, even in the presence of high measurement noise and/or poor initial estimates of the model parameters. Furthermore, the detrimental effects of input measurement noise on the performance of the proposed framework are illustrated and quantified through one of the validation studies.
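
With an i.i.d. Gaussian measurement-noise model, the extended ML problem described above takes the following generic form (notation assumed here rather than copied from the paper: \(\theta\) the FE model parameters, \(\sigma\) the noise amplitude, \(y_n\) and \(h_n(\theta)\) the measured and FE-predicted responses of dimension \(m\) at time step \(n\)):

```latex
-2\log L(\theta,\sigma) \;=\; N m \log \sigma^{2} \;+\; \frac{1}{\sigma^{2}} \sum_{n=1}^{N} \big\| y_n - h_n(\theta) \big\|^{2} \;+\; \text{const},
\qquad (\hat{\theta},\hat{\sigma}) \;=\; \arg\min_{\theta,\sigma}\, \big(-2\log L(\theta,\sigma)\big).
```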

**Finite element model updating using simulated annealing hybridized with unscented Kalman filter**

*Astroza, R., Nguyen, L.T., and Nestorović, T. | Computers & Structures, 177, 176–191 | 2016*

This paper proposes a method for finite element (FE) model updating of civil structures. The method is a hybrid global optimization algorithm combining simulated annealing (SA) with the unscented Kalman filter (UKF). The objective function in the optimization problem can be defined in the modal, time, or frequency domain. The algorithm improves the accuracy, convergence rate, and computational cost of the SA algorithm by locally improving the accepted candidates through the UKF. The proposed methodology is validated using a mathematical function and numerically simulated response data from linear and nonlinear FE models of realistic three-dimensional structures.
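
A compact sketch of the hybrid idea, plain SA with a local improvement of each accepted candidate, on a toy one-dimensional objective. The paper's UKF-based refinement is replaced here by a crude coordinate polish, since a full filter is beyond a few lines; the objective, cooling schedule, and constants are all illustrative, not taken from the paper.

```python
import math
import random

def simulated_annealing(f, x0, steps=2000, t0=1.0, seed=0):
    """Plain SA minimizer with a crude local polish of accepted
    candidates (standing in for the paper's UKF-based improvement)."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best, fbest = x, fx
    for k in range(steps):
        t = t0 * (1 - k / steps) + 1e-9          # linear cooling
        cand = x + rng.gauss(0, 0.5)             # random neighbor
        fc = f(cand)
        if fc < fx or rng.random() < math.exp((fx - fc) / t):
            # local polish: small coordinate search around the candidate
            for h in (0.1, -0.1, 0.01, -0.01):
                if f(cand + h) < fc:
                    cand, fc = cand + h, f(cand + h)
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = x, fx
    return best, fbest

# Toy multimodal objective; its global minimum is at x = 0 with value 0.
f = lambda x: x * x + 2 * math.sin(5 * x) ** 2
xhat, fmin = simulated_annealing(f, x0=4.0)
print(round(xhat, 2), round(fmin, 3))
```

The polish step is what distinguishes the hybrid from plain SA: accepted candidates are nudged toward a nearby local minimum before the next annealing move.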

**Shake table testing of an elevator system in a full-scale five-story building**

*Wang, X., Hutchinson, T.C., Astroza, R., Conte, J.P., Restrepo, J.I., Hoehler, M.S., and Ribeiro, W. | Earthquake Engineering & Structural Dynamics, In Press. | 2016*

**Seasonal nearshore sediment resuspension and water clarity at Lake Tahoe**

*Reardon, K. E., Moreno-Casas, P. A., Bombardelli, F. A., & Schladow, S. G. | Lake and Reservoir Management, 32(2), 132-145. | 2015*

**Experimental evaluation of the seismic response of a roof-top mounted cooling tower**

*Astroza, R., Pantoli, E., Selva, F., Restrepo, J.I., Hutchinson, T.C., and Conte, J.P. | Earthquake Spectra | 2015*

This paper examines the seismic response of a cooling tower supported on four isolation/restraint (I/R) mounts. The tower was mounted on the roof of a five-story reinforced concrete building built at full-scale and tested on the large outdoor unidirectional shake table at the University of California, San Diego. The building was tested in two phases: (1) base-isolated and (2) fixed-base. In each phase, the building was subjected to six earthquake input ground motions reproduced by the shake table. In this paper, the measured response of the cooling tower and its supporting system are analyzed and compared to current code provisions.

**Material parameter identification in distributed plasticity FE models of frame-type structures using nonlinear stochastic filtering**

*Astroza, R., Ebrahimian, H., and Conte, J.P. | ASCE Journal of Engineering Mechanics | 2015*

This paper proposes a novel framework that combines high-fidelity mechanics-based nonlinear (hysteretic) finite-element (FE) models and a nonlinear stochastic filtering technique, the unscented Kalman filter, to estimate unknown material parameters in frame-type structures. The proposed identification framework updates nonlinear FE models using spatially limited noisy measurement data, and it can be further used for damage identification purposes. To validate its effectiveness, robustness, and accuracy, this framework is applied to a cantilever steel column representing a bridge pier and to a two-dimensional steel frame. Both structures are modeled using beam-column elements with distributed plasticity and are subjected to a suite of earthquake ground motions of varying intensity. The results indicate that the material parameters of the nonlinear FE models are accurately estimated, provided that the loading intensity is sufficient to exercise the parts (branches) of the nonlinear material model that are governed by the parameters to be identified, and that the measured response quantities are sufficiently sensitive to those parameters, especially when a limited number of measurements is considered.

**Extended Kalman filter for material parameter estimation in nonlinear structural finite element models using direct differentiation method**

*Ebrahimian, H., Astroza, R., and Conte, J.P. | Earthquake Engineering & Structural Dynamics | 2015*

This paper presents a novel nonlinear finite element (FE) model updating framework, in which advanced nonlinear structural FE modeling and analysis techniques are used jointly with the extended Kalman filter (EKF) to estimate time-invariant parameters associated with the nonlinear material constitutive models used in the FE model of the structural system of interest. The EKF as a parameter estimation tool requires the computation of structural FE response sensitivities (total partial derivatives) with respect to the material parameters to be estimated. Employing the direct differentiation method, a well-established procedure for FE response sensitivity analysis, facilitates the application of the EKF to the parameter estimation problem. To verify the proposed nonlinear FE model updating framework, two proof-of-concept examples are presented. For each example, the FE-simulated response of a realistic prototype structure to a set of earthquake ground motions of varying intensity is polluted with artificial measurement noise and used as the structural response measurement to estimate the assumed unknown material parameters using the proposed framework. The first example consists of a cantilever steel bridge column with three unknown material parameters, while a three-story, three-bay moment-resisting steel frame with six unknown material parameters is used as the second example. Both examples demonstrate the excellent performance of the proposed parameter estimation framework, even in the presence of high measurement noise.
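
A minimal sketch of the EKF in parameter-estimation form, on a toy scalar problem rather than a structural FE model: the unknown parameter is treated as the state of a random-walk model, and the exact measurement-model derivative plays the role that DDM-computed FE response sensitivities play in the paper. All values and the measurement model are invented for illustration.

```python
import math
import random

def ekf_parameter_estimation(measure, theta0, p0, r, steps):
    """Toy scalar EKF: the unknown parameter theta is the state
    (random walk, no process noise); the measurement model is
    h(theta) = theta * sin(t), whose exact derivative dh/dtheta
    = sin(t) supplies the sensitivity the filter needs."""
    theta, p = theta0, p0
    for t in range(1, steps + 1):
        h = theta * math.sin(t)       # predicted measurement
        H = math.sin(t)               # sensitivity dh/dtheta
        s = H * p * H + r             # innovation variance
        k = p * H / s                 # Kalman gain
        theta += k * (measure(t) - h) # measurement update
        p -= k * H * p                # covariance update
    return theta

true_theta = 2.5
rng = random.Random(1)
noisy = lambda t: true_theta * math.sin(t) + rng.gauss(0, 0.05)
est = ekf_parameter_estimation(noisy, theta0=0.0, p0=10.0, r=0.05**2, steps=200)
print(round(est, 2))
```

Because the toy measurement model is linear in the parameter, the EKF here is exact; in the paper's setting the FE response is nonlinear in the material parameters, which is where the linearization (and the DDM sensitivities) matter.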

**Dynamic characteristics and seismic behavior of prefabricated steel stairs in a full-scale five-story building shake table test program**

*Wang, X., Astroza, R., Hutchinson, T.C., Conte, J.P., and Restrepo, J.I. | Earthquake Engineering & Structural Dynamics | 2015*

This paper investigates the dynamic characteristics and seismic behavior of prefabricated steel stairs in a full-scale five-story building shake table test program. The test building was subjected to a suite of earthquake input motions and low-amplitude white noise base excitations, first while the building was isolated at its base and subsequently while it was fixed to the shake table platen. This paper presents the modal characteristics of the stairs identified using the data recorded from the white noise base excitation tests, as well as the physical and measured responses of the stairs during the earthquake tests. The observed damage to the stairs is categorized into three distinct damage states and is correlated with the interstory drift demands of the building. These shake table tests highlight the seismic vulnerability of modern stair systems and, in particular, identify improving the deformability of flight-to-building connections as a key research need.

**Tactical design of high-demand bus transfers**

*Guevara, C. A., & Donoso, G. A. | Transport Policy | 2014*

We use micro-simulation to assess five tactical designs seeking variance reduction at a high-demand transfer stop that resembles a representative case of Transantiago, the public transportation system of Santiago de Chile. We explore demand splitting, route differentiation, offline holding, online holding, and prepayment, all of which are applied locally at the transfer stop and affect only the feeders. We analyze the impacts on operators and users, both at the transfer stop and downstream, finding that online holding has the best overall performance. These findings were robust to various changes in the simulation assumptions. The paper closes by discussing the implications of these results for public policy design and possible extensions of this research.

**Wind-driven nearshore sediment resuspension in a deep lake during winter**

*Reardon, K.E., Bombardelli, F.A., Moreno-Casas, P.A., Rueda, F.J., Schladow, S.G. | Water Resources Research | 2014*

Ongoing public concern over declining water quality at Lake Tahoe, California-Nevada (USA) led to an investigation of wind-driven nearshore sediment resuspension that combined field measurements and modeling. Field data included wind speed and direction, vertical profiles of water temperature and currents, near-bed velocity, lakebed sediment characteristics, and suspended sediment concentration and particle size distribution. Bottom shear stress was computed from ADV-measured near-bed velocity data, adapting a turbulent kinetic energy method to lakes, and partitioned into the contributions attributed to wind-waves, mean currents, and random motions. When the total shear stress exceeded the critical shear stress, the contribution to overall shear stress was about 80% from wind-waves and 10% each from mean currents and random motions. Wind-waves were therefore the dominant mechanism driving sediment resuspension, as corroborated by simultaneous increases in shear stress and total measured sediment concentration. The wind-wave model STWAVE was successfully modified to simulate wind-wave-induced sediment resuspension for the viscous-dominated flow typical of lakes; previous lake applications of STWAVE had been limited to special instances of fully turbulent flow. To address the validity of expressions for sediment resuspension in lakes, sediment entrainment rates were found to be well represented by a modified version of the 1991 García and Parker formula. Lastly, in situ measurements of suspended sediment concentration and particle size distribution revealed that the predominance of fine particles (by particle count), which most negatively impact clarity, was unchanged by wind-related sediment resuspension. Therefore, we cannot assume that wind-driven sediment resuspension contributes to Lake Tahoe’s declining nearshore clarity.

**Synthetic Hybrid Broadband Seismograms Based on InSAR Coseismic Displacements**

*Fortuno, C., de la Llera, J.C., Wicks, C.W., Abell, J.A. | Bulletin of the Seismological Society of America | 2014*

Conventional acceleration records do not properly account for the observed coseismic ground displacements, thus leading to an inaccurate definition of the seismic demand needed for the design of flexible (long period) structures. Large coseismic displacements observed during the 27 February 2010 Maule earthquake suggest that this effect should be included in the design of flexible structures by modifying the design ground motions and spectra considered. Consequently, Green’s functions are used herein to compute synthetic low‐frequency seismograms that are consistent with the coseismic displacement field obtained from interferometry using synthetic aperture radar (SAR) images. In this case, the coseismic displacement field was determined by interfering twenty SAR images of the Advanced Land Observation Satellite (ALOS)/PALSAR satellite taken between 12 October 2007 and 28 May 2010. These images cover the region affected by the 2010 *M*_{w} 8.8 Maule earthquake. Synthetic broadband seismograms are built by superimposing the low‐pass filtered synthetic low‐frequency seismograms with high‐frequency strong‐motion data. The broadband seismograms generated are then consistent with the coseismic displacement field and the high‐frequency content of the earthquake. A sensitivity analysis is performed using three different fault and slip parameters: the rupture velocity, the corner frequency, and the slip rise time. Results show that the choice of the corner frequency of the low‐pass filter, *f*_{c}=1/*T*_{c}, involves a trade‐off between acceleration and displacement accuracy. Furthermore, spectral response for long periods, say *T*≥8 s, is relatively insensitive to the value of *T*_{c}, whereas shorter periods are strongly dependent on both the slip rise time and *T*_{c}. In general, larger displacements consistent with coseismic data are obtained using this technique instead of digitally processing the acceleration ground‐motion records.
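
The merging step described in this abstract (a low-pass-filtered synthetic seismogram superimposed on high-pass-filtered strong-motion data at a corner frequency *f*_{c}=1/*T*_{c}) can be sketched with a pair of complementary filters. This is an illustrative sketch only: the zero-phase Butterworth filters, the function name `hybrid_broadband`, and the default *T*_{c} are assumptions, not the authors' exact processing.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def hybrid_broadband(synthetic_lf, recorded_hf, fs, Tc=8.0, order=4):
    """Merge a synthetic low-frequency seismogram with a recorded
    high-frequency one at corner frequency fc = 1/Tc (in Hz).

    Assumption: zero-phase Butterworth filters; the paper only
    specifies the complementary low-pass/high-pass split.
    """
    fc = 1.0 / Tc                      # corner frequency in Hz
    nyq = 0.5 * fs                     # Nyquist frequency
    b_lo, a_lo = butter(order, fc / nyq, btype="low")
    b_hi, a_hi = butter(order, fc / nyq, btype="high")
    low = filtfilt(b_lo, a_lo, synthetic_lf)    # coseismic (long-period) part
    high = filtfilt(b_hi, a_hi, recorded_hf)    # strong-motion (short-period) part
    return low + high
```

Varying `Tc` in such a sketch reproduces the trade-off the abstract reports: a longer *T*_{c} preserves more of the coseismic displacement at the cost of acceleration accuracy near the crossover band.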

**Recopilación de criterios de evaluación de proyectos y alternativas de ubicación de infraestructura portuaria**

*Repetto F., Quiroz C., Vásquez J. | Seminario Internacional de Ingeniería y Operación Portuaria 2014 | 2014*

**High Strength Lightweight Concrete (HSLC): Challenges when Moving from the Laboratory to the Field**

*Moreno, D., Zunino, F., Paul, A., Lopez, M. | Construction and Building Materials, V 56, No. 15, pp 44–52. | 2014*

**Using video to assess the performance of phase-resolving models**

*Repetto, F., Catalan, P.A. and Cienfuegos, R. | Proceedings Coastal Dynamics 2013, ASCE | 2013*

**Damage assessment and seismic intensity analysis of the 2010 (Mw 8.8) Maule Earthquake**

*Astroza, M., Ruiz, S., and Astroza, R. | Earthquake Spectra | 2012*

The MSK-64 seismic intensities inside the damage area of the 2010 Maule earthquake are estimated. For this purpose, field surveys of damage to typical single-family buildings located in 111 cities of the affected area were used. Cities located close to the northern part of the earthquake rupture suffered higher damage, but most of this damage concerned adobe and unreinforced masonry houses. Minor and moderate damage was noted in modern low-rise engineered and nonengineered constructions, especially in confined masonry buildings. Despite the great length of the rupture, which reached more than 450 km, only one intensity value equal to IX was determined, and 21% of the values were greater than VII. The attenuation of seismic intensity was controlled by the distance to the main asperity more than by the distance to the hypocenter, which appears to be an important characteristic of megathrust earthquakes and should therefore be considered in the seismic risk assessment of large subduction environments.

**Concrete Containing Natural Pozzolans: New Challenges for Internal Curing**

*Espinoza-Hijazin, G., Paul, A., Lopez, M. | Journal of Materials in Civil Engineering, V 24, No. 8, pp 981–988. | 2012*

**3D numerical simulation of particle-particle collisions in saltation mode near stream beds**

*Moreno, P., & Bombardelli, F. | Acta Geophysica, 60(6), 1661-1688 | 2012*

**Enhancement of long period components of recorded and synthetic ground motions using InSAR**

*Abell J.A., de la Llera J.C., Wicks C. | Soil Dynamics and Earthquake Engineering | 2011*

Tall buildings and flexible structures require a better characterization of long-period ground motion spectra than the one provided by current seismic building codes. Motivated by this, a methodology is proposed and tested to improve recorded and synthetic ground motions so that they are consistent with the observed co-seismic displacement field obtained from interferometric synthetic aperture radar (InSAR) analysis of image data for the 2007 Tocopilla earthquake (*M*_{w}=7.7) in Northern Chile. The observed motions are corrected such that, after double integration, they are coherent with the local value of the residual displacement. Synthetic records are generated using a stochastic finite-fault model coupled with a long-period pulse to capture the long-period fling effect.

It is observed that the proposed co-seismic correction yields records with more accurate long-period spectral components as compared with regular correction schemes such as acausal filtering. These signals provide an estimate for the velocity and displacement spectra, which are essential for tall-building design. Furthermore, hints are provided as to the shape of long-period spectra for seismic zones prone to large co-seismic displacements such as the Nazca-South American zone.

**Assessing Lightweight Aggregate Efficiency for Maximizing Internal Curing Performance**

*Paul, A., Lopez, M. | ACI Materials Journal, V 108, No. 4, pp 385-393. | 2011*

**The use of seawater as process water at Las Luces copper–molybdenum beneficiation plant in Taltal (Chile)**

*Moreno, P. A., Aral, H., Cuevas, J., Monardes, A., Adaro, M., Norgate, T., & Bruckard, W. | Minerals Engineering, 24(8), 852-858 | 2011*

**Evaluation of the operating performance of conventional versus flocculator secondary clarifiers at the Kuwahee Wastewater Treatment Plant, Knoxville, Tennessee**

*Moreno, P. A., & Reed, G. D. | Water Environment Research, 79(5), 547-553 | 2007*

#### Química

**Highly active copper-based Ce@TiO2 core-shell catalysts for the selective reduction of nitric oxide with carbon monoxide in the presence of oxygen**

*N. López, G. Aguila, P. Araya, S. Guerrero | Catalysis Communications, volume 104, pages 17–21. | 2018*

**Study of the effect of the incorporation of TiO2 nanotubes on the mechanical and photodegradation properties of polyethylenes**

*A. Zenteno, I. Lieberwirth, F. Catalina, T. Corrales, S. Guerrero, D. Vasco, P. Zapata | Composites Part B: Engineering, volume 112, pages 66-73. | 2017*

**Biodegradation of benzo[a]pyrene, toluene, and formaldehyde from the gas phase by a consortium of Rhodococcus erythropolis and Fusarium solani**

*Morales, P., Cáceres, M., Scott, F., Díaz-Robles, L., Aroca, G., Vergara-Fernández, A. | Appl. Microbiol. Biotechnol. 101, 6765–6777. | 2017*

**On the solution of differential-algebraic equations through gradient flow embedding**

*del Rio-Chanona, E.A., Bakker, C., Fiorelli, F., Paraskevopoulos, M., Scott, F., Conejeros, R., Vassiliadis, V.S. | Comput. Chem. Eng. 103, 165–175. | 2017*

**A generalized disjunctive programming framework for the optimal synthesis and analysis of processes for ethanol production from corn stover**

*Scott, F., Aroca, G., Caballero, J.A., Conejeros, R. | Bioresour. Technol. 236, 212–224. | 2017*

**Constrained NLP via Gradient Flow Penalty Continuation: Towards Self-Tuning Robust Penalty Schemes**

*Scott, F., Conejeros, R., Vassiliadis, V.S. | Comput. Chem. Eng. 101, 243–258. | 2017*

**Highly active Rb/Cu/YCeO2 catalyst for the storage of nitric oxide under lean conditions**

*C. Bormann, N. Rodríguez, P. Araya, S. Guerrero | Catalysis Communications, volume 76, pages 76–81. | 2016*

**Lactose-Derived Prebiotics: A Process Perspective**

*Illanes, A., Guerrero, C., Vera, C., Wilson, L., Conejeros, R., Scott, F. | … | 2016*

**Technical and Economic Analysis of Industrial Production of Lactose-Derived Prebiotics With Focus on Galacto-Oligosaccharides**

*Scott, F., Vera, C., Conejeros, R. | Lactose-Derived Prebiotics, Academic Press, pp. 261–284. | 2016*

**Photocatalytic inhibition of bacteria by TiO2 nanotubes-doped polyethylene composites**

*Yañez, D., Guerrero, S., Lieberwirth, I., Ulloa, M.T., Gomez, T., Rabagliati, F.M., Zapata, P.A. | Applied Catalysis A: General | 2015*

Polyethylene (PE) and polyethylene-octadecene (LLDPE) composites containing titanium dioxide nanotubes were synthesized and applied to the inhibition of selected bacteria. The polymerization rate increased with the incorporation of octadecene compared with bare ethylene, while with modified nanotubes (O–TiO_{2}–Ntbs) the catalytic activity showed a slight decrease compared with the pure polymer. Regarding physical properties, the melting temperature and crystallinity of PE were higher than those of LLDPE. LLDPE presented lower rigidity than PE and thus a lower Young’s modulus. On the other hand, with the incorporation of nanotubes, Young’s modulus did not change significantly with respect to PE. After 2 h of contact without UVA irradiation, the PE/O–TiO_{2}–Ntbs composite showed a reduction of *Escherichia coli* of 36.7%, whereas LLDPE/O–TiO_{2}–Ntbs showed 63.5%. The photocatalytic reduction (under UVA light) was much higher: after 60 min the LLDPE/*O*-TiO_{2}-Ntbs composites showed a bacterial reduction of 99.9%, whereas the PE/*O*-TiO_{2}-Ntbs showed 42.6%.

**Potassium titanate for the production of biodiesel**

*D. Salinas, S. Guerrero, A. Cross, P. Araya, E.E. Wolf | Fuel, volume 166C, pages 237-244. | 2015*

**Corn stover semi-mechanistic enzymatic hydrolysis model with tight parameter confidence intervals for model-based process design and optimization**

*Scott, F., Li, M., Williams, D.L., Conejeros, R., Hodge, D.B., Aroca, G. | Bioresour. Technol. 177, 255–65. | 2015*

**Effect of initial substrate/inoculum ratio on cell yield in the removal of hydrophobic VOCs in fungal biofilters**

*Vergara-Fernández A., J. San Martín-Davison, L. Díaz-Robles, O. Soto-Sánchez | Revista Mexicana de Ingeniería Química | 2014*

Different kinetic models have been proposed to describe the elimination of hydrophobic volatile organic compounds (VOCs) by fungal biofiltration. In this process, the ratio of the initial substrate concentration (C_{pb0}) to the initial biomass (X_{0}) has been shown to influence the cell yield. This paper presents a study of the effect of the C_{pb0}/X_{0} ratio on the observed cell yield (Y_{obs}) in a fixed-bed batch system (microcosm) using a gaseous carbon source, as an approximation to its application in the fungal biofiltration of hydrophobic VOCs. Assays were carried out in fixed-bed microcosms using the filamentous fungus *Fusarium solani* as a biological agent and n-pentane as a carbon and energy source. The results indicated that Y_{obs} in the gas phase is inversely proportional to the C_{pb0}/X_{0} ratio, with values of 0.9 to 0.35 g_{biomass} g_{pentane}^{−1} obtained when the C_{pb0}/X_{0} ratio is changed from 0.1 to 1.0 g_{pentane} g_{biomass}^{−1}. The results also indicate that more than 60% of the n-pentane was consumed due to energy spilling, and that strong dissociation of catabolism from anabolism occurred at higher C_{pb0}/X_{0} ratios.
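
The observed-yield quantity in this abstract is, by its standard definition, biomass produced per unit of substrate consumed. A minimal sketch of that definition (the function name and the numbers in the usage note are illustrative, not the paper's data):

```python
def observed_yield(x0, xf, s0, sf):
    """Observed cell yield Y_obs = biomass produced / substrate consumed,
    here in g_biomass per g_pentane.

    x0, xf : initial and final biomass (g)
    s0, sf : initial and final substrate (g)
    """
    if s0 <= sf:
        raise ValueError("no substrate was consumed")
    return (xf - x0) / (s0 - sf)
```

For example, a run that gains 0.9 g of biomass while consuming 1.0 g of pentane gives Y_obs = 0.9, the upper value reported; energy spilling at higher C_{pb0}/X_{0} ratios drives this figure down toward 0.35.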

#### Transporte

**Exploring the effect of boarding and alighting ratio on passengers’ behaviour at metro stations by laboratory experiments**

*Seriani, S., Fernandez, R., Luangboriboon, N., Fujiyama, T. | Journal of Advanced Transportation, Article ID 6530897, DOI: 10.1155/2019/6530897 | 2019*

**Experimental study for estimating the passenger space at metro stations with platform edge doors**

*Seriani, S, Fujiyama, T | Transportation Research Record: Journal of the Transportation Research Board | 2019*

**Learning about what research is and how researchers do it: Supporting the pursuit of and transition to postgraduate studies**

*Crisan, C., Geraniou, E., Townsend, A., Seriani, S., Filho, P.I.O. | In: Shaping Higher Education with Students: Ways to Connect Research and Teaching (Eds: Vincent C. H. Tong, Alex Standen, Mina Sotiriou), UCL Press, London, UK | 2018*

**Exploring the pedestrian level of interaction on platform conflict areas at metro stations by real-scale laboratory experiments.**

*Seriani, S, Fujiyama, T, Holloway, C | Transportation Planning and Technology 40(1), 100-118 | 2017*

**Impact of platform edge doors on passengers’ boarding and alighting time and platform behavior**

*De Ana Rodriguez, G, Seriani, S, Holloway, C | Transportation Research Record Journal of the Transportation Research Board 2540, 102-110 | 2016*

**New insights on random regret minimization models**

*van Cranenburgh, Sander; Guevara, Cristian Angelo; Chorus, Caspar G | Transportation Research Part A: Policy and Practice | 2015*

This paper develops new methodological insights on Random Regret Minimization (RRM) models. It starts by showing that the classical RRM model is not scale-invariant, and that – as a result – the degree of regret minimization behavior imposed by the classical RRM model depends crucially on the sizes of the estimated taste parameters in combination with the distribution of attribute-values in the data. Motivated by this insight, this paper makes three methodological contributions: (1) it clarifies how the estimated taste parameters and the decision rule are related to one another; (2) it introduces the notion of “profundity of regret”, and presents a formal measure of this concept; and (3) it proposes two new family members of random regret minimization models: the *μ*RRM model, and the Pure-RRM model. These new methodological insights are illustrated by re-analyzing 10 datasets which have been used to compare linear-additive RUM and classical RRM models in recently published papers. Our re-analyses reveal that the degree of regret minimizing behavior imposed by the classical RRM model is generally very limited. This insight explains the small differences in model fit that have previously been reported in the literature between the classical RRM model and the linear-additive RUM model. Furthermore, we find that on 4 out of 10 datasets the *μ*RRM model improves model fit very substantially as compared to the RUM and the classical RRM model.
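
To make the scale-dependence point above concrete: classical RRM regret for alternative *i* is commonly written R_i = Σ_{j≠i} Σ_m ln(1 + exp(β_m (x_jm − x_im))), and because ln(1 + e^z) is nonlinear, rescaling all β's changes relative regrets rather than leaving choices invariant. A minimal sketch of the classical model only (the *μ*RRM and Pure-RRM variants introduced in the paper are not reproduced here, and the function name is illustrative):

```python
import numpy as np

def classical_rrm_regret(X, beta):
    """Classical random-regret-minimization regret per alternative.

    X    : (n_alternatives, n_attributes) attribute matrix
    beta : (n_attributes,) taste parameters

    R_i = sum_{j != i} sum_m ln(1 + exp(beta_m * (x_jm - x_im)));
    the alternative with minimal regret is preferred.
    """
    X = np.asarray(X, dtype=float)
    beta = np.asarray(beta, dtype=float)
    n = X.shape[0]
    R = np.zeros(n)
    for i in range(n):
        for j in range(n):
            if j != i:
                # attribute-level regret against competitor j
                R[i] += np.log1p(np.exp(beta * (X[j] - X[i]))).sum()
    return R
```

Doubling `beta` in this sketch does not simply rescale the regrets, which is the non-scale-invariance the paper takes as its starting point.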

**Experimental study for estimating capacity of cycle lanes**

*Seriani, S, Fernandez, R, Hermosilla, E | Transportation Research 8, 192-203 | 2015*

**Pedestrian traffic management of boarding and alighting in metro stations**

*Seriani, S, Fernandez, R | Transportation Research Part C 53, 76-92 | 2015*

**A time-hierarchical microeconomic model of activities**

*López-Ospina H., Martínez, F. J., Cortés, C. E. | Transportation | 2015*

The microeconomic approach to explaining consumers’ behavior regarding the choice of activities, consumption of goods, and use of time is extended in this paper by explicitly including the temporal dimension in the choice-making process. Recognizing that some activities, such as a job and education, involve a long-term commitment and that other activities, such as leisure and shopping, are conducted and modified in the short term, we make these differences explicit in a microeconomic framework. Thus, a hierarchical temporal structure defines the time window or frequency of adjusting the variables of activities (duration, location and consumption of goods) and the magnitude of the resources (time and money) spent. We specify and analyze a stylized microeconomic model with two time scales, the macro and micro level, concluding that preference observations at the micro level, such as transport mode choice, are strongly conditioned by the prevailing choices at the macro scale. This result has strong implications for the current theory of the value and allocation of time, as well as for the location of activities, as illustrated by a numerical example.

**Characteristics of lateral vehicle interaction**

*Delpiano, R., Herrera, J. C., & Coeymans, J. E. | Transportmetrica A: Transport Science, 11(7), 636–647. | 2015*

**The kinematic wave model with finite decelerations: A social force car-following model approximation**

*Delpiano, R., Laval, J., Coeymans, J. E., & Herrera, J. C. | Transportation Research Part B: Methodological, 71, 182–193. | 2015*

**On passenger saturation flow in public transport doors**

*Fernandez, R, Valencia, A, Seriani, S | Transportation Research Part A 78, 102-112 | 2015*

**Planning guidelines for metro-bus interchanges by means of a pedestrian microsimulation model**

*Seriani, S, Fernandez, R | Transportation Planning and Technology 38(5), 569-583 | 2015*

**Temas de Ingeniería y Gestión de Tránsito**

*Fernández, R. | RiL Editores, Santiago, 201 pp. | 2014*

**Tactical design of high-demand bus transfers**

*Guevara, C. A., & Donoso, G. A. | Transport Policy | 2014*

We use micro-simulation to assess five tactical designs seeking variance reduction at a high-demand transfer stop that resembles a representative case of Transantiago, the public transportation system of Santiago de Chile. We explore demand splitting, route differentiation, offline holding, online holding, and prepayment, all applied locally at the transfer stop and affecting only the feeder services. We analyze the impacts on operators and users, both at the transfer stop and downstream, finding that online holding has the best overall performance. These findings were robust to various changes in the simulation assumptions. The paper closes by discussing the implications of these results for public policy design and possible extensions of this research.

**Elementos de Seguridad Vial**

*Fernandez, R., Seriani, S. | Chapter 9 of Temas de Ingeniería y Gestión de Tránsito. RiL Editores, Santiago de Chile | 2014*

**Empirical Evidence on the Existence of Collateral Anomaly**

*Delpiano, R., Herrera, J. C., & Coeymans, J. E. | Presentado en Transportation Research Board 92nd Annual Meeting | 2013*

**A method to calculate commercial speed on bus corridors**

*Valencia, A. and Fernandez, R. | Traffic Engineering and Control 53(6), 215-221. | 2012*

**Elementos de la Teoría del Tráfico Vehicular**

*Fernández, R. | Fondo Editorial Pontificia Universidad Católica del Perú, Lima, 215 pp. | 2011*

**Effect of door width, platform height and fare collection on bus dwell time. Laboratory evidence for Santiago de Chile.**

*Fernández, R. Zegers, P., Weber, G. and Tyler, N. | Transportation Research Record 2143, 59-66 (ISI). | 2010*

**Modelling public transport stops by microscopic simulation**

*Fernández, R. | Transportation Research Part C: Emerging Technologies 18(6), 856-868 (ISI). | 2010*

**Microscopic simulation of transit operations. Policy studies with the MISTRANSIT application programming interface.**

*Fernández, R., Cortés, C.E. and Burgos, V. | Transportation Planning and Technology, 33(2), 157-176 (ISI). | 2010*

**Modelling passengers, buses and stops in traffic microsimulation. Review and extensions**

*Cortés, C.E., Burgos, V. and Fernández, R. | Journal of Advanced Transportation, 44, 72-88 (ISI). | 2010*

**Elementos de la Teoría del Tráfico Vehicular**

*Fernández, R. | Lom Ediciones Ltda., Santiago, 236 pp. | 2009*

**Robust automated multiple view inspection.**

*Pizarro, L., Mery, D., Delpiano, R., & Carrasco, M. | Pattern Analysis and Applications, 11, 21–32. | 2007*

**PASSION 5.0: A microscopic simulator of multiple-berth bus stops.**

*Fernández, R. | Traffic Engineering and Control 48(7), 324-328. | 2007*

**Evolution of the TRANSYT model in a developing country**

*Fernández, R., Valenzuela, E., Casanello, F. and Jorquera, C. | Transportation Research Part A: Policy and Practice 40(5), 375-458 (ISI). | 2006*

**Transport and Air Quality in Santiago, Chile, 79-105.**

*Osses, M. and Fernández, R. | Advances in City Transport: Case Studies. WIT Press, Southampton, UK, 193 pp. | 2005*

**Study of passenger-bus-traffic interactions in bus stop operations.**

*Fernández, R. and Tyler, N. | Transportation Planning & Technology 28(4), 273-292 (ISI). | 2005*

**Can Santiago de Chile’s transport policy break the vicious circle?**

*Fernández, R. and Osses, M. | Traffic Engineering and Control 46(9), 332-339. | 2005*

**Transporte Público: Las opciones que tenemos, 147-154**

*Fernández, R. | Muévete por tu ciudad: una propuesta ciudadana para transporte con equidad. LOM Ediciones, Santiago, 201 pp. | 2003*

**A model to estimate bus commercial speed**

*Fernández R. and Valenzuela, E. | Traffic Engineering and Control 43(2), 352-356. | 2003*

**Operational Impacts of Bus Stops, 99-137**

*Tyler, N. A., Silva, P., Brown, N. and Fernández, R. | Accessibility and the Bus System: from concepts to practice. Thomas Telford, London, 432 pp. | 2002*

**On the capacity of bus transit systems**

*Fernández R. and Planzer R. | Transport Reviews 22(3), 267-293 (ISI). | 2002*

**A new approach to bus stop modelling**

*Fernández R. | Traffic Engineering & Control 42(7), 240-246. | 2001*

**A bus-based transitway or light rail? The engineering view**

*Fernández, R. | Road & Transport Research 9(1), 108-113. | 2000*

**Design of bus-stop priorities**

*Fernández, R. | Traffic Engineering and Control 40(6), 335-340. | 1999*

**An expert system for preliminary design and location of high capacity bus stops**

*Fernández, R. | Traffic Engineering and Control 34(11), 533-539. | 1993*