
Public display of deposited theses

Submission of objections to a doctoral thesis during the public display period

In accordance with the Academic Regulations for Doctoral Studies, holders of a doctoral degree may request access to a deposited thesis for consultation and, where appropriate, may submit to the Standing Committee of the Doctoral School any observations and objections regarding its content that they consider relevant.


DOCTORAL DEGREE IN AEROSPACE SCIENCE AND TECHNOLOGY

  • PARÉS CALAF, MARIA EULÀLIA: A geodetic approach to precise, accurate, available and reliable navigation
    Author: PARÉS CALAF, MARIA EULÀLIA
    Thesis file: (contact the Doctoral School to confirm you have a valid doctoral degree and to get the link to the thesis)
    Programme: DOCTORAL DEGREE IN AEROSPACE SCIENCE AND TECHNOLOGY
    Department: Department of Physics (FIS)
    Mode: Normal
    Deposit date: 24/11/2020
    Deposit END date: 09/12/2020
    Thesis director: COLOMINA FOSCH, ISMAEL
    Committee:
         PRESIDENT: CASAS PIEDRAFITA, JAIME OSCAR
         SECRETARY: MARTÍN FURONES, ANGEL E.
         MEMBER: WIS GIL, MARIANO
    Thesis abstract: The purpose of navigation techniques is the determination of a body's position, velocity and attitude. The most widespread navigation systems (INS/GNSS), like any other hardware and software system, provide sufficiently precise and accurate trajectory determinations under the appropriate conditions. However, when those systems acquire data in non-friendly environments, the computed navigation solution suffers from unacceptable errors. Since the foundation of the Geodesy and Navigation group of the former Institute of Geomatics, now at CTTC, its researchers have had to deal with three fundamental issues: solution performance (precision-accuracy), especially for low-cost systems; reliability; and environmentally independent availability. The main objective of the research presented in this dissertation is to contribute to the adoption of the geodetic method for navigation. The geodetic method is based on a proper problem abstraction, on optimal estimation criteria, on rigorous mathematical modelling, and on the use of sufficiently redundant and sufficiently heterogeneous data. Since rigorous modelling, redundancy and heterogeneous data have been widely used by the navigation community, this thesis focuses on a proper problem abstraction as well as on optimal estimation criteria. The first step in proving the suitability of the geodetic approach has been to design, implement and validate the GEMMA system, a set of software modules allowing the validation of new trajectory determination algorithms. The system is made up of measurement generators, filters and analysers, as well as trajectory generators and analysers and, as its main component, a generic platform for the optimal determination of trajectories (NAVEGA). The main purpose of the signal and trajectory generators and filters is to provide data to test and validate new navigation algorithms. Signal and trajectory analysers are used to characterize the error of data sets.
NAVEGA is a software platform for the optimal determination of trajectories or paths of stochastic dynamical systems driven by observations and their associated dynamic or static models. The second step has been the design of a new optimal estimation algorithm that maximises the benefits of redundant systems: redundancy in the number of available satellites, but also redundancy in the number of sensors. The availability of several sensors allows reducing the noise and detecting possible outliers. In this thesis, a new Bayesian filter implementation, named Simultaneous Prediction and Filtering (SiPF), which provides access to the residuals of redundant measurements, is presented. This approach makes it possible to apply all the geodetic quality-determination techniques to the determination of the navigation solution. The benefits of the previous tools, together with the widely accepted benefits of redundancy, rigorous modelling and heterogeneity, lead us to conclude that the geodetic approach is a suitable way to face the navigation problem and to improve its performance, availability and reliability.
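The residual-based outlier screening that the abstract attributes to SiPF can be illustrated with a minimal, generic sketch. This is not the thesis's actual algorithm: the state, the measurement model and all numbers below are invented for illustration. It shows a plain linear Kalman measurement update that also exposes the post-fit residuals of redundant measurements, which is what makes geodetic-style quality tests possible.

```python
import numpy as np

def kalman_update(x, P, z, H, R):
    """One linear Kalman measurement update that also returns the
    post-fit residuals of the (possibly redundant) measurements."""
    y = z - H @ x                      # innovation (pre-fit residual)
    S = H @ P @ H.T + R                # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
    x_new = x + K @ y                  # updated state estimate
    P_new = (np.eye(len(x)) - K @ H) @ P
    r = z - H @ x_new                  # post-fit residuals, usable for outlier tests
    return x_new, P_new, r

# Toy example: estimate one scalar position from three redundant sensors,
# the third of which is an outlier.
x = np.array([0.0]); P = np.array([[1.0]])
H = np.ones((3, 1))                    # three sensors observing the same state
R = 0.01 * np.eye(3)                   # equal measurement noise
z = np.array([1.0, 1.02, 5.0])         # third measurement is inconsistent
x_new, P_new, r = kalman_update(x, P, z, H, R)
assert np.argmax(np.abs(r)) == 2       # the suspect sensor has the largest residual
```

With no access to the per-measurement residuals, as in many filter implementations, this kind of per-sensor screening would not be possible; that is the point the abstract makes about redundancy.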

DOCTORAL DEGREE IN ARCHITECTURAL DESIGN

  • MAROTO SALES, JUAN: Arquitecturas para el juego del habitante emancipado. La arquitectura como dispositivo de intermediación: Lacaton y Vassal. Casa Latapie (1993), edificio de 14 viviendas en Mulhouse (2005), ENSA Nantes (2009)
    Author: MAROTO SALES, JUAN
    Thesis file: (contact the Doctoral School to confirm you have a valid doctoral degree and to get the link to the thesis)
    Programme: DOCTORAL DEGREE IN ARCHITECTURAL DESIGN
    Department: Department of Architectural Design (PA)
    Mode: Normal
    Deposit date: 16/11/2020
    Deposit END date: 30/11/2020
    Thesis director: BRU BISTUER, EDUARDO
    Committee:
         PRESIDENT: GAUSA NAVARRO, MANUEL
         SECRETARY: SANTACANA JUNCOSA, AMADEO
         MEMBER: GENIS VIÑALS, MARIONA
         MEMBER: GIRONÈS SADERRA, ANTONIO
         MEMBER: FONTANA, MARIA PIA
    Thesis abstract: Miessen states, in the book 'The Nightmare of Participation' (2010), that 'participation' is too often observed through romantic concepts of negotiation and decision making. However, the author proceeds, it has been noted that some formulas do not generate significant results. This research's framework is based, fundamentally, on the turning of the inhabitant into an active agent in architectural processes. Specifically, it tracks and highlights the positions and project practices of the French architects Anne Lacaton (1955) and Jean-Philippe Vassal (1954). The 'turning of the inhabitant into an active agent' has a clear antecedent: during the decades from the 1950s to the end of the 1970s (a period that undoubtedly shook the discipline with the promotion of various significant displacements), the rebalancing of architect and inhabitant agencies, the dislocation of disciplinary limits, and the unlocking of excessively autonomous and determinist gazes drew a complex scenario in which both architecture and what is attributed to the architect were diluted into positions that had a difficult fit within the discipline.
The discipline reacted, as Rossi pointed out, by reinforcing disciplinary autonomy, thus distancing itself from uncomfortable, unknown, or plainly impossible places. In the contemporary scenario, once again, 'participation', 'spatial flexibility', 'versatility' and other similar concepts explored during that period reappear around the discipline. In order not to fall, yet again, into slippery territories, this research explores an intermediate stage where we can focus on 'participation', within the processes of architecture, from the perspective of 'emancipation' (Rancière): architecture as an in-between interface which conveys, mediates and promotes the knowledge of both the architect and the inhabitant. This research explores how Lacaton and Vassal project architectures that are understood as a 'thing in-between'; architectures that make the deployment of the concept 'freedom' possible, a term they use continuously. 'Freedom' for the (emancipated) architect to explore and propose architectures (professional knowledge within the framework of a non-deterministic disciplinary autonomy) and incorporate their intuitions, reflections and personal background; as well as 'freedom' for the (emancipated) inhabitant to play, explore, discover, and implement their own knowledge. Architecture thus becomes a kind of communication tool (dialogue), through time, between the architect and the inhabitant, without impositions and, fundamentally, without resignations. After an exhaustive analysis of Lacaton and Vassal's works, three basic project strategies have been detected: space deregulation (of the excessively regulated space, in multiple dimensions, inherited from modernity), biodynamic skin (the active relationship between inhabitant, architecture and the environment), and unlocking the support (allowing a community of inhabitants to reconfigure an architecture that is finished as well as open).
Each of these strategies is deployed through a set of project techniques that are explored and contextualized with the aim of giving this research an operational nature. This thesis focuses on three works by Lacaton and Vassal in which these strategies are implemented: the Latapie House (1993), the 14-dwelling building in Mulhouse (2005), and the Nantes School of Architecture (2009). Architectures that become 'relational' and 'performative', and operate in and from everyday life. "Inhabited space should be generous, comfortable, appropriable, economical, fluid, flexible, bright, evolving and 'luxurious', while allowing for the simplest uses: eating, working, resting, isolating oneself, welcoming and receiving friends, hanging clothes, playing music, doing DIY, parking your bicycle or car, or watching orchids grow". Lacaton and Vassal, 2016

DOCTORAL DEGREE IN ARCHITECTURAL, CIVIL AND URBAN HERITAGE AND REFURBISHMENT OF EXISTING BUILDINGS

  • BOTERO MÁRQUEZ, NATALIA DEL PILAR: Arquitectura y genética: Como analogía en el proceso de diseño arquitectónico
    Author: BOTERO MÁRQUEZ, NATALIA DEL PILAR
    Thesis file: (contact the Doctoral School to confirm you have a valid doctoral degree and to get the link to the thesis)
    Programme: DOCTORAL DEGREE IN ARCHITECTURAL, CIVIL AND URBAN HERITAGE AND REFURBISHMENT OF EXISTING BUILDINGS
    Department: (RA)
    Mode: Normal
    Deposit date: 25/11/2020
    Deposit END date: 10/12/2020
    Thesis director: ESCODA PASTOR, MARIA CARMEN
    Committee:
         PRESIDENT: GRANERO MARTÍN, FRANCISCO DE ASÍS
         SECRETARY: ZARAGOZA DE PEDRO, MARIA ISABEL
         MEMBER: PLANAS ROSELLO, MIGUEL ANGEL
    Thesis abstract: Due to the progressive development of new technologies, modern society is experiencing a considerable degree of disconnection from the elements of nature. This is reflected in increased levels of stress, anxiety and depression, thus affecting the quality of life of modern humans. Thanks to the emergence of neuroscience, we can scientifically test how our brain reacts to situations, to people, and to forms and spatial configurations. Genetic architecture, as an analogy in the architectural design process, aims to show the importance of reconnecting with nature and the more organic systems of evolution, and how this set of patterns is essential when designing architecture. This thesis aims to investigate the quality of space designed through digital media, analysing whether this scientific analogy applied to architectural design processes has a positive impact on the behaviour of its inhabitants. Research has shown that a series of patterns from nature, proposed by biophilia, have very positive effects on the human psyche and can therefore improve an architectural design, elevating it to the category of a healing space. The thesis also aims to shed light on the interpretation of emerging and evolutionary design techniques, which have been used to replace traditional design and which proliferate in modern times by way of analogy. The first part of the thesis deals with the subject of organic analogy, showing how, throughout the history of art, architecture and science, organic elements have been a source of inspiration and an example in many literary and artistic works. The background and most relevant historical aspects of biological inspiration in architecture and design, emphasising those architects who tried to capture natural systems but not their geometric forms, are also covered in this first part.
The second part of the thesis develops the theme of the creation of worlds, showing how architects drawing on these analogies have also been adapting to these scientific advances, in some cases naively and in others understanding organic processes at a more scientific level. This part covers the genetic analogy, the evolutionary analogy and, more specifically, the use of emerging computing techniques, as well as the most representative architects in the field of computational design and creativity. Finally, a brief review is made of the works of architects who are directly related to the genetic analogy in architecture. The third part of the thesis, "On the correct interpretation", deals with the analysis of spatial quality, studying aspects of neurology that allow us to analyse which formal or spatial configurations generate states of stress, anxiety, fear, etc. In this way we present a series of "pattern elements" (extracted from nature) that can generate the opposite effect to these emotional states, thus generating well-being and harmony. In the conclusion we analyse the importance of knowing the modelling tools that allow us to use these analogies in the design process and, on the other hand, how ignorance of them and an irresponsible, naive use can block the architectural design process, generating forms and spaces whose geometric configurations are reflected in our psyche and so affect our behaviour, both as individuals and as a society.

DOCTORAL DEGREE IN ARCHITECTURE, ENERGY AND ENVIRONMENT

  • CHANDIA JAURE, ROSA NOEMI: La persistencia de lo habitable. Socoroma y la construcción del paisaje del agua: El habitar y la gestión del agua para la construcción del paisaje de los pueblos alto-andinos del norte de Chile
    Author: CHANDIA JAURE, ROSA NOEMI
    Thesis file: (contact the Doctoral School to confirm you have a valid doctoral degree and to get the link to the thesis)
    Programme: DOCTORAL DEGREE IN ARCHITECTURE, ENERGY AND ENVIRONMENT
    Department: Department of Architectural Technology (TA)
    Mode: Normal
    Deposit date: 25/11/2020
    Deposit END date: 10/12/2020
    Thesis director: CUCHÍ BURGOS, ALBERTO
    Committee:
         PRESIDENT: FARIÑA TOJO, JOSÉ
         SECRETARY: SERRA PERMANYER, MARTA
         MEMBER (REMOTE): TELLO ARAGAY, ENRIQUE
    Thesis abstract: The study of Traditional Ancestral Knowledge associated with the construction of a landscape and a way of living constitutes an effective mechanism for moving towards sustainable development. It is a body of technical knowledge for building habitability, developed across generations by a human group living in close contact with the surrounding ecosystems. It is cumulative and continually readapted to the cultural, economic, social or environmental transformations that occur over time. In the case of oases, it concerns a territorial unit built around water as a fundamental resource for existence and around the gravitational condition of its displacement. The community develops management strategies and construction techniques for the interaction of water with the components of the pre-existing territorial matrix (soil, relief, flora and fauna) under extreme environmental conditions: high solar radiation, water shortage, low ambient humidity and high temperatures. The case study is located in the high-Andean territory of northern Chile, in the village of Socoroma, at 21º latitude and 3,300 metres above sea level. It is an agricultural landscape of dry-stacked stone terraces, with a history of occupation by various human groups since pre-Hispanic times. This thesis seeks to describe the process of landscape construction linked to traditional mountain living. The gravitational displacement of water guides the definition of the hydraulic spaces that make up the Hydraulic System, delimiting irrigated spaces which are later modelled by the inhabitants in the Space and Construction System to shape the landscape. Technical operations are carried out for the construction of stacked stone walls, associated with the proper management of the hillside slopes for water displacement, erosion control, moisture maintenance and the minimization of landslide risks.
These two systems persist over time as traces in the territory, even after their eventual abandonment by the human group. However, their operation depends on the social and cultural practices that constitute the Productive System, where water is managed from everyday life (irrigation and the distribution of water between the community and its crops in the plots) and from the symbolic point of view, where rite builds a territorial imaginary around the cultural practices linked to the inhabited revalidation of the territory, within a larger territorial unit implicitly recognized in the Andean worldview: the water basin, represented by the sacred condition of the mountains as sources that provide for existence. The thesis concludes on the importance of understanding, from an interdisciplinary and inter-scalar approach, these alternative models of landscape construction and management, whose methods are far from industrialized practices and have not lost connection with the environment. Comprehensively understanding the logic of the technical model of the oasis is a process that requires crossing disciplinary boundaries. However, this learning makes it possible to extrapolate the knowledge to other territorial areas, contributing to improved environmental knowledge and to the definition of environmentally linked intervention strategies in the fields of architecture and urban design.
  • GONZÁLEZ MATTERSON, MARÍA LEANDRA: Natural lighting in Mediterranean climates. Visual comfort in top-lit sports halls: from the Barcelona 1992 Olympics to the Tarragona 2018 Mediterranean Games
    Author: GONZÁLEZ MATTERSON, MARÍA LEANDRA
    Thesis file: (contact the Doctoral School to confirm you have a valid doctoral degree and to get the link to the thesis)
    Programme: DOCTORAL DEGREE IN ARCHITECTURE, ENERGY AND ENVIRONMENT
    Department: Department of Architectural Technology (TA)
    Mode: Normal
    Deposit date: 25/11/2020
    Deposit END date: 10/12/2020
    Thesis director: ZAMORA MESTRE, JOAN LLUIS
    Committee:
         PRESIDENT: ROGORA, ALESSANDRO
         SECRETARY: COCH ROURA, HELENA
         MEMBER (REMOTE): SALVATI, AGNESE
    Thesis abstract: The role of natural light in the health and wellbeing of users is a central subject in architectural design today. Sustainability and energy efficiency are also leading concerns. Therefore, new and existing buildings must provide optimal comfort while reducing the energy consumption for lighting, ventilation and heating. In this context, this thesis investigates the performance of natural light in sports halls in a Mediterranean climate in relation both to users (athletes and spectators) and to television broadcasting requirements. In addition, it explores the impact of daylight design strategies on achieving visual comfort. Even though there is a large amount of research in the field of visual comfort and daylight, sports spaces are scarcely studied. Because the athletes and their visual targets are in movement, the visual field becomes three-dimensional and its assessment complex. Thus, this thesis explains a specific methodology designed and implemented from an architectonic and holistic approach. Both qualitative and quantitative parameters of visual comfort were assessed in daylit sports halls. Objective and subjective data were collected from case studies, comprising in-situ measurements such as horizontal and vertical illuminance. The work also included a high dynamic range imaging survey, glare and contrast evaluation in the field of view, simulations, an experimental test and visual comfort surveys. In addition, it analysed the optimisation of daylight at the initial stages of the design development of a new sports facility. This research is explained in three main parts completed over several years. The first part of this work evaluates the performance of naturally lit and, in particular, top-lit sports halls built for the Barcelona 1992 Olympic Games. Thirteen different sports buildings in Catalunya were assessed, where the widespread use of skylights for daylighting correlates with a suitable global performance.
Nonetheless, visual discomfort caused by absolute and contrast glare was frequent, owing to, among other factors, the lack of daylight control and solar protection devices. In the second part, based on the previous results, daylight design strategies are suggested for the improvement of visual comfort in four of the Olympic sports halls. Photorealistic and calibrated images were obtained to validate the proposed design measures through experimental tests. Additionally, the panel responses were collected and analysed, revealing that users prefer a uniformly well daylit court when daylighting strategies are integrated. The court is featured as the main and central element of the luminous space for users. The third part was completed at the Catalonia Institute for Energy Research (IREC) and presents daylighting design strategies compiled for the optimisation of natural light in the sports halls of Catalonia. These guidelines were implemented in the building design of the new Palau d'Esports Catalunya, built for the Tarragona 2018 Mediterranean Games. The goals of the optimisation of the central skylight were also contrasted and verified with two Post Occupancy Evaluation campaigns. Finally, this work highlights the complexity of the design of the luminous space and encourages the inclusion of natural light from the early design phases. This can be useful both for retrofitting strategies and for the new design of sports halls in Mediterranean and other climates and latitudes, since both overcast and clear skies were assessed.

DOCTORAL DEGREE IN AUTOMATIC CONTROL, ROBOTICS AND VISION

  • GÁMIZ CARO, JAVIER FRANCISCO: Contribución al modelado e implementación de un control avanzado para un proceso de cloración de una Estación de Tratamiento de Agua Potable
    Author: GÁMIZ CARO, JAVIER FRANCISCO
    Thesis file: (contact the Doctoral School to confirm you have a valid doctoral degree and to get the link to the thesis)
    Programme: DOCTORAL DEGREE IN AUTOMATIC CONTROL, ROBOTICS AND VISION
    Department: Department of Automatic Control (ESAII)
    Mode: Confidentiality
    Deposit date: 18/11/2020
    Deposit END date: 02/12/2020
    Thesis director: BOLEA MONTE, YOLANDA | MARTINEZ GARCIA, HERMINIO
    Committee:
         PRESIDENT: GUINJOAN GISPERT, FRANCISCO JUAN
         SECRETARY: PALACÍN ROCA, JORGE
         MEMBER: MENESES BENÍTEZ, MONTSERRAT
    Thesis abstract: Effective disinfection in the chlorination tanks of drinking water treatment plants depends on many factors that must be taken into account when designing a suitable control system. The chemical characteristics of the water to be treated, possible flow disturbances and the contact time of the chlorine with the outgoing water must all be considered in order to establish a control strategy that can adapt to the different possible scenarios. An excessive chlorine dose, or conversely a deficient one, may lead to non-compliance with the existing regulations, which establish mandatory limit values. Likewise, an overdose generates unnecessary chlorine expenditure and, as a collateral effect, problems due to exceeding the maximum values allowed for by-products such as trihalomethanes. The contribution of the present work regarding chlorination in drinking water treatment tanks is divided into two areas: simulation and control. The studies and field work have been carried out at one of the largest drinking water treatment plants in southern Europe. The design and implementation of a simulator has made it possible to reproduce the behaviour of the system and to test the proposed controller without interfering in the production process, thus avoiding possible effects on the supply. The control implemented at the plant is based on a feedforward action that compensates for the disturbance caused by ammonium and other compounds, and on a feedback loop with gains scheduled according to the flow rate, the temperature and the origin of the water to be treated. The two control blocks (feedforward and feedback) are supervised by a fuzzy system that, depending on the characteristics of the water to be treated, combines the feedforward control with gain scheduling on a PI controller. The simulation and experimental results obtained have been validated against real plant data from the Supervision System database.
The control presented in this work has been integrated into the current Plant Control System. The results obtained, besides being satisfactory, have allowed the Plant Control Room operator to move from the role of Process Control to that of Supervision of the Chlorination Process.
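The structure described in the abstract, a feedforward term compensating the ammonium disturbance plus a PI feedback whose gains are scheduled on operating conditions, can be sketched minimally. Everything below (class names, gain schedule, setpoints, and the feedforward factor) is an illustrative assumption, not the plant's actual tuning or model.

```python
class ScheduledPI:
    """PI controller whose gains are scheduled on the plant flow rate.
    The schedule is a list of (max_flow, kp, ki) bands; all numbers
    used here are illustrative, not the real plant tuning."""
    def __init__(self, schedule):
        self.schedule = schedule
        self.integral = 0.0

    def gains(self, flow):
        for max_flow, kp, ki in self.schedule:
            if flow <= max_flow:
                return kp, ki
        return self.schedule[-1][1], self.schedule[-1][2]

    def step(self, setpoint, measured, flow, dt):
        kp, ki = self.gains(flow)
        error = setpoint - measured
        self.integral += error * dt
        return kp * error + ki * self.integral

def feedforward(ammonium_mgl):
    # Static disturbance compensation: extra chlorine demand per mg/L of
    # ammonium in the incoming water. The factor is only a placeholder.
    return 8.0 * ammonium_mgl

# One control step: free-chlorine setpoint 1.0 mg/L, measured 0.8 mg/L,
# flow 2.0 m3/s, ammonium 0.05 mg/L.
pi = ScheduledPI([(1.0, 0.5, 0.05), (3.0, 0.8, 0.08), (float("inf"), 1.2, 0.1)])
dose = pi.step(1.0, 0.8, 2.0, dt=1.0) + feedforward(0.05)
assert dose > 0
```

In the plant described, the abstract adds a fuzzy supervisor that blends these two blocks depending on the water characteristics; that supervisory layer is omitted here for brevity.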
  • SHARAFELDEEN, MOHAMMED DIAB ELSAYED: Knowledge Representation and Reasoning for Perception-based Manipulation Planning
    Author: SHARAFELDEEN, MOHAMMED DIAB ELSAYED
    Thesis file: (contact the Doctoral School to confirm you have a valid doctoral degree and to get the link to the thesis)
    Programme: DOCTORAL DEGREE IN AUTOMATIC CONTROL, ROBOTICS AND VISION
    Department: Institute of Industrial and Control Engineering (IOC)
    Mode: Temporary embargo
    Deposit date: 25/11/2020
    Deposit END date: 10/12/2020
    Thesis director: ROSELL GRATACOS, JOAN
    Committee:
         PRESIDENT: GONZÁLEZ JIMÉNEZ, JAVIER
         SECRETARY: SUAREZ FEIJOO, RAUL
         MEMBER: KUNZE, LARS
    Thesis abstract: This thesis develops a series of modeling and reasoning tools for knowledge-oriented manipulation planning in semi-structured and unstructured environments. The main idea is to use high-level knowledge-based reasoning to capture a rich semantic description of the scene and knowledge about the physical behavior of the objects, together with an inference mechanism to reason about potential manipulation actions. Moreover, a multi-modal sensory module is proposed to perceive the objects in the environment and build the ontological knowledge. The first part of the thesis focuses on techniques to provide useful knowledge to guide and facilitate the planning process within a classical manipulation planning framework. This framework facilitates the combination of task and motion planning approaches: it includes Fast Forward (FF), a classical symbolic planning approach to compute the sequence of actions to be performed in a certain task, and physics-based motion planning, which deals with motions and possible interactions with the objects. The tool proposed to provide useful knowledge to the planning process is called Perception and Manipulation Knowledge (PMK). On the one hand, it provides a standardized formalization built on several foundations, such as the Suggested Upper Merged Ontology (SUMO) and the Core Ontology for Robotics and Automation (CORA), in order to facilitate shareability and reusability in interactions between humans and/or robots. On the other hand, it provides the inference mechanism to reason about task and motion planning (TAMP) requirements, such as robot capabilities, action constraints, action feasibility and manipulation behaviors. Moreover, PMK allows breaking the closed-world assumption of classical manipulation planning approaches.
This proposal has been tested on a serving task in a table-top manipulation problem. The second part of the thesis focuses on the FailRecOnt framework, which provides failure interpretation and recovery knowledge for any planning approach. We are mainly interested in assembly manipulation tasks for bi-manual robots, which often encounter complexity or failures in the planning and execution phases, and which must also take into account the manipulation actions needed to couple/decouple or attach/detach objects. Planning-phase failures typically refer to failures of the planner itself, but we also use the term for situations where the planner reasons that some action would be infeasible, e.g. because objects block access to what the robot should reach. A correct selection of grasps and placements must be produced in such an eventuality. Depending on the type of problem, goal order must be handled carefully, especially in the assembly domain; very large search spaces are possible, requiring objects to be moved more than once to achieve the goals. Execution-phase failures refer to hardware failures related to the system devices (e.g. a robot or camera that needs to be re-calibrated), software failures related to the capabilities offered by specific software components, or failures in action performance, such as an unexpected occluding object or slippage. A description of how to use the failure ontology in a task and motion planning process, such as knowledge-enabled approaches or heuristic-search classical approaches, is proposed. The third part of the thesis focuses on the use of knowledge based on experience, called experiential knowledge, in everyday manipulation problems, such as, in service robotics applications, serving a cup in a cluttered environment, where repeatable skills such as pick-up, place-down or navigate are commonly used.
To efficiently handle these tasks, a planning and execution framework for robotic manipulation tasks, called SkillMaN, is proposed; it is equipped with a module with experiential knowledge (learned from its experience or given by the user) on how to execute a set of skills, such as pick-up, put-down or open a drawer.

DOCTORAL DEGREE IN COMPUTATIONAL AND APPLIED PHYSICS

  • BELTRÁN GONZÁLEZ, MARTÍ: Analysis and degradation mechanisms of enamels, grisailles and silver stains on Modernist stained glass.
    Author: BELTRÁN GONZÁLEZ, MARTÍ
    Thesis file: (contact the Doctoral School to confirm you have a valid doctoral degree and to get the link to the thesis)
    Programme: DOCTORAL DEGREE IN COMPUTATIONAL AND APPLIED PHYSICS
    Department: Department of Applied Physics (FA)
    Mode: Normal
    Deposit date: 16/11/2020
    Deposit END date: 30/11/2020
    Thesis director: PRADELL CARA, TRINITAT
    Committee:
         DIRECTOR: PRADELL CARA, TRINITAT
         PRESIDENT: GARCIA ARANDA, MIGUEL ANGEL
         SECRETARY: MOLERA I MARIMÓN, JUDIT
         MEMBER: VILARIGUES, MARCIA
    Thesis abstract: Materials and methods used in the production of modernist (late 19th and early 20th century) stained glass from the city of Barcelona have been studied, with special regard to the degradation mechanisms of enamels, grisailles and silver stains. Coloured enamels were produced from the raw materials used in the Rigalt, Granell & cia modernist workshop of Barcelona and compared to those found in the buildings and in the private collection of the J.M. Bonet workshop, to explore the reason for the reduced stability of the blue and green enamels. The chemical composition has been determined (and the pigments identified) by means of Laser Ablation Inductively-Coupled Plasma Mass Spectrometry (LA-ICP-MS), X-Ray Diffraction (XRD) and UV-Vis-NIR spectroscopy, and the thermal properties of the enamels measured by Differential Scanning Calorimetry (DSC) and Hot Stage Microscopy (HSM). The enamels are made of a lead-zinc borosilicate glass characterised by its low sintering temperatures and high stability against chemical corrosion, in particular water corrosion. However, the relatively narrow range of firing temperatures necessary for the correct adherence of the enamels to the contemporary glass base may have required the addition of a high-lead borosilicate flux, which would have increased the lead content of the enamel, decreasing the firing temperature but also its stability. The historical enamels show a lead-, boron- and zinc-depleted silica-rich amorphous glass, with precipitated lead and calcium sulphates or carbonates, characteristic of extensive atmospheric corrosion. The blue and green enamels show a heterogeneous layered microstructure that is more prone to degradation, which is augmented by greater heating and thermal stress produced by the enhanced infrared absorbance of blue tetrahedral cobalt colour centres and copper ions dissolved in the glass and, in particular, of the cobalt spinel particles.
  • XIE, CHENYANG: Corrosion studies on Cu-based alloys.
    Author: XIE, CHENYANG
    Thesis file: (contact the Doctoral School to confirm you have a valid doctoral degree and to get the link to the thesis)
    Programme: DOCTORAL DEGREE IN COMPUTATIONAL AND APPLIED PHYSICS
    Department: Department of Applied Physics (FA)
    Mode: Change of supervisor
    Deposit date: 25/11/2020
    Deposit END date: 10/12/2020
    Thesis director: RENNER, FRANK U. | CRESPO ARTIAGA, DANIEL
    Committee:
         SECRETARI: ROJAS GREGORIO, JOSE IGNACIO
         VOCAL: DUARTE CORREA, MARIA JAZMIN
    Thesis abstract: Cu-based alloys are widely applied in corrosive environments. Improving the alloys' corrosion resistance will significantly reduce energy consumption and overexploitation of resources. To increase the resistance of copper, the less corrosion-resistant alloy component, nine imidazole-based compounds with different functional groups were tested as potential corrosion inhibitors. Imidazole derivatives were chosen as potential inhibitors for copper alloys due to their diverse performance on pure Cu. In addition, CuZn alloys, CuZr crystalline and amorphous alloys, and their pure metals were tested to explore the correlations between inhibition, structure and atomic species. Determining the performance of corrosion inhibitors on metals is a complex problem due to the mixed influence of surface condition, inhibitor-surface interaction and environmental conditions. For CuZn alloys, the α-Cu phase Cu70Zn30 alloy and the non-α-Cu phase Cu30Zn70 alloy were tested. CuZr alloys are the basis of a family of metallic glasses with large glass-forming ability and remarkable mechanical properties. The corrosion response of as-produced crystalline and amorphous CuxZr(100-x) alloys (x = 40, 50, 64 at. %) was tested. All alloys were immersed in 3 wt.% NaCl aqueous solution without and with a 1 mM inhibitor concentration. Potentiodynamic polarization measurements, electrochemical impedance spectroscopy and long-term immersion tests followed by microscopy analysis and Raman spectroscopic analysis were carried out. Comparative analysis of pure Cu and the Cu70Zn30 alloy shows that the same inhibitors are effective in both alloys. A similar behavior is found with pure Zn and the Cu30Zn70 alloy. However, the inhibition power shows a different value, which should be attributed to the healing effect. Defects present in most of the polished samples accelerate pitting at these locations.
The healing effect will lead to a patch on those positions, which will slow down the local attacks. All the tested CuZr amorphous alloys exhibit much better corrosion resistance than their crystalline counterparts in the presence and absence of inhibitors. The main factor controlling the corrosion resistance of the alloys appears to be the Zr-rich (or at least equiatomic) amorphous structure, the effect of the inhibitors being secondary. Results therefore show a complex relationship between inhibitor performance, microstructure and composition of CuZr alloys. SH-ImiH-4Ph shows potential to become a global α-Cu phase alloy inhibitor, and SH-BimH-5NH2 shows potential for Zn-based and CuZr alloys. Electrochemical measurements, especially long-term measurements, display a significant correlation with immersion tests. The conducted research offers some understanding of the corrosion mechanism and resistance after inhibitor application. Previous surface oxidation, defects on the surface, and the inhibitor functional group are found to be the most significant factors that influence inhibition. The healing effect is also responsible for the improved efficiency of some inhibitors.
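The polarization measurements mentioned above are conventionally summarised as an inhibition efficiency computed from corrosion current densities. A minimal sketch of that standard formula (the current-density values below are hypothetical, not results from the thesis):

```python
def inhibition_efficiency(i_corr_blank, i_corr_inhibited):
    """Percent inhibition efficiency from corrosion current densities,
    e.g. obtained by Tafel extrapolation of potentiodynamic polarization
    curves measured without and with the inhibitor."""
    return 100.0 * (i_corr_blank - i_corr_inhibited) / i_corr_blank

# Hypothetical current densities in uA/cm^2 (illustrative only):
print(inhibition_efficiency(12.0, 1.8))  # -> 85.0
```

In impedance-based studies an analogous expression built from charge-transfer resistances is commonly used instead.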

DOCTORAL DEGREE IN COMPUTER ARCHITECTURE

  • BUCHACA PRATS, DAVID: Learning workload behaviour models from monitored time-series for resource estimation towards data center optimization
    Author: BUCHACA PRATS, DAVID
    Thesis file: (contact the Doctoral School to confirm you have a valid doctoral degree and to get the link to the thesis)
    Programme: DOCTORAL DEGREE IN COMPUTER ARCHITECTURE
    Department: (DAC)
    Mode: Normal
    Deposit date: 12/11/2020
    Deposit END date: 26/11/2020
    Thesis director: CARRERA PÉREZ, DAVID | BERRAL GARCÍA, JOSEP LLUÍS
    Committee:
         PRESIDENT: BIFET FIGUEROL, ALBERT CARLES
         SECRETARI: NIN GUERRERO, JORDI
         VOCAL: IQBAL, WAHEED
    Thesis abstract: In recent years there has been an extraordinary growth in the demand for Cloud Computing resources executed in Data Centers. Modern Data Centers are complex systems that need management. As distributed computing systems grow, and workloads benefit from such computing environments, the management of such systems increases in complexity. The complexity of resource usage and power consumption in cloud-based applications makes the understanding of application behavior through expert examination difficult. The difficulty increases when applications are seen as "black boxes", where only external monitoring can be retrieved. Furthermore, given the wide variety of scenarios and applications, automation is required. To deal with such complexity, Machine Learning methods become crucial to facilitate tasks that can be automatically learned from data. Firstly, this thesis proposes an unsupervised learning technique to learn high-level representations from workload traces. This technique provides a fast methodology to characterize workloads as sequences of abstract phases. The learned phase representation is validated on a variety of datasets and used in an auto-scaling task where we show that it can be applied in a production environment, achieving better performance than other state-of-the-art techniques. Secondly, this thesis proposes a neural architecture, based on Sequence-to-Sequence models, that predicts the expected resource usage of applications sharing hardware resources. The proposed technique provides resource managers the ability to predict resource usage over time as well as the completion time of the running applications. The technique yields lower error when predicting usage compared with other popular Machine Learning methods. Thirdly, this thesis proposes a technique for auto-tuning Big Data workloads from the available tunable parameters.
The proposed technique gathers information from the logs of an application, generating a feature descriptor that captures relevant information from the application to be tuned. Using this information we demonstrate that performance models can generalize up to 34% better when compared with other state-of-the-art solutions. Moreover, the search time to find a suitable solution can be drastically reduced, with up to a 12x speedup while delivering results of almost equal quality to modern solutions. These results prove that modern learning algorithms, with the right feature information, provide powerful techniques to manage resource allocation for applications running in cloud environments. This thesis demonstrates that learning algorithms allow relevant optimizations in Data Center environments, where applications are externally monitored and careful resource management is paramount to efficiently use computing resources. We propose to demonstrate this thesis in three areas that orbit around resource management in server environments.
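The phase-characterization idea (learning abstract phases from monitored time-series) can be illustrated with a toy clustering of fixed-size trace windows. This generic k-means sketch is only a stand-in for the unsupervised technique actually proposed in the thesis:

```python
import numpy as np

def learn_phases(trace, window, k, iters=20):
    """Cluster consecutive fixed-size windows of a monitored time-series
    into k abstract phases with a tiny k-means; returns one phase label
    per window."""
    X = np.array([trace[i:i + window]
                  for i in range(0, len(trace) - window + 1, window)])
    centers = X[:k].copy()                 # deterministic initialisation
    for _ in range(iters):
        # assign each window to the nearest center, then recompute centers
        labels = np.argmin(((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1), axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(axis=0)
    return labels

# Synthetic CPU-usage trace alternating low-load and high-load behaviour
trace = [0.1, 0.2, 0.1, 0.2, 0.9, 0.8, 0.9, 0.8, 0.1, 0.2, 0.1, 0.2]
print(learn_phases(trace, window=4, k=2))  # -> [0 1 0]
```

The resulting label sequence is exactly the kind of "sequence of abstract phases" the abstract refers to, here recovered from an obviously bimodal synthetic trace.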
  • SEGURA SALVADOR, ALBERT: High-performance and energy-efficient irregular graph processing on GPU architectures
    Author: SEGURA SALVADOR, ALBERT
    Thesis file: (contact the Doctoral School to confirm you have a valid doctoral degree and to get the link to the thesis)
    Programme: DOCTORAL DEGREE IN COMPUTER ARCHITECTURE
    Department: (DAC)
    Mode: Normal
    Deposit date: 25/11/2020
    Deposit END date: 10/12/2020
    Thesis director: ARNAU MONTAÑES, JOSÉ MARÍA | GONZÁLEZ COLÁS, ANTONIO MARIA
    Committee:
         PRESIDENT: AAMODT, TOR MICHAEL
         SECRETARI: TUBELLA MURGADAS, JORDI
         VOCAL NO PRESENCIAL: SAHUQUILLO BORRÁS, JULIO
    Thesis abstract: Graph processing is an established and prominent domain that is the foundation of new emerging applications in areas such as Data Analytics and Machine Learning, empowering applications such as road navigation, social networks and automatic speech recognition. The large amount of data employed in these domains requires high-throughput architectures such as GPGPUs. Although the processing of large graph-based workloads exhibits a high degree of parallelism, memory access patterns tend to be highly irregular, leading to poor efficiency due to memory divergence. In order to ameliorate these issues, GPGPU graph applications perform stream compaction operations which process active nodes/edges so subsequent steps work on a compacted dataset. We propose to offload this task to the Stream Compaction Unit (SCU), a hardware extension tailored to the requirements of these operations, which additionally performs pre-processing by filtering and reordering the processed elements. We show that memory divergence inefficiencies prevail in GPGPU irregular graph-based applications, yet we find that it is possible to relax the strict relationship between thread and processed data to enable new optimizations. As such, we propose the Irregular accesses Reorder Unit (IRU), a novel hardware extension integrated in the GPU pipeline that reorders and filters the data processed by the threads on irregular accesses, improving memory coalescing. Finally, we leverage the strengths of both previous approaches to achieve synergistic improvements. We do so by proposing the IRU-enhanced SCU (ISCU), which employs the efficient pre-processing mechanisms of the IRU to improve SCU stream compaction efficiency and to alleviate NoC throughput limitations due to SCU pre-processing operations. We evaluate the ISCU with state-of-the-art graph-based applications, achieving a 2.2x performance improvement and a 10x energy-efficiency improvement.
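The stream compaction step that the SCU offloads can be illustrated on the host side: drop inactive elements of the frontier so that later steps touch only a dense array. In this minimal sketch the sort stands in for the SCU's reordering pre-processing (the real unit operates in hardware on GPU data structures):

```python
import numpy as np

def compact_frontier(node_ids, active_mask):
    """Stream compaction: keep only the active nodes of a frontier and
    sort them, so that subsequent gather operations touch a dense,
    ordered array (better memory coalescing). Host-side illustration
    of what the SCU hardware extension performs."""
    compacted = node_ids[active_mask]
    return np.sort(compacted)

nodes = np.array([7, 2, 9, 4, 1, 8])
active = np.array([True, False, True, True, False, True])
print(compact_frontier(nodes, active))  # -> [4 7 8 9]
```

Subsequent kernel iterations then loop over the compacted array instead of testing every node, which is exactly why irregular graph workloads pay for this extra pass.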
  • SILFA FELIZ, FRANYELL ANTONIO: Energy-efficient architectures for recurrent neural networks
    Author: SILFA FELIZ, FRANYELL ANTONIO
    Thesis file: (contact the Doctoral School to confirm you have a valid doctoral degree and to get the link to the thesis)
    Programme: DOCTORAL DEGREE IN COMPUTER ARCHITECTURE
    Department: (DAC)
    Mode: Normal
    Deposit date: 25/11/2020
    Deposit END date: 10/12/2020
    Thesis director: ARNAU MONTAÑES, JOSÉ MARÍA | GONZÁLEZ COLÁS, ANTONIO MARIA
    Committee:
         PRESIDENT: KRISHNA, TUSHAR
         SECRETARI: PASTOR LLORENS, ENRIQUE
    Thesis abstract: Deep Learning algorithms have been remarkably successful in applications such as Automatic Speech Recognition and Machine Translation. Thus, these kinds of applications are ubiquitous in our lives and are found in a plethora of devices. These algorithms are composed of Deep Neural Networks (DNNs), such as Convolutional Neural Networks and Recurrent Neural Networks (RNNs), which have a large number of parameters and require a large amount of computations. Hence, the evaluation of DNNs is challenging due to their large memory and power requirements. RNNs are employed to solve sequence-to-sequence problems such as Machine Translation. They contain data dependencies among the executions of time-steps, hence the amount of parallelism is severely limited. Thus, evaluating them in an energy-efficient manner is more challenging than evaluating other DNN algorithms. This thesis studies applications using RNNs to improve their energy efficiency on specialized architectures. Specifically, we propose novel energy-saving techniques and highly efficient architectures tailored to the evaluation of RNNs. We focus on the most successful RNN topologies, which are the Long Short-Term Memory and the Gated Recurrent Unit. First, we characterize a set of RNNs running on a modern SoC. We identify that accessing the memory to fetch the model weights is the main source of energy consumption. Thus, we propose E-PUR: an energy-efficient processing unit for RNN inference. E-PUR achieves a 6.8x speedup and reduces energy consumption by 88x compared to the SoC. These benefits are obtained by improving the temporal locality of the model weights. In E-PUR, fetching the parameters is the main source of energy consumption. Thus, we strive to reduce memory accesses and propose a scheme to reuse previous computations.
Our observation is that when evaluating the input sequences of an RNN model, the output of a given neuron tends to change only slightly between consecutive evaluations. Thus, we develop a scheme that caches the neurons' outputs and reuses them whenever it detects that the change between the current and previously computed output value for a given neuron is small, avoiding fetching the weights. In order to decide when to reuse a previous value we employ a Binary Neural Network (BNN) as a predictor of reusability. The low-cost BNN can be employed in this context since its output is highly correlated to the output of RNNs. We show that our proposal avoids more than 24.2% of computations. Hence, on average, energy consumption is reduced by 18.5% for a speedup of 1.35x. RNN models' memory footprint is usually reduced by using low precision for evaluation and storage. In this case, the minimum precision used is identified offline and it is set such that the model maintains its accuracy. This method utilizes the same precision to compute all time-steps. Yet, we observe that some time-steps can be evaluated with a lower precision while preserving the accuracy. Thus, we propose a technique that dynamically selects the precision used to compute each time-step. A challenge of our proposal is choosing a lower bit-width. We address this issue by recognizing that information from a previous evaluation can be employed to determine the precision required in the current time-step. Our scheme evaluates 57% of the computations on a bit-width lower than the fixed precision employed by static methods. We implement it on E-PUR and it provides a 1.46x speedup and 19.2% energy savings on average.
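The computation-reuse scheme can be sketched with a scalar stand-in: cache a neuron's output and skip recomputation when the input barely changes. The simple threshold test below replaces the BNN-based reusability predictor used in the thesis:

```python
def reuse_neuron_outputs(inputs, f, tol=0.05):
    """Evaluate f over a sequence of per-time-step inputs, reusing the
    cached output whenever the input changed by less than tol since the
    last real evaluation. The threshold is a simplified stand-in for the
    thesis' BNN-based reusability predictor."""
    outputs, reused = [], 0
    last_x = last_y = None
    for x in inputs:
        if last_x is not None and abs(x - last_x) < tol:
            outputs.append(last_y)      # reuse: skip weight fetch and compute
            reused += 1
        else:
            last_x, last_y = x, f(x)    # recompute and refresh the cache
            outputs.append(last_y)
    return outputs, reused

outs, reused = reuse_neuron_outputs([0.50, 0.51, 0.52, 0.90, 0.91], lambda x: 2 * x)
print(reused)  # -> 3 of 5 evaluations skipped
```

The energy saving in hardware comes precisely from the skipped weight fetches, at the cost of a small, bounded output error on the reused steps.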

DOCTORAL DEGREE IN COMPUTING

  • COMINO TRINIDAD, MARC: Algorithms for the reconstruction, analysis, repairing and enhancement of 3D urban models from multiple data sources
    Author: COMINO TRINIDAD, MARC
    Thesis file: (contact the Doctoral School to confirm you have a valid doctoral degree and to get the link to the thesis)
    Programme: DOCTORAL DEGREE IN COMPUTING
    Department: Department of Computer Science (CS)
    Mode: Normal
    Deposit date: 23/11/2020
    Deposit END date: 07/12/2020
    Thesis director: ANDUJAR GRAN, CARLOS ANTONIO | CHICA CALAF, ANTONIO
    Committee:
         PRESIDENT: WIMMER, MICHAEL
         SECRETARI: SUSIN SANCHEZ, ANTONIO
         VOCAL: OTADUY TRISTÁN, MIGUEL ÁNGEL
    Thesis abstract: Over the last few years, there has been a notable growth in the field of digitization of 3D buildings and urban environments. The substantial improvement of both scanning hardware and reconstruction algorithms has led to the development of representations of buildings and cities that can be remotely transmitted and inspected in real-time. Among the applications that implement these technologies are several GPS navigators and virtual globes such as Google Earth or the tools provided by the Institut Cartogràfic i Geològic de Catalunya. In particular, in this thesis, we conceptualize cities as a collection of individual buildings. Hence, we focus on the individual processing of one structure at a time, rather than on the larger-scale processing of urban environments. Nowadays, there is a wide diversity of digitization technologies, and the choice of the appropriate one is key for each particular application. Roughly, these techniques can be grouped into three main families: time-of-flight (terrestrial and aerial LiDAR), photogrammetry (street-level, satellite, and aerial imagery), and human-edited vector data (cadastre and other map sources). Each of these has its advantages in terms of covered area, data quality, economic cost, and processing effort. Plane- and car-mounted LiDAR devices are optimal for sweeping huge areas, but acquiring and calibrating such devices is not a trivial task. Moreover, the capturing process is done by scan lines, which need to be registered using GPS and inertial data. As an alternative, terrestrial LiDAR devices are more accessible but cover smaller areas, and their sampling strategy usually produces massive point clouds with over-represented planar regions. A more inexpensive option is street-level imagery. A dense set of images captured with a commodity camera can be fed to state-of-the-art multi-view stereo algorithms to produce realistic-enough reconstructions.
Another advantage of this approach is the capture of high-quality color data, whereas the geometric information is usually lacking. In this thesis, we analyze in-depth some of the shortcomings of these data-acquisition methods and propose new ways to overcome them. Mainly, we focus on the technologies that allow high-quality digitization of individual buildings. These are terrestrial LiDAR for geometric information and street-level imagery for color information. Our main goal is the processing and completion of detailed 3D urban representations. For this, we will work with multiple data sources and combine them when possible to produce models that can be inspected in real-time. Our research has focused on the following contributions: effective and feature-preserving simplification of massive point clouds; normal estimation algorithms explicitly designed for LiDAR data; a low-stretch panoramic representation for point clouds; semantic analysis of street-level imagery for improved multi-view stereo reconstruction; color improvement through heuristic techniques and the registration of LiDAR and imagery data; and efficient and faithful visualization of massive point clouds using image-based techniques.
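The normal estimation contribution builds on the classic PCA baseline, which takes the normal at a point to be the direction of least variance among its nearest neighbours. A minimal numpy sketch of that baseline (not of the LiDAR-specific algorithms developed in the thesis):

```python
import numpy as np

def estimate_normal(points, query_idx, k=8):
    """Estimate the surface normal at one point of a cloud as the
    eigenvector of the smallest eigenvalue of the covariance of its
    k nearest neighbours (the classic PCA approach)."""
    p = points[query_idx]
    d = np.linalg.norm(points - p, axis=1)
    nbrs = points[np.argsort(d)[:k]]          # brute-force k nearest neighbours
    eigvals, eigvecs = np.linalg.eigh(np.cov(nbrs.T))
    return eigvecs[:, 0]                      # eigh sorts eigenvalues ascending

# Points sampled on the z = 0 plane: the normal should be (0, 0, +/-1)
rng = np.random.default_rng(1)
pts = np.column_stack([rng.uniform(-1, 1, 30), rng.uniform(-1, 1, 30), np.zeros(30)])
n = estimate_normal(pts, 0)
print(np.abs(n))  # z component dominates
```

Real LiDAR clouds break the assumptions of this baseline (anisotropic sampling, scan-line structure), which is what motivates the dedicated estimators mentioned in the contribution list.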
  • MESSEGUÉ BUISAN, ARNAU: Network creation games: structure vs anarchy
    Author: MESSEGUÉ BUISAN, ARNAU
    Thesis file: (contact the Doctoral School to confirm you have a valid doctoral degree and to get the link to the thesis)
    Programme: DOCTORAL DEGREE IN COMPUTING
    Department: Department of Computer Science (CS)
    Mode: Article-based thesis
    Deposit date: 17/11/2020
    Deposit END date: 01/12/2020
    Thesis director: ALVAREZ FAURA, MARIA DEL CARME
    Committee:
         PRESIDENT: PAPADIMITRIOU, CHRISTOS H.
         SECRETARI: SERNA IGLESIAS, MARIA JOSE
         VOCAL: LENZNER, PASCAL
    Thesis abstract: In an attempt to understand how Internet-like networks and social networks behave, different models have been proposed and studied throughout history to capture their most essential aspects and properties. Network Creation Games are a class of strategic games widely studied in Algorithmic Game Theory that model these networks as the outcome of decentralised and uncoordinated interactions. In these games the different players model selfish agents that buy links towards the other agents trying to minimise an individual cost function. This cost is modelled as a function that usually decomposes into the creation cost (the cost of buying links) and the usage cost (measuring the quality of the connection to the network). Due to the agents' selfish behaviour, stable configurations in which all players are happy with the current situation, the so-called Nash equilibria, do not have to coincide with any socially optimal configuration that could be established if a centralised authority could decide for all players. In this way, the price of anarchy is the measure that quantifies precisely the ratio between the cost of the most expensive Nash equilibrium and the cost of an optimal network from a social point of view. In this work, we study the price of anarchy and Nash equilibria in different scenarios and situations, in order to better understand how the selfish behaviour of agents in these networks affects the quality of the resulting networks. We propose this study from two different perspectives. In the first one, we study one of the most emblematic models of Network Creation Games, called the sum classical network creation game. This is a model that is based on two different parameters: n, the number of nodes, and α, a function of n that models the price per link. Over the years it has been shown that the price of anarchy is constant for α = O(n^(1−δ)) with δ ≥ 1/log(n), and for α > 4n−13. It has been conjectured that the price of anarchy is constant regardless of the parameter α.
In this first part we show, first of all, that the price of anarchy is constant even when α > n(1+ε), with ε > 0 any positive constant, thus enlarging the range of values of α for which the price of anarchy is constant. Secondly, regarding the range α < n/C with C > 4 any positive constant, we know that equilibria induce a class of graphs called distance-uniform. We then study the diameter of distance-uniform graphs in an attempt to obtain information about the topology of the equilibria for this range. In the second perspective we propose and study two new models that we call celebrity games. These two models are based on the analysis of decentralized networks with heterogeneous players, that is, players with different degrees of relevance within the corresponding network, a feature that has not been studied in much detail in the literature. To capture this natural property, we introduce a weight for each player in the network. Furthermore, these models take into account a critical distance β, a threshold value. Each player aims to be no farther than β from the other players and decides whether to buy links to other players depending on the price per link and the corresponding weights. Moreover, the larger the weight of a player farther than β, the larger the corresponding penalty. Thus, in these new models players strive to have the minimum possible number of links and at the same time they want to minimise as much as possible the penalty for having players farther than β. The two models differ in how the penalty corresponding to the players farther than β is computed. For both models we obtain upper and lower bounds on the price of anarchy as well as the main topological properties and characteristics of their equilibria.
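For very small instances, the objects discussed above (Nash equilibria, social optimum, price of anarchy) can simply be computed by brute force. The sketch below enumerates every strategy profile of the sum classical network creation game for n = 3, treating disconnected outcomes as infinitely costly and considering only connected equilibria; it illustrates the definitions, not the thesis' proof techniques:

```python
import collections
from itertools import combinations, product

def social_and_anarchy(n, alpha):
    """Brute-force the sum classical network creation game on n nodes:
    each player buys a subset of links; cost_i = alpha*|links_i| + sum of
    graph distances from i. Returns (price of anarchy, #equilibria)."""
    others = lambda i: [j for j in range(n) if j != i]
    strategies = lambda i: [frozenset(c) for r in range(n)
                            for c in combinations(others(i), r)]

    def costs(profile):
        adj = {v: set() for v in range(n)}
        for i, bought in enumerate(profile):
            for j in bought:
                adj[i].add(j); adj[j].add(i)
        per_player = []
        for i in range(n):                      # BFS distances from i
            dist = {i: 0}; q = collections.deque([i])
            while q:
                u = q.popleft()
                for w in adj[u]:
                    if w not in dist:
                        dist[w] = dist[u] + 1; q.append(w)
            if len(dist) < n:
                return None                     # disconnected: infinite cost
            per_player.append(alpha * len(profile[i]) + sum(dist.values()))
        return per_player

    profiles = list(product(*[strategies(i) for i in range(n)]))
    sc = lambda p: float('inf') if costs(p) is None else sum(costs(p))

    def is_nash(p):
        c = costs(p)
        if c is None:
            return False
        for i in range(n):                      # no unilateral improvement
            for s in strategies(i):
                q = list(p); q[i] = s
                cq = costs(q)
                if cq is not None and cq[i] < c[i]:
                    return False
        return True

    opt = min(sc(p) for p in profiles)
    eq = [sc(p) for p in profiles if is_nash(p)]
    return max(eq) / opt, len(eq)

poa, n_eq = social_and_anarchy(3, alpha=1.0)
print(round(poa, 3), n_eq)
```

For this tiny instance the worst equilibria are stars (social cost 2α + 8) while the optimum is the triangle (3α + 6), so the ratio is strictly above 1 even though it is bounded by a constant, matching the flavour of the results discussed above.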

DOCTORAL DEGREE IN CONSTRUCTION ENGINEERING

  • MAKOOND, NIRVAN CHANDRA: Structural Diagnosis of Masonry Heritage: Contributions to Non-Destructive Testing, Structural Health Monitoring and Risk Assessment.
    Author: MAKOOND, NIRVAN CHANDRA
    Thesis file: (contact the Doctoral School to confirm you have a valid doctoral degree and to get the link to the thesis)
    Programme: DOCTORAL DEGREE IN CONSTRUCTION ENGINEERING
    Department: (DECA)
    Mode: Article-based thesis
    Deposit date: 17/11/2020
    Deposit END date: 01/12/2020
    Thesis director: PELA, LUCA | MOLINS BORRELL, CLIMENT
    Committee:
         PRESIDENT: IVORRA CHORRO, SALVADOR
         SECRETARI: GIL ESPERT, LLUIS
         VOCAL NO PRESENCIAL: ADAM MARTÍNEZ, JOSE MIGUEL
    Thesis abstract: Many cultural heritage sites across the globe consist of masonry structures. To ensure the preservation of such structures, an accurate evaluation of their current structural condition is often of utmost importance. However, recurrent uncertainties regarding material properties and the complex interaction among structural elements often make this a challenging task. As a consequence, there has been a considerable research effort on the development of methods and tools that can facilitate this task, and experts responsible for the evaluation of unique masonry structures usually need to weigh information from various diagnosis activities before deciding the best course of action for preservation. In a first instance, the research work presented in this thesis contributes to the enhancement of some key methods for the analysis of masonry structures. Specific topics dealing with materials testing, full-scale vibration testing, and static structural health monitoring (SHM) are addressed. Subsequently, this is built upon to develop specific decision support tools that can assist decision-making for risk mitigation. The research in materials testing involved an experimental study on the dynamic elastic properties of brick masonry constituents, which are known to differ from their static counterparts. Despite being a fundamental deformation property, experimentally determining the static elastic moduli of brick masonry constituents remains a challenging task. Following an experimental campaign, this research proposes a robust procedure based on the synergy of two non-destructive testing methods to reliably estimate the dynamic elastic and shear moduli of such materials.
In addition, an empirical expression to estimate the static elastic modulus of constituents from its dynamic counterpart is also provided. With respect to vibration testing, the present study deals specifically with masonry bell towers and the operational modal analysis (OMA) techniques used to extract modal parameters from test acquisitions. Despite significant advances in OMA techniques, the accuracy of the resulting estimates from vibration tests is still highly dependent on test conditions, acquisition quality, and on the techniques employed for modal parameter estimation. This work first aimed to design a suitable acquisition system and program for the vibration testing of the bell tower of the Seu Vella in Lleida, Catalonia. Several system identification and modal analysis techniques were investigated and the most suitable ones for identifying particular modal parameters under varying acquisition conditions are discussed. The SHM research component is particularly focused on data analysis for static SHM systems, which involve the continuous measurement of key slow-varying parameters over long time periods. Although such systems have the potential to identify slow irreversible deterioration mechanisms in masonry structures at a very early stage, the interpretation of acquired data can be difficult, particularly due to the influence of environmental factors. This research proposes a fully automated data analysis procedure able to filter out reversible environmental effects and classify monitored responses in pre-defined evolution states. The procedure has successfully been used to identify vulnerable areas in two important medieval heritage structures in Spain, namely the cathedral of Mallorca and the church of the monastery of Sant Cugat. Finally, all the findings in specific topics are built upon to elaborate multi-criteria decision-making (MCDM) tools meant to improve the objectivity, clarity, and transparency of risk mitigation decisions for masonry heritage.
A systematic risk assessment procedure is proposed involving the computation of two MCDM indices: an index related to the estimated risk of damage, and another to the uncertainty behind this estimation. Applications to several case studies are also included to demonstrate the usefulness of the proposed tools.
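The filtering of reversible environmental effects can be illustrated with the simplest possible model: regress the monitored response on temperature by least squares and inspect the residual, in which a slow irreversible trend becomes visible. This synthetic-data sketch is only a stand-in for the fully automated procedure developed in the thesis:

```python
import numpy as np

def remove_thermal_effect(response, temperature):
    """Fit a linear temperature model to a monitored response (e.g. a
    crack-meter series) by least squares and return the residual, in
    which slow irreversible behaviour is easier to detect."""
    A = np.column_stack([temperature, np.ones_like(temperature)])
    coef, *_ = np.linalg.lstsq(A, response, rcond=None)
    return response - A @ coef

t = np.arange(365.0)                                  # one year, daily samples
temperature = 15 + 10 * np.sin(2 * np.pi * t / 365)   # seasonal cycle (deg C)
drift = 0.001 * t                                     # hypothetical slow opening (mm)
response = 0.02 * temperature + drift                 # reversible + irreversible parts
residual = remove_thermal_effect(response, temperature)
print(residual[-1] - residual[0] > 0.2)               # the trend survives the filtering
```

In the raw series the seasonal swing dominates; after the regression only the slow drift remains, which is the signal a static SHM system needs to classify into evolution states.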
  • SEGURA DOMINGO, JORGE: Laboratory experimental procedures for the compression and shear characterisation of historical brick masonry.
    Author: SEGURA DOMINGO, JORGE
    Thesis file: (contact the Doctoral School to confirm you have a valid doctoral degree and to get the link to the thesis)
    Programme: DOCTORAL DEGREE IN CONSTRUCTION ENGINEERING
    Department: (DECA)
    Mode: Article-based thesis
    Deposit date: 25/11/2020
    Deposit END date: 10/12/2020
    Thesis director: ROCA FABREGAT, PEDRO | PELA, LUCA
    Committee:
         PRESIDENT: VARUM, HUMBERTO
         SECRETARI: MOLINS BORRELL, CLIMENT
         VOCAL NO PRESENCIAL: CAMATA, GUIDO
    Thesis abstract: Masonry has been used for millennia to build all sorts of constructions. As a result, a significant part of the building stock around the world is made of masonry. When structural assessment is needed, structural analysis tools, as well as strength criteria proposed in building codes, require knowledge of the mechanical properties of the materials. However, the mechanical characterisation of masonry is still difficult and challenging, due to its composite nature and its complex mechanical behaviour. In fact, it is possible to find contradictions among standards, lack of definition for certain procedures, or even lack of standards for certain tests. This thesis aims to contribute a critical analysis of some of these testing procedures and to provide possible improvements for a specific type of masonry. Four lines of research have been identified, which cover laboratory and in-situ tests to characterise the behaviour in compression and in shear. The specific type of masonry on which the experimental campaigns are carried out is the traditional type of brickwork that was extensively used in Barcelona during the 19th and 20th centuries. In spite of its relevance, this type of masonry is in need of further characterisation. A preliminary research effort was necessary to find a historical-like mortar with relatively fast hardening and low mechanical properties. The modification of hydraulic-lime-based commercial mortars with the addition of limestone filler is investigated. Small amounts of filler enhance the mechanical properties of the mortar. High amounts of filler reduce the mortar's strength and make it suitable for replicating historical-like masonry in the laboratory. The first line of research on testing procedures covered the compressive characterisation of masonry on prismatic standard specimens. European and American standards differ in the type of specimen to consider: running bond walls and stack bond prisms, respectively.
This work compares experimental results obtained from both types of specimen and also from two types of loading, monotonic and cyclic. The second line of research involves an experimental campaign that investigates the possibility of using 90 mm cylinders extracted from existing walls to characterise the compressive behaviour of masonry. Four examples of masonry have been investigated, including cylinders extracted from three existing buildings of Barcelona. The results obtained with 90 mm cylinders compare well to those obtained with the well-known 150 mm cylinders. The third line of research deals with the characterisation of the shear response of masonry in the laboratory. The standard triplet specimen, consisting of three units and two mortar joints, presents some interpretation problems related to the non-simultaneous failure of the two joints. This experimental campaign studies the possibility of using couplet specimens with only one mortar joint to determine the shear parameters. For the two types of brickwork investigated, couplets provide higher estimations of the shear parameters with respect to triplets. The last line of research investigates the diagonal compression test, a testing procedure applicable both in situ and in the laboratory for shear characterisation. First, an experimental campaign is presented. The experimental results are used to calibrate a numerical model, which is applied to investigate the actual states of stress and to find correlation coefficients between the test results and the mechanical properties of masonry. The combination of all the former lines of research provides a set of reference values for the mechanical properties of the traditional brickwork of Barcelona. Nevertheless, the scientific findings, methods, and criteria presented in this thesis, even if derived for a specific type of brickwork, may be applicable to the characterisation of other types of masonry around the world.
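The shear parameters extracted from triplet or couplet tests are conventionally the Coulomb cohesion and friction coefficient, fitted to the shear strengths measured at several pre-compression levels. A minimal least-squares sketch with hypothetical stress values (illustrative, not thesis data):

```python
import numpy as np

def coulomb_fit(normal_stress, shear_strength):
    """Least-squares Coulomb friction fit tau = c + mu*sigma, as commonly
    used to interpret triplet (or couplet) shear tests performed at
    several pre-compression levels.
    Returns (cohesion c, friction coefficient mu)."""
    mu, c = np.polyfit(normal_stress, shear_strength, 1)  # slope, intercept
    return c, mu

# Hypothetical test results in MPa (three pre-compression levels)
sigma = np.array([0.2, 0.6, 1.0])
tau = np.array([0.30, 0.58, 0.86])
c, mu = coulomb_fit(sigma, tau)
print(round(c, 2), round(mu, 2))  # -> 0.16 0.7
```

The interpretation problems mentioned above (non-simultaneous joint failure in triplets) affect the tau values fed into this fit, which is why couplets and triplets can yield different parameter estimates for the same masonry.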

DOCTORAL DEGREE IN EARTHQUAKE ENGINEERING AND STRUCTURAL DYNAMICS

  • VITOLA OYAGA, JAIME: Structural damage monitoring based on machine learning and bio-inspired computing
    Author: VITOLA OYAGA, JAIME
    Thesis file: (contact the Doctoral School to confirm you have a valid doctoral degree and to get the link to the thesis)
    Programme: DOCTORAL DEGREE IN EARTHQUAKE ENGINEERING AND STRUCTURAL DYNAMICS
    Department: (DECA)
    Mode: Article-based thesis
    Deposit date: 25/11/2020
    Deposit END date: 10/12/2020
    Thesis director: POZO MONTERO, FRANCESC | TIBADUIZA, DIEGO ALEXANDER
    Committee:
         PRESIDENT: PEDRAZA BONILLA, CESAR AUGUSTO
         SECRETARI: VIDAL SEGUI, YOLANDA
         VOCAL NO PRESENCIAL: TORRES PINZON, CARLOS
    Thesis abstract: For a few decades, systems for supervising structures have become increasingly important. Originally, these strategies aimed only at the detection of damage; nowadays, they also monitor civil or military structures permanently, offering sufficient and relevant information to help make the right decisions. SHM is applicable to preventive or corrective maintenance decisions, reducing the possibility of accidents and promoting cost reductions, since more extensive repairs are avoided when damage is detected early. The current work focused on three elements of structural damage diagnosis: detection, classification and location, either in metallic or composite material structures, given their wide use in air, land and maritime transport vehicles, aerospace, wind turbines, and civil and military infrastructure. This work used the tools offered by machine learning and bio-inspired computing, given their good results in solving complex problems and recognizing patterns. It also involves changes in temperature, since temperature is one of the parameters that influence structures in real environments. Statistical information applied to pattern recognition and to reducing the size of the data was used with tools such as PCA (principal component analysis), thanks to the experience obtained in works developed by the CoDAlab research group. The document is divided into five parts. The first includes a general description of the problem, the objectives and the results obtained, in addition to a brief theoretical introduction. Chapters 2, 3 and 4 include articles published in different journals. Chapter 5 shows the results and conclusions. Other contributions, such as a book chapter and some papers presented at conferences, are included in appendix A. Finally, appendix B presents a multiplexing system used to develop the experiments carried out in this work.
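    As an illustration of the PCA-based indices mentioned in the abstract, the sketch below (a minimal example assuming multi-sensor baseline measurements; it is not code from the thesis) fits a PCA model to healthy-state data and scores new measurements by their squared reconstruction error, so that large residuals suggest damage:

    ```python
    import numpy as np

    def pca_model(baseline, n_components):
        """Fit a PCA model to healthy-state (baseline) measurements.

        baseline: (experiments, sensor features) array."""
        mean = baseline.mean(axis=0)
        std = baseline.std(axis=0)
        scaled = (baseline - mean) / std  # auto-scale each sensor
        # Principal directions come from the SVD of the scaled data.
        _, _, vt = np.linalg.svd(scaled, full_matrices=False)
        return mean, std, vt[:n_components]

    def q_statistic(model, data):
        """Squared reconstruction error (Q index) of new measurements.

        Large values mean the data no longer fits the baseline subspace,
        which is taken as an indication of possible damage."""
        mean, std, components = model
        scaled = (data - mean) / std
        residual = scaled - scaled @ components.T @ components
        return (residual ** 2).sum(axis=1)
    ```

    Damage classification and location, as opposed to plain detection, would sit on top of such indices, for instance by feeding them to a machine-learning classifier.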

DOCTORAL DEGREE IN ELECTRICAL ENGINEERING

  • SÁNCHEZ SÁNCHEZ, ENRIC: Energy-based control schemes of Modular Multilevel Converters for HVDC applications
    Author: SÁNCHEZ SÁNCHEZ, ENRIC
    Thesis file: (contact the Doctoral School to confirm you have a valid doctoral degree and to get the link to the thesis)
    Programme: DOCTORAL DEGREE IN ELECTRICAL ENGINEERING
    Department: (DEE)
    Mode: Normal
    Deposit date: 16/11/2020
    Deposit END date: 30/11/2020
    Thesis director: GOMIS BELLMUNT, ORIOL | PRIETO ARAUJO, EDUARDO
    Committee:
         PRESIDENT: MOREL, FLORENT
         SECRETARI: BUENO PEÑA, EMILIO JOSE
         VOCAL: D'ARCO, SALVATORE
    Thesis abstract: High Voltage Direct Current (HVDC) is a power electronics-based technology that enables the transmission of large amounts of power over long distances, the integration of remote offshore wind power into the mainland grid, and the interconnection of asynchronous AC systems. The Modular Multilevel Converter (MMC) is the state-of-the-art technology for Voltage Source Converter (VSC) based HVDC applications. Compared to the two-level converter, the MMC presents a more complex control scheme, but it brings additional flexibility into the system. The present work focuses on the (energy-based) control of the MMC for HVDC applications, aiming to understand the additional degrees of freedom related to the internal energy of the MMC. First, DC voltage regulation in HVDC point-to-point links is addressed. Moreover, an experimental validation using a scaled MMC-based point-to-point link is carried out, particularly focusing on a novel experimental design of an HVDC cable emulator. With such a laboratory setup, the simulated system dynamics are contrasted with experiments. Furthermore, a generic controller for the same application is presented, and different optimal tuning techniques are addressed. Thus, the most suitable control gains are obtained automatically based on system constraints. In HVDC applications such as remote offshore wind farm clusters or isolated systems with low or non-existent synchronous generation, the MMC needs to operate as grid-forming. The present work explores the role of the internal energy of the MMC through different control structures. Finally, a multi-terminal HVDC grid where some terminals share the regulation of the DC voltage and others operate in grid-forming mode is considered, addressing the distributed DC voltage droop control design.

DOCTORAL DEGREE IN ELECTRONIC ENGINEERING

  • SEGURA GARCIA, DANIEL: Development of silicon photonic structures for sensing and signal processing
    Author: SEGURA GARCIA, DANIEL
    Thesis file: (contact the Doctoral School to confirm you have a valid doctoral degree and to get the link to the thesis)
    Programme: DOCTORAL DEGREE IN ELECTRONIC ENGINEERING
    Department: Department of Electronic Engineering (EEL)
    Mode: Temporary seizure
    Deposit date: 19/11/2020
    Deposit END date: 03/12/2020
    Thesis director: RODRIGUEZ MARTINEZ, ANGEL
    Committee:
         PRESIDENT: MARSAL GARVI, LLUIS FRANCESC
         SECRETARI: PUIGDOLLERS GONZALEZ, JOAQUIN
         VOCAL: MORENO SERENO, MAURICIO
    Thesis abstract: Silicon photonics is the opportunity that one of the most amazing materials in nature, silicon, has offered now that its electronic technological possibilities are reaching their limit. The transparency of this material in the near and mid infrared, along with its extensively developed manufacturing processes, opens the door to the miniaturisation of complex optical systems used in innumerable applications. Ultra-fast communications with low power consumption and highly selective, sensitive sensors are some of the most promising ones. In this thesis the optical properties of silicon have been exploited from two differentiated perspectives: the study of macroporous silicon photonic crystals applied to selective gas detection, and the development of an in-house fabrication process for integrated silicon photonic circuits. In the first part of the thesis, devoted to macroporous silicon photonic crystals, efforts have been focused on understanding the multiple sources of distortion of their optical response, as well as on improving figures of merit such as bandgap width, quality factor and transmittance. Improving these figures of merit could lead to photonic structures with potential selective light-filtering market applications, in our case focused on spectroscopic gas detection. In the second part of the thesis, a prospective work on the development of a standardised in-house fabrication-simulation-measurement process has been performed. In our group, and as far as we know at UPC, there was no previous knowledge on the fabrication of these devices. Therefore, we have had to develop all the fabrication and measurement protocols and measurement tools. With these processes optimised, different resonating optical structures have been obtained with high quality factors and relatively low losses.
This thesis opens the door to the in-house fabrication of more complex optical circuits, which can be used in many applications, such as integrated optical gas sensors or beam steering of THz antennas.

DOCTORAL DEGREE IN MARINE SCIENCES

  • LOPEZ-DORIGA SANDOVAL, UXIA: The influence of Climate Change on the coastal Risk Landscape of the Catalan coast.
    Author: LOPEZ-DORIGA SANDOVAL, UXIA
    Thesis file: (contact the Doctoral School to confirm you have a valid doctoral degree and to get the link to the thesis)
    Programme: DOCTORAL DEGREE IN MARINE SCIENCES
    Department: (DECA)
    Mode: Normal
    Deposit date: 24/11/2020
    Deposit END date: 09/12/2020
    Thesis director: JIMENEZ QUINTANA, JOSE ANTONIO
    Committee:
         PRESIDENT: BENAVENTE GONZALEZ, JAVIER
         SECRETARI: SIERRA PEDRICO, JUAN PABLO
         VOCAL NO PRESENCIAL: FERNANDES CERVEIRA FERREIRA, ÓSCAR MANUEL
    Thesis abstract: Coastal areas concentrate very high socio-economic and natural values, which will be highly threatened by climate change, particularly by sea-level rise (SLR). Given the intrinsic characteristics of this zone, appropriate risk management requires a holistic analysis in which the multiple components of the coastal system are taken into account. This has been addressed by applying the concept of the coastal risk landscape, which can be defined as the set of all the risks to which the coastal zone is exposed. In this work, the two most relevant SLR-driven hazards in terms of induced coastal impacts, erosion and inundation, have been analysed. The multiple functions provided by the coastal system open a wide spectrum of possible management options in general, and of adaptation strategies in particular, as a function of the policy target. This work focuses on the analysis of SLR-induced consequences (impact and adaptation) on recreational and natural functions of the coast, due to their importance for the Mediterranean in general, and the Catalan coast in particular, and because they represent well the range of potential targets for coastal management: economy vs. environmental protection. From the recreational point of view, beaches are the main asset to be managed, so that any variation in carrying capacity will be translated into an impact on their recreational-tourist use. The expected shoreline retreat, both due to current evolution rates and SLR-induced erosion, will imply a reduction in the optimal beach width needed to support the carrying capacity of beaches, an important factor for coastal tourism development, leading to a significant and growing economic impact in the coming decades. The results obtained show that the Catalan coast is highly vulnerable to erosion and that accelerated SLR exacerbates this adverse situation, although with significant spatial variation.
Costa Barcelona is the most affected area under current evolution rates, with erosional hotspots such as the Maresme comarca (excluding the Ebro Delta). When SLR is considered, severely affected municipalities will appear within the Costa Brava, whose future beach evolution will result in a significant decrease in potential demand. In these areas, efficient adaptation measures will be required to maintain future carrying capacity within a certain range, in order to sustain the economic contribution of coastal tourism activities. From the environmental perspective, the SLR-induced impact is analysed in terms of the potential damage to existing ecosystems. Flood-prone areas and potential damages are assessed taking into account the intrinsic resilience of some coastal habitats in the face of SLR. The results obtained show that Catalonia has a low sensitivity to SLR-induced inundation due to its coastal configuration, except for low-lying areas (Gulf of Roses, Llobregat Delta and Ebro Delta), which in turn concentrate the highest natural values of the Catalan coast. In spite of their physical vulnerability, existing habitats have a natural adaptation capacity, which permits them to continue providing ecosystem functions, although under a modified landscape. In these areas, adaptation strategies based on promoting the natural resilience of coastal habitats to SLR can open up a whole range of options to shift the management perspective towards environmental protection and conservation.

DOCTORAL DEGREE IN MECHANICAL, FLUIDS AND AEROSPACE ENGINEERING

  • ALGAR ESPEJO, ANTONIO: Amortiguación de final de carrera de actuadores hidráulicos
    Author: ALGAR ESPEJO, ANTONIO
    Thesis file: (contact the Doctoral School to confirm you have a valid doctoral degree and to get the link to the thesis)
    Programme: DOCTORAL DEGREE IN MECHANICAL, FLUIDS AND AEROSPACE ENGINEERING
    Department: Department of Fluid Mechanics (MF)
    Mode: Normal
    Deposit date: 25/11/2020
    Deposit END date: 10/12/2020
    Thesis director: CODINA MACIA, ESTEBAN | FREIRE VENEGAS, FRANCISCO JAVIER
    Committee:
         PRESIDENT: GAMEZ MONTERO, PEDRO JAVIER
         SECRETARI: VERNET PEÑA, ANTONIO
         VOCAL: ROCA ENRICH, JOAN
    Thesis abstract: The internal cushioning systems of hydraulic linear actuators, of special interest in mobile machinery, seek to avoid mechanical shocks at the end of the stroke. The design in which the piston, with perimeter grooves, regulates the flow by positioning itself in front of the outlet port has not been studied in depth until now. Consequently, the operating fundamentals, influencing factors and optimization of these cushioning designs have been investigated. First, a dynamic model has been developed using the bond graph technique, integrating the mechanical equations of the actuator, the hydraulic circuit and the flow through the studied internal cushion design. The model considers the evolution of the internal flow during cushioning, characterized in detail by fluid-dynamic (CFD) simulation. The CFD model has been validated experimentally for its refinement and for a well-founded determination of discharge coefficients. Subsequently, the complete dynamics of the actuator and, in particular, the radial movement of the piston are studied experimentally by means of a sophisticated displacement sensor whose installation proved difficult. Finally, the experimental observations and the fluid-dynamic coefficients are integrated into the dynamic model; ultimately, the model aims to predict the experimental behavior of the cushioning during the movement of an excavator arm. The observed radial movement of the piston turns it into an active, self-adjusting element that is essential to cushioning. This radial movement is consistent with the significant drag force estimated in the CFD simulation, generated by the flow through the grooves, where the laminar flow regime predominates. The analytical models are suitable for predicting the behavior of the cushioning system, with results comparable to those obtained experimentally.
There is an optimal behavior, highly influenced by the mechanical stress conditions of the system, subject to a compromise between an increasing groove section and an optimization of the radial gap. In addition, given the difficulty of directly measuring the radial movement of the piston, an indirect measurement method has been evaluated using low-cost accelerometers. A bond graph simulation model predicts the results of the double integration of acceleration observed experimentally. Influenced by the diverse nature of the existing movements, the severe propagation of measurement errors makes indirect measurement of the piston's radial motion inadequate.
  • BELLAFONT PERALTA, IGNASI: Study of the beam induced vacuum effects in the cryogenic beam vacuum chamber of the Future Circular Hadron Collider
    Author: BELLAFONT PERALTA, IGNASI
    Thesis file: (contact the Doctoral School to confirm you have a valid doctoral degree and to get the link to the thesis)
    Programme: DOCTORAL DEGREE IN MECHANICAL, FLUIDS AND AEROSPACE ENGINEERING
    Department: Department of Fluid Mechanics (MF)
    Mode: Normal
    Deposit date: 25/11/2020
    Deposit END date: 10/12/2020
    Thesis director: ESCALER PUIGORIOL, FRANCESC XAVIER | QUISPE FLORES, MARCOS OSWALDO
    Committee:
         PRESIDENT: PONT, MONTSERRAT
         SECRETARI: KOUBYCHINE MERKULOV, YOURI ALEXANDROVICH
         VOCAL: FERREIRA SOMOZA, JOSE ANTONIO
    Thesis abstract: The Future Circular Hadron Collider (FCC-hh) is a proposal for a 100 km long hadron collider conceived as the successor of the Large Hadron Collider (LHC). The FCC-hh is designed to reach 50 TeV of beam energy, greatly surpassing the current energy frontier set by the LHC at 7 TeV. Thus, it is expected to expand the current horizon of understanding of the Standard Model of particle physics. The beam vacuum chamber of the FCC-hh will have to cope with unprecedented levels of synchrotron radiation power for hadron colliders, around 160 times higher than in its predecessor, while dealing simultaneously with a tighter magnet aperture. Since the higher radiation power will result in a much higher gas load, the difficulty of achieving a good vacuum quality increases considerably compared with the LHC. This thesis presents a study of the so-called beam-induced vacuum effects in the FCC-hh, meaning the different phenomena which, due to the presence of the particle beam, have a detrimental impact on the accelerator's vacuum level. The studied effects are photon stimulated desorption (PSD), electron stimulated desorption (ESD) and ion stimulated desorption (ISD). Each effect has been thoroughly studied, calculating analytically and with a series of Monte Carlo simulations the resulting gas density in the chamber for all the common gas species. Finally, the feasibility of the FCC-hh from the vacuum point of view has been assessed. To mitigate the beam-induced effects and to improve the vacuum quality in the FCC-hh (essential for proper machine operation), it was necessary to propose a new beam screen design. The new beam screen features new solutions to mitigate the e-cloud, to handle the synchrotron radiation and to provide a much higher pumping speed, at the expense of a higher manufacturing complexity. A dedicated experimental setup was used by CERN during the design phase to measure the vacuum performance of the developed beam screen prototypes.
Finally, the obtained experimental results were compared with the theoretical results to further enhance their validity. It is concluded that thanks to the new beam screen design, the vacuum level in the FCC-hh should be adequate. Using nominal beam parameters, the established FCC-hh vacuum specifications could be met within the first months of conditioning, a reasonable amount of time.
  • GENG, LINLIN: Numerical Investigation and Modelling of the Unsteady Behavior and Erosion Power of Cloud Cavitation
    Author: GENG, LINLIN
    Thesis file: (contact the Doctoral School to confirm you have a valid doctoral degree and to get the link to the thesis)
    Programme: DOCTORAL DEGREE IN MECHANICAL, FLUIDS AND AEROSPACE ENGINEERING
    Department: Department of Fluid Mechanics (MF)
    Mode: Article-based thesis
    Deposit date: 25/11/2020
    Deposit END date: 10/12/2020
    Thesis director: ESCALER PUIGORIOL, FRANCESC XAVIER
    Committee:
         PRESIDENT: FARHAT, MOHAMED
         SECRETARI: GUARDO ZABALETA, ALFREDO DE JESUS
         VOCAL: HUANG, XINGXING
    Thesis abstract: Cloud cavitation is an unwanted phenomenon taking place in many hydraulic machines, which damages the surfaces of solid walls due to the erosive aggressiveness induced by the collapse process. Therefore, it is necessary to accurately predict the occurrence of cloud cavitation and quantify its erosion intensity in order to improve designs and to extend the life cycle of existing machines and systems. The application of numerical simulation (CFD) offers the opportunity to predict unsteady cavitation. For that, it is of paramount importance to investigate how to select the most appropriate models to obtain more accurate results in an efficient way, and how to relate the collapsing vapor structures to their erosion power. In the current study, the influence of different turbulence models was assessed and the performance of cavitation models was improved. The relationship between the unsteady behavior and its erosive character was also considered by implementing an erosion model. For the assessment of the turbulence models, three Unsteady Reynolds-Averaged Navier-Stokes (URANS) turbulence models were employed to simulate the cloud cavitation around a NACA65012 hydrofoil at eight different hydrodynamic conditions. The results indicate that the Shear Stress Transport (SST) model can capture the unsteady cavity behavior better than the k-ε and RNG models if the near-wall grid resolution is fine enough. For the improvement of the cavitation models, the influence of the empirical constants of the Zwart model on the cavity dynamics was first investigated. The results show that the cavity behavior is sensitive to their variation, and an optimal range is therefore proposed which provides a better prediction of the vapor volume fraction and of the instantaneous pressure pulse generated by the main cloud cavity collapse.
Secondly, the original Zwart and Singhal cavitation models were corrected by taking into account the second-order term of the Rayleigh-Plesset equation. The performances of the original and corrected models were compared for two different cavitation patterns. The results for a steady attached cavity demonstrate that the corrected model better predicts the pressure distribution at the cavity closure region and the cavity length, in comparison with the experimental observations. The results for unsteady cloud cavitation also confirm that the prediction of the shedding frequency can be improved with the corrected Zwart model. For the investigation of the cavitation erosion power, an erosion model based on the energy balance approach was employed. It has been found that the spatial and temporal distribution of the erosion aggressiveness is sensitive to the selection of the cavitation model and to the collapse driving pressure. In particular, the use of average pressure levels combined with the Sauer cavitation model permits achieving reliable results. Two erosion mechanisms have then been observed: one occurs at the closure region of the main sheet cavity, characterized by low-intensity but high-frequency collapses, and the other is induced by the collapse of the shed cloudy cavity, which presents a high erosion intensity but a low frequency. Finally, it has been found that the erosion power follows a power law with the main flow velocity, with exponents ranging from 3 to 5 depending on the erosion estimate used.
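    The second-order correction mentioned in this abstract refers to the Rayleigh-Plesset equation for a spherical bubble of radius R(t). In its standard form (neglecting viscosity and surface tension; shown here as general background, not reproduced from the thesis itself) it reads:

    ```latex
    % Rayleigh-Plesset dynamics of a spherical bubble (viscosity and
    % surface tension neglected). The inertial term R*Rddot is the
    % second-order term restored by the corrected models; the original
    % Zwart/Singhal rates keep only the first-order Rayleigh solution.
    \[
      R\,\ddot{R} + \frac{3}{2}\,\dot{R}^{2} = \frac{p_v - p_\infty}{\rho_l},
      \qquad
      \dot{R}\big|_{\text{1st order}}
        = \sqrt{\frac{2}{3}\,\frac{\lvert p_v - p_\infty\rvert}{\rho_l}}
    \]
    ```

    The interphase mass-transfer rates of cavitation models are built from such expressions for the bubble-wall velocity, which is why reinstating the inertial term changes the predicted cavity dynamics.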

DOCTORAL DEGREE IN SIGNAL THEORY AND COMMUNICATIONS

  • HUELTES ESCOBAR, ALBERTO: Synthesis of acoustic wave filters. Ladder and transversal topologies. Towards a practical implementation
    Author: HUELTES ESCOBAR, ALBERTO
    Thesis file: (contact the Doctoral School to confirm you have a valid doctoral degree and to get the link to the thesis)
    Programme: DOCTORAL DEGREE IN SIGNAL THEORY AND COMMUNICATIONS
    Department: Department of Signal Theory and Communications (TSC)
    Mode: Confidentiality
    Deposit date: 16/11/2020
    Deposit END date: 30/11/2020
    Thesis director: COLLADO GOMEZ, JUAN CARLOS | MATEU MATEU, JORDI
    Committee:
         PRESIDENT: AIGNER, ROBERT
         SECRETARI: GILABERT PINAL, PERE LLUIS
         VOCAL NO PRESENCIAL: VILLANUEVA TORRIJO, LUIS GUILLERMO
    Thesis abstract: The meteoric growth of the mobile communication market over the last three decades has been strongly related to the evolution of electroacoustic (EA) filter technology. With well over 5 billion cell phone users worldwide in 2017 and the 5G standard on the horizon, the radiofrequency spectrum is becoming increasingly crowded, whereas the demand for mobile data has no expected limits in the short to medium term. In this scenario, where a unique filter needs to be designed for each band of operation, requirements for advanced filtering solutions continue to grow, as do the average value of RF solutions and the RF content per mobile device. RF and microwave devices based on EA resonators, such as Bulk Acoustic Wave (BAW) and Surface Acoustic Wave (SAW) filters, overcame the limitations of the existing technologies back in the 1980s thanks to their compatibility with the manufacturing processes of standard Silicon Integrated Circuits (Si-IC). Nowadays, sophisticated RF Front-End (RFFE) Modules based on System-in-Package (SiP) are the key solution to integrate the increasing number of electronic parts - Power Amplifiers (PAs), Low Noise Amplifiers (LNAs) and switches - that accompany acoustic filtering devices like filters, duplexers and multiplexers. Even though the level of complexity and accuracy present in current EA devices is extraordinary, the design procedures for acoustic filters are still based on the optimization of built-in performance parameters from behavior-based compact models of resonators, and success in this task relies mostly on the expertise of the designers. However, due to the stringent technological constraints and the increasing complexity of the RFFE module architectures devoted to satisfying the demanding specifications of current and forthcoming communications standards, the challenging work of the designers is never getting easier.
This work aims to provide valuable insights that might help to overcome some of the existing limitations in the design of EA filters. Wider bandwidths, the prescribed inclusion of external elements and multiplexing features are taken into consideration. On one hand, we show a synthesis formulation and an automated procedure to carry out the synthesis for current well-known topologies, i.e. ladder, taking into consideration realistic values and specifications. On the other hand, a synthesis formulation for novel topologies, named here transversal, is provided. The success of this novel topology will certainly help to guarantee the prevalence of EA filters in future communication standards, along with their extension to other applications with more stringent requirements and different operating frequencies. The first part of the work is devoted to explaining the basic theory related to filter synthesis and its applicability to filters based on EA resonators. Subsequent chapters elaborate on the theory to explain the procedures followed to obtain both syntheses and their limitations. Finally, a chapter devoted to case studies shows the results of applying the filter synthesis to real scenarios.
  • KHAN, UMAIR: Self-supervised Deep Learning Approaches To Speaker Recognition
    Author: KHAN, UMAIR
    Thesis file: (contact the Doctoral School to confirm you have a valid doctoral degree and to get the link to the thesis)
    Programme: DOCTORAL DEGREE IN SIGNAL THEORY AND COMMUNICATIONS
    Department: Department of Signal Theory and Communications (TSC)
    Mode: Normal
    Deposit date: 16/11/2020
    Deposit END date: 30/11/2020
    Thesis director: HERNANDO PERICAS, FRANCISCO JAVIER
    Committee:
         PRESIDENT: GONZÁLEZ RODRÍGUEZ, JOAQUÍN
         SECRETARI: LUQUE SERRANO, JORGE
         VOCAL NO PRESENCIAL: GHAHABI ESFAHANI, OMID
    Thesis abstract: In speaker recognition, i-vectors have been the state-of-the-art unsupervised technique over the last few years, whereas x-vectors are nowadays becoming the state-of-the-art supervised technique. Recent advances in Deep Learning (DL) approaches to speaker recognition have improved the performance, but are constrained by the need for labels for the background data. In practice, labeled background data is not easily accessible, especially when large training data is required. In i-vector based speaker recognition, cosine and Probabilistic Linear Discriminant Analysis (PLDA) are the two basic scoring techniques. Cosine scoring is unsupervised, whereas PLDA parameters are typically trained using speaker-labeled background data. This creates a big performance gap between these two scoring techniques. The question is: how to fill this performance gap without using speaker labels for the background data? In this thesis, the above-mentioned problem has been addressed using DL approaches without using, or limiting the use of, labeled background data. Three DL-based proposals have been made. In the first proposal, a Restricted Boltzmann Machine (RBM) vector representation of speech is proposed for the tasks of speaker clustering and tracking in TV broadcast shows. This representation is referred to as the RBM vector. The experiments on the AGORA database show that in speaker clustering the RBM vectors gain a relative improvement of 12% in terms of Equal Impurity (EI). For the speaker tracking task, RBM vectors are used only in the speaker identification part, where the relative improvement in terms of Equal Error Rate (EER) is 11% and 7% using cosine and PLDA scoring, respectively. In the second proposal, DL approaches are proposed in order to increase the discriminative power of i-vectors in speaker verification. We have proposed the use of autoencoders in several ways.
Firstly, an autoencoder is used as pre-training for a Deep Neural Network (DNN) using a large amount of unlabeled background data; then, a DNN classifier is trained using relatively small labeled data. Secondly, an autoencoder is trained to transform i-vectors into a new representation in order to increase their discriminative power. The training is carried out based on nearest neighbor i-vectors, which are chosen in an unsupervised manner. The evaluation was performed on the VoxCeleb-1 database. The results show that using the first system we gain a relative improvement of 21% in terms of EER over i-vector/PLDA, whereas using the second system a relative improvement of 42% is gained. If we use the background data in the testing part, a relative improvement of 53% is gained. In the third proposal, we train a self-supervised end-to-end speaker verification system. The idea is to utilize impostor samples along with nearest neighbor samples to make client/impostor pairs in an unsupervised manner. The architecture is based on a Convolutional Neural Network (CNN) encoder, trained as a siamese network with two branch networks. Another network with three branches is also trained using a triplet loss, in order to extract unsupervised speaker embeddings. The experimental results show that both the end-to-end system and the speaker embeddings, despite being unsupervised, show a performance comparable to the supervised baseline. Moreover, their score combination can further improve the performance. The proposed approaches for speaker verification have their respective pros and cons. The best result was obtained using the nearest neighbor autoencoder, with the disadvantage of relying on background i-vectors in testing. On the contrary, the autoencoder pre-training for the DNN is not bound by this factor, but it is a semi-supervised approach. The third proposal is free from both these constraints and performs quite reasonably: it is a self-supervised approach and it does not require the background i-vectors in the testing phase.
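    The two building blocks mentioned in this abstract, unsupervised cosine scoring and a triplet training objective over nearest-neighbor and impostor samples, can be sketched as follows (a minimal NumPy illustration, not code from the thesis: the embeddings would come from an i-vector extractor or CNN encoder, and the 0.2 margin is a hypothetical value):

    ```python
    import numpy as np

    def cosine_score(enroll, test):
        """Unsupervised cosine scoring between two speaker embeddings:
        no speaker-labeled background data is needed."""
        enroll = enroll / np.linalg.norm(enroll)
        test = test / np.linalg.norm(test)
        return float(enroll @ test)

    def triplet_loss(anchor, positive, negative, margin=0.2):
        """Triplet objective on embeddings: pull the (assumed same-speaker)
        nearest-neighbor sample towards the anchor and push the impostor
        sample at least `margin` further away, in squared distance."""
        d_pos = float(np.sum((anchor - positive) ** 2))
        d_neg = float(np.sum((anchor - negative) ** 2))
        return max(0.0, d_pos - d_neg + margin)
    ```

    Training the encoder then amounts to minimizing the triplet loss over mined (anchor, nearest-neighbor, impostor) triples, after which verification trials are scored with the cosine similarity.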
  • PASCUAL BIOSCA, DANIEL: Design and performance analysis of advanced GNSS-R instruments back-end
    Author: PASCUAL BIOSCA, DANIEL
    Thesis file: (contact the Doctoral School to confirm you have a valid doctoral degree and to get the link to the thesis)
    Programme: DOCTORAL DEGREE IN SIGNAL THEORY AND COMMUNICATIONS
    Department: Department of Signal Theory and Communications (TSC)
    Mode: Normal
    Deposit date: 17/11/2020
    Deposit END date: 01/12/2020
    Thesis director: CAMPS CARMONA, ADRIANO JOSE | PARK, HYUK
    Committee:
         PRESIDENT: ZAVOROTNY, VALERY U.
         SECRETARI: BROQUETAS IBARS, ANTONI
         VOCAL NO PRESENCIAL: RODRÍGUEZ ÁLVAREZ, NEREIDA
    Thesis abstract: GNSS reflectometry (GNSS-R) is a set of techniques that uses GNSS signals reflected off the Earth's surface as signals of opportunity for remote sensing applications. In 1993, the European Space Agency (ESA) suggested using those signals as a tool to estimate sea height anomalies, in an attempt to detect tsunamis before they reach the shore. Since then, GNSS-R has been a subject of topical interest and has proven its feasibility for a diverse number of applications over land, ice and water bodies, from ground-based, airborne and space-borne instruments. This Ph.D. thesis has its roots in the ESA project PARIS-IoD, which aimed to study the feasibility of GNSS-R for sea altimetry from a space-borne instrument. Unfortunately, the project was canceled shortly after this thesis started. However, some of the questions that arose during the initial phase of the project have been the goals of this Ph.D. The thesis has two objectives. The first one is to solve some theoretical issues required for the development of the next generation of GNSS-R instruments. The second one is to participate in the development of an airborne GNSS-R instrument mimicking the one intended for PARIS-IoD, and to use it in field campaigns. The main feature of this instrument is that it has two dual-band (L1/L5) array antennas, with two beams per band that are analog-steered towards the desired satellite and reflection point. As for the theoretical objectives, this thesis investigates five topics. Firstly, the thesis gives closed-form expressions of the so-called Woodward Ambiguity Functions (WAFs) of the modern GNSS signals as a function of the receiver bandwidth. The motivation behind this work is that these equations can later be used in simulators in order to create reference models based on different physical magnitudes. Secondly, the thesis finds the optimum receiver bandwidth for each GNSS signal in terms of altimetric precision.
The idea behind this study is that using a large bandwidth produces sharper waveforms, which translates into a higher resolution; however, the thermal noise also increases with the bandwidth, so there is an optimum bandwidth that minimizes the altimetric error. Thirdly, the thesis studies the impact of sampling the GNSS signals with 1-level quantization on the GNSS-R observables, in terms of precision and of sensitivity to changes in the physical magnitudes. This work is important because, if 1-level samples were used, the hardware requirements in terms of FPGA resources, transmission rate, hard drive write speed and storage capacity would be reduced. Fourthly, the thesis discusses different architectures for real-time correlators in FPGAs compared to those in GPUs. Results show that GPUs can be used for real time, with the advantage of being much more flexible and easier to program. Finally, the thesis investigates the cross-talk phenomenon, i.e. interference between GNSS satellites in the iGNSS-R technique. Results show that this kind of interference is not a problem in space-borne missions, but it does have an impact on airborne and ground-based ones; in such cases, antenna arrays with large directivity should be used. Returning to the instrument design objective, this thesis has two main goals: firstly, to build the signal processing units inside the receivers and in the transmitter used to generate the calibration signal, and to develop the software to control both; secondly, to process the data from different airborne field campaigns using the GPU software mentioned above. The results have shown the feasibility of MIR for soil moisture estimation, sea altimetry, sea state, topography and water body/land transitions.
  • SALEK SHISHAVAN, FARZIN: From hypothesis testing of quantum channels to secret sharing
    Author: SALEK SHISHAVAN, FARZIN
    Thesis file: (contact the Doctoral School to confirm you have a valid doctoral degree and to get the link to the thesis)
    Programme: DOCTORAL DEGREE IN SIGNAL THEORY AND COMMUNICATIONS
    Department: Department of Signal Theory and Communications (TSC)
    Mode: Normal
    Deposit date: 12/11/2020
    Deposit END date: 26/11/2020
    Thesis director: RODRIGUEZ FONOLLOSA, JAVIER | WINTER, ANDREAS
    Committee:
         PRESIDENT: RENNER, RENATO
         SECRETARI: ACÍN DAL MASCHIO, ANTONIO
         VOCAL NO PRESENCIAL: WILDE, MARK M.
    Thesis abstract: The present thesis has three major thrusts. The first thrust presents a broad investigation of asymptotic binary hypothesis testing, when each hypothesis represents asymptotically many independent instances of a quantum channel. Unlike the familiar setting of quantum states as hypotheses, there is a fundamental distinction between adaptive and non-adaptive strategies with respect to the channel uses, and we introduce a number of further variants of the discrimination task by imposing different restrictions on the test strategies. The following results are obtained: (1) The first separation between adaptive and non-adaptive symmetric hypothesis testing exponents for quantum channels, which we derive from a general lower bound on the error probability for non-adaptive strategies. (2) We prove that for classical-quantum channels, adaptive and non-adaptive strategies lead to the same error exponents in both the symmetric (Chernoff) and asymmetric (Hoeffding) settings. (3) We prove that, in some sense generalizing the previous statement, for general channels adaptive strategies restricted to classical feed-forward and product-state channel inputs are not superior to non-adaptive product-state strategies. (4) As an application of our findings, we address the discrimination power of quantum channels and show that neither adaptive strategies nor input quantum memory can increase the discrimination power of an entanglement-breaking channel. In the second thrust, we construct new protocols for the tasks of converting noisy multipartite quantum correlations into noiseless classical and quantum ones using local operations and classical communications (LOCC). For the former, known as common randomness (CR) distillation, two new lower bounds are obtained. Our proof relies on a generalization of communication for omniscience (CO).
Our contribution here is a novel simultaneous decoder for the compression of correlated classical sources by random binning with quantum side information at the decoder. For the latter, we derive two new lower bounds on the rate at which Greenberger-Horne-Zeilinger (GHZ) states can be asymptotically distilled from any given pure state under LOCC. Our approach consists in "making coherent" the proposed CR distillation protocols and recycling resources. The final thrust studies communication over a single-serving two-receiver quantum broadcast channel with a legitimate receiver and an eavesdropper. We find inner and outer boundary regions for the trade-off between common, individualized and confidential messages, as well as the rate of the dummy randomness used for obfuscation. As applications, we find one-shot capacity bounds on the simultaneous transmission of classical and quantum information and re-derive a number of asymptotic results in the literature.
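For context on the error exponents named in the abstract: in the familiar setting of quantum states (rather than channels), the optimal symmetric error exponent for discriminating n copies of ρ versus σ is given by the quantum Chernoff bound. The following is the standard statement for states, not a result of this thesis:

```latex
% Quantum Chernoff bound for discriminating n copies of states \rho, \sigma:
% the minimal averaged error probability decays exponentially,
P_e^{(n)} \sim e^{-n\,\xi_{\mathrm{C}}},
\qquad
\xi_{\mathrm{C}}(\rho,\sigma)
  = -\log \min_{0 \le s \le 1} \operatorname{Tr}\!\left[\rho^{s}\sigma^{1-s}\right].
```

The thesis asks how such exponents behave when the hypotheses are channels, where adaptive strategies can in principle outperform non-adaptive ones.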

DOCTORAL DEGREE IN STATISTICS AND OPERATIONS RESEARCH

  • BOFILL ROIG, MARTA: Statistical methods and software for clinical trials with binary and survival endpoints. Efficiency, sample size and two-sample comparison.
    Author: BOFILL ROIG, MARTA
    Thesis file: (contact the Doctoral School to confirm you have a valid doctoral degree and to get the link to the thesis)
    Programme: DOCTORAL DEGREE IN STATISTICS AND OPERATIONS RESEARCH
    Department: Department of Statistics and Operations Research (EIO)
    Mode: Normal
    Deposit date: 20/11/2020
    Deposit END date: 04/12/2020
    Thesis director: GÓMEZ MELIS, GUADALUPE
    Committee:
         PRESIDENT: OCAÑA REBULL, JORDI
         SECRETARI: PORTA BLEDA, NÚRIA
         VOCAL: JAKI, THOMAS
         VOCAL: LAMARCA CASADO, ROSA
         VOCAL: POSCH, MARTIN
    Thesis abstract: Defining the scientific question is the starting point for any clinical study. However, even though the main objective is generally clear, how this is addressed is not usually straightforward. Clinical studies very often encompass several questions, defined as primary and secondary hypotheses, and measured through different endpoints. In clinical trials with multiple endpoints, composite endpoints, defined as the union of several endpoints, are widely used as primary endpoints. The use of composite endpoints is mainly motivated by the expectation that they will increase the number of observed events and capture more information than considering only one endpoint. Besides, it is generally thought that the power of the study will increase if composite endpoints are used, and that the treatment effect on the composite endpoint will be similar to the average effect of its components. However, these assertions are not necessarily true, and the design of a trial with a composite endpoint might be difficult. Different types of endpoints might be chosen for different research stages. This is the case for cancer trials, where short-term binary endpoints based on the tumor response are common in early-phase trials, whereas overall survival is the gold standard in late-phase trials. In recent years, there has been a growing interest in designing seamless trials with both an early response outcome and later event times. Considering these two endpoints together could provide a wider characterization of the treatment effect and may also reduce the duration of clinical trials and their costs. In this thesis, we provide novel methodologies to design clinical trials with composite binary endpoints and to compare two treatment groups based on binary and time-to-event endpoints. In addition, we present the implementation of the methodologies by means of different statistical tools.
Specifically, in Chapter 2, we propose a general strategy for sizing a trial with a composite binary endpoint as primary endpoint based on previous information on its components. In Chapter 3, we present the ARE (Asymptotic Relative Efficiency) method to choose between a composite binary endpoint or one of its components as the primary endpoint of a trial. In Chapter 4, we propose a class of two-sample nonparametric statistics for testing the equality of proportions and the equality of survival functions. In Chapter 5, we describe the software developed to implement the methods proposed in this thesis. In particular, we present CompARE, a web-based tool for designing clinical trials with composite endpoints and its corresponding R package, and the R package SurvBin in which we have implemented the class of statistics presented in Chapter 4. We conclude this dissertation with general conclusions and some directions for future research in Chapter 6.
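As a rough illustration of the kind of design calculation involved (a minimal sketch only, not the methodology of the thesis or of CompARE; the function names and the simple correlation-based model for the joint event probability are assumptions of this example), one can combine the probability of a composite of two binary endpoints with the classical two-proportion sample-size formula:

```python
from math import sqrt
from statistics import NormalDist

def composite_probability(p1: float, p2: float, rho: float) -> float:
    """P(E1 or E2) for two binary endpoints with marginal event
    probabilities p1, p2 and Pearson correlation rho between them."""
    p_both = p1 * p2 + rho * sqrt(p1 * (1 - p1) * p2 * (1 - p2))
    return p1 + p2 - p_both

def sample_size_per_arm(p_control: float, p_treatment: float,
                        alpha: float = 0.05, power: float = 0.80) -> float:
    """Classical normal-approximation sample size for comparing
    two independent proportions with a two-sided test."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    var = p_control * (1 - p_control) + p_treatment * (1 - p_treatment)
    return (z_a + z_b) ** 2 * var / (p_control - p_treatment) ** 2

# Composite of two components with 15% and 20% event rates, correlation 0.3:
p_composite = composite_probability(0.15, 0.20, 0.3)  # ~0.277
```

Positively correlated components make the composite less frequent than under independence (0.32 in this example), which is one reason naive power calculations for composite endpoints can be misleading.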
  • DUARTE LÓPEZ, ARIEL: Zipf Extensions and their applications for modeling the degree sequences of real networks.
    Author: DUARTE LÓPEZ, ARIEL
    Thesis file: (contact the Doctoral School to confirm you have a valid doctoral degree and to get the link to the thesis)
    Programme: DOCTORAL DEGREE IN STATISTICS AND OPERATIONS RESEARCH
    Department: Department of Statistics and Operations Research (EIO)
    Mode: Normal
    Deposit date: 25/11/2020
    Deposit END date: 09/12/2020
    Thesis director: PÉREZ CASANY, MARTA
    Committee:
         PRESIDENT: GINEBRA MOLINS, JOSEP
         SECRETARI: DAUNIS ESTADELLA, JOSEP
         VOCAL: GÓMEZ DÉNIZ, EMILIO
    Thesis abstract: The Zipf distribution, also known as the discrete Pareto distribution, attracts considerable attention because it helps describe skewed data from many natural as well as man-made systems. Under the Zipf distribution, the frequency of a given value is a power function of its size. Consequently, when plotting the frequencies versus the sizes in log-log scale for data following this distribution, one obtains a straight line. Nevertheless, for many data sets the linearity is only observed in the tail, and when this happens the Zipf is only fitted to values larger than a given threshold. This procedure implies a loss of information and, unless one is only interested in the tail of the distribution, it evidences the need for more flexible alternative distributions. The work conducted in this thesis revolves around four bi-parametric extensions of the Zipf distribution. The first two belong to the class of Random Stopped Extreme distributions. The third extension is the result of applying the concept of Poisson-Stopped-Sum to the Zipf distribution and the last one is obtained by including an additional parameter in the probability generating function of the Zipf. An interesting characteristic of three of the models presented is that they allow for a parameter interpretation that gives some insight into the mechanism that generates the data. In order to analyze the performance of these models, we have fitted the degree sequences of real networks from different areas, such as social networks, protein interaction networks and collaboration networks. The fits obtained have been compared with those obtained with other bi-parametric models such as the Zipf-Mandelbrot, the discrete Weibull or the negative binomial. To facilitate the use of the models presented, they have been implemented in the zipfextR package, available in the Comprehensive R Archive Network.
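To make the power-law statement concrete (an illustrative sketch only; the thesis works with the zipfextR R package, whereas the truncated support and grid-search maximum-likelihood fit below are assumptions of this example): under a Zipf(α) model, p(k) ∝ k^(−α), so log p(k) is linear in log k with slope −α.

```python
import math
import random

KMAX = 10_000  # truncation point for the support {1, ..., KMAX}

def zipf_pmf(k: int, alpha: float) -> float:
    """Truncated Zipf pmf: p(k) proportional to k**(-alpha)."""
    z = sum(i ** -alpha for i in range(1, KMAX + 1))  # normalizing constant
    return k ** -alpha / z

def sample_zipf(alpha: float, n: int, seed: int = 42) -> list[int]:
    """Draw n values from the truncated Zipf(alpha) distribution."""
    rng = random.Random(seed)
    weights = [i ** -alpha for i in range(1, KMAX + 1)]
    return rng.choices(range(1, KMAX + 1), weights=weights, k=n)

def fit_alpha(data: list[int]) -> float:
    """Maximum-likelihood estimate of alpha via a simple grid search."""
    s = sum(math.log(k) for k in data)  # sufficient statistic
    def loglik(a: float) -> float:
        z = sum(i ** -a for i in range(1, KMAX + 1))
        return -a * s - len(data) * math.log(z)
    grid = [1.1 + 0.01 * i for i in range(300)]
    return max(grid, key=loglik)

data = sample_zipf(alpha=2.0, n=2000)
alpha_hat = fit_alpha(data)  # should land near the true value 2.0
```

In log-log scale, log p(k) = −α log k − log Z is a straight line with slope −α, which is the linearity the abstract refers to; the bi-parametric extensions studied in the thesis relax exactly this rigidity outside the tail.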

DOCTORAL DEGREE IN STRUCTURAL ANALYSIS

  • CORNEJO VELÁZQUEZ, ALEJANDRO: A fully Lagrangian formulation for fluid-structure interaction between free-surface flows and multi-fracturing solids
    Author: CORNEJO VELÁZQUEZ, ALEJANDRO
    Thesis file: (contact the Doctoral School to confirm you have a valid doctoral degree and to get the link to the thesis)
    Programme: DOCTORAL DEGREE IN STRUCTURAL ANALYSIS
    Department: (DECA)
    Mode: Normal
    Deposit date: 17/11/2020
    Deposit END date: 01/12/2020
    Thesis director: OÑATE IBAÑEZ DE NAVARRA, EUGENIO | ZARATE ARAIZA, JOSE FRANCISCO
    Committee:
         PRESIDENT: CERVERA RUIZ, LUIS MIGUEL
         SECRETARI: LARESE DE TETTO, ANTONIA
         VOCAL NO PRESENCIAL: PEREGO, UMBERTO
    Thesis abstract: It is well known that in civil engineering structures are designed so that they remain, whenever possible, in an elastic regime and with their mechanical properties intact. The truth is that in reality there are uncertainties, either in the execution of the work (geometric errors or material quality) or during its subsequent use (loads not contemplated, or whose value has been estimated incorrectly), that can lead to the collapse of the structure. This is why the study of the failure of structures is inherently interesting: once it is known, the design can be improved so that failure is as little catastrophic as possible, or so that the maximum energy is dissipated before collapse. Another area of application of fracture mechanics is that of processes whose interest lies in the breakage or cracking of a medium. Within mining engineering we can enumerate several processes of this nature, namely: hydraulic fracturing (fracking), blasting for tunnels and blasting of slopes in open-pit mines, among others. Equally relevant is the analysis of structural failures due to natural disasters, such as large floods or even tsunamis impacting protection structures such as walls or dikes. In this work, numerous implementations and studies have been made in relation to the mentioned processes. That said, the objective of this thesis is to develop an advanced numerical method capable of simulating multi-fracture processes in materials and structures. The general approach of the proposed method can be seen in various publications by the author and directors of this thesis. This methodology is meant to cover the maximum possible spectrum of engineering applications.
For this purpose, a coupled formulation of the Finite Element Method (FEM) and the Discrete Element Method (DEM) is used, which employs an isotropic damage constitutive model to simulate the initial degradation of the material; once the strength of the material has been completely exhausted, those Finite Elements (FE) are removed from the FEM mesh and a set of Discrete Elements (DE) are generated at their nodes. In addition to ensuring the conservation of the mass of the system, these DE prevent the indentation between the fissure planes thanks to the frictional repulsive forces calculated by the DEM, if any. Additionally, in this thesis it has been studied how the proposed coupled method, named FEM-DEM, together with the smoothing of stresses based on the super-convergent patch, is able to obtain reasonably mesh-independent results; but, as one can imagine, the crack width is directly related to the size of the elements that have been removed. This favours the inclusion of an adaptive remeshing technique that refines the mesh where it is required (according to the Hessian of a nodal indicator of interest), thus improving the discretization quality of the crack obtained and thereby optimizing the simulation cost. In this sense, the procedures for mapping nodal and internal variables, as well as the calculation of the nodal variable of interest, are discussed. As far as the studies of natural disasters are concerned, especially those related to free-surface water flows such as tsunamis, one more level of coupling between the aforementioned FEM-DEM method and a Computational Fluid Dynamics (CFD) formulation commonly referred to as the Particle Finite Element Method (PFEM) has been implemented. With this strongly coupled formulation, many cases of wave impacts and fluid flows against solid structures such as walls and dikes, among others, have been simulated.

DOCTORAL DEGREE IN THEORY AND HISTORY OF ARCHITECTURE

  • VILLA CARRERO, JUAN MANUEL: Obra Seminal de Anne Griswold Tyng (1951-1953) y su relación con Louis Isadore Kahn: La búsqueda por integrar espacio y estructura a partir de la geometría de la materia
    Author: VILLA CARRERO, JUAN MANUEL
    Thesis file: (contact the Doctoral School to confirm you have a valid doctoral degree and to get the link to the thesis)
    Programme: DOCTORAL DEGREE IN THEORY AND HISTORY OF ARCHITECTURE
    Department: (THATC)
    Mode: Normal
    Deposit date: 20/11/2020
    Deposit END date: 04/12/2020
    Thesis director: PIZZA DE NANNO, ANTONIO
    Committee:
         DIRECTOR: PIZZA DE NANNO, ANTONIO
         PRESIDENT: JUÁREZ CHICOTE, ANTONIO LUIS
         SECRETARI: SANZ ESQUIDE, JOSE ANGEL
         VOCAL NO PRESENCIAL: DIAS SANTOS PEDROSA, PATRÍCIA ALEXANDRA
    Thesis abstract: The emergence of complex phenomena in our time has forced creators to reconnect knowledge in design, since at present most of it follows a fragmentary and hyper-specialized inertia. Because of this, there is nowadays an interest in architecture in hidden or little-known historic figures, related to science and technology, who for years looked for design answers in the interconnection of different dimensions of reality, especially in conjunction with models of generative systems. This condition, or problem of daily practice, allows us to approach more general topics around the way man comes into contact with ideas and how he materializes them. In this case, the specific interest lies in how this happened in the architecture of the North American Northeast, or more exactly in the North American derivations of European theories, around the middle of the 20th century: the moment in which there was a traffic of figures of science and technology between Europe and North America, which nurtured architecture. Specifically, this text is interested in the figure of the architect A. G. Tyng, who maintained a strong professional and personal relationship with L. I. Kahn, one of the most notable figures of twentieth-century architecture. In this way, the initial concern was: how did she reconnect knowledge, and how did she materialize her seminal work between 1951 and 1953? The objective of this thesis is thus to decipher the set of norms and rules that make up the practical-theoretical world of A. G. Tyng, through the exegesis of her seminal work between 1951 and 1953 and her special relationship with L. I. Kahn. The answer to our research question was constructed from a qualitative, descriptive-critical view, in which inductive inference, abduction and methodological empiricism prevailed.
The physical description was privileged in this document thanks to its presence in the present, which allowed us to confront the theory with the phenomena as manifested, as well as their comprehension and internal laws. Coupled with the above, it is important to highlight that this thesis approached the subject from a different angle than usual and, at the same time, opened a space of historiographical experimentation where the hypotheses of the great histories were tested. The results of the study indicate how the ideal of progress, coupled with science and technology in the mid-20th-century United States, fuelled a traffic of ideas between the professional and educational worlds. This generated a readjustment of the reductive and totalizing architectural models dominant at that time, a fact that guided A. G. Tyng in the process of articulating knowledge fragmented into disciplines or fields towards interdisciplinary and transdisciplinary reflexive models close to generative systems; namely, it helped her in her search for the structures of tomorrow. In conclusion, this process contributed to the construction of a new critical path for architecture and provided evidence to rearrange the history of L. I. Kahn, one of the great figures of architecture. It also left us the bases of the visionary theories of A. G. Tyng, which represent an invaluable collection for research interested in the affinity to numbers and their similarity with computer-aided design logics, or with design close to the scientific paradigm or complex thought. Keywords: Tyng, Kahn, Seminal Work, Geometry, Science and technology, Structure.

DOCTORAL DEGREE IN THERMAL ENGINEERING

  • PONT VILCHEZ, ARNAU: A new grey area mitigation technique for DES. Theory and assessment.
    Author: PONT VILCHEZ, ARNAU
    Thesis file: (contact the Doctoral School to confirm you have a valid doctoral degree and to get the link to the thesis)
    Programme: DOCTORAL DEGREE IN THERMAL ENGINEERING
    Department: Department of Heat Engines (MMT)
    Mode: Normal
    Deposit date: 23/11/2020
    Deposit END date: 07/12/2020
    Thesis director: TRIAS MIQUEL, FRANCESC XAVIER | OLIVA LLENA, ASENSIO
    Committee:
         PRESIDENT: KOZUBSKAYA, TATIANA
         SECRETARI: RIGOLA SERRANO, JOAQUIM
         VOCAL NO PRESENCIAL: MOCKETT, CHARLES
    Thesis abstract: The use of hybrid RANS-LES methods has become widespread during the last decade as an interesting approach for covering the gap between RANS and LES turbulence models in terms of both computational resources and degree of modelling; in particular, for those situations where the flow unsteadiness needs to be well captured, or those flow configurations where RANS has been shown to be unreliable, such as massive flow separation. Within the family of hybrid models, Delayed Detached-Eddy Simulation (DDES) stands out due to its user-friendly non-zonal approach and its proven success in several applications. Despite their benefits, these models usually suffer from a slow RANS-to-LES transition (named the Grey Area), resulting in unphysical delays of critical flow instabilities in sensitive regions, such as Kelvin-Helmholtz structures in free shear layers. This delay in the triggering process can significantly affect the flow dynamics downstream, as well as those kinds of physics that require high-quality unsteady turbulent motion, such as fluid-structure interaction and computational aeroacoustics. In this regard, the present thesis aims to perform a consistent study of different techniques for mitigating such a delay, as well as to present a promising, easy-to-apply new strategy. Due to the lack of a publicly available, highly reliable data set, a Direct Numerical Simulation (DNS) of a Backward-Facing Step (BFS) at Re_τ = 395 and expansion ratio (ER) 2 has been carried out during the first part of this thesis for comparison purposes. In contrast to the rest of the reference cases, it provides a detailed view of the triggering and feeding processes of the flow instabilities through the free shear layer. As a result, the thesis provides a highly reliable data set, publicly available on the internet, as well as a competitive new technique for mitigating the Grey Area shortcoming. The thesis content is arranged as follows.
In the first chapter, a general overview of the different approaches for modelling turbulence is presented, emphasizing the importance of hybrid RANS-LES strategies for industrial applications. In the second chapter, the DNS of the BFS at Re_τ = 395 and ER = 2 is explained in detail. Special attention is paid to the triggering of the flow instabilities in the free shear layer downstream of the step edge. The third chapter describes the new techniques proposed in this thesis, based on the LES literature, for mitigating the Grey Area shortcoming. These are tested in the fourth chapter, which presents a consistent study of the different methodologies for addressing the unphysical delay of the shear-layer instabilities. Finally, the last chapter gathers the main conclusions of the overall thesis and defines possible further lines of work.

ERASMUS MUNDUS DOCTORAL DEGREE IN ENVIRONOMICAL PATHWAYS FOR SUSTAINABLE ENERGY SERVICES

  • WEGENER, MORITZ BENJAMIN GUSTAVE: Island-based Polygeneration Systems
    Author: WEGENER, MORITZ BENJAMIN GUSTAVE
    Thesis file: (contact the Doctoral School to confirm you have a valid doctoral degree and to get the link to the thesis)
    Programme: ERASMUS MUNDUS DOCTORAL DEGREE IN ENVIRONOMICAL PATHWAYS FOR SUSTAINABLE ENERGY SERVICES
    Department: Institute of Energy Technologies (INTE)
    Mode: Change of supervisor
    Deposit date: 17/11/2020
    Deposit END date: 01/12/2020
    Thesis director: ISALGUE BUXEDA, ANTONIO | MALMQUIST, ANDERS | MARTIN, ANDREW
    Committee:
         PRESIDENT: NORD, NATASA
         SECRETARI: PETRAKOPOULOU, FONTINA
         VOCAL: ROSENDAHL, LASSE
    Thesis abstract: The colossal risks and challenges posed by climate change require innovative solutions that must fulfil energy service demands sustainably. The concept of small-scale, biomass-based polygeneration (SBP) is one such technological approach, which optimizes locally supplied fuels to provide several energy services such as electricity, heating, cooling, potable water and/or bio-chemical products. By presenting chosen SBP systems and models employed in various socio-geographic locations, in particular distributed applications, the thesis identifies benefits as well as drawbacks of the SBP concept and aims to promote its wider usage in the field. Because a multitude of technologies can be applied in polygeneration system design, the thesis starts with a thorough review of this highly complex and rapidly evolving field, where the relevant literature is presented and assimilated. Based on this review, several models have been created for various solar-assisted SBP systems. Firstly, a small-scale Combined Cooling, Heating and Power (CCHP) system based on biomass gasification has been investigated for a hotel resort on one of the Andaman Islands, India. Apart from economic and environmental superiority compared to a fossil-fuel reference system, the study also expanded beyond technological aspects by adding a socio-political analysis of the benefits and drawbacks of the system for the entire island community. In the second study, a novel control algorithm was devised for a biogas-based polygeneration system generating electricity and potable water for a rural off-grid village in El Pando, Bolivia. It was found that the proposed system could lead to significant cost and emissions reductions, paired with greater energy autonomy. In the third study, an optimization model for a combined gasification-based CCHP/Heat Pump (HP) system is presented for a tourist facility in Barcelona, considering various climate scenarios.
The study reveals that the system design is only slightly affected by future changes in climate, and that the CCHP/HP system shows only a moderate economic performance but still considerable CO2-savings potential. The overall findings of these studies reveal that the economic feasibility of SBP systems depends greatly not just on their inherent design but also on their location. However, all proposed polygeneration systems could lower emissions significantly, while excelling in energy efficiency as well as adaptability towards service demands and other technologies. The presented studies contribute to the state of the art by adding innovative polygeneration system designs, proposing new modelling approaches and subsequent models including SBP-system-enhancing technologies, as well as by investigating the effects of geographical location and climate change on the system design process.

ERASMUS MUNDUS DOCTORAL DEGREE IN SIMULATION IN ENGINEERING AND ENTREPRENEURSHIP DEVELOPMENT

  • OLIVEIRA DE MATOS REIS, JONATHA: Error Estimation and Adaptivity for PGD Solutions
    Author: OLIVEIRA DE MATOS REIS, JONATHA
    Thesis file: (contact the Doctoral School to confirm you have a valid doctoral degree and to get the link to the thesis)
    Programme: ERASMUS MUNDUS DOCTORAL DEGREE IN SIMULATION IN ENGINEERING AND ENTREPRENEURSHIP DEVELOPMENT
    Department: (DECA)
    Mode: Change of supervisor
    Deposit date: 20/11/2020
    Deposit END date: 04/12/2020
    Thesis director: DIEZ MEJIA, PEDRO | MOITINHO DE ALMEIDA, JOSE PAULO BAPTISTA | ZLOTNIK MARTINEZ, SERGIO
    Committee:
         PRESIDENT: CHAMOIN, LUDOVIC
         SECRETARI: SARRATE RAMOS, JOSE
         VOCAL: GONZALEZ IBAÑEZ, DAVID
    Thesis abstract: A significant part of the current research effort in computational mechanics is focused on analyzing and handling large amounts of data, exploring high-dimensional parametric spaces and providing answers to increasingly complex problems in short or real time. This alludes to concepts (like digital twins) and technologies (like machine learning) to be considered in combination with classical computational models. Reduced Order Models (ROM) contribute to addressing these challenges by reducing the number of degrees of freedom of the models, suppressing redundancies in the description of the system to be modeled and simplifying the representation of the mathematical objects quantifying the physical magnitudes. Among these reduced order models, the Proper Generalized Decomposition (PGD) is a powerful tool, as it provides solutions to parametric problems without being affected by the "curse of dimensionality", giving explicit expressions of the parametric solution, computed a priori, which makes it well suited to providing real-time responses. The PGD is a well-established reduced order method, but assessing the accuracy of its solutions is still a relevant challenge, particularly when seeking guaranteed bounds on the error of the solutions it provides. There are several approaches to analyzing the errors of approximate solutions, but the only one that provides computable and guaranteed error bounds is dual analysis. The idea behind dual analysis is to use a pair of complementary solutions (one compatible, the other equilibrated) for a given problem, and to use the difference between these solutions, which bounds their errors.
Dual analysis is also effective in driving mesh-adaptivity refinement processes, as it provides information on the contribution of the elements to the error, in either a global or a local framework. In this work we deal with finite element solutions for solid mechanics problems, computing compatible and equilibrated PGD solutions and using them in the context of dual analysis. The PGD approximations are obtained with an algebraic approach, leading to separable solutions that can be manipulated for an efficient computational implementation. We use these solutions to obtain global and local error bounds, and use these bounds to drive an adaptivity process. The meshes obtained through these refinements provide solutions with errors significantly lower than those obtained using uniform refinement.
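The claim that the difference between complementary solutions bounds their errors is the classical Prager-Synge (hypercircle) identity; a sketch in the usual notation of linear elasticity (a standard result, not taken from the thesis) is:

```latex
% Prager-Synge: if u_h is a compatible (displacement) approximation and
% \sigma^* is a statically equilibrated stress field, then, in the
% complementary energy norm,
\|\sigma - \sigma(u_h)\|^2 \;+\; \|\sigma - \sigma^*\|^2
  \;=\; \|\sigma^* - \sigma(u_h)\|^2 ,
```

so the computable distance between the two approximate solutions bounds each of the two unknown errors, which is what makes the bounds guaranteed.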

Last update: 26/11/2020 06:10:03.