Computational and Applied Mathematics Seminar
Location and Time
University of Wyoming, Ross Hall 247; Fridays from 3:10 to 4:00 PM (unless otherwise stated).
For Spring 2016, the speakers are as follows:
Date                    Speaker                             From/Note
26-Jan (Tue.)           Michael Jolly                       Indiana University
29-Jan                  Tim Brewster and Craig C. Douglas   University of Wyoming
05-Feb                  Edriss S. Titi                      Texas A&M and Weizmann Institute
10-Feb (Wed., 4:00 PM)  Gerard T. Schuster                  KAUST and University of Wyoming
04-Mar                  Hakima Bessaih                      University of Wyoming
25-Mar                  Man-Chung Yeung                     University of Wyoming
01-Apr                  Myron Allen                         University of Wyoming
15-Apr                  Raghu Raj                           NCAR and University of Wyoming
22-Apr                  Jan Mandel                          University of Colorado at Denver
29-Apr                  Zachary J. Lebo                     University of Wyoming
06-May                  Tareq Dalgamoni                     University of Wyoming
If you would like to speak next academic year, please contact me by email. I am looking for speakers for Fall 2017!
The schedule, titles, and abstracts from Fall 2015 are here.
Titles and Abstracts
January 26 (Tuesday)
Determining Forms and Data Assimilation
Prof. Michael Jolly, Indiana University
A determining form for a dissipative PDE is an ODE in a certain trajectory space in which the solutions on the global attractor of the PDE are readily recognized. It is an ODE in the true sense of defining a vector field which is (globally) Lipschitz. We discuss two types of determining forms: one where solutions on the global attractor of the PDE are traveling waves, and another where they are steady states. Each determining form is related to a certain approach to data assimilation, i.e., the injection of a coarse-grain time series into the model in order to recover the matching full solution. Applications have been made to the 2D incompressible Navier-Stokes, damped-driven nonlinear Schrödinger, damped-driven Korteweg-de Vries, and surface quasi-geostrophic equations.
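The data-assimilation mechanism mentioned above is commonly realized by a feedback (nudging) term. As a schematic illustration only (notation assumed here, not necessarily the speaker's exact formulation): for a dissipative PDE \( u_t = F(u) \) whose coarse-grain observations are given by an interpolant operator \( I_h \), one evolves the auxiliary system

\[ \frac{dv}{dt} = F(v) - \mu \left( I_h(v) - I_h(u) \right), \qquad \mu > 0, \]

where \( I_h(u) \) is the observed coarse-grain time series. For a suitable relaxation parameter \( \mu \) and observation resolution \( h \), one obtains \( \| v(t) - u(t) \| \to 0 \), so the matching full solution is recovered from coarse data alone.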
January 29
The Advanced Research Computing Cluster and Its Recent, Improved Offerings
Tim Brewster, Information Technology, University of Wyoming
The Mathematics Department Web Pages: What Should Be There and Is It Possible?
Prof. Craig Douglas, School of Energy Resources and Department of Mathematics, University of Wyoming
The first half will be a description of what ARCC can provide researchers and a chance to ask Tim variations of a question that President Kennedy posed to America in January 1961: "Ask not what ARCC can do for you, but what can you do with ARCC?" No disrespect is intended to President Kennedy's memory; he expressed an ideal that the seminar lives by today.
The second half will be a demonstration of how we can better represent the math department on the web and what the limitations are. The audience can ask for whatever it pleases to be put on the web site, even at the topmost level. You will see whether it can be done and at what cost.
February 5
An Algorithm for Advancing Slow Features in Fast-Slow Systems without Scale Separation - A Young Measure Approach
Prof. Edriss S. Titi, Texas A&M University and The Weizmann Institute of Science
In the first part of the talk, and in order to set the stage, we will offer a multi-scale and averaging strategy to compute the solution of a singularly perturbed system when the fast dynamics oscillates rapidly; namely, the fast dynamics forms cycle-like limits which advance along with the slow dynamics. We describe the limit as a Young measure with values being supported on the limit cycles, averaging with respect to which induces the equation for the slow dynamics. In particular, computing the tube of the limit cycles establishes a good approximation for arbitrarily small singular parameters. We will demonstrate this by exhibiting concrete numerical examples.
In the second part of the talk we will examine singularly perturbed systems which may not possess a natural split into fast and slow state variables. Once again, our approach depicts the limit behavior as a Young measure with values being invariant measures of the fast contribution to the flow. These invariant measures are drifted by the slow contribution to the flow. We keep track of this drift via slowly evolving observables. Averaging equations for the latter lead to computation of characteristic features of the motion and the location of the invariant measures. To demonstrate our ideas computationally, we will present some numerical experiments involving a system derived from a spatial discretization of a Korteweg-de Vries-Burgers type equation, with fast dispersion and slow diffusion.
This is a joint work with Z. Artstein, W. Gear, I. Kevrekidis, J. Linshiz and M. Slemrod.
February 10 (Wednesday, 4:00, Encana Auditorium, Energy Innovation Center)
Hominid Seismology at Olduvai Gorge
Prof. Gerard T. Schuster, Earth Science and Engineering, King Abdullah University of Science & Technology (KAUST) and Adjunct Professor of Mathematics University of Wyoming
In 1959 Mary Leakey discovered the first robust Australopithecus (1.8 mya) fossil in Olduvai Gorge, Tanzania. Its discovery radically altered accepted ideas about the time scale of human evolution. In addition, the Leakeys found more than 2,000 stone tools and lithic flakes, most of which they classified as Oldowan (of Olduvai) tools. These finds supported the radical notion that the cradle of humankind was in Africa, not southeast Asia as was previously believed. Since then, hominid fossils as old as 4 mya have been found along the East African Rift zone. Unfortunately, our picture of the extent and age of the paleoenvironment of Olduvai Gorge is incomplete because, until now, only a few boreholes and some outcrops in Olduvai Gorge have been studied.
In the summer of 2015, for the first time, KAUST geophysicists with the collaboration of paleoanthropologists from Indiana University and a geologist from Leeds University used seismic experiments to image the Olduvai Basin. Both reflection imaging and seismic tomography reveal the depth and extent of the basin and the distribution of tectonic faults that separate one paleoenvironment from the next. This allows us to significantly expand our understanding of how our earliest ancestors lived around the Olduvai region. Our experiments represent the pioneering use of geophysical technology in assisting paleoanthropologists in their study of hominid sites.
March 4
Data Assimilation with Random Errors
Prof. Hakima Bessaih, Mathematics Department, University of Wyoming
We analyse the performance of a data-assimilation algorithm based on a linear feedback control when used with observational data that contains measurement errors. Our model problem consists of dynamics governed by the two-dimensional incompressible Navier–Stokes equations, observational measurements given by finite volume elements or nodal points of the velocity field, and measurement errors represented by stochastic noise. Under these assumptions, the data-assimilation algorithm consists of a system of stochastically forced Navier–Stokes equations. The main result provides explicit conditions on the observation density (resolution) which guarantee explicit asymptotic bounds, as time tends to infinity, on the error between the approximate solution and the actual solution corresponding to these measurements, in terms of the variance of the noise in the measurements.
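Schematically (a sketch with assumed notation, not the precise formulation in the speaker's work): writing the 2D Navier–Stokes equations in functional form \( du/dt + \nu A u + B(u,u) = f \), the algorithm evolves

\[ dv + \left[ \nu A v + B(v,v) - f \right] dt = -\mu \left( I_h(v) - I_h(u) \right) dt + \mu \, dW_t , \]

where \( I_h \) is the interpolant built from the finite volume elements or nodal values of the velocity field, \( \mu > 0 \) is the feedback (nudging) parameter, and the \( dW_t \) term models the measurement noise entering through the feedback. The asymptotic bound on the error \( v - u \) is then expressed in terms of the variance of this noise.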
If time permits, I will also discuss a second algorithm.
This is a joint work with Eric Olson and Edriss Titi.
March 25
On Solving Ill-Conditioned Linear Systems
Prof. Man-Chung Yeung, Mathematics Department, University of Wyoming
This talk presents the first results combining two theoretically sound methods (spectral projection and multigrid methods) to attack ill-conditioned linear systems. Our preliminary results show that the proposed algorithm, applied to a Krylov subspace method, takes far fewer iterations to converge when solving an ill-conditioned problem downloaded from a popular online sparse matrix collection.
April 1
FRAMEWORK: Front Range Applied Mathematics Exchanges and Workshops
Prof. Myron Allen, Mathematics Department, University of Wyoming
This presentation will introduce a new NSF grant awarded to UW's Mathematics Department and the Department of Applied Mathematics and Statistics at the Colorado School of Mines. The NSF awarded the grant under its Enriched Doctoral Training initiative. The idea of the project is for the two departments to cooperate in several aspects of their Ph.D. programs, providing opportunities for doctoral students to learn skills that are needed in nonacademic jobs. There are four key elements of the project:
- Summer internships
- Annual end-of-summer workshops
- A course exchange program for first-year graduate courses
- Targeted recruitment of new graduate students
April 15
An Approach to Parallelization of the SIFT Algorithm on GPUs by Maximizing Occupancy
Raghu Raj, NCAR and Atmospheric Science Department, University of Wyoming
The Scale-Invariant Feature Transform (SIFT) algorithm is a widely used computer vision algorithm that detects and extracts local feature descriptors from images. SIFT is computationally intensive, making it infeasible for a single-threaded implementation to extract local feature descriptors from high-resolution images in real time. In this presentation, an approach to parallelization of the SIFT algorithm is demonstrated using NVIDIA's general-purpose graphics processing units (GPGPUs). The parallelization design for SIFT on GPUs is divided into two stages: a) program design, generic design strategies that focus on the data; and b) implementation design, architecture-specific design strategies that focus on optimally using GPU resources for maximum occupancy. A significant decrease in average computational time is achieved by increasing memory latency hiding, eliminating branches, and blocking data. Furthermore, it is observed via the Paraver tools that our approach to parallelization, while optimizing for maximum occupancy, allows the GPU to execute the memory-bound SIFT algorithm at optimal performance.
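As background, the scale-space stage of SIFT (which dominates the computation and parallelizes naturally, since every pixel of every layer is independent) can be sketched in a few lines of NumPy/SciPy. This is an illustrative sketch of the standard algorithm, not the speaker's GPU code; the function names and parameter defaults here are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_pyramid(image, num_scales=4, sigma0=1.6, k=2 ** 0.5):
    """Build one octave of the difference-of-Gaussians (DoG) pyramid.

    Every pixel of every DoG layer can be computed independently,
    which is why this stage maps so naturally onto GPU threads.
    """
    blurred = [gaussian_filter(image, sigma0 * k ** i)
               for i in range(num_scales + 1)]
    # Subtracting adjacent blur levels approximates the
    # scale-normalized Laplacian used for keypoint detection.
    return [blurred[i + 1] - blurred[i] for i in range(num_scales)]

def is_extremum(dog, s, y, x):
    """True if pixel (y, x) of DoG layer s is an extremum over its
    3x3x3 scale-space neighborhood (a SIFT keypoint candidate)."""
    patch = np.stack([dog[s + ds][y - 1:y + 2, x - 1:x + 2]
                      for ds in (-1, 0, 1)])
    v = dog[s][y, x]
    return bool(v == patch.max() or v == patch.min())
```

A GPU implementation assigns one thread per pixel for each of these operations; the branch-free inner loops and regular memory access patterns are what the occupancy-oriented design in the talk exploits.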
April 22
Assimilation of Functional Data with an Application to a Coupled Fire-Atmosphere Model Driven by Satellite Active-Fire Detection
Prof. Jan Mandel, Mathematics Department, University of Colorado at Denver
Spatial models based on partial differential equations lead to Bayesian estimation of the model state as a random smooth function from functional data. Even in the simplest case, when the entire state is observed and the state distribution and the data-error distribution are the same Gaussian measure on an infinite-dimensional Hilbert space, the posterior probability distribution is undefined. When the data-error distribution is instead white noise, which has zero correlation between different points, the posterior probability distribution is well defined, and it is again a probability measure on the Hilbert space, even though white noise itself is not.
As an application, we present a method for the assimilation of active-fire detections from satellites into a wildfire spread model coupled with a numerical weather forecasting model. The state of the fire model is encoded as the fire arrival time on a spatial domain and assumed to be a random field with covariance a negative fractional power of the Laplace operator, i.e., a Gaussian measure. The data likelihood is the probability of fire detection, estimated from the heat output of the fire and statistical properties of the satellite-based sensor at every mesh cell independently, i.e., with zero spatial correlation. A preconditioned steepest descent method can find a practically useful approximation of the maximum a posteriori probability estimate in one or two iterations and tends to avoid local maxima.
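In sketch form (with notation assumed here, not taken from the abstract), the estimate described above maximizes a posterior combining the cell-wise independent detection likelihood with the Gaussian prior on the fire arrival time \( u \):

\[ u_{\mathrm{MAP}} = \arg\max_u \left[ \sum_{i \in \text{cells}} \log p(d_i \mid u) \;-\; \tfrac12 \left\| (-\Delta)^{\alpha/2} (u - \bar u) \right\|^2 \right], \]

where \( \bar u \) is the forecast arrival time, \( (-\Delta)^{-\alpha} \) is the prior covariance, and \( d_i \) are the satellite detections; the preconditioned steepest descent mentioned above ascends this functional.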
This is based on joint work with Aimé Fournier, Ivan Kasanický, Mary Ann Jenkins, Adam K. Kochanski, Sher Schranz, and Martin Vejmelka. It was supported in part by NSF grant DMS1216481 and NASA grant NNX13AH59G.
April 29
Aerosols and Deep Convective Clouds: Have We "Resolved" the Problem?
Prof. Zachary J. Lebo, Atmospheric Science Department, University of Wyoming
The potential effects of changes in aerosol loading on deep convective cloud systems have received considerable attention in the recent literature, focusing on the response in precipitation, storm strength, lightning frequency, etc. In this talk, a review of the responses will be presented, focusing on two potential microphysical pathways by which an aerosol perturbation may modify the cloud and dynamical characteristics (i.e., the conventional “invigoration” mechanism and the modification of cold pool strength and/or structure). A detailed comparison between the simulated effects due to changes in aerosol loading and changes in environmental characteristics will be discussed; the effects due to changes in aerosol loading will be shown to be masked by even small changes in environmental characteristics. This conclusion suggests that observing such effects will be extremely difficult. Furthermore, the sensitivity of the model results to grid spacing will be explored and analyzed using a novel approach; this is an important consideration because many of the effects of changes in the aerosol loading are likely reliant on the entrainment/detrainment characteristics of deep convective cloud systems. Lastly, recent numerical solutions will be presented to confirm the cloud-resolving model simulations and pinpoint the processes in which deep convective cloud characteristics are likely to be most susceptible to changes in aerosol loading. These numerical solutions combined with the 3D cloud-resolving simulations will demonstrate that the “invigoration” mechanism is insignificant for strong deep convective cloud systems.
May 6
A Finite Volume Element Approximation of an Advection-Dominant Two-Point Boundary Value Problem
Tareq Dalgamoni, Mathematics Department, University of Wyoming
In this talk, we will present a numerical approximation to a one-dimensional advection-diffusion-reaction equation using a finite volume element discretization, in which the approximation is sought in a piecewise-linear finite element space and is governed by local conservation. The dominance of the advection component in the equation causes a boundary layer. Unless the characteristic length scale of this layer is sufficiently resolved, instabilities appear in the numerical approximation, but such resolution naturally results in a higher-dimensional system to solve and should be avoided. We will show a procedure to enrich the finite element space so that it can capture the boundary layer. This enrichment maintains the efficiency of the approximation and retains its stability. It is based on the recognition that a special basis function is needed to represent the boundary-layer effect in the approximation. We will illustrate the performance of the method with several examples.
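A standard model for the boundary-layer phenomenon described above (a schematic with constant coefficients, assumed here for illustration) is the two-point problem

\[ -\varepsilon u'' + b\,u' = f \ \text{ on } (0,1), \qquad u(0) = u(1) = 0, \qquad 0 < \varepsilon \ll b, \]

whose solution develops a layer of width \( O(\varepsilon / b) \) at \( x = 1 \). A typical enrichment adds to the piecewise-linear space one basis function shaped like the layer, e.g.

\[ \phi(x) = \frac{e^{b x / \varepsilon} - 1}{e^{b / \varepsilon} - 1}, \]

which solves the homogeneous equation, is negligible away from \( x = 1 \), and captures the exponential profile there without refining the mesh.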