Computational and Applied Mathematics Seminar

Location and Time

University of Wyoming, Ross Hall 247, Fridays from 4:10-5:00 (unless otherwise stated).

Seminar Chairs

Professors Craig C. Douglas and Long Lee.


The CAM seminar series is currently supported by volunteers, financial contributions from the UW Mathematics Department, and an energy grant from ExxonMobil.


For Spring 2018, the speakers are as follows:

Date Speaker From/Note
February 2** Conner Smith University of Kansas
February 9 Nicholas A. Cheney University of Wyoming
February 16** Tom Trogdon University of California at Irvine
March 2 Cameron H. G. Wright University of Wyoming
March 23 Joost Huizinga University of Wyoming
April 13 Brad McCaskill University of Wyoming
April 20 Jan Mandel University of Colorado at Denver
April 27 Stefan Heinz University of Wyoming
May 4 Myron Allen's students University of Wyoming

* Thursday Colloquium in AG 1030, ** Joint CAM - Analysis seminar

We are still looking for speakers for Spring 2018! The topics can be original research, a survey of an area, or an interesting paper or papers that would interest the CAM community.

The schedule, titles, and abstracts from Fall 2017 are here.

Titles and Abstracts

February 2

Metastable Traveling Fronts Arising in Nanoscale Pattern Formation
Prof. Conner Smith, Department of Mathematics, University of Kansas

We study an equation that models ripple formation when a flat surface is bombarded by an ion beam. The experimentally observed ripples consist of parts with a certain positive slope, parts with a certain negative slope, and transitions between the two slopes. The transitions may be modeled as traveling fronts with stable point spectrum but unstable essential spectrum. At first glance this may be written off as “unstable.” However, numerical results suggest that the instability has two parts: a convecting part that saturates to some value and an exponentially decaying part that slightly modulates the underlying transition. We define an exponential weight that decays in the direction in which the first part of the instability travels, allowing us to focus on the second part of the instability. In this exponentially weighted space we obtain a linear stability result. Inspired by the “repeating” part of the experimental results, we also consider “gluing” together solitary waves in an ad hoc periodic pattern. For some specific methods of gluing, the entire spectrum is stable, with the implication that while an individual transition is unstable, the instability is benign enough that it can be stabilized by repeating the transition.
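Schematically (with notation of my own, not taken from the talk), the weighting device is a standard one in the stability theory of traveling waves: if the convecting part of the instability travels toward x → −∞, one measures perturbations u in a norm weighted by a hypothetical rate a > 0,

```latex
% Exponentially weighted norm: the weight e^{ax} decays as x -> -infinity,
% the direction in which the convective part of the instability travels.
\[
  \|u\|_{L^2_a} \;=\; \Big( \int_{\mathbb{R}} e^{2ax}\, |u(x)|^2 \, dx \Big)^{1/2}.
\]
% Computing the spectrum of the linearization about the front in L^2_a
% (rather than L^2) shifts the essential spectrum, so that the remaining,
% exponentially decaying part of the instability can be analyzed.
```

The speaker's precise weight and function spaces may of course differ from this sketch.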

February 9

Prof. Nicholas A. Cheney, Department of Computer Science, University of Wyoming

Artificial neural networks have been a part of our machine learning toolset for decades, yet recent advances in these systems have led to stunning applications and results, and dramatic increases in the popularity of these methods. In this talk, I will give a brief introduction to the basics of machine learning with artificial neural networks, and highlight some of the recent advances that have led to their popularity. I will exemplify the problem-solving power of these models, focusing on recent applications as well as specific use cases within my lab, centered on robotic optimization and computer vision. I will close by highlighting some of the concerns involved when widely applying black-box methods to critical applications, and showcasing some of the ways in which we are attempting to add transparency to the computations performed by these systems.

February 16

Riemann-Hilbert Problems and the Inverse Scattering Transform: From Asymptotics to Computation
Prof. Tom Trogdon, Department of Mathematics, University of California at Irvine

The inverse scattering transform (IST) is used to solve the Cauchy problem for integrable nonlinear partial differential equations on the line. Matrix Riemann-Hilbert problems (RHPs) are a key component in the IST. Historically, RHPs have made the IST amenable to rigorous asymptotic analysis with the Deift-Zhou method of nonlinear steepest descent. More recently, techniques for oscillatory singular integral equations have been employed to solve RHPs numerically and compute the IST. Importantly, nonlinear dispersive (and integrable) evolution equations can be solved numerically without any need for time-stepping. Errors are seen to be uniformly small for arbitrarily large times. Combining this approach with the so-called dressing method allows for the computation of a wide class of non-decaying solutions.
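For readers unfamiliar with the objects involved, a matrix RHP of the kind arising in the IST can be stated schematically as follows (the contour Γ and jump matrix J are problem-dependent and not specified in the abstract):

```latex
% Given an oriented contour \Gamma \subset \mathbb{C} and a jump matrix
% J : \Gamma \to \mathbb{C}^{2 \times 2}, find \Phi analytic on
% \mathbb{C} \setminus \Gamma such that
\[
  \Phi^+(z) \;=\; \Phi^-(z)\, J(z), \quad z \in \Gamma,
  \qquad
  \Phi(z) \to I \ \text{ as } z \to \infty,
\]
% where \Phi^\pm denote the boundary values of \Phi from the two sides
% of \Gamma. In the IST, the solution of the PDE is then recovered from
% the asymptotic expansion of \Phi at infinity.
```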

March 2

Pattern Recognition and Classification: the Foundation
Prof. Cameron H. G. Wright, Department of Electrical and Computer Engineering, University of Wyoming

Pattern recognition and classification is a field that has both theoretical and application roots that go back many decades. Some would argue it goes back even further, to the 18th-century work of the Reverend Thomas Bayes. But regardless of its beginnings, it has been a constant area of interest for many mathematicians and engineers. Much is being made in the popular press recently of the term AI, or Artificial Intelligence, and many laypeople think we've suddenly solved the problem of computers that can "think" as a human does. While some recent methods, such as deep learning neural networks, have shown some impressive results, the types of problems they solve are still narrow, and deep learning neural nets can make some alarmingly big mistakes. To really understand and appreciate recent advances in the field of pattern recognition and classification, it's necessary to cover the foundation upon which it is built. This presentation briefly covers that foundation, such as Bayesian approaches, parameter estimation, principal component analysis, discriminant analysis (both Fisher's and MDA), nonparametric techniques, linear and nonlinear discriminant functions, stochastic methods, multilayer neural networks, and variations of neural nets such as deep learning.
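As a minimal illustration of the Bayesian foundation the talk refers to, the sketch below (with synthetic parameters of my own choosing, not taken from the talk) classifies a point by maximizing the posterior score p(x|w_i) P(w_i) over two one-dimensional Gaussian classes:

```python
import numpy as np

# Two 1-D Gaussian classes with known parameters (a toy stand-in for
# the density estimates one would fit from training data in practice).
mu = np.array([0.0, 3.0])       # class means
sigma = np.array([1.0, 1.0])    # class standard deviations
prior = np.array([0.5, 0.5])    # prior probabilities P(w_i)

def gaussian_pdf(x, mu, sigma):
    """Class-conditional density p(x | w_i) for a 1-D Gaussian."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def bayes_classify(x):
    """Assign x to the class maximizing p(x | w_i) * P(w_i)."""
    scores = gaussian_pdf(x, mu, sigma) * prior
    return int(np.argmax(scores))

# With equal priors and variances, the decision boundary is the
# midpoint 1.5 between the two means.
print(bayes_classify(-0.5))  # 0
print(bayes_classify(2.9))   # 1
```

This is the minimum-error-rate decision rule under the stated (known-density) assumptions; everything else in the talk's list of topics can be seen as relaxing or approximating those assumptions.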

March 23

Evolving Structural Organization in Artificial Neural Networks
Joost Huizinga, Department of Computer Science, University of Wyoming

Artificial neural networks have been successfully applied to a great number of engineering problems, including image recognition, automatic translation, and robot control. However, despite their success, their performance, adaptability, and robustness still pale when compared to the intelligence of natural animals. Many researchers believe that one of the main contributors to the effectiveness of natural brains is that they are structurally organized, in the sense that they exhibit modularity, regularity, and hierarchy.

Here, modularity refers to being composed of groups of nodes that are strongly connected to other nodes within the same group, while being only sparsely connected to nodes in other groups. Modularity can improve the adaptability of a network by allowing modules to be rewired when different functionality is required, and it can increase robustness by allowing changes to connections to have their effect localized to individual modules. Regularity refers to the compressibility of the description of the structure of a network, such that more regular structures require less information to be described. Regularity can increase the efficiency of training a network by allowing information to be reused. Finally, hierarchy refers to the recursive organization of a network, such that each component depends on or consists of several lower-level components. Hierarchy can improve the effectiveness of neural networks by allowing high-level and low-level components to adapt separately.

This talk will cover my research on how to automatically evolve neural networks that are modular, regular, and hierarchical, and how these networks can solve complex robotics problems. I will also show how structural organization can make evolved images more adaptable, and I will discuss the relationship between the structural organization of the encoding of an individual and the structural organization of the individual itself.
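To make the notion of modularity concrete, the toy sketch below (a hypothetical six-node network, not an example from the talk) measures how many edges of a small graph stay within a module; real modularity metrics are more refined, but the intuition is the same:

```python
import numpy as np

# Toy undirected network of 6 nodes in two modules {0,1,2} and {3,4,5}:
# dense connections within each module, a single bridge between them.
A = np.zeros((6, 6), dtype=int)
intra = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5)]
bridge = [(2, 3)]
for i, j in intra + bridge:
    A[i, j] = A[j, i] = 1

module = np.array([0, 0, 0, 1, 1, 1])  # module label of each node

# Fraction of edges that stay inside a module: close to 1 for a
# modular network, lower for a randomly wired one.
edges = np.argwhere(np.triu(A))
same = sum(module[i] == module[j] for i, j in edges)
print(same / len(edges))  # ~0.857: 6 of the 7 edges are intra-module
```

Rewiring only the single bridge edge changes how the two modules interact without disturbing either module internally, which is the adaptability argument made in the abstract.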

April 13

Numerical Methods for Porous Media Flow: Multiscale Modeling, Uncertainty Quantification, and Continuous Data Assimilation
Mr. Brad McCaskill, Department of Mathematics & Statistics, University of Wyoming

The way in which we manage subsurface resources is directly determined by the availability and quality of information we have about the dynamical systems that govern them. Typically this information is obtained by solving mathematical models that are posed on the domain of interest. Unfortunately, both the construction and the process of solving these models can be a nontrivial task. In this dissertation we explore solutions to several problems related to modeling fluid flow through porous media. One aspect of this dissertation is the development of a nonstandard multiscale finite element method for solving elliptic boundary value problems. The so-called multiscale Robin method can be viewed as a merger of traditional domain decomposition methods with the framework of multiscale finite element methods. The novelty of this method is that its efficiency and accuracy are governed by a geometric enrichment of the solution space. An application of the multiscale Robin method to uncertainty quantification through the use of a stochastic representation method is considered. To this end, the multiscale Robin methodology is adapted to the framework of coupled elliptic boundary value problems. A computationally cheap and efficient method for the simulation of two-phase flow through poroelastic media is also proposed. Specifically, through the use of an artificial stabilization term and an element-based post-processing, reasonable estimates of solutions to the associated geomechanical subsystem can be obtained. Finally, we adapt a continuous data assimilation algorithm to a model for miscible flow through porous media. The existence of weak solutions and convergence properties of the resulting model solution are studied. In all chapters, a variety of numerical examples are used to evaluate the performance of each proposed solution methodology.
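As background (standard notation, not taken from the dissertation), the elliptic boundary value problems in question typically take the Darcy form:

```latex
% Pressure equation for flow through porous media: \kappa is a highly
% heterogeneous permeability field, which is what motivates multiscale
% finite element methods.
\[
  -\nabla \cdot \big( \kappa(x)\, \nabla u \big) = f \ \text{ in } \Omega,
  \qquad
  u = g \ \text{ on } \partial\Omega.
\]
```

Multiscale finite element methods resolve the fine-scale variation of κ inside locally computed basis functions rather than on a globally fine mesh.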

April 20

Assimilation of Fire Perimeters and Satellite Detections by Minimization of the Residual in a Fire Spread Model
Prof. Jan Mandel, Department of Mathematics, University of Colorado at Denver

Assimilation of data into a fire-spread model is formulated as an optimization problem. The level set equation, which relates the fire arrival time and the rate of spread, is allowed to be satisfied only approximately, and we minimize a norm of the residual. This is a nonlinear problem, and also nondifferentiable because of switching of upwinding directions in the discretization. An initial solution is found by solving a linear pseudo-differential equation.  Previous methods based on modification of the fire arrival time either used an additive correction to the fire arrival time, or made a position correction. Unlike additive fire arrival time corrections, the new method respects the dependence of the fire rate of spread on diurnal changes of fuel moisture and on weather changes, and, unlike position corrections, it respects the dependence of the fire spread on fuels and terrain as well. The method is used to interpolate the fire arrival time between two perimeters by imposing the fire arrival time at the perimeters as constraints.
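Schematically (notation mine, not necessarily the speaker's), with T(x) the fire arrival time and R the rate of spread, the level set equation and the residual being minimized take the form:

```latex
% Level-set (eikonal-type) relation between arrival time and spread rate:
% the fire front at time t is the level set \{\, x : T(x) = t \,\}.
\[
  R\,\|\nabla T\| = 1,
\]
% and the assimilation problem minimizes a norm of the residual
\[
  \min_{T} \ \big\| \, R\,\|\nabla T\| - 1 \, \big\|,
\]
% subject to T matching the observed arrival times on the fire perimeters.
```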

April 27

Prof. Stefan Heinz, Department of Mathematics & Statistics, University of Wyoming


May 4

Prof. Myron Allen's Students, Department of Mathematics & Statistics, University of Wyoming


This web page is maintained by Prof. Craig C. Douglas
