**Frank Morgan**, Williams College

**Abstract**: A single round soap bubble provides the least-area way to enclose a given volume of air. The Double Bubble Conjecture says that the familiar double soap bubble provides the least-area way to enclose and separate two given volumes of air. I'll talk about the problem, the recent proof (Annals of Math. 2002), the latest results, and open questions. No prerequisites; undergraduates welcome.

**John Oprea**, Cleveland State University

**Abstract**: When we look at Nature, we see shapes everywhere. But why do things take the shapes they do? In this talk, we will describe the shape of a Mylar balloon in terms of elliptic functions. (A Mylar balloon is often found at kids' birthday parties and is formed by taking two disks of Mylar, sewing them together along their boundaries and inflating.) This topic is a prime example of the interplay among physical principles, geometry, analysis and symbolic computation. Undergraduates are welcome.

**Ben Worrell**, Tulane University

**Abstract**: This talk is concerned with the computational power of clocks. We formalize this via the model of timed automata: finite-state automata augmented with clocks that may be used to control the behaviour of the automaton. We study timed automata from the perspective of formal language theory: a timed automaton accepts timed words, sequences of symbols in which a real-valued time stamp is associated with each symbol. Our main results concern the decidability of certain basic questions about timed languages. We show how the decidability of the language inclusion problem depends on the number of clocks an automaton has. This dependence is expressed mathematically in the existence or non-existence of infinite anti-chains in certain partial orders. We also show how the decidability of language inclusion is affected by whether a strictly monotonic model of time (no two events can happen at the same time) or a weakly monotonic model of time is adopted. This is joint work with Joel Ouaknine at CMU.
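As a concrete illustration of the definitions above, here is a minimal sketch (in Python; the code and the particular automaton are illustrative assumptions, not from the talk) of a one-clock timed automaton. It has a single accepting state with a clock-guarded self-loop, and accepts exactly the timed words over the symbol `a` in which consecutive events are at most one time unit apart:

```python
def accepts(timed_word):
    """One-clock timed automaton (illustrative example): one accepting
    state with a self-loop on 'a' guarded by clock x <= 1; taking the
    transition resets x.  `timed_word` is a list of (symbol, timestamp)
    pairs with nondecreasing real-valued timestamps."""
    reset_time = 0.0  # clock x last reset at time 0
    for symbol, t in timed_word:
        clock = t - reset_time          # current value of clock x
        if symbol != 'a' or clock > 1.0:
            return False                # guard violated: no transition
        reset_time = t                  # the transition resets x
    return True

# Accepted: the gaps (0.5, then 0.7) are each at most 1 time unit.
print(accepts([('a', 0.5), ('a', 1.2)]))   # True
# Rejected: the gap between times 0.5 and 2.0 exceeds 1.
print(accepts([('a', 0.5), ('a', 2.0)]))   # False
```

Note that the acceptance condition genuinely depends on the real-valued time stamps, not just on the sequence of symbols, which is what separates timed languages from ordinary regular languages.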

**Wei-Ming Ni**, University of Minnesota

**Abstract**: In this talk I would like to discuss the richness of the dynamics of parabolic equations and systems involving diffusion and cross-diffusion. Examples will be provided to illustrate various approaches in modeling concentration phenomena in pattern formation.

**Larry Hanafy**, Tulane University

**Abstract**: This is a special talk which is part of Tulane's VIGRE program and will address broad issues affecting the mathematical community. Larry Hanafy graduated from Tulane in 1963, received a Master's degree from Berkeley and a Ph.D. from North Carolina State in Applied Mathematics in 1971. He then spent the next 30 years as a mathematical scientist in industry, rising to the position of Director of Defense Research at Raytheon/Texas Instruments, where he managed a research budget of $130 million per year. The purpose of the talk will be to give an idea of the sort of work that a mathematician might do in an industrial setting, and the sort of preparation a student who would like to pursue such a career should have. He will illustrate these by giving an overview of his career. Undergraduates are especially urged to attend.

**Jim Rogers**, Tulane University

**Abstract**: *Not available*

**Deborah Sulsky**, University of New Mexico

**Abstract**: Fluid-membrane systems are common and include parachutes, vehicle airbags, inflatable structures, blood vessels and biological cells. These systems are difficult to model numerically when the membrane deforms substantially and has an effect on the fluid flow. The basic material-point method, a "meshless" particle method, will be presented along with extensions to problems of this type. The extensions include the representation of membranes with a set of unconnected points on a surface, the constitutive model and the construction of surface normals. The resulting method combines benefits of interface tracking and interface capturing techniques. Recent work on coupling MPM with non-equilibrium molecular dynamics (NEMD) to model lipid bilayers in biological cell membranes will also be discussed.

**Charles F. Dunkl**, University of Virginia, Charlottesville

**Abstract**: The main topic is the family of symmetric and non-symmetric Jack polynomials. These are associated with certain weight functions that are invariant under permutation of coordinates. The weight functions are involved in the Macdonald-Mehta-Selberg integrals. The tools used to analyze these structures include differential-difference operators, and combinatorial objects such as Young tableaux. The presentation is intended for the non-specialist mathematical audience.

**Bill Sudderth**, University of Minnesota

**Abstract**: Nonzero sum games and the notion of equilibrium due to John Nash will be introduced. These will be illustrated by a game in which two (or more) players hold cash and bid each day for a non-durable good, say pizza. The good is consumed and the money recirculates to the players according to a stochastic rule that treats them symmetrically. Another pizza arrives the next day and play continues. Simple Nash equilibria exist for this pizza game; whether they exist for more general stochastic games is unknown.

**John Mayer**, University of Alabama, Birmingham

**Abstract**: *Not available*

**Robert L. Devaney**, Boston University

**Abstract**: In this lecture we describe several folk theorems concerning the Mandelbrot set. While this set is extremely complicated from a geometric point of view, we will show that, as long as you know how to add and how to count, you can understand this geometry completely. We will encounter many famous mathematical objects in the Mandelbrot set, like the Farey tree and the Fibonacci sequence. And we will find many soon-to-be-famous objects as well, like the Devaney sequence. There might even be a joke or two in the talk.

**Peter Moore**, Southern Methodist University

**Abstract**: Interpolation error-based (IEB) a posteriori error estimation is a new approach to finding asymptotically exact error estimates for finite element methods. It depends on finding an interpolant that is asymptotically equivalent to the finite element solution. IEB error estimation has several advantages over competing strategies: the estimates are cheap to compute; they provide directional information; and they can be used to approximate errors at several orders. I explore this method in the context of reaction-diffusion equations.

**Greg Smith**, Carnegie-Mellon University

**Abstract**: The orbifold Chow ring (an algebraic version of orbifold cohomology) encodes numerical invariants of a singular space arising from string theory. One expects this ring to coincide with the Chow ring of an appropriate resolution of singularities. In this talk, we describe the orbifold Chow ring of a simplicial toric variety and compare it with the Chow ring of a crepant resolution.

**Joel Ouaknine**, Carnegie-Mellon University

**Abstract**: Timed automata are finite-state machines constrained by timing requirements so that they accept 'timed traces'—sequences of events in which every event is labeled with a real-valued time. In this talk, we consider the language inclusion problem for timed automata: given two timed automata A and B, are all the timed traces accepted by B also accepted by A? While this problem is known to be undecidable, we show that it becomes decidable if A is restricted to having at most one clock. This is somewhat surprising, since it is well-known that there exist timed automata with a single clock that cannot be complemented. The crux of our proof consists in reducing the language inclusion problem to a reachability question on an infinite graph; we then construct a suitable well-quasi-order on the nodes of this graph, which ensures the termination of our search algorithm. We complete the picture by showing that our restriction to timed automata with a single clock is essentially the only restriction (on the various resources of timed automata) making the language inclusion problem decidable. This is joint work with James Worrell.

**Diane Maclagan**, Stanford University

**Abstract**: Toric Hilbert schemes are varieties with broad connections to other areas of mathematics, including optimization, geometric combinatorics, algebraic geometry, and representations of finite groups and quivers. They parameterize all ideals in a polynomial ring with the simplest possible multi-graded Hilbert function. I will introduce these objects, and discuss what is known about them and some of the applications.

**Makram Talih**, Hunter College, CUNY

**Abstract**: I will show how one can determine, to a certain extent, the state and behavior of a female tiger by clustering her GPS coordinates in space and time. Characteristics of her home range are of special interest: how long does it take for her to traverse her home range? Are there any habitual resting or hunting spots therein? This work is the first step in a growing collaboration with Sean C. Ahearn, of the Department of Geography at Hunter College.
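As a toy illustration of the kind of spatial clustering mentioned above (the abstract does not specify the actual methods used; the code, function names, and synthetic data below are all made up), a plain k-means pass over simulated 2-D "GPS fixes" recovers two habitual spots:

```python
import random

def kmeans(points, centers, iters=20):
    """Lloyd's k-means on 2-D points; `centers` seeds the cluster centers."""
    centers = [tuple(c) for c in centers]
    k = len(centers)
    for _ in range(iters):
        # Assign each point to its nearest current center.
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda j: (p[0] - centers[j][0]) ** 2
                                + (p[1] - centers[j][1]) ** 2)
            clusters[i].append(p)
        # Move each center to the mean of its cluster (keep it if empty).
        centers = [(sum(x for x, _ in c) / len(c),
                    sum(y for _, y in c) / len(c)) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers

# Synthetic fixes scattered around two "resting spots", (0,0) and (10,10).
rng = random.Random(1)
fixes = [(rng.gauss(0, 0.5), rng.gauss(0, 0.5)) for _ in range(50)] \
      + [(rng.gauss(10, 0.5), rng.gauss(10, 0.5)) for _ in range(50)]
centers = sorted(kmeans(fixes, [fixes[0], fixes[-1]]))
print(centers)  # one center near (0, 0), the other near (10, 10)
```

Clustering in space *and* time, as in the study, would simply add the time stamp as a third (suitably scaled) coordinate.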

**David Bao**, University of Houston

**Abstract**: Concrete examples will be used to show that Finsler metrics are ubiquitous in everyday life. For such metrics, a fruitful notion of curvature will be presented, together with a completely geometric criterion on what it means to be Einstein. The talk will conclude with a report on a special class of Einstein-Finsler metrics.

**Hà Huy Tài**, University of Missouri

**Abstract**: Let X be a projective scheme. An arithmetic Macaulayfication of X is a proper birational map Y → X such that Y has a projective embedding with Cohen-Macaulay homogeneous coordinate ring. In my talk, I will discuss two aspects, the existence and the determination, of the problem of finding arithmetic Macaulayfications.

**Note**: *Special Lecture Friday 4:00 PM*

**Ruchira S. Datta**, MSRI & University of California, Davis

**Abstract**: Every real algebraic variety is isomorphic to the set of totally mixed Nash equilibria of some three-person game, and also to the set of totally mixed Nash equilibria of an N-person game in which each player has two pure strategies. From the Nash-Tognoli Theorem it follows that every compact differentiable manifold can be encoded as the set of totally mixed Nash equilibria of some game.
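To make the connection between equilibria and varieties concrete, here is the standard textbook description (my worked example, not part of the abstract) of the totally mixed equilibria of a two-player, two-strategy game as the solution set of polynomial equations; N-person games yield analogous multilinear indifference equations.

```latex
% Payoff matrices A (row player) and B (column player), with mixed
% strategies p = (p_1, p_2) and q = (q_1, q_2).  A totally mixed Nash
% equilibrium is a solution, with all coordinates strictly positive, of
% the polynomial system
\begin{align*}
  (Aq)_1 - (Aq)_2 &= 0 && \text{(row player indifferent)}\\
  (p^{\mathsf T} B)_1 - (p^{\mathsf T} B)_2 &= 0 && \text{(column player indifferent)}\\
  p_1 + p_2 - 1 &= 0, & q_1 + q_2 - 1 &= 0,
\end{align*}
% so the set of totally mixed equilibria is a (semi-)algebraic set.
```

The content of the theorems quoted above is the converse direction: every real algebraic variety arises this way, up to isomorphism, from a suitably constructed game.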

**Note**: *Special Lecture Tuesday 4:00 PM in Tilton 301*

**Jonathan Sondow**, New York, NY

**Abstract**: It is notoriously difficult to prove that certain naturally occurring constants are irrational. In the 18th century, Euler showed that e is an irrational number, and Lambert gave the first proof that pi is irrational. (Both proofs involved continued fractions.) Two hundred years later, Apery stunned the mathematical world in 1978 by proving the irrationality of zeta(3)=1+1/2³+1/3³+... . Many other constants are conjectured to be irrational, but no proof is known. Among them are ln(pi) and Euler's constant, gamma, defined as the limit of the difference (1+1/2+...+1/n)-ln(n) as n tends to infinity.

In this talk, I first review this history, then give some simple new integrals, series and infinite products for pi, e, ln(pi), gamma and e^gamma. Using the integrals for ln(pi) and gamma (analogs of integrals used in simplifying Apery's proof), I present numerical evidence that these two constants are irrational. Next I explain how to measure the irrationality of an irrational number, in terms of its distance to a fraction p/q as a function of the denominator q. Finally, I give conditional irrationality measures for ln(pi) and gamma. Undergraduates are welcome. Suggested background reading: my Web page http://home.earthlink.net/~jsondow/ and the excellent survey: D. Huylebrouck, Similarities in irrationality proofs for Pi, ln2, zeta(2) and zeta(3), American Mathematical Monthly 108 (2001) 222-231.
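As a quick numerical illustration of the definition of gamma quoted above (the code is mine, not the speaker's), the partial sums 1 + 1/2 + ... + 1/n - ln(n) can be watched converging to 0.5772156649...:

```python
import math

def gamma_approx(n):
    """Partial 'harmonic sum minus log' approximation to Euler's constant."""
    return sum(1.0 / k for k in range(1, n + 1)) - math.log(n)

for n in (10, 1000, 100000):
    print(n, gamma_approx(n))
# The error behaves like 1/(2n), so gamma_approx(100000) already agrees
# with gamma = 0.5772156649... to about five decimal places.
```

Of course, no amount of numerical convergence settles irrationality; that is exactly why the conditional measures discussed in the talk are of interest.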

**Eitan Tadmor**, Center for Scientific Computation and Mathematical Modeling, University of Maryland

**Abstract**: A trademark of nonlinear, time-dependent, convection-dominated problems is the spontaneous formation of non-smooth macro-scale features, like shock discontinuities and non-differentiable kinks, which pose a challenge for high-resolution computations. We overview recent developments of modern computational methods for the approximate solution of such problems. In these computations, one seeks piecewise smooth solutions which are realized by finite dimensional projections. Computational methods in this context can be classified into two main categories, of local and global methods. Local methods are expressed in terms of point-values (Hamilton-Jacobi equations), cell averages (nonlinear conservation laws), or higher localized moments. Global methods are expressed in terms of global basis functions.

High resolution central schemes will be discussed as a prototype example for local methods. The family of central schemes offers high-resolution "black-box solvers" for an impressive range of such nonlinear problems. The main ingredients here are detection of spurious extreme values, non-oscillatory reconstruction in the directions of smoothness, numerical dissipation and quadrature rules. Adaptive spectral viscosity will be discussed as an example for high-resolution global methods. The main ingredients here are detection of edges from spectral data, separation of scales, adaptive reconstruction, and spectral viscosity.

**George Gratzer** (VIGRE), University of Manitoba

**Abstract**: In the early forties, R. P. Dilworth proved his famous result: Every finite distributive lattice D can be represented as the congruence lattice of a finite lattice L. The first published proof of this result is in an early paper of mine with E. T. Schmidt, where the following theorem is proved: Every finite distributive lattice D can be represented as the congruence lattice of a finite sectionally complemented lattice L.

I have been publishing papers on this topic for 45 years (there are about 150 papers on this topic). In this lecture, I will review some of the results: Making L "nice".

If being "nice" is an algebraic property such as being semimodular or sectionally complemented, then we have tried in many instances to prove a stronger form of these results by verifying that every finite lattice has a congruence-preserving extension that is "nice". I shall discuss some of the techniques used to construct "nice" lattices and congruence-preserving extensions.

**Note**: *This lecture is on Wednesday, regular time and place.*

**Mac Hyman**, Los Alamos National Laboratory

**Abstract**: Mathematical models based on the underlying transmission mechanisms of a disease can help the medical/scientific community understand and anticipate its spread and evaluate the potential effectiveness of different approaches for bringing an epidemic under control. Even more important than the successes with specific diseases has been the development of frameworks and concepts for understanding epidemiology. The primary goal of our modeling effort is to understand the spread of infectious diseases, including influenza, smallpox, and HIV, in order to estimate and subsequently predict the impact of control measures on their spread. Modeling can reduce the uncertainty of the estimates of disease prevalence and aid in the development of scientific understanding of the mechanisms of the disease and of the epidemic. It can also estimate the benefits and the costs of projected interventions and project the requirements that an epidemic will place on the health care system. Thus, the modeling techniques can join with biological, epidemiological, behavioral, and social science studies to produce better projections and better understanding of the epidemic.
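As a minimal sketch of the kind of transmission-mechanism model referred to above (the classic textbook SIR system, not the speaker's models; parameter values are arbitrary), an epidemic can be simulated with a few lines of explicit Euler stepping:

```python
def sir(beta, gamma, i0, days, dt=0.01):
    """Integrate the classic SIR model s' = -beta*s*i,
    i' = beta*s*i - gamma*i, r' = gamma*i (population fractions)
    with explicit Euler steps; returns the trajectory once per day."""
    s, i, r = 1.0 - i0, i0, 0.0
    traj = [(s, i, r)]
    steps_per_day = int(round(1 / dt))
    for _ in range(days):
        for _ in range(steps_per_day):
            ds = -beta * s * i
            di = beta * s * i - gamma * i
            dr = gamma * i
            s, i, r = s + dt * ds, i + dt * di, r + dt * dr
        traj.append((s, i, r))
    return traj

# Basic reproduction number R0 = beta/gamma = 3:
# the epidemic grows, peaks, and burns out.
traj = sir(beta=0.3, gamma=0.1, i0=0.001, days=200)
peak = max(i for _, i, _ in traj)
print(round(peak, 3), round(traj[-1][1], 4))
```

Control measures enter such models by reducing the transmission rate beta (or removing susceptibles by vaccination), which is how their projected impact is estimated.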

**Peter J. Olver**, University of Minnesota

**Abstract**: In this talk, I will introduce "multi-space" as a new geometric foundation for the numerical analysis of differential equations—in the same way that jet space underlies the geometry of differential equations. The multi-space bundle is a significant generalization of the blow-up construction for desingularizing algebraic varieties, but the algebraic and topological features have yet to be fully developed. Extending the construction to functions of several variables requires a new approach to the construction of divided difference formulae and multivariate interpolation theory. Application of the equivariant moving frame method leads to a general framework for constructing symmetry-preserving numerical approximations to differential invariants and invariant differential equations.

**Peter J. Olver**, University of Minnesota

**Abstract**: The classical method of moving frames was developed by Elie Cartan into a powerful tool for studying the geometry of curves and surfaces under certain geometrical transformation groups. In this talk, I will discuss a new foundation for moving frame theory based on equivariant maps. The method is completely algorithmic, and can be readily applied to completely general Lie groups and even infinite-dimensional pseudogroup actions. The resulting theory and applications are remarkably wide-ranging, including geometry, classical invariant theory, differential equations, symmetry and object recognition in computer vision, and the design of symmetry-preserving numerical algorithms.

**Peter D. Lax** (VIGRE), Courant Institute

**Abstract**: A symmetric matrix is called degenerate by physicists if it has a multiple eigenvalue. Wigner and von Neumann showed long ago that the degenerate matrices form a variety of codimension two in the space of all symmetric matrices. This explains the phenomenon of "avoidance of crossing". On the other hand, the degenerate matrices are characterised by the single equation discr(S)=0, where discr(S) is the discriminant of S. In this talk, we investigate the nature of the discriminant, especially its representation as a sum of squares.
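For the smallest case (a 2x2 symmetric matrix; this worked example is mine, not part of the abstract), the sum-of-squares representation is explicit:

```latex
% For S = \begin{pmatrix} a & b \\ b & c \end{pmatrix}, the eigenvalues are
% \lambda_{1,2} = \tfrac{1}{2}\bigl(a + c \pm \sqrt{(a-c)^2 + 4b^2}\bigr), so
\[
  \operatorname{discr}(S) = (\lambda_1 - \lambda_2)^2 = (a - c)^2 + (2b)^2,
\]
% a sum of two squares, which vanishes exactly when a = c and b = 0,
% i.e.\ when S is degenerate: two independent conditions, hence the
% Wigner--von Neumann codimension two.
```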

In the second part it will be shown that any three real symmetric matrices have a real linear combination that is degenerate, provided that the order n of the matrices is congruent to 2 mod 4. This result has applications to the propagation of singularities of solutions of symmetric hyperbolic equations, such as the equations of crystal optics.

**David H. Bailey**, Lawrence Berkeley National Laboratory

**Abstract**: The author will describe some recent research in which new mathematical identities and relationships have been discovered by means of computational experiments. The best-known of these results is the following formula, which was discovered by a computer program in 1995:

pi = sum over k >= 0 of (1/16^k) [ 4/(8k+1) - 2/(8k+4) - 1/(8k+5) - 1/(8k+6) ]

This formula has the property that it permits individual binary or hexadecimal digits of pi to be computed, by means of a simple algorithm that does not require high-precision arithmetic software. What's more, this result has implications for the age-old question of whether (and why) pi is "normal," i.e., whether the binary digits of pi are "random" in a certain specific sense. Further results in this arena have led to a full-fledged proof of normality for a certain infinite class of real constants.
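The digit-extraction algorithm alluded to above can be sketched in a few lines (an illustrative implementation of the 1995 Bailey-Borwein-Plouffe formula using only double-precision floats and modular exponentiation; the function names are mine):

```python
def pi_hex_digits(n, count=4):
    """Return `count` hexadecimal digits of pi starting at position
    n+1 after the point, via the Bailey-Borwein-Plouffe formula.
    No high-precision arithmetic is needed."""
    def S(j):
        # fractional part of sum_k 16^(n-k) / (8k + j)
        s = 0.0
        for k in range(n + 1):                  # "left" sum, kept mod 1
            s = (s + pow(16, n - k, 8 * k + j) / (8 * k + j)) % 1.0
        k = n + 1                               # rapidly converging tail
        while True:
            term = 16.0 ** (n - k) / (8 * k + j)
            if term < 1e-17:
                break
            s += term
            k += 1
        return s
    frac = (4 * S(1) - 2 * S(4) - S(5) - S(6)) % 1.0
    digits = []
    for _ in range(count):                      # peel off hex digits
        frac *= 16
        d = int(frac)
        digits.append("0123456789abcdef"[d])
        frac -= d
    return "".join(digits)

print(pi_hex_digits(0))  # '243f': pi = 3.243f6a88... in hexadecimal
```

The key trick is the three-argument `pow`, which keeps every intermediate quantity small, so the cost of reaching position n grows only modestly with n.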

**Note**: *This is the first lecture in a series called "π Days"*

**Andre Scedrov**, University of Pennsylvania

**Abstract**: We describe properties of a process calculus that has been developed for the purpose of analyzing security protocols. The process calculus uses bounded replication, with probabilistic polynomial-time expressions allowed in messages and boolean tests.

We develop properties of a form of asymptotic protocol equivalence that allows security to be specified using observational equivalence, a standard relation from programming language theory that involves quantifying over possible environments that might interact with the protocol. We relate process equivalence to cryptographic concepts such as computational indistinguishability.

Using a form of probabilistic bisimulation we develop an equational proof system for reasoning about process equivalence. This proof system is sufficiently powerful to derive the semantic security of ElGamal encryption from the Decision Diffie-Hellman (DDH) assumption. The proof system can also derive the converse: if ElGamal is secure, then DDH holds. While these are not new cryptographic results, these example proofs show the power of probabilistic bisimulation and equational reasoning for protocol security. The work has been carried out in collaborations with P. Lincoln, J. Mitchell, M. Mitchell, A. Ramanathan, and V. Teague.

**Ed Mosteig** (VIGRE), Loyola Marymount University

**Abstract**: Gröbner bases are computational tools used in solving systems of polynomial equations by exact means. Currently, they are employed in many fields of mathematics including commutative algebra, algebraic geometry, algebraic combinatorics, statistics, linear programming, numerical analysis, and differential equations. Although they were developed in the 1960s, they have only recently appeared at the forefront of computational mathematics. The advent of the personal computer has permitted computations that were previously impossible to perform by hand. My goal is to introduce Gröbner bases from an elementary standpoint and examine their development as given by Bruno Buchberger. From there, I will highlight a few key results and demonstrate their importance. Along the way, I will show how certain examples from a few different fields of mathematics can be solved using Gröbner bases.

**John Ewing**, Executive Director of the AMS

**Abstract**: This is a talk about American mathematics during the twentieth century—not about everything, nor even about most things, but about bits and pieces of mathematical life. It's a talk about trends and patterns of professional life, about the shifts in education, about the growth of the mathematical establishment in America, arising in the midst of good times, followed by bad, and followed again by good. The aim is to paint a broad picture of mathematical life during the past century, helping us to understand what might happen in the next.

**David Gottlieb**, Brown University

**Abstract**: Spectral methods involve approximating the solutions of partial differential equations by Fourier series or orthogonal polynomials. The attractive feature of these methods is their high accuracy for smooth solutions.

When the solutions are not smooth the formal accuracy deteriorates and Gibbs oscillations develop. However, there is extensive computational evidence that spectral methods yield high order accuracy when applied to complicated interactions of shock waves with smooth flows.

In this talk we will review the state of the art in applying spectral methods to non-smooth problems. We will first review the approximation theory and discuss the resolution of the Gibbs phenomenon, its practical aspects and applications in several fields such as medical imaging. We will discuss linear hyperbolic systems of equations and will show that the problem is equivalent to the approximation theory. We then discuss theory and applications for nonlinear hyperbolic problems and especially new results concerning conservation for multi-domain spectral methods.
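The Gibbs oscillations mentioned above are easy to observe numerically (a self-contained illustration, not from the talk): the partial Fourier sums of a unit square wave overshoot the jump by roughly 9%, approaching (2/pi) Si(pi) ≈ 1.1789 near the discontinuity no matter how many terms are taken.

```python
import math

def square_partial_sum(x, n_harmonics):
    """Partial Fourier sum of the square wave sign(sin x), using the
    first n_harmonics odd harmonics: (4/pi) * sum sin((2j+1)x)/(2j+1)."""
    return (4 / math.pi) * sum(math.sin((2 * j + 1) * x) / (2 * j + 1)
                               for j in range(n_harmonics))

# Maximum of the partial sum just to the right of the jump at x = 0:
# the overshoot does not shrink as more harmonics are added, it only
# moves closer to the discontinuity.
for n in (25, 100, 400):
    xs = [k * math.pi / (200 * n) for k in range(1, 400)]
    peak = max(square_partial_sum(x, n) for x in xs)
    print(n, round(peak, 4))  # stays near 1.1789 instead of 1.0
```

This persistent overshoot is exactly the loss of formal accuracy at discontinuities that the reconstruction techniques in the talk are designed to repair.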

Mathematics Department, 424 Gibson Hall, New Orleans, LA 70118 504-865-5727 math@math.tulane.edu