**Tom Duchamp**, University of Washington

**Abstract**: A surface light field is a function that assigns a color to each ray originating on a surface. Surface light fields are well suited to constructing virtual images of shiny objects under complex lighting conditions. I will discuss work of our 3D photography group at the University of Washington in which we develop a framework for construction, compression, interactive rendering, and rudimentary editing of surface light fields of real objects.

After presenting an overview of our work, I will discuss our compression algorithm in more detail. It relies on the theory of thin plate splines on Riemannian 2-manifolds and can be viewed as a generalization of both vector quantization and principal component analysis.
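As background for the PCA half of that comparison, here is a minimal numpy sketch (illustrative data and dimensions are my assumptions, not the authors' pipeline) of compressing a family of sampled functions by projecting onto their top principal components:

```python
# Hedged illustration of PCA-style compression: samples lying near a
# low-dimensional subspace are stored as a few coefficients each.
import numpy as np

rng = np.random.default_rng(0)
# 200 "radiance samples", each a 32-vector lying near a 3-dimensional subspace
basis = rng.standard_normal((3, 32))
data = rng.standard_normal((200, 3)) @ basis + 0.01 * rng.standard_normal((200, 32))

mean = data.mean(axis=0)
centered = data - mean
_, _, vt = np.linalg.svd(centered, full_matrices=False)
k = 3
coeffs = centered @ vt[:k].T          # 3 numbers per sample instead of 32
reconstruction = coeffs @ vt[:k] + mean
err = np.abs(reconstruction - data).max()
```

Because the data were built to lie near a rank-3 subspace, the reconstruction error is on the order of the injected noise.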

**Jon Wolfson**, Michigan State University

**Abstract**: *Not available*

**Karl Hofmann**, Tulane University / Darmstadt

**Abstract**: Category theory enters concrete work in the category of compact groups in various fashions. The lecture will illustrate this by selecting a few prominent topics (Morphisms, Limits, Topological Cohomology, Homological Algebra) in this category. I hope that I shall be able to make my points without calling on too much technical background knowledge on either compact groups or category theory.

**Jenny Bryan**, Berkeley

**Abstract**: Recent developments in microarray technology make it possible to capture the gene expression profiles for thousands of genes at once. One very important use of such data is the identification of groups of genes with similar (and interesting) patterns of expression. Currently, most researchers apply algorithms such as cluster analysis to investigate the structure of this data. However, such exploratory techniques alone do not provide any measures of confidence, nor do they afford any opportunities for purposeful experimental design. We propose the use of a deterministic rule, applied to the parameters of the gene expression distribution, to select a target subset of genes that are of biological interest. We provide an estimator of this subset and establish its consistency under certain conditions. We use the parametric bootstrap to estimate important features of the subset estimator's sampling distribution. We also provide a sample size formula that guarantees the quality of the estimated subsets with high probability. The practical performance of the method, using a cluster-based subset rule, is illustrated with a simulated dataset modeled on gene expression in human colon cancer.

**Cun-Hui Zhang**, Statistics Department, Rutgers University

**Abstract**: In many statistical problems, stochastic signals can be represented as a sequence of noisy wavelet coefficients. We develop a general empirical Bayes method for the estimation of the true wavelet coefficients. Our estimators possess the following main properties: (1) uniform ideal adaptivity in all Besov balls to the minimum risk of separable estimators, (2) universal exactly adaptive minimaxity in all Besov balls, and (3) full spatial adaptivity as a consequence of (1) and (2). In addition, our estimators are super-efficient in convergence rates at every point in the Besov spaces satisfying a mild condition on their shape parameters.
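For orientation, the simplest member of the class of separable (coordinatewise) estimators over which such oracle bounds are stated is the soft-thresholding rule; the short sketch below is background of my own choosing, not the empirical Bayes estimator of the talk:

```python
# Soft thresholding: each noisy wavelet coefficient is shrunk toward zero
# by lam, and coefficients smaller than lam are set exactly to zero.
import math

def soft_threshold(y, lam):
    """Apply coordinatewise soft thresholding to a list of coefficients."""
    return [math.copysign(max(abs(v) - lam, 0.0), v) for v in y]

noisy = [5.0, -3.2, 0.4, -0.1, 2.1]
denoised = soft_threshold(noisy, lam=0.5)
# small coefficients vanish; large ones survive, shrunk by lam
```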

**Colin Adams**, Williams College

**Abstract**: In 1978, William Thurston revolutionized low-dimensional topology with his realization that the vast majority of 3-manifolds are hyperbolic. Hyperbolic 3-manifolds have since played a fundamental role in many of the most important discoveries in low-dimensional topology over the last 22 years. What are they? Who are they? Where can you buy them? Do we live in one?

**David Muraki**, Simon Fraser University

**Abstract**: The most prominent features of the North American weather pattern, as highlighted in most weather segments on the TV news, are the (west-to-east) jetstream and vortical cells, the so-called "cyclones" and "anticyclones", of relatively low and high pressure. The time-dependent interaction of these vortices with the jetstream is what we experience as weather. Mathematical modelling of the atmosphere is really a problem of fluid mechanics, but of a particular sort in which density/temperature effects and the Coriolis force (due to the Earth's rotation) play dominant roles.

There is a recognized asymmetry, seen in both observations and computational models: cyclonic lows tend to be more intense and localized, while anticyclonic highs tend to be weaker and broader. However, the best-understood theory for atmospheric dynamics, known as "quasigeostrophy", is completely symmetric between cyclones and anticyclones. Thus, despite the pervasiveness of this asymmetry and its importance to atmospheric dynamics, the underlying fluid mechanics behind this bias remains poorly understood. Recent work has produced an asymptotic extension of quasigeostrophy, which is applied to this question of symmetry breaking in the atmosphere: why are low-pressure cells more intense than highs in the meteorology of the midlatitudes?

This work is in collaboration with C Snyder (NCAR), R Rotunno (NCAR) and G Hakim (Univ of Washington).

**Xiao-Tong Shen**, Statistics Department, Ohio State University

**Abstract**: Most model selection procedures use a fixed penalty penalizing an increase in the size of a model. These non-adaptive selection procedures perform well in one type of situation but not across a variety of situations. In this talk, we will present an adaptive model selection procedure, which uses a data-driven complexity penalty based on the concept of generalized degrees of freedom. The proposed procedure approximates the best performance of this class of procedures across a variety of different situations. This class includes many well-known procedures such as AIC, C_p, RIC and BIC. The proposed procedure is applied to variable selection in least squares regression and wavelet thresholding in nonparametric regression. Simulation results and asymptotic analysis support the effectiveness of the proposed procedure. This is joint work with J. Ye.
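For orientation, the fixed-penalty criteria named in the abstract all take the same standard form (stated here, as a reminder, for least squares with known noise level $\sigma^2$):

$$\hat m \;=\; \operatorname*{argmin}_m \Big(\mathrm{RSS}(m) + \lambda\, k_m\, \sigma^2\Big), \qquad
\lambda = 2 \ \ (\text{AIC},\ C_p), \quad \lambda = \log n \ \ (\text{BIC}), \quad \lambda = 2\log p \ \ (\text{RIC}),$$

where $k_m$ is the number of parameters in model $m$. The adaptive procedure of the talk replaces the fixed $\lambda$ by a data-driven penalty.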

**Helene Massam**, University of Virginia

**Abstract**: The Wishart distribution is the distribution of the sample covariance matrix for a saturated centered Gaussian model. We therefore often need the moments of this distribution. While the first two moments are well known, moments of higher order tend to have very complicated expressions that are impractical to use. Our aim is to give simple, usable expressions for certain moments. For example, if $S$ is a Wishart random matrix, we will present a simple algorithm to compute $E(S^n)$ and $E(S^{-n})$. We will also consider some moments for the hyper Wishart distribution and the Wishart distribution on homogeneous cones. These distributions arise as the distribution of the maximum likelihood estimate of the covariance matrix in graphical Gaussian models.
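For reference, the two well-known low-order moments alluded to above: if $S \sim W_p(n, \Sigma)$ with $n > p + 1$, then

$$E(S) = n\,\Sigma, \qquad E(S^{-1}) = \frac{\Sigma^{-1}}{n - p - 1};$$

it is the analogues for higher powers, $E(S^n)$ and $E(S^{-n})$, whose classical expressions become unwieldy.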

**Brenda MacGibbon**, UQAM

**Abstract**: *Not available*

**Richard Wentworth**, Johns Hopkins University

**Abstract**: This talk will be partly a survey of the mapping class group and its relation to the geometry of Teichmueller space. The emphasis will be on how subgroups act on the Thurston boundary, and what this means for harmonic map theory. Using these ideas, we will discuss new results on finiteness for homomorphisms from lattices in Lie groups to mapping class groups.

**Dexter Kozen**, Cornell University, CS

**Abstract**: Hoare logic, introduced by C. A. R. Hoare in 1969, was the first formal system for the specification and verification of well-structured programs. This pioneering work initiated the field of program correctness and has inspired hundreds of technical articles and books. For this achievement among others, Hoare received the Turing Award in 1980 and was recently knighted by Queen Elizabeth.

In this talk I will introduce a simple equational system called Kleene algebra with tests (KAT) for capturing the propositional (non-domain specific) part of Hoare logic (PHL). KAT consists of the algebra of regular expressions, familiar from automata theory, along with an embedded Boolean algebra. KAT has the following agreeable properties: (i) It subsumes PHL. (ii) It is purely equational; thus the specialized syntax and deductive apparatus of Hoare logic, involving partial correctness assertions, are inessential and can be replaced by ordinary equational reasoning. (iii) It is deductively complete over relational models (PHL is not). (iv) Its complexity is PSPACE-complete, the same as PHL.
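For concreteness, the subsumption in (i) works by encoding the partial correctness assertion $\{b\}\,p\,\{c\}$ of Hoare logic as an equation of KAT:

$$\{b\}\,p\,\{c\} \quad\Longleftrightarrow\quad b\,p\,\bar c \,=\, 0 \quad\Longleftrightarrow\quad b\,p \,=\, b\,p\,c,$$

read: starting in a state satisfying $b$, running $p$, and ending in a state violating $c$ is impossible.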

In practice, KAT has been used to verify communication protocols and low-level compiler optimizations. Proofs tend to be simple equational manipulations of code. I will illustrate its use with a few examples.

**Costas Pozrikidis**, University of California, San Diego

**Abstract**: Red blood cells are liquid capsules containing a viscous fluid that is enclosed by a biological membrane consisting of a lipid bilayer and a supporting network of proteins. In the absence of flow, the cells assume the shape of a biconcave disk. When subjected to flow, the cells deform in a way that is determined by the type and strength of the flow and by the mechanical properties of the membrane, which is known to behave like a viscoelastic sheet. In this talk, an integrated mathematical description of the equations governing the fluid dynamics and membrane mechanics is presented within the framework of low-Reynolds-number hydrodynamics coupled with the nonlinear theory of thin shells. The governing equations are solved using a novel implementation of the boundary element method that accounts for the membrane elasticity and bending stiffness. Parametric investigations demonstrate the significance of the membrane properties for cell deformability and for the magnitude of the developing membrane tensions, and thus establish a relationship between membrane structure and cell behavior in large-scale or capillary blood flow.

**Changfeng Gui**, University of Connecticut

**Abstract**: Gradient theory of phase transitions has been studied extensively in the last thirty years. It has been discovered that the interfaces are related to the phenomenon of motion by mean curvature as well as to minimal surfaces. In this talk I will discuss some fundamental questions related to gradient theory of phase transitions, in particular questions regarding the basic configuration near interfaces and multiple junctions. Some recent developments on a conjecture of De Giorgi will also be presented.

**Alexander Kurganov**, University of Michigan

**Abstract**: I will present new high-resolution adaptive central schemes for multidimensional systems of hyperbolic conservation laws. Their main building block is the semi-discrete central-upwind schemes, whose construction will be briefly described.

A new adaptive technique is based on the weak local truncation error indicator, which measures by how much the computed solution fails to satisfy the system of hyperbolic conservation laws, written in the equivalent weak form.

The adaptive central schemes are applied to the one- and two-dimensional Euler equations of gas dynamics. The numerical experiments clearly demonstrate the higher efficiency of the proposed method.
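The semi-discrete central-upwind schemes themselves are beyond a short sketch, but their simplest ancestor in the central-scheme family, the first-order Lax-Friedrichs scheme, already shows the basic structure. The code below is an illustrative stand-in of my own, not the scheme of the talk, applied to the 1D Burgers equation $u_t + (u^2/2)_x = 0$:

```python
# First-order Lax-Friedrichs central scheme on a periodic grid:
# u_i^{n+1} = (u_{i-1} + u_{i+1})/2 - (dt/2dx) (f(u_{i+1}) - f(u_{i-1})).
import math

def lax_friedrichs_burgers(u, dx, dt, steps):
    """Advance periodic cell values u under the Burgers flux f(u) = u^2/2."""
    f = lambda v: 0.5 * v * v
    n = len(u)
    for _ in range(steps):
        un = u[:]
        for i in range(n):
            l, r = un[i - 1], un[(i + 1) % n]
            u[i] = 0.5 * (l + r) - dt / (2 * dx) * (f(r) - f(l))
    return u

n, dx = 100, 1.0 / 100
u0 = [math.sin(2 * math.pi * i * dx) for i in range(n)]
u = lax_friedrichs_burgers(u0[:], dx, dt=0.004, steps=50)   # CFL = 0.4
```

Being conservative, the scheme preserves the total mass exactly on a periodic grid, and as a monotone scheme under the CFL condition it satisfies a maximum principle.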

**Lawrence C. Evans**, University of California, Berkeley

**Abstract**: I will discuss some ongoing work with D. Gomes concerning the 'effective Hamiltonian' associated with a Hamiltonian periodic in the position variable. This effective Hamiltonian somehow encodes information about the dynamics governed by the original Hamiltonian, and some new PDE methods help us understand these connections.

**Zhong-Hui Duan**, University of Michigan

**Abstract**: The most time-consuming part of molecular simulations is the evaluation of the nonbonded (Coulomb and van der Waals) interactions between particles. State-of-the-art methods include cut-off techniques, multipole methods and the particle-mesh Ewald method.

I will present a hybrid Ewald-multipole method for evaluating the long-range Coulomb interactions for periodic systems. The real part of the Ewald summation is sped up using a tree code, in which the interactions between particles and distant clusters are approximated using multipole expansions. The timings and accuracies will be reported for water systems. For isolated systems, I will present a tree code for the energy computation, in which the interactions between distant groups are approximated using multipole expansions.
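As an illustration of the tree-code idea only (a toy 1D construction of my own, not the speaker's method), the potential of a well-separated cluster can be replaced by its lowest multipole, the total charge placed at the centre of charge:

```python
# Toy 1D Barnes-Hut-style tree code: recurse on clusters, and use the
# monopole approximation q/d once a cluster is far enough away.
def potential(x, charges, positions, lo, hi, theta=0.3):
    """Potential at x due to particles with positions in [lo, hi)."""
    idx = [i for i, p in enumerate(positions) if lo <= p < hi]
    if not idx:
        return 0.0
    q = sum(charges[i] for i in idx)
    com = sum(charges[i] * positions[i] for i in idx) / q   # centre of charge
    d = abs(x - com)
    if d > 0 and (hi - lo) / d < theta:
        return q / d                      # well separated: monopole approximation
    if len(idx) <= 2:                     # small leaf: direct sum
        return sum(charges[i] / abs(x - positions[i]) for i in idx)
    mid = 0.5 * (lo + hi)                 # otherwise split the cluster
    return (potential(x, charges, positions, lo, mid, theta)
            + potential(x, charges, positions, mid, hi, theta))

positions = [0.05 + 0.1 * i for i in range(10)]   # ten unit charges in [0, 1)
charges = [1.0] * 10
x = 3.0
approx = potential(x, charges, positions, 0.0, 1.0)
direct = sum(c / abs(x - p) for c, p in zip(charges, positions))
```

With the opening parameter `theta = 0.3`, the clusters accepted for the monopole approximation are far enough away that the relative error stays well below one percent here.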

**Zhiquan Luo**, McMaster University

**Abstract**: In this talk, we will briefly review the theory of convex optimization and semidefinite programming, and then show how they can be used to solve some important digital communication problems efficiently. Examples will be given to show that certain nonconvex and semi-infinite spectral mask constraints can be reformulated as linear matrix inequalities. This facilitates the formulation of a diverse class of filter and beamformer design problems as semidefinite programmes. Our results can be considered as extensions of the well-known Positive-Real and Bounded-Real Lemmas from the systems and control literature.

**Aaron Fogelson**, University of Utah

**Abstract**: Thrombosis is the formation of clots within blood vessels and is the immediate cause of most heart attacks and many other severe cardiovascular problems. The main components of this process are platelet aggregation and coagulation. Platelet aggregation involves processes of cell-cell and cell-substrate adhesion and cell signaling and response, all within the moving blood. Coagulation involves a tightly-regulated network of enzyme reactions with the important feature that many of the key reactions occur on surfaces (e.g. platelet surfaces), not in the bulk fluid, while transport of the reactants occurs in the fluid. Coagulation results in the formation of a polymer mesh around the aggregating platelets. This talk will introduce several aspects of our efforts to model these complex dynamic biological systems, discuss some of the modeling and computational challenges in this work, describe some predictions made by the modeling, and outline plans for developing more comprehensive models with which to further probe these critical processes.

**Keith Devlin**, St. Mary's College, CA

**Abstract**: To most people, the word "geometry" conjures up an image of the pristine figures of Euclid. But the advent of computers and computer-graphics has enabled mathematicians to develop new geometries that provide a fresh understanding of the living world.

**Ruy Ribeiro**, Los Alamos, NM

**Abstract**: Elucidation of T cell turnover rates in vivo is crucial for a better understanding of the immune system, particularly in the context of human immunodeficiency virus (HIV) infection. I have been working in collaboration with experimental and clinical researchers to analyse the results of the most advanced techniques for in vivo labeling of T cells. The idea is to develop mathematical models for the interpretation of these experiments. In this colloquium, I will give an overview of this work, from both the experimental and theoretical perspectives. Specifically, I will document the increased turnover of blood T cells during infection, and the effect of antiretroviral therapy in re-establishing normal proliferation patterns.

**Charles Peskin**, Courant Institute, NYU

**Abstract**: The molecular machinery within biological cells operates in a regime in which Brownian motion is very important. Indeed, several such motors operate, at least in part, on the Brownian Ratchet principle, in which progress is locally random but forward motion is periodically "locked in" through the expenditure of chemical energy. This talk is concerned with a particular problem faced by Brownian Ratchet motors when they attempt, as biomolecular motors often do, to pull cargo much larger than themselves. In that case, one would think that the speed of the motor would be limited by the small diffusion coefficient of the large cargo, but this is not necessarily the case. Instead, a flexible linkage between the motor and its cargo allows for much more rapid transport than would otherwise be possible.

**Charles Peskin**, Courant Institute, NYU

**Abstract**: This talk is about a Virtual Heart, a computer model that can be used to study the mechanical function of the heart in health and disease. The model includes all four chambers of the heart and all four valves. It has muscular walls that contract and relax, and it is connected to large veins and arteries that are equipped with sources and sinks to simulate the rest of the circulation. The muscle, valves, and vessels of the Virtual Heart are all made of fibers that act as force generators on the fluid. Meanwhile, these fibers are being carried along at the local fluid velocity. The computational method that handles this interaction is called the Immersed Boundary Method. It will be briefly described, and results will be shown in the form of a computer-generated video animation of the beating heart.

**Piotr Minc**, Auburn University

**Abstract**: Suppose $\varphi: G_1\to G_0$ is a simplicial map between graphs and $h_0$ is an embedding of $G_0$ in the plane. It is not always true that $G_1$ can be embedded in the plane with an embedding arbitrarily close to $h_0\circ \varphi$. We will talk about combinatorial obstructions preventing such embeddings and give a full characterization in the case when $G_1$ is an arc.

**Ian Dinwoodie**, Tulane University

**Abstract**: A variety of foundational and computational statistical problems can be resolved with help from the algebra of polynomials. Markov Chain Monte Carlo simulation can be implemented with Groebner bases for certain types of integer data, and generating functions can be analyzed with normal forms with respect to a Groebner basis. Foundational issues such as sufficiency and parameter identifiability can be settled. We will give practical examples of these and other applications of commutative algebra in statistics.

**Ricardo Cortez**, Tulane University

**Abstract**: The approach of Lagrangian numerical methods is to cover a bounded domain with fluid particles and solve the PDEs that model fluid motion by tracking two things: (1) the trajectory of the particles as they move with the fluid and (2) the value of some function at the particle locations. This function may be the vorticity (rotation) in the fluid, the concentration of a solvent, or a more abstract quantity like the "impulse" in the fluid. This talk will be an overview of Lagrangian methods assuming no previous knowledge of the subject. It will include mathematical results, applications and current research directions.
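A minimal sketch of step (1), particle tracking, can be written in a few lines. Everything below is illustrative: the velocity field is an assumed rigid rotation, not a flow solved for as in the methods of the talk.

```python
# Track a fluid particle through the velocity field u(x, y) = (-y, x)
# (rigid rotation) using a midpoint (second-order Runge-Kutta) step.
import math

def rk2_step(p, dt):
    """One midpoint step of dx/dt = u(x) for a single particle p = (x, y)."""
    u = lambda x, y: (-y, x)              # assumed velocity field
    x, y = p
    kx, ky = u(x, y)
    mx, my = x + 0.5 * dt * kx, y + 0.5 * dt * ky
    kx, ky = u(mx, my)                    # velocity at the midpoint
    return (x + dt * kx, y + dt * ky)

p = (1.0, 0.0)
dt = 2 * math.pi / 6000
for _ in range(6000):                     # one full revolution
    p = rk2_step(p, dt)
```

After a full period of the rotation the particle returns very close to its starting point, which is a quick sanity check on the tracking scheme.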

**Herbert Medina**, Loyola Marymount

**Abstract**: We will present a sequence of polynomials in $\mathbb{Q}[x]$, arising from a simple family of rational functions, that approximates uniformly the classical function $\arctan(x)$ on $[0,1]$ (and hence, via standard identities, on all of $\mathbb{R}$). The sequence is attractive and interesting for several reasons including its rate of convergence, its simplicity, its rational approximations to $\pi$, and because its members match the derivatives of $\arctan(x)$ at both 0 and 1.
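The talk's specific polynomial family is not reproduced here, but a classical relative illustrates the same circle of ideas linking rational functions on $[0,1]$, $\arctan$, and rational approximations of $\pi$: the integral $\int_0^1 x^4(1-x)^4/(1+x^2)\,dx$ equals exactly $22/7 - \pi$, which both proves $22/7 > \pi$ and recovers $\pi$ from a rational-function integral.

```python
# Verify numerically that the integral of x^4 (1-x)^4 / (1 + x^2)
# over [0, 1] equals 22/7 - pi, using composite Simpson quadrature.
import math

def f(x):
    return x**4 * (1 - x)**4 / (1 + x * x)

def simpson(g, a, b, n=1000):
    """Composite Simpson rule with n (even) subintervals."""
    h = (b - a) / n
    s = g(a) + g(b)
    s += 4 * sum(g(a + (2 * i - 1) * h) for i in range(1, n // 2 + 1))
    s += 2 * sum(g(a + 2 * i * h) for i in range(1, n // 2))
    return s * h / 3

integral = simpson(f, 0.0, 1.0)
pi_approx = 22 / 7 - integral     # pi, up to the Simpson quadrature error
```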

Mathematics Department, 424 Gibson Hall, New Orleans, LA 70118 504-865-5727 math@math.tulane.edu