
Colloquia 2014-2015

Below is the list of talks in the computer science seminar series for the 2014-2015 academic year. If you would like to receive notices about upcoming seminars, you can subscribe to the announcement listserv.

Fall 2014 Colloquia

September 15

Packing and Stacking: Algorithms and Complexity

Helmut Alt, Freie Universität Berlin

Abstract: We consider the problem of placing geometric objects in two dimensions so that they occupy an area as small as possible. "Packing" means that the objects may overlap, "stacking" that they may not. Packing has important applications, e.g., in the clothing and steel industries. Reasonably efficient algorithms can be found if the number of objects to be placed is constant. If not, even the most simple variants of the problem are NP-hard, so it is important to find efficient approximation algorithms. In the talk, we will give an introduction into the problem, present its numerous variants, demonstrate its computational complexity, and present some approximation algorithms.
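
For intuition about the approximation algorithms mentioned above, here is a minimal sketch of Next-Fit Decreasing Height, a classic textbook strip-packing heuristic (a generic Python illustration, not an algorithm from the talk):

```python
def shelf_pack(rects, strip_width):
    """Next-Fit Decreasing Height: sort rectangles by height, then fill
    shelves left to right, opening a new shelf when the current one is full.
    rects: list of (width, height); assumes every width <= strip_width."""
    placements = []                          # (x, y, width, height) per rectangle
    x = shelf_y = shelf_h = 0.0
    for w, h in sorted(rects, key=lambda r: -r[1]):
        if x + w > strip_width:              # current shelf full: open a new one
            shelf_y += shelf_h
            x = shelf_h = 0.0
        placements.append((x, shelf_y, w, h))
        x += w
        shelf_h = max(shelf_h, h)
    return placements, shelf_y + shelf_h     # placements and total strip height

print(shelf_pack([(0.5, 0.6), (0.4, 0.4), (0.6, 0.3), (0.3, 0.2)], 1.0)[1])
```

Sorting by decreasing height bounds the vertical space wasted per shelf, which is what makes constant-factor approximation guarantees possible for heuristics of this kind.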


About the Speaker: Helmut Alt is a professor of computer science at Freie Universität Berlin, Germany. He obtained his doctorate from Universität des Saarlandes in 1976. He taught computer science at Universität Kaiserslautern, the Pennsylvania State University, Hochschule Hildesheim, and, since 1986, at Freie Universität. Dr. Alt's research is concerned with algorithms and data structures, in particular computational geometry, with an emphasis on geometric algorithms for pattern and shape comparison and analysis.
October 6

The New 'Shape' of Data and Its Application in Interactive Data Exploration

Chao Chen, Rutgers University

Abstract: Biomedical imaging informatics has developed rapidly over the past few decades. While many models rely on prior information about the shape of the target object or data, many new applications have recently arisen in which such a shape prior is not available. These 'shapeless' data include neuron structures from Electron Microscopy (EM) images and the fine trabecular muscle structures within the human heart. When working at the macroscopic level, we face challenges in capturing the shape of data of extremely high dimension.

In this talk, I will discuss how to explore new 'shapes' of these data. To start with, we show that the topology of the data, e.g., handles and connected components, can provide very useful priors. Furthermore, we show that the topographical landscape of the data, namely its mountains and valleys, can help us in data acquisition and analysis.

The new shapes of data that we present also provide new opportunities for interactive exploration. In EM images, human interaction is inevitable because of the extremely high accuracy required of the extracted structures. In high-dimensional data analysis, showing the shape of the data enables domain experts to explore the data and form new hypotheses to verify.

Results in this talk have been published in NIPS, IPMI, MICCAI and CVPR.


About the Speaker: Dr. Chen obtained his PhD degree from Rensselaer Polytechnic Institute. He worked on computational topology with Prof. Herbert Edelsbrunner. Currently he is working with Prof. Dimitris Metaxas at Rutgers University on biomedical imaging informatics and machine learning.
October 13

Computational Thinking

Jeannette Wing, Microsoft Research


Please note that this event will be held at Freeman Auditorium at 3:00 p.m. The Department of Computer Science is co-hosting this event with the Newcomb College Institute as part of the Dorothy K. Daspit Lecture Series.

Abstract: My vision for the 21st Century: Computational thinking will be a fundamental skill used by everyone in the world. To reading, writing, and arithmetic, we should add computational thinking to every child's analytical ability. Computational thinking involves solving problems, designing systems, and understanding human behavior by drawing on the concepts fundamental to computer science. Thinking like a computer scientist means more than being able to program a computer. It requires the ability to abstract and thus to think at multiple levels of abstraction. In this talk I will give many examples of computational thinking, argue that it has already influenced other disciplines, and promote the idea that teaching computational thinking can not only inspire future generations to enter the field of computer science, but benefit people in all fields.


About the Speaker: Jeannette M. Wing is Corporate Vice President, Microsoft Research. She joined MSR in 2013 from Carnegie Mellon University where she was President's Professor of Computer Science and twice served as the Head of the Computer Science Department. From 2007-2010 she was the Assistant Director of the Computer and Information Science and Engineering Directorate at the National Science Foundation.
October 20

FDR into The Cloud


A.W. Roscoe, University of Oxford

Abstract: In this talk I will report on a successful extension to the CSP refinement checker FDR3 that permits it to run on clusters of machines. FDR3 is able to scale linearly up to clusters of 64 16-core machines (i.e., 1024 cores), achieving an overall speed-up of over 1000 relative to the sequential case. This speed-up was observed on dedicated supercomputer facilities and, more notably, on a commodity cloud computing provider. This enhancement has enabled the verification of a system of 10^12 states, which we believe to be, by several orders of magnitude, the largest refinement check ever attempted.
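
As a toy illustration of the refinement question FDR answers (FDR itself handles full CSP semantics, plus the compression and parallelism described above), a trace-refinement check for small deterministic transition systems might look like this hypothetical Python sketch:

```python
from collections import deque

def trace_refines(spec, impl, s0, i0):
    """Check Traces(impl) is a subset of Traces(spec) for two finite,
    deterministic labelled transition systems, each given as a dict:
    state -> {event: successor}. Returns a counterexample trace of impl
    that spec cannot match, or None if the refinement holds."""
    seen = {(i0, s0): ()}
    queue = deque([(i0, s0)])
    while queue:
        i, s = queue.popleft()
        for event, i2 in impl[i].items():
            trace = seen[(i, s)] + (event,)
            if event not in spec[s]:
                return list(trace)       # impl performs an event spec cannot
            pair = (i2, spec[s][event])
            if pair not in seen:
                seen[pair] = trace
                queue.append(pair)
    return None

# SPEC allows any number of a's then a b; IMPL does an a then a forbidden c.
SPEC = {"p": {"a": "p", "b": "q"}, "q": {}}
IMPL = {"x": {"a": "y"}, "y": {"c": "y"}}
print(trace_refines(SPEC, IMPL, "p", "x"))   # -> ['a', 'c']
```

Distributing this kind of reachable-pair exploration across machines is, roughly speaking, what allows the check to parallelize so well.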

About the Speaker: Bill Roscoe is Professor of Computing Science at the University of Oxford, UK, and a Fellow of the Royal Academy of Engineering. Bill is the foremost expert on the process calculus CSP, which was devised by Tony Hoare together with two of his students, Bill and Steve Brookes. FDR is the failures-divergences refinement model checker for CSP processes; it was designed and implemented by Bill and his team. CSP and FDR are used to design, specify, refine, and verify concurrent processes, and they have been applied effectively in a broad range of areas, from VLSI design to security protocol verification. Bill recently stepped down as Head of the Computer Science department at Oxford, after two terms during which he oversaw the rapid growth and expansion of the department.



November 17

Wrinkles on Everest: Persistence and Stability in an Omniscalar World

Dmitriy Morozov, Lawrence Berkeley National Laboratory

Abstract: In the last decade, persistent homology emerged as a particularly active topic within the young field of computational topology. Homology is a topological invariant that counts the number of cycles in the space: components, loops, voids, and their higher-dimensional analogs. Persistence keeps track of the evolution of such cycles and quantifies their longevity. By encoding physical phenomena as real-valued functions, one can use persistence to identify their significant features.

This talk is an introduction to the subject, discussing the settings in which persistence is effective as well as the methods it employs. As examples, we consider problems of scalar field simplification and non-linear dimensionality reduction. The talk will sketch the budding field of topological data analysis and its future directions.
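
A minimal sketch of the 0-dimensional case, assuming sublevel sets of a 1-D scalar field: as the threshold rises, valleys appear and merge, and when two components meet, the shallower (younger) one dies (illustrative Python, not the talk's software):

```python
def persistence0d(f):
    """0-dimensional persistence of a 1-D scalar field: track connected
    components of sublevel sets as the threshold rises. Returns a list of
    (birth, death) value pairs for components that merge."""
    order = sorted(range(len(f)), key=lambda i: f[i])
    parent, birth, pairs = {}, {}, []

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path compression
            i = parent[i]
        return i

    for i in order:
        parent[i], birth[i] = i, i
        for j in (i - 1, i + 1):            # union with already-added neighbors
            if j in parent:
                ri, rj = find(i), find(j)
                if ri == rj:
                    continue
                if f[birth[ri]] < f[birth[rj]]:
                    ri, rj = rj, ri         # ri is the younger component
                if f[birth[ri]] < f[i]:     # skip zero-persistence pairs
                    pairs.append((f[birth[ri]], f[i]))
                parent[ri] = rj
    return pairs                            # the global minimum never dies

print(persistence0d([3, 1, 4, 0, 2]))       # -> [(1, 4)]
```

The pair (1, 4) records a valley born at height 1 that survives until it merges past the peak of height 4; long-lived pairs like this are the "significant features" that persistence identifies.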


About the Speaker: Dmitriy is a research scientist in the Computational Research Division of the Lawrence Berkeley National Laboratory. Before coming to LBNL he was a postdoctoral scholar in the Departments of Computer Science and Mathematics at Stanford University. Dmitriy received his Ph.D. in Computer Science from Duke University in 2008. Dmitriy's primary research interests are in computational topology and geometry, especially as these fields apply to data analysis.
November 19

The Technology of Animation

Lincoln Wallen, Chief Technology Officer, DreamWorks Animation SKG

This colloquium, offered as the 2014 James Mead Lecture (honoring Tulane alumnus James Mead), will be held on Wednesday, November 19th, in Room 105 of the Boggs Center for Energy and Biotechnology. The hour-long lecture will begin at 4:30 p.m. and will be followed by a 30-minute Q&A session. Please note the special weekday, time, and venue for this event.

Abstract: Media has long been a primary driver of personal computing technology, including the chipsets for CPU, graphics, and radio interfaces. Full computer-generated (CG) movies represent these trends at enterprise scale and drive innovation in storage, servers, and cloud software. Over the last 20 years, DreamWorks Animation has been a leader in producing award-winning CG animated movies such as Shrek, Madagascar, and How to Train Your Dragon 2. Lincoln Wallen, CTO, will give insight into the studio's next-generation production platform, named Apollo, which unites the worlds of micro-computing (scalable multicore) and macro-computing (cloud), pointing the way toward a new approach to product design.


About the Speaker: Lincoln Wallen serves as Chief Technology Officer for DreamWorks Animation and is responsible for providing strategic technology vision and leadership for the company.

He joined DreamWorks Animation in 2008 as Head of Research and Development, where he oversaw the strategic vision, creation, and deployment of the studio's CG production platform and software tools.

Wallen is a recipient of InfoWorld's 2012 "Technology Leadership Awards," the prestigious annual award that honors senior executives who have demonstrated creative, effective leadership in inventing, managing, or deploying technology within their organizations or in the IT community. Under his leadership, DreamWorks Animation was named to the 50 Most Innovative Companies list in MIT's Technology Review.

Prior to joining DreamWorks Animation, Wallen was Chief Technology Officer for Online and Mobile Services at Electronic Arts, where he led the gaming company's technical approach to publishing and videogame development for cell phones. He also served as Vice President at Criterion Software, Ltd, and as Chief Technology Officer for MathEngine.

Before joining the entertainment industry, Wallen enjoyed a distinguished career as a Professor at Oxford University, and was the first Director for the multidisciplinary Smith Institute for Industrial Mathematics and Systems Engineering. He holds a BS in Mathematics and Physics from the University of Durham, United Kingdom, and a PhD in Artificial Intelligence, as well as an honorary doctorate from the University of Edinburgh, United Kingdom.


November 20

Sampling-Based Motion Planning: From Intelligent CAD to Crowd Simulation to Protein Folding

Nancy M. Amato, Texas A&M University

This event will be held on Thursday, 11/20/2014, at 11:00 a.m. in Stanley Thomas, Room 302. Please note the special weekday and time for this event. Dr. Amato visits as part of the ACM Distinguished Speakers Program; the event is co-hosted by the Newcomb College Institute.

Abstract: Motion planning arises in many application domains such as computer animation (digital actors), mixed reality systems and intelligent CAD (virtual prototyping and training), and even computational biology and chemistry (protein folding and drug design). Surprisingly, one type of sampling-based planner, the probabilistic roadmap method (PRM), has proven effective on problems from all these domains.

In this talk, we describe the PRM framework and give an overview of some PRM variants developed in our group. We describe in more detail our work related to virtual prototyping, crowd simulation, and protein folding. For virtual prototyping, we show that in some cases a hybrid system incorporating both an automatic planner and haptic user input leads to superior results. For crowd simulation, we describe PRM-based techniques for pursuit evasion, evacuation planning and architectural design. Finally, we describe our application of PRMs to simulate molecular motions, such as protein and RNA folding. More information regarding our work, including movies, can be found at http://parasol.tamu.edu/~amato/.
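
For readers new to PRMs, here is a minimal sketch of the basic method for a 2-D point robot among circular obstacles (an illustrative toy, not the group's implementation):

```python
import math, random
from collections import deque

def free(p, q, obstacles, step=0.01):
    """Straight-line local planner: sample along the segment from p to q."""
    n = max(1, int(math.dist(p, q) / step))
    for t in range(n + 1):
        x, y = (p[0] + (q[0] - p[0]) * t / n, p[1] + (q[1] - p[1]) * t / n)
        if any(math.dist((x, y), (cx, cy)) <= r for cx, cy, r in obstacles):
            return False
    return True

def prm(start, goal, obstacles, n_samples=200, k=8, seed=0):
    rng = random.Random(seed)
    nodes = [start, goal]
    while len(nodes) < n_samples:                  # 1. sample free configurations
        p = (rng.random(), rng.random())
        if free(p, p, obstacles):
            nodes.append(p)
    adj = {i: [] for i in range(len(nodes))}
    for i, p in enumerate(nodes):                  # 2. connect k nearest neighbors
        for j in sorted(range(len(nodes)), key=lambda j: math.dist(p, nodes[j]))[1:k + 1]:
            if free(p, nodes[j], obstacles):
                adj[i].append(j)
                adj[j].append(i)
    prev, queue = {0: None}, deque([0])            # 3. graph search: start to goal
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in prev:
                prev[v] = u
                queue.append(v)
    if 1 not in prev:
        return None
    path, u = [], 1
    while u is not None:
        path.append(nodes[u])
        u = prev[u]
    return path[::-1]

print(prm((0.05, 0.05), (0.95, 0.95), [(0.5, 0.5, 0.2)]))
```

The same three steps (sample configurations, connect neighbors with a simple local planner, then search the roadmap) carry over to high-dimensional configuration spaces, which is what makes the framework applicable to crowds and protein folding alike.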

About the Speaker: Nancy M. Amato is the Unocal Professor of Computer Science and Engineering at Texas A&M University, where she co-directs the Parasol Lab. Her main research areas are motion planning and robotics, computational biology and geometry, and parallel and distributed computing. She has graduated 16 PhD students (all but two going on to careers in academia or research labs) and 25 master's students, and has worked with more than 100 undergraduates and 10 high school students.

Amato received undergraduate degrees in Mathematical Sciences and Economics from Stanford University, and M.S. and Ph.D. degrees in Computer Science from UC Berkeley and the University of Illinois, respectively. She was an AT&T Bell Laboratories PhD Scholar, received an NSF CAREER Award, is an ACM Distinguished Speaker, and was an IEEE RAS Distinguished Lecturer (2006-2007). Amato is program chair for IEEE ICRA 2015 and RSS 2016, and served as the Editor-in-Chief of the IEEE/RSJ IROS CPRB (2011-2013). She is a member (elected) of the CRA Board of Directors (2014-2017), is co-Chair of CRA-W (2014-2017), was co-chair of the NCWIT Academic Alliance (2009-2011), and is a member of the CRA-E and the CDC.

Amato received the 2014 CRA Habermann Award, the inaugural NCWIT Harrold/Notkin Research and Graduate Mentoring Award in 2014, the IEEE HP/Harriet Rigas Award in 2013, a Texas A&M AFS university-level teaching award in 2011, and the Unterberger Award for Outstanding Service to Honors Education at Texas A&M in 2013. She is an AAAS Fellow and an IEEE Fellow.

Spring 2015 Colloquia

January 13

What AI Can Do for Multi-Item Sentiment Analysis

Francesca Rossi, University of Padova, Italy


This is joint work with Andrea Loreggia, Umberto Grandi, and Vijay A. Saraswat. This event will be held on Tuesday, January 13, at 3:30 p.m. in Stanley Thomas 302. Please note the special weekday and time for this event.

Abstract: Sentiment analysis assigns a positive, negative, or neutral polarity to an item or entity, extracting and aggregating individual opinions from their textual expressions by means of natural language processing tools. It then aggregates the individuals' opinions into a collective sentiment about the item under consideration. Current sentiment analysis techniques are satisfactory when there is a single entity under consideration, but can lead to inaccurate or wrong results when dealing with a set of possibly correlated items. Handling such sets correctly is useful, for example, when a company wants to know the collective preference order over a set of products, or when we want to predict the outcome of an election over a collection of candidates. We argue that, in order to deal with this more general setting, we should exploit AI techniques such as those used in preference reasoning, multi-agent systems, and computational social choice. Preference modelling and reasoning tools provide the ingredients to model individuals' preferences faithfully, while computational social choice techniques give aggregation methods that satisfy certain desired properties. Other AI techniques can be very useful as well, such as machine learning or recommender system tools to cope with incompleteness in the information provided by each individual. We describe a social choice aggregation rule which combines individuals' sentiment and preference information. We show that this rule satisfies a number of properties which have a natural interpretation in the sentiment analysis domain, and we evaluate its behavior when faced with highly incomplete domains.
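
As a naive baseline for the aggregation step (not the rule described in the talk, which also exploits preference and social-choice structure), one could simply sum per-item polarities across individuals:

```python
from collections import defaultdict

def naive_aggregate(opinions):
    """opinions: list of dicts mapping item -> polarity in {-1, 0, +1}.
    Returns items ranked by summed sentiment. This ignores correlations
    between items, which is precisely the weakness the talk addresses."""
    score = defaultdict(int)
    for opinion in opinions:
        for item, polarity in opinion.items():
            score[item] += polarity
    return sorted(score, key=score.get, reverse=True)

print(naive_aggregate([{"A": 1, "B": -1}, {"A": 1, "B": 0}, {"B": 1}]))  # ['A', 'B']
```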


About the Speaker: Francesca Rossi is a professor of computer science at the University of Padova, Italy. Currently she is on sabbatical at Harvard with a Radcliffe fellowship. Her research interests include constraint reasoning, preferences, multi-agent systems, computational social choice, and artificial intelligence. She has been president of the international association for constraint programming (ACP) and she is now the president of IJCAI. She has been program chair of CP 2003 and of IJCAI 2013. She is on the editorial boards of Constraints, Artificial Intelligence, AMAI, and KAIS, and will be Editor-in-Chief of JAIR starting January 2015. She has published over 160 articles in international journals, in proceedings of international conferences or workshops, and as book chapters. She has co-authored a book, edited 16 volumes (conference proceedings, collections of contributions, and special issues of international journals), and co-edited the Handbook of Constraint Programming (Elsevier, 2006). Her h-index is 27, her most cited papers have more than 560 citations, and she has more than 100 co-authors.
January 23

A Public-Key Cryptosystem Based on an Undecidable Problem

Neal R. Wagner, University of Texas at San Antonio (Retired)

Abstract: First I will introduce public-key cryptosystems (PKCs). A famous example is the early Merkle-Hellman proposal to use the NP-complete problem Subset Sum (SS) as the basis for a PKC. To encrypt, one forms the sum of a subset of given numbers, but decryption requires solving SS and finding the numbers that made up the sum. For a PKC one creates an instance of SS with a "trapdoor" -- a secret design that allows an instance to be decrypted efficiently. This was the first serious proposal for a PKC; eventually it was broken in its full generality. Then I will talk briefly about the cryptosystem of Rivest, Shamir, and Adleman (RSA), which is based on the problem of factoring a product of primes (FACTOR). FACTOR is in NP but conjectured not to be NP-complete. Nevertheless, it is intractable right now (until we get quantum computers).

I was the first person to propose using an undecidable problem as the basis for a PKC. The problem I used is the undecidable word problem for finitely presented groups (WPG). First I'll discuss the general Word Problem (WP), which is fairly easily shown to be undecidable. Then I will introduce the extra structure that turns WP into a problem about groups. A finitely presented group consists of finitely many symbols called generators (each with an inverse) together with a collection of relators, each of which is a string of generators (and inverses) that is equal in the group to the empty string (the identity element). WPG asks whether a given string of generators and inverses can be transformed to the empty string. The talk will give a number of definitions needed to define these groups precisely; they are very complex entities.

Then I show how to encrypt a single bit with this PKC. This is a randomized PKC, with infinitely many ciphertexts corresponding to each plaintext bit. In general, decryption would be undecidable, but I will show a way to add extra secret relators to the group to make a new group in which decryption is efficient. The talk requires no prior special knowledge.
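
The Merkle-Hellman trapdoor sketched above fits in a few lines (toy parameters; the scheme is broken and is shown only to make the trapdoor idea concrete):

```python
import math, random

def keygen(n=8, seed=1):
    rng = random.Random(seed)
    w, total = [], 0
    for _ in range(n):                       # superincreasing private sequence
        x = total + rng.randint(1, 10)
        w.append(x)
        total += x
    q = total + rng.randint(1, 10)           # modulus larger than the sum
    r = next(x for x in range(2, q) if math.gcd(x, q) == 1)   # multiplier
    b = [(r * wi) % q for wi in w]           # public key: structure disguised
    return b, (w, q, r)

def encrypt(bits, b):
    return sum(bi for bit, bi in zip(bits, b) if bit)   # a Subset Sum instance

def decrypt(c, priv):
    w, q, r = priv
    c = (c * pow(r, -1, q)) % q              # trapdoor: undo the disguise
    bits = []
    for wi in reversed(w):                   # greedy works: w is superincreasing
        bits.append(1 if c >= wi else 0)
        if bits[-1]:
            c -= wi
    return bits[::-1]

pub, priv = keygen()
print(decrypt(encrypt([1, 0, 1, 1, 0, 0, 1, 0], pub), priv))
```

An eavesdropper sees only the disguised Subset Sum instance; the key holder converts it back to a superincreasing instance that a greedy pass solves in linear time.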


About the Speaker: Neal Wagner retired as Associate Professor from the University of Texas at San Antonio. He received a Ph.D. in mathematics from the University of Illinois at Urbana-Champaign in 1970, specializing in topology. He also taught at the University of Texas at El Paso, the University of Houston, and Drexel University. In 1973 he was seduced by the dark side and started looking at computer science. During 1974-76 he worked as a programmer and programming supervisor on real-time simulation of the Space Shuttle at Johnson Space Center. Later, he specialized in cryptography and database security.
February 2

Rigorous OS Design: An Idea Whose Time Has Come

Leonid Ryzhyk, Carnegie Mellon University

Abstract: Modern operating systems consist of millions of lines of low-level code, which is hard to understand, maintain, and validate. In this talk I will advocate a rigorous approach to OS design as a way to overcome the problem. I argue that the time has come to rethink OS design on a solid mathematical foundation. This long-sought vision is finally becoming feasible due to recent advances in programming languages, model checking, and interactive theorem proving. I will outline a high-level methodology for rigorous OS design, consisting of three components: (1) a theoretical foundation for formal reasoning about OS behaviour at a level of abstraction much higher than C code; (2) a software ecosystem of domain-specific languages and tools that provide a programmer-friendly interface to the theory; and (3) an OS implementation built and verified on top of this ecosystem. As a concrete instantiation of this methodology, I will present the Termite project, which has developed the first tool for automatic synthesis of correct-by-construction device drivers. In Termite, driver behavior is formalized in terms of discrete control theory, and drivers are automatically synthesized using controller synthesis techniques. I will show how the connection to control theory enables efficient formal reasoning about complex low-level software. The talk will conclude with a brief demo of Termite.


About the Speaker: Leonid Ryzhyk is a postdoctoral researcher in the Carnegie Mellon University School of Computer Science. Leonid's research focuses on operating systems, in particular on using formal methods to build better operating systems. He received his PhD from the University of New South Wales in 2010. Prior to joining CMU, Leonid worked as a researcher at NICTA and as a postdoctoral fellow at the University of Toronto.
February 9

Distinguishing Cylinders from Moebius Bands: Projections, Intersections, and Inflections

Thomas Banchoff, Sewanee: The University of the South


This event is offered as a joint Math/Computer Science colloquium.

Abstract: On a smooth or polyhedral surface, a strip neighborhood of a closed curve is either an orientable cylinder or a non-orientable Moebius band. How can we distinguish which form it has by observing singular fold chains of projections to planes, self-intersection curves of immersions (and non-immersions) into three-space, or a new phenomenon, inflection faces? This talk gives an introduction to the geometry of characteristic classes, and it features computer graphics images and animations that will make it accessible to novice and expert alike.
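
For concreteness, the two strips in the title admit standard parametrizations (a textbook illustration, not taken from the talk):

```latex
% Cylinder: no twist, orientable.
\[
C(u,v) = (\cos u,\; \sin u,\; v), \qquad u \in [0, 2\pi),\; v \in [-1, 1].
\]
% Moebius band: the half-angle u/2 produces a single half-twist, so the
% surface is non-orientable.
\[
M(u,v) = \Bigl( \bigl(1 + \tfrac{v}{2}\cos\tfrac{u}{2}\bigr)\cos u,\;
                \bigl(1 + \tfrac{v}{2}\cos\tfrac{u}{2}\bigr)\sin u,\;
                \tfrac{v}{2}\sin\tfrac{u}{2} \Bigr).
\]
```

Following the core circle once reverses the Moebius band's normal direction; the fold chains and self-intersection curves discussed in the talk are ways of detecting that reversal from projections.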


About the Speaker: Thomas Banchoff received his PhD under Prof. Shiing-Shen Chern at UC Berkeley in 1964 and he has been teaching ever since, mainly at Brown University. He has received a number of local and national teaching awards and he was president of the MAA in 1999-2000. He is the author of four books and eighty articles on geometry and topology and computer graphics. He gave an invited address at the International Congress of Mathematicians in Helsinki in 1978 on "Computer Animation and the Geometry of Surfaces in Three- and Four-Space" and he was named a fellow of the American Mathematical Society in 2012.
February 25

Tangible Visualization and ICy STEAM: Interactive Tools for Complex Computational Systems

Brygg Ullmer, Louisiana State University


This event will be held on Wednesday, 2/25/15, at 4:00 p.m. in Stanley Thomas 302. Please note the special date and time for this event.

Abstract: Scientific and information visualization have long offered powerful approaches for helping people graphically represent, explore, and understand our universe. Our group investigates tangible visualization. Here, tangible interfaces (which support interaction through systems of computationally-mediated physical artifacts) are used to interactively represent complex systems. We approach this research from two perspectives: studying and engaging specific application domains; and developing underlying architectures supporting realizations of tangible visualization. In this talk, we will consider several applications engaging interactive computational science, technology, engineering, arts, and mathematics (ICy STEAM), with some emphasis on tangible visualization in the context of computational genomics. We will also discuss opportunities and trajectories relating to computational systems research; and briefly consider several intersections between high-performance computing, machine learning, and future computational systems.


About the Speaker: Brygg Ullmer is the Effie C. and Donald M. Hardy associate professor at LSU, jointly in the School of Electrical Engineering and Computer Science (EECS) and the Center for Computation and Technology (CCT). He leads CCT's Cultural Computing focus area (research division), with 16 faculty spanning six departments, and co-leads the Tangible Visualization group. He also serves as director for the NIH-supported Louisiana Biomedical Research Network (LBRN) Bioinformatics, Biostatistics, and Computational Biology (BBC) Core. Ullmer completed his Ph.D. at the MIT Media Laboratory (Tangible Media group) in 2002, where his research focused on tangible user interfaces. He held a postdoctoral position in the visualization department of the Zuse Institute Berlin, internships at Interval Research (Palo Alto) and Sony CSL (Tokyo), and has been a visiting and remote lecturer at Hong Kong Polytechnic's School of Design. His research interests include tangible interfaces, computational genomics (and more broadly, interactive computational STEAM), visualization, and rapid physical and electronic prototyping. He also has a strong interest in computationally-mediated art, craft, and design, rooted in the traditions and material expressions of specific regions and cultures. For more information, please visit https://cc.cct.lsu.edu/groups/tangviz/.
February 26

SAT-based Techniques for Reactive Synthesis

Nina Narodytska, Carnegie Mellon University


This event will be held on Thursday, February 26, at 2:00 p.m. in Stanley Thomas 302. Please note the special weekday and time for this event.

Abstract: Two-player games are a useful formalism for the synthesis of reactive systems. The traditional approach to solving such games iteratively computes the set of winning states for one of the players. This requires keeping track of all discovered winning states and can lead to space explosion even when using efficient symbolic representations. I will present a new game-solving method that works by exploring a subset of all possible concrete runs of the game and proving that these runs can be generalized into a winning strategy on behalf of one of the players. We use counterexample-guided backtracking search to identify a subset of runs that is sufficient to solve the game. Our search procedure learns from both players' failures, gradually shrinking the winning regions for both players. In addition, it allows extracting compact winning/spoiling strategies using interpolation procedures. We evaluate our algorithm on several families of benchmarks derived from real-world device driver synthesis problems; our approach outperforms state-of-the-art BDD solvers on some of these families. We also demonstrate that our strategy extraction procedure is efficient and introduces low overhead on top of the main solver.
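
The "traditional approach" that the abstract contrasts with can be sketched as a classic attractor fixpoint over an explicit game graph (a schematic Python illustration, not the talk's algorithm):

```python
def attractor(states, edges, controlled, target):
    """States from which player 0 can force the play into `target`.
    edges: state -> list of successors; `controlled` holds the states
    where player 0 chooses the move (the adversary moves elsewhere)."""
    win = set(target)
    changed = True
    while changed:
        changed = False
        for s in states:
            if s in win:
                continue
            succs = edges[s]
            ok = (any(t in win for t in succs) if s in controlled
                  else all(t in win for t in succs))
            if ok and succs:
                win.add(s)
                changed = True
    return win

edges = {1: [2, 3], 2: [4], 3: [3], 4: [4]}
print(attractor({1, 2, 3, 4}, edges, controlled={1}, target={4}))  # {1, 2, 4}
```

The fixpoint must represent the whole winning set explicitly (or symbolically, e.g., as BDDs), which is the space explosion the talk's run-based method is designed to avoid.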


About the Speaker: Nina Narodytska is a postdoctoral researcher in the Carnegie Mellon University School of Computer Science. She received her PhD from the University of New South Wales in 2011. Nina works on developing efficient search algorithms for decision and optimization problems. She was named one of "AI's 10 to Watch" young researchers in the field of AI. She also received an Outstanding Paper Award at AAAI 2011 and an outstanding program committee member award at the Australasian Joint Conference on Artificial Intelligence 2012. Her EvaMaxSAT solver was a winner of the Ninth Evaluation of Max-SAT Solvers in 2014.
March 5

Expeditions in Applied High-Performance Distributed Computing

Shantenu Jha, Rutgers University


This event will be held on Thursday, March 5th, at 3:30 p.m. in Stanley Thomas 302. Please note the special weekday and time for this event.

Abstract: To support the science and engineering applications that underlie many societal and intellectual challenges in the 21st century, there is a need for comprehensive, balanced, and flexible distributed cyberinfrastructure (DCI). The process of designing and deploying such large-scale distributed cyberinfrastructure, however, presents a critical and challenging research agenda along at least two dimensions: conceptual and implementation challenges. The first challenge is the ability to architect and federate large-scale distributed resources so that their collective performance can be designed and predicted, along with the ability to plan and reason about executing distributed workloads. The second challenge is to produce tools that provide a step change in the sophistication and scale of problems that can be investigated using DCI, while being extensible, easy to deploy and use, and compatible with a variety of other established tools. In the first part of the talk we will discuss how we are laying the foundations for the design and architecture of the next generation of distributed cyberinfrastructure. In the second part of the talk, we will introduce RADICAL-Cybertools -- a standards-based, abstraction-driven approach to high-performance distributed computing. RADICAL-Cybertools builds upon important theoretical advances, production software development best practices, and carefully analyzed usage and programming models. We will discuss several science and engineering applications that are currently using RADICAL-Cybertools to utilize DCIs in a scalable and extensible fashion. We will conclude with a discussion of the connection between the two challenges.


About the Speaker: Shantenu Jha is an Associate Professor of Computer Engineering at Rutgers University. His research interests lie at the triple point of cyberinfrastructure R&D, applied computing, and computational science. His research is currently supported by DOE and multiple NSF awards, including CAREER, SI2 (elements, integration, and conceptualization), CDI, and EarthCube. Shantenu leads the RADICAL-Cybertools (http://radical-cybertools.github.com) project, a suite of standards-driven and abstractions-based tools used to support large-scale science and engineering applications. He is co-leading a "Conceptualization of an Institute for Biomolecular Simulations" project, and is also designing and developing "MIDAS: Middleware for Data-intensive Analytics and Science" as part of HPBDS (http://arxiv.org/abs/1403.1528), a 2014 NSF DIBBS project. Away from work, Jha tries middle-distance running and biking, tends to indulge in random musings as an economics junkie, and tries to use his copious amounts of free time with a conscience.
March 9

Coping with Intractability in Graphical Models

Justin Domke, National ICT Australia

Abstract: Probabilistic graphical models are a natural modeling framework for a range of areas, including computer vision, computational biology, and natural language processing. However, applications of graphical models are typically complicated by the fact that inference (querying a given model) is computationally intractable in general. This talk will describe two broad strategies for coping with this situation. The first is based on restricting consideration to a “tractable” set of parameters. This can be used either for inference (approximately querying an intractable model) or for learning (enforcing tractability in the process). The second strategy is based on learning with a fixed, approximate inference method. The goal of learning is then to make this approximate procedure perform well, rather than to make the model accurate under exact inference.
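
To make the "tractable set" strategy concrete: inference that is exponential in general becomes linear-time on chains and trees. A minimal sketch, assuming a chain MRF with a shared edge potential:

```python
import numpy as np

def chain_marginal(unary, pair, t):
    """Exact marginal p(x_t) in a chain MRF with n nodes and k states.
    unary: (n, k) node potentials; pair: (k, k) edge potential shared by
    all edges. Dynamic programming costs O(n k^2); brute-force enumeration
    of joint states would cost O(k^n)."""
    n, k = unary.shape
    fwd = np.ones(k)
    for i in range(t):                    # fold in nodes to the left of t
        fwd = pair.T @ (fwd * unary[i])
    bwd = np.ones(k)
    for i in range(n - 1, t, -1):         # fold in nodes to the right of t
        bwd = pair @ (bwd * unary[i])
    m = fwd * unary[t] * bwd
    return m / m.sum()

unary = np.array([[1.0, 2.0], [1.0, 1.0], [3.0, 1.0]])
pair = np.array([[2.0, 1.0], [1.0, 2.0]])   # potential favoring equal neighbors
print(chain_marginal(unary, pair, 1))
```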


About the Speaker: Justin Domke received his PhD from the University of Maryland in 2009. From 2009-2012, he was an assistant professor at Rochester Institute of Technology. Since 2012, he has been a Senior Researcher in the Machine Learning group at National ICT Australia and an adjunct research fellow at Australian National University.
March 23

Corruption Networks

Johannes Wachs, Corruption Research Center Budapest

Abstract: Network science is a rapidly developing field. In this talk, the speaker will present a few ways in which observed networks deviate structurally from randomness. Among more canonical examples, he will discuss the public procurement network in Hungary and how it relates to corruption.
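
A minimal sketch of that kind of comparison, using networkx and a stand-in graph (illustrative only, not the speaker's data or methodology):

```python
import networkx as nx

def clustering_vs_random(G, trials=50, seed=0):
    """Compare a structural statistic of an observed network against
    random graphs with the same numbers of nodes and edges."""
    n, m = G.number_of_nodes(), G.number_of_edges()
    observed = nx.average_clustering(G)
    baseline = sum(nx.average_clustering(nx.gnm_random_graph(n, m, seed=seed + i))
                   for i in range(trials)) / trials
    return observed, baseline

G = nx.karate_club_graph()          # stand-in for a procurement network
obs, rnd = clustering_vs_random(G)
print(f"observed clustering {obs:.3f} vs. random baseline {rnd:.3f}")
```

A statistic far from its random baseline signals structure worth explaining; in procurement networks, certain deviations can flag patterns worth auditing.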


About the Speaker: Johannes Wachs graduated from Tulane in 2009 with a BS in mathematics and from Central European University in 2012 with an MS. He worked for a time in the strategy and modeling group at Morgan Stanley. Now he works for the Corruption Research Center Budapest and will begin a PhD in network science at Central European University this fall.
April 13

Scalable Processing and User Interactions for Massive Data

Brian Summa, University of Utah

Abstract: Scientists, artists, and engineers are producing data at increasingly massive scales. However, visualization and graphics techniques for processing and interaction rarely scale as needed, leaving these users with a challenging scientific or creative process. In this talk, I will discuss how my ongoing work in designing and deploying scalable algorithms and techniques enables users to interactively explore, analyze, and process large data, addressing a crucial need to understand and manipulate these big data sources.

In particular, I will discuss how my work has provided scalable construction of large images as mosaics of smaller images (panoramas in digital photography). These images can range from megapixels to hundreds of gigapixels in size, and my work has turned an offline, slow, and tedious pipeline into a fully interactive and streaming user experience. I will detail a fast, light, and highly scalable approach for the computation of boundaries (seams) in a mosaic; this technique has almost perfectly linear scaling and provides the first direct interaction for editing seams. In addition, I will describe a streaming, progressive solver for a Poisson system with application to mosaic color correction. This progressive approach allows interactive color correction on the fly without the need for a full solution, and it has been shown to scale in in-core, out-of-core, and distributed environments.

I will conclude my presentation with a discussion of future research directions and emerging needs in large data processing for visualization and graphics.
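
To illustrate the flavor of a progressive Poisson solver (a toy Jacobi relaxation, not the talk's streaming method), note that each sweep refines the current estimate, so intermediate results can be shown to the user before full convergence:

```python
import numpy as np

def poisson_jacobi(rhs, boundary, iters=200):
    """Jacobi relaxation for the discrete Poisson equation on a grid with
    unit spacing. `boundary` supplies fixed values on the border; interior
    values are repeatedly replaced by the neighbor average minus the rhs."""
    u = boundary.copy()
    for _ in range(iters):                   # each sweep is a usable estimate
        u[1:-1, 1:-1] = (u[:-2, 1:-1] + u[2:, 1:-1] +
                         u[1:-1, :-2] + u[1:-1, 2:] - rhs[1:-1, 1:-1]) / 4.0
    return u

rhs = np.zeros((32, 32))
boundary = np.zeros((32, 32))
boundary[0, :] = 1.0                         # e.g., a color offset on one edge
print(poisson_jacobi(rhs, boundary)[16, 16])
```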


About the Speaker: Brian Summa is a postdoctoral researcher at the Scientific Computing and Imaging Institute at the University of Utah, where he studies large data processing, analysis, and interaction. He has particular interests in visualization and computer graphics applications and has published in the top venues in both areas. Brian's work, in collaboration with his interdisciplinary partners, is helping advance several research disciplines such as geology, physics, and neurobiology. His work has supported the creation of large data processing software that scales from mobile devices to supercomputers. Applications from his research have been deployed at several University of Utah departments, Department of Energy national laboratories, a national health science laboratory, and even private corporations, including a large oil and gas partner. Brian received his Ph.D. in computer science from the University of Utah, where his dissertation focused on large image processing. He received his bachelor's and master's degrees from the University of Pennsylvania.
April 23

Kernel Methods for Unsupervised Domain Adaptation

Boqing Gong, University of Southern California


This event will be held on Thursday, April 23rd, at 4:00 p.m. in Stanley Thomas 302. Please note the special weekday and time for this event.

Abstract: In many applications (computer vision, natural language processing, speech recognition, etc.), the curse of domain mismatch arises when the test data (of a target domain) and the training data (of some source domain(s)) come from different distributions. Thus, developing techniques for domain adaptation, i.e., generalizing models from the sources to the target, has been a pressing need. In this talk, I will describe our efforts and results on addressing this challenge. A key observation is that domain adaptation entails discovering and leveraging latent structures in the source and the target domains. To this end, we develop kernel methods. Concretely, our kernel-based adaptation methods exploit various latent structures in the data. In this talk, I will give 3 examples: subspaces for aligning domains, landmarks for bridging the gaps between domains, and clustering in distribution similarity for identifying unknown domains. We demonstrate their effectiveness on well-benchmarked datasets and tasks. I will conclude by describing my future research plans.
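
As one concrete member of the subspace family mentioned in the abstract, here is a NumPy sketch of subspace alignment (Fernando et al., ICCV 2013); it illustrates the idea of aligning source and target subspaces but is not necessarily the speaker's exact algorithm:

```python
import numpy as np

def subspace_alignment(Xs, Xt, d=5):
    """Xs, Xt: (n_samples, n_features) source/target data, assumed centered.
    Learns a linear map between the top-d PCA subspaces and returns the
    data projected into comparable d-dimensional representations."""
    S = np.linalg.svd(Xs, full_matrices=False)[2][:d].T   # source basis (features x d)
    T = np.linalg.svd(Xt, full_matrices=False)[2][:d].T   # target basis
    M = S.T @ T                       # closed-form alignment of the two bases
    return Xs @ S @ M, Xt @ T

rng = np.random.default_rng(0)
Xs = rng.normal(size=(100, 20))
Xt = rng.normal(size=(80, 20)) + 0.5
Zs, Zt = subspace_alignment(Xs - Xs.mean(0), Xt - Xt.mean(0))
print(Zs.shape, Zt.shape)             # (100, 5) (80, 5)
```

A source classifier trained on Zs can then be applied to Zt, since both now live in the aligned subspace.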


About the Speaker: Boqing Gong is a Ph.D. candidate at the University of Southern California. His research lies at the intersection of machine learning and computer vision, focusing on domain adaptation, zero-shot/transfer learning, and visual analytics. His work is partially supported by the Viterbi School of Engineering Doctoral Fellowship. Boqing holds an M.Phil. degree from the Chinese University of Hong Kong and a B.E. degree from the University of Science and Technology of China.
