
Multiscale Modelling, Multiresolution and Adaptivity

6th April 2003 to 12th April 2003

Organisers: Mark Ainsworth (University of Strathclyde), Wolfgang Dahmen (RWTH Aachen), Christoph Schwab (ETH Zürich), and Endre Süli (University of Oxford)

Supported by the European Commission, Research DG, Human Potential Programme, High-Level Scientific Conferences HPCF-CT-2002-00106

Workshop Theme

Many computationally challenging problems that arise in science and engineering exhibit multiscale behaviour. Relevant examples of practical interest include: structural analysis of composite and foam materials, fine-scale laminates and crystalline microstructures, flow through porous media, dendritic solidification, turbulent transport in high Reynolds number flows, weather forecasting, large-scale data visualisation, spray combustion and detonation, many-body galaxy formation, large-scale molecular dynamics simulations, ab initio physics and chemistry, and a multitude of others.

In stark contrast with the multiscale behaviour exhibited by such problems, classical computational methods for the numerical simulation of multiscale physical phenomena are designed to operate at a single preselected scale, fixed by the choice of a discretisation parameter. As a result, numerically computing, or even representing, all of the physically relevant scales in the problem by classical numerical techniques leads to excessive algorithmic complexity. The resulting difficulties manifest themselves in various guises. First, an attempt to represent all relevant scales in the physical model may lead to an extremely large set of unknowns, requiring a tremendous amount of computer memory and CPU time. Second, for certain multiscale problems one is not actually interested in the fine-scale information; however, owing to nonlinearities in the model, the effect of the fine, unresolvable scales on the coarse scales cannot be ignored and must be accurately incorporated in order to obtain physically meaningful computational results. Finally, exacerbating the computational problem is the fundamental question of optimal data representation for multiscale problems: it is known that even modern wavelet basis representations can yield suboptimal overall algorithmic complexity when the quantity of interest contains embedded low-dimensional manifolds across which the function or its derivatives are discontinuous (as is the case in nonlinear hyperbolic conservation laws, for example).
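To put the last point in quantitative terms, a standard result from nonlinear approximation theory (quoted here purely as background, not as part of the workshop programme) concerns a function f on the unit square that is smooth except across a smooth discontinuity curve. The best N-term approximation f_N in an isotropic wavelet basis then satisfies, up to constants,

\[ \| f - f_N \|_{L^2}^2 \;\lesssim\; N^{-1}, \]

so the achievable rate is dictated by the embedded one-dimensional singular manifold rather than by the smoothness of f away from it, whereas suitably anisotropic representations (curvelet-type frames, for instance) attain the near-optimal rate \( \| f - f_N \|_{L^2}^2 \lesssim N^{-2} (\log N)^3 \).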

Multiscale problems will remain computationally expensive or completely intractable for the foreseeable future unless new algorithmic paradigms of computation are developed which fundamentally embrace the multiscale nature of these problems.

The meeting is devoted to addressing this challenge, with a focus on recent developments, by bringing together leading experts from applied mathematics, materials science, and various branches of scientific computation who work on different aspects of multiscale modelling.

Thus far, there has been little interaction between these communities despite a considerable overlap in their scientific objectives. The aims of the meeting are to stimulate interaction and cross-fertilisation between the various subject areas involved, to provide an assessment by leading researchers of the state of the art in the field, to identify key problems and obstacles to progress, and to indicate promising directions for future research.

Technical Topics

Multiscale modelling techniques in science and engineering:
Mathematical modelling of multiscale phenomena; Homogenisation theory for partial differential equations; Hierarchical modelling.
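As a minimal illustrative example of the kind of question homogenisation theory addresses (a textbook model problem, included here only for orientation), consider an elliptic equation with rapidly oscillating periodic coefficients,

\[ -\nabla\cdot\bigl(a(x/\varepsilon)\,\nabla u_\varepsilon\bigr) = f \quad \text{in } \Omega, \qquad u_\varepsilon = 0 \ \text{on } \partial\Omega, \]

where a is periodic and the microscale \( \varepsilon \ll 1 \). Resolving u_\varepsilon directly requires a mesh width comparable to \( \varepsilon \); homogenisation theory shows that, as \( \varepsilon \to 0 \), u_\varepsilon converges to the solution u_0 of an effective problem

\[ -\nabla\cdot\bigl(a^{\mathrm{hom}}\,\nabla u_0\bigr) = f, \]

with a constant effective tensor \( a^{\mathrm{hom}} \) obtained from cell problems posed on the periodicity cell, so that the macroscale behaviour can be computed without resolving the microstructure.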

Computational multiscale modelling:
Computational/algebraic homogenisation; Multiscale finite element methods; Subgrid scale modelling and upscaling; Variational multiscale methods and residual-free bubble algorithms.
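To indicate the flavour of the variational multiscale approach (a schematic outline of the standard abstract formulation, not a description of any particular talk), the trial and test spaces are decomposed into resolved coarse scales and unresolved fine scales, \( V = \bar V \oplus V' \), so that \( u = \bar u + u' \). Inserting this splitting into the variational problem \( a(u,v) = (f,v) \) and testing with coarse- and fine-scale functions separately gives a coarse-scale equation

\[ a(\bar u, \bar v) + a(u', \bar v) = (f, \bar v) \qquad \forall\, \bar v \in \bar V, \]

together with a fine-scale equation driven by the residual of the coarse scales. The fine scales u' are not resolved but modelled, for instance element by element through residual-free bubble functions or through an algebraic closure of the form \( u' \approx -\tau\, R(\bar u) \), and it is through this modelled term that subgrid effects are fed back into the computable coarse-scale problem.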

Optimal-complexity and adaptive algorithms for multiscale problems:
Multiresolution algorithms; Adaptive wavelet algorithms; Adaptive h- and hp-version finite element algorithms based on a posteriori error analysis.
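Adaptive algorithms of the kind listed above are typically organised around a solve, estimate, mark, refine loop driven by computable local error indicators. The following short sketch (a deliberately simplified, self-contained Python illustration, using adaptive piecewise-linear interpolation in one dimension in place of a finite element solver; all names and parameters are chosen here purely for illustration) shows the structure of such a loop with a simple bulk marking criterion:

import numpy as np

def adapt_interpolation(f, a=0.0, b=1.0, tol=1e-3, max_iter=30):
    """Adaptive piecewise-linear interpolation of f on [a, b].

    A simplified analogue of the SOLVE -> ESTIMATE -> MARK -> REFINE loop
    of adaptive finite element methods: the 'solve' step is interpolation
    at the mesh nodes, the local indicator is the midpoint interpolation
    error on each interval, and marked intervals are bisected.
    """
    x = np.linspace(a, b, 5)                      # initial coarse mesh
    for _ in range(max_iter):
        # ESTIMATE: indicator = |f(midpoint) - linear interpolant(midpoint)|
        mid = 0.5 * (x[:-1] + x[1:])
        eta = np.abs(f(mid) - 0.5 * (f(x[:-1]) + f(x[1:])))
        if eta.max() < tol:
            break
        # MARK: bulk (Doerfler-type) criterion - refine the intervals with
        # the largest indicators until they carry 50% of the total.
        order = np.argsort(eta)[::-1]
        cumulative = np.cumsum(eta[order])
        marked = order[: np.searchsorted(cumulative, 0.5 * eta.sum()) + 1]
        # REFINE: bisect the marked intervals.
        x = np.sort(np.concatenate([x, mid[marked]]))
    return x

if __name__ == "__main__":
    # Example: a function with a sharp interior layer at x = 0.5.
    f = lambda x: np.tanh(50.0 * (x - 0.5))
    mesh = adapt_interpolation(f)
    print(f"{mesh.size} nodes; smallest spacing {np.min(np.diff(mesh)):.2e}, "
          f"largest spacing {np.max(np.diff(mesh)):.2e}")

Run as a script, the final mesh concentrates its nodes near the interior layer while remaining coarse elsewhere, which is precisely the behaviour that a posteriori-driven adaptivity is designed to deliver.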
