Discovering drag reduction strategies in wall-bounded turbulent flows using deep reinforcement learning

Speaker(s): Luca Guastoni (KTH - Royal Institute of Technology)
Date: 24 April 2023, 14:30 to 15:30
Venue: INI Seminar Room 1
Session Title: Discovering drag reduction strategies in wall-bounded turbulent flows using deep reinforcement learning
Chair: Omar Matar
Event: [DDEW03] Computational Challenges and Emerging Tools
Abstract

Deep reinforcement learning (DRL) is a mathematical framework that has been used to design and learn control policies in a variety of domains, and several applications in physics research have also been proposed. Here we introduce a reinforcement learning (RL) environment for designing control strategies for drag reduction in turbulent channel flow. The control is applied in the form of blowing and suction at the wall, while the observable state is the streamwise and wall-normal velocity at a given distance from the wall.
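The abstract does not specify the implementation, but as a rough illustration of how such an environment could be exposed to an RL library, the following is a minimal sketch using the standard reset/step interface. The class name, grid sizes, sampling-plane height and the stubbed flow update are illustrative assumptions, not the authors' actual solver coupling.

```python
import numpy as np

class ChannelFlowEnv:
    """Minimal sketch of an RL environment for wall blowing/suction control.

    The observable state is the streamwise (u) and wall-normal (v) velocity
    sampled on a plane at a fixed distance from the wall; the action is the
    wall-normal velocity imposed at the wall (blowing/suction). The flow
    update below is a random placeholder standing in for a call to a
    turbulent channel-flow solver.
    """

    def __init__(self, n_x=16, n_z=16, y_sample=15.0, rng=None):
        self.n_x, self.n_z = n_x, n_z      # sensor grid size (hypothetical)
        self.y_sample = y_sample           # sampling-plane height (illustrative)
        self.rng = rng or np.random.default_rng(0)

    def reset(self):
        # Observation: u and v on the sampling plane, stacked as 2 channels.
        self.obs = self.rng.standard_normal((2, self.n_x, self.n_z))
        return self.obs

    def step(self, action):
        # Action: wall-normal velocity (blowing/suction) on the wall grid,
        # here shifted to zero mean so that no net mass flux is added.
        action = np.asarray(action) - np.mean(action)

        # Placeholder for advancing the channel-flow simulation by one
        # control interval with the new wall boundary condition.
        self.obs = self.rng.standard_normal((2, self.n_x, self.n_z))

        # Reward: in practice, the reduction of wall friction relative to
        # the uncontrolled flow; here a dummy value for illustration.
        reward = float(-np.abs(action).mean())
        done = False
        return self.obs, reward, done, {}
```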
Given the complex, nonlinear nature of turbulent flows, the control strategies proposed so far in the literature are physically grounded but relatively simple. DRL, by contrast, can leverage the high-dimensional data sampled from flow simulations to design more advanced control strategies.
In an effort to establish a benchmark for testing data-driven control strategies, we compare opposition control, the state-of-the-art turbulence-control strategy in the literature, with a commonly used DRL algorithm, deep deterministic policy gradient (DDPG). Our results show that DRL achieves 43% drag reduction in a minimal channel and 30% in a larger channel (both at a friction Reynolds number of 180), outperforming classical opposition control by around 20 and 10 percentage points, respectively.
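For context, classical opposition control imposes at the wall a blowing/suction velocity opposite in sign to the wall-normal velocity measured at a detection plane near the wall. The sketch below illustrates that fixed control law; the `amplitude` parameter and the actor call in the comment are assumptions for illustration, not details taken from the talk.

```python
import numpy as np

def opposition_control(v_detection, amplitude=1.0):
    """Classical opposition control (the baseline compared against here):
    apply wall blowing/suction with the opposite sign of the wall-normal
    velocity v sampled at a detection plane near the wall.

    `amplitude` is an illustrative scaling; subtracting the mean enforces
    zero net mass flux through the wall.
    """
    v_wall = -amplitude * np.asarray(v_detection)
    return v_wall - v_wall.mean()

# A trained DDPG agent replaces this fixed map with a learned policy, e.g.:
#   v_wall = actor_network(observed_state)   # hypothetical actor call
```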

Co-authors: Jean Rabault (Norwegian Meteorological Institute), Philipp Schlatter (KTH - Royal Institute of Technology), Hossein Azizpour (KTH - Royal Institute of Technology) and Ricardo Vinuesa (KTH - Royal Institute of Technology)
