Mean-Field Games & Portfolio Optimization: McKean-Vlasov, Particle Methods & optimiz-rs

📖 Abstract

Classical portfolio optimization (Markowitz) ignores strategic interactions between agents: each asset manager influences the market through their decisions, and the market in turn influences each agent. Mean-field games (MFG) rigorously model this feedback loop via McKean-Vlasov equations: a coupled system of HJB (individual optimization) and Fokker-Planck (population dynamics). Particle methods and optimal quantization provide efficient numerical schemes to solve these problems in high dimensions.

This article presents the complete theoretical framework, then shows how optimiz-rs — our Rust library with PyO3 bindings — implements an MFG solver via finite differences and a Differential Evolution optimizer (DE/SHADE), integrated into the Polarway pipeline within HFThot.

📋 Table of Contents

  1. Introduction to Mean-Field Games
  2. McKean-Vlasov Dynamics
  3. The Coupled HJB–Fokker-Planck System
  4. Particle Methods & Optimal Quantization
  5. Implementation in optimiz-rs
  6. Integration: Polarway, optimiz-rs & HFThot
  7. References

1. Introduction to Mean-Field Games

1.1 From N-Player Games to the Mean Field

Consider \(N\) portfolio managers, each controlling their allocation \(\alpha_t^i \in \mathbb{R}^d\). The portfolio value of agent \(i\) follows:

$$dX_t^i = \alpha_t^i \cdot \mu\,dt + \alpha_t^i \cdot \sigma\,dW_t^i - \underbrace{g\!\left(X_t^i,\, \frac{1}{N}\sum_{j=1}^N \delta_{X_t^j}\right)}_{\text{interaction}} dt$$

The interaction term \(g(x, \mu)\) captures market impact: when all agents buy the same asset, the price rises and future returns diminish. This problem is an N-player stochastic differential game whose complexity explodes with \(N\).

The founding idea of Lasry & Lions (2006) and Huang, Malhamé & Caines (2006): when \(N \to \infty\), we replace the empirical measure \(\frac{1}{N}\sum_{j}\delta_{X_t^j}\) by a deterministic probability measure \(m_t\) — the mean field. Each agent solves an optimal control problem against this distribution, and the distribution emerges from the aggregation of individual optimal strategies.

Analogy: Imagine a herd of traders. Each trader optimizes their strategy taking into account the average behavior of the herd, not each individual's behavior. The herd produces a collective “force field” that guides individual decisions, and the individual decisions create the field. This is a fixed point.

1.2 Mean-Field Nash Equilibrium

A mean-field Nash equilibrium (MFE) is a pair \((\alpha^*, m^*)\) such that:

$$\begin{cases} \alpha^* = \arg\min_\alpha J(\alpha, m^*) & \text{(optimality)} \\ m^* = \mathcal{L}(X^{\alpha^*}) & \text{(consistency)} \end{cases}$$

Where \(J(\alpha, m)\) is the cost to an agent using strategy \(\alpha\) when the population distribution is \(m\), and \(\mathcal{L}(X^{\alpha^*})\) is the law of the state process induced by the optimal strategy. This is a fixed-point problem in the space of probability measures.
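To make the fixed-point structure concrete, here is a deliberately tiny scalar analogue (purely illustrative, with made-up parameters, unrelated to the optimiz-rs API): each agent's best response depends only on the population's mean action, and a damped Picard iteration, the same relaxation idea the solver uses later in this article, converges to the consistent equilibrium.

```python
import numpy as np

# Toy scalar mean-field equilibrium (hypothetical illustration): each
# agent's best response to the population mean action m is
#   alpha*(m) = argmin_a  0.5*a**2 - a*(c - lam*m)  =  c - lam*m,
# and consistency requires m = alpha*(m).
c, lam = 1.0, 0.5

def best_response(m):
    return c - lam * m

# Damped Picard iteration on the fixed point m = best_response(m)
m, omega = 0.0, 0.5
for _ in range(100):
    m = omega * best_response(m) + (1 - omega) * m

m_star = c / (1 + lam)  # closed-form equilibrium of the toy model
print(m, m_star)        # both ≈ 0.6667
```

The damping weight plays exactly the role of the relaxation parameter \(\omega\) in the full MFG solver: it trades convergence speed for stability of the fixed-point map.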

2. McKean-Vlasov Dynamics

2.1 The McKean-Vlasov SDE

The representative agent follows a McKean-Vlasov stochastic differential equation (MV-SDE):

$$dX_t = b(X_t, \mu_t, \alpha_t)\,dt + \sigma(X_t, \mu_t)\,dW_t, \qquad \mu_t = \mathcal{L}(X_t)$$

The crucial feature is that both the drift \(b\) and the diffusion \(\sigma\) depend on the law of \(X_t\), not just its value. This nonlinear coupling distinguishes McKean-Vlasov equations from classical SDEs.

Application in finance

For portfolio optimization, a typical specification is:

$$\begin{cases} b(x, \mu, \alpha) = r\,x + \alpha\,(\mu_S - r) - \lambda \displaystyle\int_{\mathbb{R}} \alpha\,d\mu(y) \\[6pt] \sigma(x, \mu) = \sigma_0 + \eta\,\text{Var}_\mu[X] \end{cases}$$

The term \(\lambda \int \alpha\,d\mu\) captures aggregate market impact: when the crowd invests heavily (high average of \(\alpha\) under \(\mu\)), the excess return diminishes. The volatility term \(\eta\,\text{Var}_\mu[X]\) models endogenous volatility: high dispersion of positions increases systemic risk.
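A minimal Euler-Maruyama particle sketch of this specification, approximating the law \(\mu_t\) by the empirical measure of \(N\) particles. All parameter values are hypothetical, and every agent is assumed to hold the same constant allocation \(\alpha\), so the aggregate-impact integral \(\int \alpha\,d\mu\) reduces to \(\alpha\) itself:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters: risk-free rate r, asset drift mu_S, base vol
# sigma0, market-impact coefficient lam, endogenous-vol coefficient eta
r, mu_S, sigma0, lam, eta = 0.02, 0.08, 0.2, 0.5, 0.1
alpha = 0.6              # common constant allocation for all agents
N, T, nt = 5000, 1.0, 200
dt = T / nt

X = np.ones(N)           # initial wealth of the N particles
for _ in range(nt):
    var = X.var()        # Var_mu[X] read off the empirical law
    # drift b(x, mu, alpha) with the aggregate impact term lam * alpha
    drift = r * X + alpha * (mu_S - r) - lam * alpha
    diff = sigma0 + eta * var          # endogenous volatility sigma(x, mu)
    X += drift * dt + diff * np.sqrt(dt) * rng.standard_normal(N)

print(X.mean(), X.std())
```

With these (made-up) numbers the impact term dominates the excess return, so the crowd's common position erodes average wealth: precisely the feedback the mean-field model is built to capture.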

2.2 Optimal Measure and Control Problem

The representative agent minimizes a cost of the form:

$$J(\alpha) = \mathbb{E}\left[\int_0^T \left(\frac{1}{2}|\alpha_t|^2 + f(X_t, \mu_t)\right)dt + g(X_T, \mu_T)\right]$$

The running cost \(f(x, \mu)\) penalizes deviations from a benchmark (e.g. \(f(x, \mu) = (x - \bar{\mu})^2\) to track the population mean \(\bar{\mu}\)), and the terminal cost \(g(X_T, \mu_T)\) encodes the final target. The quadratic term \(\frac{1}{2}|\alpha|^2\) is a trading cost, the standard formalization of Almgren-Chriss linear market impact.
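A Monte Carlo sketch of this cost functional for a constant control, under deliberately simplified dynamics \(dX_t = \alpha\,dt + \sigma\,dW_t\) (a hypothetical stand-in for the full MV-SDE) with the mean-tracking running cost from the text and \(g \equiv 0\):

```python
import numpy as np

rng = np.random.default_rng(1)

# Monte Carlo estimate of J(alpha) for a constant control (sketch:
# simplified dynamics dX = alpha dt + sigma dW, terminal cost g = 0,
# running cost f(x, mu) = (x - mean(mu))**2 as in the text)
def estimate_J(alpha, sigma=0.3, T=1.0, nt=100, N=20_000):
    dt = T / nt
    X = np.zeros(N)
    J = np.zeros(N)
    for _ in range(nt):
        mean = X.mean()                                 # bar(mu_t)
        J += (0.5 * alpha**2 + (X - mean)**2) * dt      # running cost
        X += alpha * dt + sigma * np.sqrt(dt) * rng.standard_normal(N)
    return J.mean()

j0, j1 = estimate_J(0.0), estimate_J(1.0)
print(j0, j1)   # j1 - j0 ≈ 0.5 = ½|α|²·T
```

Here a common drift shifts every particle equally, so the tracking term \((x - \bar\mu)^2\) is unaffected by \(\alpha\) and the two estimates differ by exactly the control effort \(\frac{1}{2}|\alpha|^2 T\): the effort/tracking trade-off in its purest form.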

Intuition: Each agent seeks to maximize their return (minimize their cost) while knowing that everyone else is doing the same thing. The solution goes through the coupled HJB-FP system of the next section.

3. The Coupled HJB–Fokker-Planck System

3.1 PDE Formulation

The fundamental link between MFG and PDEs is as follows. The agent's optimal control problem induces a backward Hamilton-Jacobi-Bellman (HJB) equation, and the population distribution obeys a forward Fokker-Planck (FP) equation:

$$\boxed{\begin{aligned} -\partial_t u - \nu\Delta u + H(x, \nabla u) &= f(x, m) & \text{(HJB)} \\[4pt] \partial_t m - \nu\Delta m - \text{div}\!\big(m \cdot H_p(x, \nabla u)\big) &= 0 & \text{(FP)} \end{aligned}}$$

with terminal condition \(u(T, x) = g(x, m_T)\) for HJB and initial condition \(m(0, x) = m_0(x)\) for FP.

Here \(u(t,x)\) is the value function (the optimal cost for the agent starting at \(x\) at time \(t\)), and \(H\) is the Hamiltonian of the control problem. For a quadratic control cost \(\frac{1}{2}|\alpha|^2\):

$$H(x, p) = \frac{1}{2}|p|^2, \qquad H_p(x, p) = p, \qquad \alpha^*(t, x) = -\nabla u(t, x)$$

The optimal control is given by the gradient of the value function: \(\alpha^* = -\nabla u\). This is the control we inject into the Fokker-Planck equation to propagate the distribution.

Visual Guide: Forward-Backward Loop

Forward-Backward Loop — MFG Fixed Point
HJB (backward) ← solved from T to 0
−∂ₜu − ν∆u + ½|∇u|² = f(x,m)  |  u(T,x) = g(x)
↓ α* = −∇u (optimal control)
FP (forward) → solved from 0 to T
∂ₜm − ν∆m − div(m·∇u) = 0  |  m(0,x) = m₀(x)
↓ iterate until ‖mⁿ⁺¹ − mⁿ‖ < tolerance
Fixed point (u*, m*) = MFG Nash equilibrium

3.2 Hamiltonian Types

The choice of Hamiltonian determines the structure of the control problem. optimiz-rs supports the following variants:

Type      | \(H(p)\)                      | \(\alpha^*(p)\)        | Application
Quadratic | \(\frac{1}{2}|p|^2\)          | \(-p\)                 | Linear impact (Almgren-Chriss)
Linear    | \(|p|\)                       | \(-\text{sign}(p)\)    | Bang-bang control
PowerLaw  | \(\frac{|p|^\alpha}{\alpha}\) | \(-|p|^{\alpha-2}p\)   | Non-linear impact (power law)
Custom    | user-defined                  | user-defined           | Proprietary models
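The table translates directly into code. A plain-Python sketch of the same variants (illustrative only; the library implements them as a Rust enum):

```python
import numpy as np

# Hamiltonian variants from the table, as plain functions (sketch)
def hamiltonian(p, kind="quadratic", a=1.5):
    if kind == "quadratic":
        return 0.5 * p**2            # H(p) = ½|p|²
    if kind == "linear":
        return np.abs(p)             # H(p) = |p|
    if kind == "powerlaw":
        return np.abs(p)**a / a      # H(p) = |p|^α / α
    raise ValueError(kind)

def optimal_control(p, kind="quadratic", a=1.5):
    if kind == "quadratic":
        return -p                    # α* = −p
    if kind == "linear":
        return -np.sign(p)           # bang-bang: α* = −sign(p)
    if kind == "powerlaw":
        return -np.abs(p)**(a - 2) * p
    raise ValueError(kind)

print(hamiltonian(2.0), optimal_control(2.0))   # 2.0 -2.0
print(optimal_control(-3.0, kind="linear"))     # 1.0
```

Note how the bang-bang control is discontinuous at \(p = 0\): numerically this is exactly why the upwind schemes of Section 5 matter.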

4. Particle Methods & Optimal Quantization

4.1 Particle System Approximation

To numerically solve the MFG system in high dimensions (when the finite difference grid becomes prohibitive), we approximate the distribution \(m_t\) by a system of \(N\) interacting particles:

$$dX_t^{i,N} = b\!\left(X_t^{i,N},\, \mu_t^N,\, \alpha_t^i\right)dt + \sigma\!\left(X_t^{i,N},\, \mu_t^N\right)dW_t^i, \quad \mu_t^N = \frac{1}{N}\sum_{j=1}^N \delta_{X_t^{j,N}}$$

This system converges to the MFG solution in the sense of propagation of chaos (Sznitman, 1991): for any fixed \(k\), the first \(k\) particles become asymptotically independent as \(N \to \infty\), and their marginal law converges to \(m_t^*\). The convergence rate is:

$$\mathbb{E}\!\left[W_2\!\left(\mu_t^N,\, m_t^*\right)\right] \leq \frac{C}{\sqrt{N}}$$

where \(W_2\) is the Wasserstein-2 distance, and the constant \(C\) depends on the regularity of the coefficients.
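The \(C/\sqrt{N}\) rate is easy to observe empirically. In 1D, \(W_2\) between two equal-size samples is just the \(L^2\) distance between their sorted values (the quantile coupling). The sketch below uses iid \(\mathcal{N}(0,1)\) draws as a stand-in for the particle system and a large reference sample as a proxy for the limit law \(m_t^*\), which is not available in closed form:

```python
import numpy as np

rng = np.random.default_rng(2)

# 1D Wasserstein-2 between equal-size samples via the quantile coupling
def w2(xs, ys):
    return np.sqrt(np.mean((np.sort(xs) - np.sort(ys))**2))

# Empirical check of the ~C/sqrt(N) rate: N(0,1) samples stand in for
# the particles; a large sample stands in for the limit law m_t*
ref = rng.standard_normal(200_000)
dists = {}
for N in (100, 1_000, 10_000):
    dists[N] = np.mean([w2(rng.standard_normal(N),
                           rng.choice(ref, N, replace=False))
                        for _ in range(20)])
    print(N, dists[N])   # decreases roughly like 1/sqrt(N)
```

Averaging over 20 repetitions tames the sampling noise; the distances shrink by roughly a factor \(\sqrt{10}\) per decade of \(N\), as the propagation-of-chaos bound predicts.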

4.2 Optimal Quantization

Optimal quantization (Pagès, 1998; Bally & Pagès, 2003) offers an alternative to naive Monte Carlo particles. The idea: replace \(m_t\) by a finitely supported measure:

$$\hat{m}_t = \sum_{k=1}^K w_k\, \delta_{x_k}, \qquad \text{where} \quad (x_1, \ldots, x_K, w_1, \ldots, w_K) = \arg\min \int |x - \hat{x}|^2\,dm_t(x)$$

The optimal quantizers \((x_k, w_k)\) minimize the quadratic quantization error. Crucial advantage: the error is deterministic, of order \(O(K^{-2/d})\) in squared quantization error, and carries no Monte Carlo noise, a considerable gain in moderate dimensions (\(d \leq 5\)).

Lloyd's Algorithm for Quantization

Lloyd's Algorithm — Optimal Quantization
Input: distribution m(x), K points
① Initialize x₁, …, x_K from m
② Assign each sample to the nearest x_k
(Voronoi tessellation)
③ Update x_k = centroid of cell k
④ Converged?  →  No: return to ②
↓ yes
Output: (x₁,…,x_K) + weights (w₁,…,w₀)
Convergence O(K^{-2/d}) vs O(N^{-1/2}) for Monte Carlo
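Steps ①–④ above fit in a few lines of NumPy. This is a 1D sketch working on samples of \(m\), with hypothetical parameters (optimiz-rs does not expose this routine; it is shown to make the algorithm concrete):

```python
import numpy as np

rng = np.random.default_rng(3)

# Lloyd's algorithm in 1D (sketch of steps ①-④ above)
def lloyd_1d(samples, K, iters=50):
    x = np.sort(rng.choice(samples, K, replace=False))  # ① init from m
    for _ in range(iters):
        # ② assign each sample to its nearest quantizer (Voronoi cells)
        idx = np.abs(samples[:, None] - x[None, :]).argmin(axis=1)
        for k in range(K):                              # ③ centroid update
            cell = samples[idx == k]
            if cell.size:
                x[k] = cell.mean()
        x = np.sort(x)
    # companion weights w_k = mass of each Voronoi cell
    idx = np.abs(samples[:, None] - x[None, :]).argmin(axis=1)
    w = np.bincount(idx, minlength=K) / samples.size
    return x, w

samples = rng.standard_normal(50_000)
x, w = lloyd_1d(samples, K=8)
print(np.round(x, 2))      # 8 quantizers of the standard Gaussian
print(w.sum())             # weights sum to 1
```

Lloyd's iteration is exactly k-means on the support of \(m\): assignment and centroid steps each decrease the quadratic quantization error, so the loop converges to a (local) optimum.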

4.3 Optimal Transport and Wasserstein Distance

The distance between two successive distributions in the MFG iteration is measured by the Wasserstein distance. For two discrete measures \(\mu = \sum_i a_i \delta_{x_i}\) and \(\nu = \sum_j b_j \delta_{y_j}\), the optimal transport (Monge-Kantorovich) problem is:

$$W_p(\mu, \nu) = \left(\inf_{\pi \in \Pi(\mu, \nu)} \sum_{i,j} \pi_{ij} |x_i - y_j|^p\right)^{1/p}$$

The Sinkhorn divergence (\(\varepsilon\)-entropic regularization) allows approximate computation in \(O(n^2 / \varepsilon)\) instead of \(O(n^3 \log n)\) for exact transport, making computation tractable for MFG iterations.
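The Sinkhorn iteration itself is only a pair of alternating marginal scalings on the Gibbs kernel. Since optimiz-rs currently ships only a stub for it, here is a self-contained NumPy sketch (grid, measures, and \(\varepsilon\) are all hypothetical choices):

```python
import numpy as np

def sinkhorn(a, b, C, eps=0.05, iters=500):
    """Entropic OT: returns the transport plan P and cost <P, C>."""
    K = np.exp(-C / eps)                 # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(iters):               # alternate marginal scalings
        v = b / (K.T @ u)
        u = a / (K @ v)
    P = u[:, None] * K * v[None, :]      # diag(u) K diag(v)
    return P, float(np.sum(P * C))

# Two discrete measures on a 1D grid: bumps centered at 0.3 and 0.7
x = np.linspace(0, 1, 50)
a = np.exp(-(x - 0.3)**2 / 0.01); a /= a.sum()
b = np.exp(-(x - 0.7)**2 / 0.01); b /= b.sum()
C = (x[:, None] - x[None, :])**2         # squared-distance cost

P, cost = sinkhorn(a, b, C)
print(cost)   # close to (0.7 - 0.3)**2 = 0.16, plus a small entropic bias
```

Each scaling costs one matrix-vector product, hence the quadratic per-iteration complexity quoted above; smaller \(\varepsilon\) sharpens the plan but slows convergence and risks underflow in the kernel.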

Implementation note: optimiz-rs provides a \(W_1\) approximation and a stub for Sinkhorn divergence. Full regularized optimal transport integration with GPU support (via ot-rs) is on the v0.8 roadmap.

5. Implementation in optimiz-rs

5.1 Mean-Field Module Architecture

optimiz-rs is our Rust numerical optimization library with Python bindings via PyO3. The mean_field module implements the MFG solver via finite differences with rayon parallelization:

⚙️ optimiz-rs/src/mean_field/
├── mod.rs ← MFGConfig + MFGSolver
├── types.rs ← Grid, MFGSolution, HamiltonianType
├── pde_solvers.rs ← solve_hjb() + solve_fokker_planck()
├── forward_backward.rs ← forward_backward_fixed_point()
├── nash_equilibrium.rs ← primal_dual_mfg() (Chambolle-Pock)
├── optimal_transport.rs ← wasserstein_distance(), sinkhorn()
└── python_bindings.rs ← MFGConfigPy, solve_mfg_1d_rust_py()
Numerics: upwind scheme for HJB, conservative upwind for FP, rayon parallelism, L² convergence check with relaxation ω.

5.2 MFG Solver Configuration

The solver is configured via MFGConfig with physical and numerical parameters:

/// Configuration for 1D Mean-Field Game solver
pub struct MFGConfig {
    pub dim: usize,           // Spatial dimension (currently 1)
    pub nx: usize,            // Grid points in space (default: 200)
    pub nt: usize,            // Grid points in time (default: 100)
    pub x_min: f64,           // Domain lower bound
    pub x_max: f64,           // Domain upper bound
    pub time_horizon: f64,    // Terminal time T
    pub viscosity: f64,       // Diffusion coefficient ν
    pub tolerance: f64,       // Convergence threshold (default: 1e-6)
    pub max_iterations: usize,// Max fixed-point iterations (default: 500)
    pub relaxation: f64,      // Relaxation parameter ω ∈ (0,1]
}

/// Hamiltonian types for different control cost structures
pub enum HamiltonianType {
    Quadratic,                // H(p) = ½|p|² — linear market impact
    Linear,                   // H(p) = |p| — bang-bang control
    PowerLaw { alpha: f64 },  // H(p) = |p|^α / α — power-law impact
    Custom,                   // User-defined via callback
}

5.3 The Backward HJB Solver

The HJB equation is solved backward in time, from \(t = T\) to \(t = 0\), using an upwind (upstream-biased) scheme for numerical stability:

use rayon::prelude::*;

/// Solve HJB backward in time: -∂ₜu - ν∆u + H(x,∇u) = f(x,m)
pub fn solve_hjb(
    grid: &Grid,
    m: &[Vec<f64>],          // Population distribution m(t, x)
    config: &MFGConfig,
    hamiltonian: &HamiltonianType,
) -> Vec<Vec<f64>> {
    let (nx, nt) = (config.nx, config.nt);
    let mut u = vec![vec![0.0; nx]; nt]; // Value function

    // Terminal condition: u(T, x) = g(x, m_T)
    for i in 0..nx {
        u[nt - 1][i] = terminal_cost(grid.x[i], &m[nt - 1]);
    }

    // Backward sweep with rayon parallelism
    for n in (0..nt - 1).rev() {
        // Split the borrow: u[n] mutable, u[n + 1] shared (borrow checker)
        let (head, tail) = u.split_at_mut(n + 1);
        let u_next = &tail[0];
        head[n].par_iter_mut().enumerate().for_each(|(i, u_val)| {
            // Upwind finite differences for ∇u
            let du_dx = upwind_gradient(u_next, i, grid.dx);
            // Hamiltonian evaluation
            let h = hamiltonian.evaluate(du_dx);
            // Coupling term f(x, m)
            let f = running_cost(grid.x[i], &m[n]);
            // Time step: explicit Euler
            *u_val = u_next[i] + grid.dt * (
                config.viscosity * laplacian(u_next, i, grid.dx)
                - h + f
            );
        });
    }
    u
}

5.4 The Forward Fokker-Planck Solver

The FP equation propagates the distribution \(m\) forward, using the optimal control \(\alpha^* = -\nabla u\) computed by the HJB solver:

/// Solve Fokker-Planck forward: ∂ₜm - ν∆m - div(m·Hₚ) = 0
pub fn solve_fokker_planck(
    grid: &Grid,
    u: &[Vec<f64>],          // Value function from HJB
    m0: &[f64],               // Initial distribution m(0, x)
    config: &MFGConfig,
    hamiltonian: &HamiltonianType,
) -> Vec<Vec<f64>> {
    let (nx, nt) = (config.nx, config.nt);
    let mut m = vec![vec![0.0; nx]; nt];
    m[0] = m0.to_vec();

    for n in 0..nt - 1 {
        // Forward sweep: conservative upwind scheme
        for i in 1..nx - 1 {
            let du_dx = central_gradient(&u[n], i, grid.dx);
            let velocity = hamiltonian.gradient(du_dx); // Hₚ(∇u)

            // Conservative upwind for div(m · v)
            let flux = if velocity > 0.0 {
                velocity * m[n][i - 1] // upwind from left
            } else {
                velocity * m[n][i + 1] // upwind from right
            };

            m[n + 1][i] = m[n][i] + grid.dt * (
                config.viscosity * laplacian_m(&m[n], i, grid.dx)
                + flux_divergence(flux, i, grid.dx)
            );
        }

        // Mass normalization: ∫m dx = 1
        let total: f64 = m[n + 1].iter().sum::<f64>() * grid.dx;
        if total > 0.0 {
            m[n + 1].iter_mut().for_each(|v| *v /= total);
        }
    }
    m
}

5.5 Forward-Backward Fixed-Point Iteration

Both solvers are combined in a fixed-point loop with a relaxation parameter \(\omega\) to ensure convergence:

/// Forward-backward fixed-point iteration for MFG system
pub fn forward_backward_fixed_point(
    config: &MFGConfig,
    m0: &[f64],
    hamiltonian: &HamiltonianType,
) -> MFGSolution {
    let grid = Grid::new(config);
    let mut m = initialize_distribution(&grid, m0);

    for iter in 0..config.max_iterations {
        // Step 1: Solve HJB backward (given m)
        let u = solve_hjb(&grid, &m, config, hamiltonian);

        // Step 2: Solve FP forward (given ∇u)
        let m_new = solve_fokker_planck(&grid, &u, m0, config, hamiltonian);

        // Step 3: Relaxation m ← ω·m_new + (1-ω)·m_old
        let error = l2_distance(&m_new, &m, &grid);
        for n in 0..config.nt {
            for i in 0..config.nx {
                m[n][i] = config.relaxation * m_new[n][i]
                         + (1.0 - config.relaxation) * m[n][i];
            }
        }

        // Step 4: Check convergence
        if error < config.tolerance {
            return MFGSolution {
                value_function: u,
                distribution: m,
                converged: true,
                iterations: iter + 1,
            };
        }
    }

    MFGSolution { /* ... max_iter reached ... */ }
}

5.6 Python Bindings via PyO3

The MFG solver is exposed to Python with automatic NumPy ↔ Rust conversion via PyO3:

import numpy as np
from optimiz_rs import MFGConfigPy, solve_mfg_1d_rust_py

# Configure the MFG solver
config = MFGConfigPy(
    nx=200,              # 200 spatial grid points
    nt=100,              # 100 time steps
    x_min=-5.0,          # Domain [-5, 5]
    x_max=5.0,
    time_horizon=1.0,    # T = 1 year
    viscosity=0.1,       # ν = 0.1 (moderate diffusion)
    tolerance=1e-6,
    max_iterations=500,
    relaxation=0.5,      # ω = 0.5 for stable convergence
)

# Initial distribution: Gaussian centered at x=0
x = np.linspace(-5.0, 5.0, 200)
m0 = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)

# Running cost: quadratic tracking of the mean
f = np.zeros((100, 200))  # f(t, x) — can encode market impact

# Solve MFG system — runs entirely in Rust with rayon parallelism
u, m, iterations = solve_mfg_1d_rust_py(config, m0, f)

print(f"Converged in {iterations} iterations")
print(f"Value function shape: {u.shape}")      # (100, 200)
print(f"Distribution shape: {m.shape}")          # (100, 200)
print(f"Final mass: {np.trapz(m[-1], x):.6f}")  # ≈ 1.0

5.7 Differential Evolution for Portfolio Optimization

Complementing the MFG solver, optimiz-rs provides a Differential Evolution (DE) optimizer ideal for portfolio optimization — a high-dimensional non-convex problem with many local minima:

from optimiz_rs import differential_evolution

# Objective: minimize negative Sharpe ratio
def neg_sharpe(weights):
    returns = portfolio_returns(weights, asset_data)
    return -sharpe_ratio(returns)

# Constraints: weights sum to 1, long-only
bounds = [(0.0, 1.0)] * n_assets

result = differential_evolution(
    func=neg_sharpe,
    bounds=bounds,
    strategy="best1bin",      # DE/best/1/bin
    maxiter=1000,
    popsize=50,
    mutation=(0.5, 1.0),      # Adaptive F ∈ [0.5, 1.0]
    recombination=0.7,        # Crossover rate CR
    tol=1e-8,
    seed=42,
    use_rust_parallel=True,   # Parallel objective via Rust + rayon
)

print(f"Optimal Sharpe: {-result.fun:.4f}")
print(f"Weights: {result.x}")
print(f"Evaluations: {result.nfev}")

optimiz-rs supports 5 mutation strategies (rand/1, best/1, current-to-best/1, rand/2, best/2), jDE parameter adaptation, and the SHADE algorithm (Success-History based Adaptive DE) with circular memory. Objective function evaluations run in parallel via rayon, releasing the Python GIL for native Rust objectives.
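The success-history idea behind SHADE fits in a short sketch. This is not the optimiz-rs implementation: it is a simplified illustration (current-to-best/1/bin mutation, no external archive, no population-size reduction) of how per-generation \(F\) and \(CR\) are sampled from circular memories and how successful values feed back via a Lehmer mean:

```python
import numpy as np

rng = np.random.default_rng(4)

# Simplified SHADE sketch: success-history memories for F and CR
def shade_lite(f, bounds, pop=30, gens=200, H=10):
    d = len(bounds)
    lo, hi = np.array(bounds).T
    X = rng.uniform(lo, hi, (pop, d))
    fit = np.array([f(x) for x in X])
    M_F, M_CR = np.full(H, 0.5), np.full(H, 0.5)   # circular memories
    k = 0
    for _ in range(gens):
        r = rng.integers(H, size=pop)
        # F ~ Cauchy(M_F[r], 0.1), CR ~ Normal(M_CR[r], 0.1), clipped
        F = np.clip(rng.standard_cauchy(pop) * 0.1 + M_F[r], 0.01, 1.0)
        CR = np.clip(rng.normal(M_CR[r], 0.1), 0.0, 1.0)
        best = X[fit.argmin()].copy()
        sF, sCR = [], []
        for i in range(pop):
            a, b = rng.choice(pop, 2, replace=False)
            # current-to-best/1 mutation
            v = X[i] + F[i] * (best - X[i]) + F[i] * (X[a] - X[b])
            cross = rng.random(d) < CR[i]
            cross[rng.integers(d)] = True              # binomial crossover
            u = np.clip(np.where(cross, v, X[i]), lo, hi)
            fu = f(u)
            if fu < fit[i]:                            # greedy selection
                X[i], fit[i] = u, fu
                sF.append(F[i]); sCR.append(CR[i])
        if sF:   # update one memory slot from this generation's successes
            sF, sCR = np.array(sF), np.array(sCR)
            M_F[k] = (sF**2).sum() / sF.sum()          # Lehmer mean
            M_CR[k] = sCR.mean()
            k = (k + 1) % H
    i = fit.argmin()
    return X[i], fit[i]

x, fx = shade_lite(lambda w: np.sum((w - 0.3)**2), [(0, 1)] * 10)
print(fx)   # near 0: sphere minimum at w = 0.3
```

The full algorithm also maintains an archive of replaced parents and uses weighted means for the memory update; the sketch keeps only the adaptation loop that gives SHADE its name.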

Performance: On a 50-asset portfolio problem, optimiz-rs DE with parallel Rust evaluations converges 80× faster than scipy.optimize.differential_evolution, thanks to eliminating Python overhead and native parallelization.

6. Integration: Polarway, optimiz-rs & HFThot

6.1 The Complete Pipeline

MFG portfolio optimization in HFThot is organized in three layers, each implemented in the optimal language for its task:

🏔️ Layer 1 — Data Pipeline (Polarway)
CCXT / CEX · DeFi / DEX · CLOB orderbooks
Polars DataFrames · Arrow IPC streaming · Railway error model

⚙️ Layer 2 — Computation (optimiz-rs)
MFG Solver: HJB backward · FP forward · fixed-point iteration
DE / SHADE: 5 strategies · adaptive F, CR · parallel evaluation
Risk Metrics: Sharpe · VaR · Hurst · bootstrap
HMM + MCMC: regime detection · Baum-Welch · Metropolis-Hastings
All Rust + rayon · PyO3 bindings

🔥 Layer 3 — Visualization (HFThot)
MFG equilibrium viz · DE convergence curves · portfolio heatmap · risk metrics dashboard
Streamlit · Python · gRPC client

6.2 Polarway: The Data Pipeline

Polarway is our Rust data pipeline framework, built on Polars (fast Rust DataFrames) and Apache Arrow IPC for zero-copy transfer between processes. It collects market data from multiple sources (CCXT for CEXs, DeFi connectors, CLOB orderbooks) and prepares it for optimization:

import polars as pl
from polarway import Pipeline, ArrowBridge

# Polarway collects & normalizes multi-venue data
pipeline = Pipeline.from_config("crypto_portfolio.yaml")
df: pl.DataFrame = pipeline.run()  # Returns Polars DataFrame

# Zero-copy transfer to optimiz-rs via Arrow IPC
bridge = ArrowBridge(df)
returns_matrix = bridge.to_rust_matrix("returns")  # No copy!

# Feed into MFG solver: initial distribution from empirical data
m0 = np.histogram(returns_matrix[:, 0], bins=200, density=True)[0]

# Or feed into DE optimizer: risk-adjusted portfolio weights
result = differential_evolution(
    func=lambda w: -risk_adjusted_return(w, returns_matrix),
    bounds=[(0.0, 1.0)] * df.width,
    use_rust_parallel=True,
)

6.3 Why Rust?

The choice of Rust for optimiz-rs and Polarway is not a matter of fashion. For MFG computations and portfolio optimization, three properties are critical:

Property    | Rust                             | Python (NumPy)      | MFG impact
Parallelism | rayon (data-parallel, lock-free) | GIL (single thread) | HJB on a 200×100 grid uses all 12 cores
Memory      | ownership, no GC                 | GC + object overhead| FP grids updated without per-step allocation
Safety      | verified at compile time         | runtime errors      | no segfaults across 500-iteration loops

Via PyO3, Rust functions are called from Python like native functions. The GIL is released during Rust execution, enabling true parallelism. Via WebAssembly, the same code runs in the browser for interactive HFThot demos.

6.4 Synthesis: MFG + DE for an Optimal Portfolio

The combination of MFG + Differential Evolution is particularly powerful for portfolio optimization:

  1. MFG solves the mean-field Nash equilibrium → provides the equilibrium distribution of positions
  2. DE/SHADE optimizes the portfolio in the landscape constrained by the MFG equilibrium → finds the optimal weights
  3. HMM detects market regime changes → adapts parameters of the MFG and DE
  4. Risk Metrics validates the solution → Sharpe, VaR, drawdown, Hurst exponent
# Complete MFG + DE portfolio optimization pipeline
from optimiz_rs import (
    MFGConfigPy, solve_mfg_1d_rust_py,
    differential_evolution, compute_risk_metrics
)
import numpy as np

# Step 1: Solve MFG equilibrium (agent interaction model)
mfg_config = MFGConfigPy(nx=200, nt=100, viscosity=0.1, relaxation=0.5)
m0 = empirical_initial_distribution(market_data)
u, m_eq, iters = solve_mfg_1d_rust_py(mfg_config, m0, running_cost)

# Step 2: Extract equilibrium constraints
eq_allocation = -np.gradient(u[-1], dx)  # α* = -∇u at t=0

# Step 3: DE optimization with MFG constraints
def mfg_constrained_objective(weights):
    port_return = weights @ expected_returns
    # Penalize deviation from MFG equilibrium
    mfg_penalty = np.sum((weights - eq_allocation[:len(weights)])**2)
    risk = compute_risk_metrics(weights, returns_matrix)
    return -(port_return - 0.5 * risk.volatility) + 0.1 * mfg_penalty

result = differential_evolution(
    func=mfg_constrained_objective,
    bounds=[(0, 1)] * n_assets,
    strategy="shade",         # SHADE adaptive algorithm
    use_rust_parallel=True,
)

# Step 4: Validate with risk metrics
metrics = compute_risk_metrics(result.x, returns_matrix)
print(f"Sharpe: {metrics.sharpe:.3f} | MaxDD: {metrics.max_drawdown:.2%}")
print(f"VaR(95%): {metrics.var_95:.2%} | Hurst: {metrics.hurst:.3f}")

7. References

  1. Lasry, J.-M. & Lions, P.-L. (2006). Jeux à champ moyen. I – Le cas stationnaire. Comptes Rendus Mathématique, 343(9), 619–625.
  2. Huang, M., Malhamé, R.P. & Caines, P.E. (2006). Large population stochastic dynamic games: closed-loop McKean-Vlasov systems and the Nash certainty equivalence principle. Communications in Information and Systems, 6(3), 221–252.
  3. Carmona, R. & Delarue, F. (2018). Probabilistic Theory of Mean Field Games with Applications, Vols. I–II. Springer.
  4. Achdou, Y. & Capuzzo-Dolcetta, I. (2010). Mean field games: numerical methods. SIAM Journal on Numerical Analysis, 48(3), 1136–1162.
  5. Cardaliaguet, P. (2013). Notes on mean field games. Lecture notes, Université Paris-Dauphine.
  6. Pagès, G. (1998). A space quantization method for numerical integration. Journal of Computational and Applied Mathematics, 89(1), 1–38.
  7. Bally, V. & Pagès, G. (2003). A quantization algorithm for solving multi-dimensional optimal stopping problems. Bernoulli, 9(6), 1003–1049.
  8. Sznitman, A.-S. (1991). Topics in propagation of chaos. École d'Été de Probabilités de Saint-Flour, 165–251. Springer.
  9. Almgren, R. & Chriss, N. (2001). Optimal execution of portfolio transactions. Journal of Risk, 3, 5–40.
  10. Storn, R. & Price, K. (1997). Differential Evolution – A simple and efficient heuristic for global optimization over continuous spaces. Journal of Global Optimization, 11(4), 341–359.
  11. Tanabe, R. & Fukunaga, A.S. (2014). Improving the search performance of SHADE using linear population size reduction. IEEE CEC 2014, 1658–1665.

🔬 Explore in HFThot Lab

The full MFG Portfolio Lab: interactive HJB-FP solver, real-time DE/SHADE optimization, Nash equilibrium visualization, advanced risk metrics...


White paper: Mean Field Allocation Under Market Impact

We provide a research note for Guest and Hobbyist visitors who share their email. It ties together the HJB-FP equations, numerical resolution, DE/SHADE heuristics, and the Polarway/optimiz-rs orchestration into an actionable portfolio framework.

  • HJB-Fokker-Planck fixed-point loop, stability constraints, and convergence checks
  • Differential Evolution parameterization for constrained portfolio search and regime shifts
  • Zero-copy data path: market state ingestion, Arrow memory layout, and DuckDB-ready research outputs

A free account (quick signup) is required to access the document. No prior approval needed.