  
Let
\begin{eqnarray*}
    f^t(u) &= \text{the time-t map of Navier-Stokes computed by DNS}, \\
    \sigma &= \text{a symmetry of the flow}.
\end{eqnarray*}

We seek solutions of the equation

\begin{eqnarray*}
   G(u,\sigma,t) = \sigma f^t(u) - u = 0
\end{eqnarray*}

//t// can be a fixed or a free variable, as can σ. For plane Couette the possible continuously varying variables in σ are //a_x, a_z// in the phase shifts //x → x + a_x L_x// and //z → z + a_z L_z//.
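As a concrete illustration, such a phase shift acts on a discretized field by multiplying its Fourier coefficients in the periodic directions by phase factors. The following is a minimal NumPy sketch, assuming a field sampled on a grid that is periodic in x and z; the array layout and the function name are illustrative only, and the discrete symmetries of plane Couette (reflections and shift-reflections) are not shown.

<code python>
import numpy as np

def shift_xz(u, ax, az):
    """Apply the phase shift x -> x + ax*Lx, z -> z + az*Lz to a real field
    u[nx, ny, nz] that is periodic in x (axis 0) and z (axis 2), by multiplying
    its Fourier coefficients with the corresponding phase factors."""
    nx, ny, nz = u.shape
    kx = np.fft.fftfreq(nx) * nx            # integer wavenumbers in x
    kz = np.fft.rfftfreq(nz) * nz           # integer wavenumbers in z (real FFT)
    uhat = np.fft.rfftn(u, axes=(0, 2))     # transform the periodic directions only
    phase = np.exp(2j * np.pi * (ax * kx[:, None, None] + az * kz[None, None, :]))
    return np.fft.irfftn(uhat * phase, s=(nx, nz), axes=(0, 2))
</code>

For a_x = a_z = 0 the field is returned unchanged, and shifts by an integer number of grid spacings reproduce a circular roll of the array.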

===== Discretization =====
  
between continuous fields u and the discrete representation {u_m}.
  
\begin{eqnarray*}
    \{u_m\} &= P(u) \\
    u_m &= P_m(u),  \;\;\;  1 \leq m \leq M
\end{eqnarray*}
  
Then //x = (u, σ, t)// is an //M+3// dimensional vector
  
\begin{eqnarray*}
   x = (u_1, u_2, \ldots, u_M, a_x, a_z, t)
\end{eqnarray*}
  
and the discrete equation to be solved is
  
\begin{eqnarray*}
   G(x) &= 0 \\
   G_m(x) &= P_m(\sigma f^t(u) - u),  \;\;\; 1 \leq m \leq M
\end{eqnarray*}
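A minimal sketch of this discrete formulation, assuming a hypothetical DNS time map f_t(u, t) and a hypothetical symmetry operator apply_sigma(u, ax, az) that act directly on the length-//M// coefficient vector u, so that the projection //P// is just the identity on that vector; the packing order matches the x above.

<code python>
import numpy as np

def pack(u, ax, az, t):
    """Assemble x = (u_1, ..., u_M, a_x, a_z, t), an (M+3)-vector."""
    return np.concatenate([u, [ax, az, t]])

def unpack(x):
    """Split x back into the field coefficients and the three scalars."""
    return x[:-3], x[-3], x[-2], x[-1]

def G(x, f_t, apply_sigma):
    """Residual G_m(x) = P_m(sigma f^t(u) - u): M values for M+3 unknowns."""
    u, ax, az, t = unpack(x)
    return apply_sigma(f_t(u, t), ax, az) - u
</code>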
  
This is //M// equations in //M+3// unknowns. The indeterminacy follows from the
   A \, dx_N = b
\end{eqnarray*}
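The definitions of A and b fall outside this excerpt; the sketch below assumes the standard Newton relations A = DG(x) and b = -G(x), forms the Jacobian column by column, and uses a dense least-squares solve in place of the extra constraint equations. That is exactly the expense the Krylov-subspace approach of the next section avoids, so this is for orientation only.

<code python>
import numpy as np

def newton_iteration(G, x0, nsteps=20, tol=1e-12):
    """Bare Newton iteration for G(x) = 0: solve A dx_N = b with A = DG(x)
    and b = -G(x), then update x <- x + dx_N.

    The Jacobian is built by finite differences, one column per unknown, and
    the minimum-norm least-squares solve stands in for the extra constraint
    equations; both are far too expensive for large M."""
    x = np.asarray(x0, dtype=float)
    for _ in range(nsteps):
        b = -G(x)
        if np.linalg.norm(b) < tol:
            break
        eps = 1e-7 * (1.0 + np.linalg.norm(x))
        A = np.column_stack([(G(x + eps * e) + b) / eps   # (G(x+eps e) - G(x)) / eps
                             for e in np.eye(x.size)])
        dx_N, *_ = np.linalg.lstsq(A, b, rcond=None)
        x = x + dx_N
    return x
</code>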

===== Krylov subspace solution of the Newton equation =====
  
with finite differencing and time integration:
  
\begin{eqnarray*}
   DG(x) \, dx &= 1/\epsilon \; (G(x + \epsilon \, dx) - G(x)) \text{ where } \epsilon |dx| \ll 1 \\
               &= 1/\epsilon \; P[(\sigma + \epsilon \, d\sigma) f^{t+\epsilon \, dt}(u + \epsilon \, du) - (u + \epsilon \, du) - (\sigma f^{t}(u) - u)] \\
               &= 1/\epsilon \; P[(\sigma + \epsilon \, d\sigma) f^{t+\epsilon \, dt}(u + \epsilon \, du) - \sigma f^{t}(u) - \epsilon \, du]
\end{eqnarray*}
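Each such product costs one additional evaluation of G, i.e. one more DNS integration of length t. A minimal sketch of the matrix-free product; the particular scaling of ε below is an assumption, not a prescription.

<code python>
import numpy as np

def dG(G, x, dx, Gx=None, eps=1e-7):
    """Finite-difference approximation of the product DG(x) dx.

    G   : residual function
    x   : current Newton iterate
    dx  : direction vector
    Gx  : optional cached value of G(x), to avoid recomputing it
    eps : sets the step so that epsilon*||dx|| is small relative to ||x||
    """
    Gx = G(x) if Gx is None else Gx
    ndx = np.linalg.norm(dx)
    if ndx == 0.0:
        return np.zeros_like(Gx)
    epsilon = eps * (1.0 + np.linalg.norm(x)) / ndx
    return (G(x + epsilon * dx) - Gx) / epsilon
</code>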
  
The last three entries of //A dx// are simple evaluations of the inner products //(du, du/dt),//
algorithm to a constrained minimization algorithm called the
"hookstep", specially adapted to work with GMRES.

===== The hookstep algorithm =====
  
matrices //Q_n, Q_{n+1},// and //H_n// such that
  
\begin{eqnarray*}
   A Q_n = Q_{n+1} H_n
\end{eqnarray*}
  
where //Q_n// is //M x n//, //Q_{n+1}// is //M x (n+1)//, and //H_n// is //(n+1) x n//.
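A compact sketch of the Arnoldi loop that produces these matrices, assuming apply_A is the matrix-free product sketched above and that the iteration does not break down (H_{j+1,j} stays nonzero):

<code python>
import numpy as np

def arnoldi(apply_A, b, n):
    """Build Q (M x (n+1), orthonormal columns) and H ((n+1) x n, upper
    Hessenberg) such that A Q_n = Q_{n+1} H_n, with Q_n = Q[:, :n] and
    first Krylov vector q_1 = b / ||b||."""
    M = b.shape[0]
    Q = np.zeros((M, n + 1))
    H = np.zeros((n + 1, n))
    Q[:, 0] = b / np.linalg.norm(b)
    for j in range(n):
        v = apply_A(Q[:, j])
        for i in range(j + 1):                 # modified Gram-Schmidt
            H[i, j] = Q[:, i] @ v
            v = v - H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(v)
        Q[:, j + 1] = v / H[j + 1, j]
    return Q, H
</code>

The columns satisfy apply_A(Q[:, :n]) ≈ Q @ H up to the finite-difference error in apply_A.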
constrained optimization. Namely, we minimize
  
\begin{eqnarray*}
   || Q_{n+1}^T (A \, dx_H - b) || &= || Q_{n+1}^T A Q_n s - Q_{n+1}^T b || \\
                                   &= || Q_{n+1}^T Q_{n+1} H_n s - Q_{n+1}^T b || \\
                                   &= || H_n s - Q_{n+1}^T b ||
\end{eqnarray*}
  
subject to //||dx_H||^2 = || s ||^2 ≤ δ^2//. Note that the quantity in
Then the RHS of the previous equation becomes
  
\begin{eqnarray*}
   || H_n s - Q_{n+1}^T b || &= || U D V^T s - Q_{n+1}^T b || \\
                             &= || D V^T s - U^T Q_{n+1}^T b || \\
                             &= || D \hat{s} - \hat{b} ||
\end{eqnarray*}
  
where //\hat{s} = V^T s// and //\hat{b} = U^T Q_{n+1}^T b//. Now we need to
The solution is
  
\begin{eqnarray*}
   \hat{s}_i = (\hat{b}_i d_i)/(d^2_i + \mu)
\end{eqnarray*}
  
(no Einstein summation) for the //μ// such that //||\hat{s}||^2 = δ^2//.
This value of //μ// can be found with a 1d Newton search for the zero of
  
\begin{eqnarray*}
   \Phi(\mu) = ||\hat{s}(\mu)||^2 - \delta^2
\end{eqnarray*}
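In code, \hat{s}(μ) and Φ(μ) are one line each; here bhat and d stand for the vectors \hat{b} and d_i obtained from the (thin) SVD of H_n, so both have length n:

<code python>
import numpy as np

def s_hat(mu, bhat, d):
    """Components of the constrained minimizer, s_hat_i = bhat_i d_i / (d_i^2 + mu)."""
    return bhat * d / (d**2 + mu)

def Phi(mu, bhat, d, delta):
    """Phi(mu) = ||s_hat(mu)||^2 - delta^2; its positive root gives the hookstep."""
    return np.sum(s_hat(mu, bhat, d)**2) - delta**2
</code>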
  
A straight Newton search a la //μ → μ - Φ(μ)/Φ'(μ)// is suboptimal
solution //\hat{s}(μ)//, we compute the hookstep solution //s// from
  
\begin{eqnarray*}
   dx_H = Q_n s = Q_n V \hat{s}
\end{eqnarray*}
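Putting the pieces together, here is a sketch of the full hookstep computation from the GMRES matrices. It uses the thin SVD (so d has length n) and, in place of the specialized 1d search discussed above, a simple bracketing root finder; both choices are conveniences of the sketch, not the method described in the text.

<code python>
import numpy as np
from scipy.optimize import brentq

def hookstep(Q, H, b, delta):
    """Hookstep dx_H from the Arnoldi factorization A Q_n = Q_{n+1} H_n.

    Q     : M x (n+1) orthonormal Krylov basis Q_{n+1}
    H     : (n+1) x n Hessenberg matrix H_n
    b     : right-hand side of the Newton equation
    delta : trust-region radius
    """
    n = H.shape[1]
    U, d, Vt = np.linalg.svd(H, full_matrices=False)    # thin SVD: H = U diag(d) V^T
    bhat = U.T @ (Q.T @ b)                              # bhat = U^T Q_{n+1}^T b

    s_hat = bhat / d                                    # mu = 0: the plain GMRES solution
    if np.linalg.norm(s_hat) > delta:                   # outside the trust region:
        def Phi(mu):                                    # find mu with ||s_hat(mu)|| = delta
            return np.sum((bhat * d / (d**2 + mu))**2) - delta**2
        mu_hi = np.linalg.norm(bhat) * d.max() / delta  # guarantees Phi(mu_hi) <= 0
        mu = brentq(Phi, 0.0, mu_hi)
        s_hat = bhat * d / (d**2 + mu)

    return Q[:, :n] @ (Vt.T @ s_hat)                    # dx_H = Q_n V s_hat
</code>

If the unconstrained GMRES step already lies within the trust region, the test above leaves it untouched (μ = 0). How the radius δ itself is adjusted from one Newton step to the next is the subject of the following section.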
  
===== Determining the trust-region radius δ =====