How to properly apply the Lie Series
I am trying to solve this problem from *Symmetry Methods for Differential Equations: A Beginner's Guide* by Peter E. Hydon.
Use the Lie Series $$F(\hat{x},\hat{y})=\sum_{j=0}^{\infty}\frac{\varepsilon^j}{j!}X^j F(x,y)$$
where $$X = \xi(x,y)\partial_x +\eta(x,y)\partial_y$$
to verify that $$\hat{x}=\exp{(\varepsilon X)}x,$$ $$\hat{y}=\exp{(\varepsilon X)}y$$
holds for
$$a) X = x\partial_x-y\partial_y$$ $$b) X = x^2\partial_x+xy\partial_y$$ $$c) X = -y\partial_x+x\partial_y$$
I don't even know where to start. A step-by-step solution for one of the $X$ would be nice :).
2 Answers

A relevant reference is found here:

It is advised to absorb this one-dimensional theory first, before proceeding to 2-D.

$c)\; X = -y\, \partial_x+x\, \partial_y$
Disclaimer. In our (LaTeX) notes we have $f$ instead of $F$, $(x_1,y_1)$ instead of $(\hat{x},\hat{y})$, $\theta$ instead of $\varepsilon$, $k$ instead of $j$, and more. I haven't replaced the notation, because my eyes are bad and the danger of introducing mistakes seemed greater than the advantage of being consistent with the question.

An example of a continuous transformation in two dimensions is a rotation over an angle $\theta$:
$$
\left\{
\begin{array}{c} x_1 = \cos(\theta) . x - \sin(\theta) . y \\ y_1 = \sin(\theta) . x + \cos(\theta) . y
\end{array}
\right.
$$
It might be asked how rotation of the coordinate system works out for a
function of these variables. In other words, how would the following function
be expanded as a Taylor series around the original $f(x,y)$:
$$ f_1(x,y) = f(x_1,y_1) = f(\,\cos(\theta).x - \sin(\theta).y\, , \, \sin(\theta).x + \cos(\theta).y\, )
$$
Define other (polar) variables $(r,\phi)$ as:
$$ x = r.\cos(\phi) \quad \mbox{and} \quad y = r.\sin(\phi)
$$
Giving for the transformed variables:
$$ x_1 = r.\cos(\phi).\cos(\theta) - r.\sin(\phi).\sin(\theta) = r.\cos(\phi+\theta)
\\ y_1 = r.\cos(\phi).\sin(\theta) + r.\sin(\phi).\cos(\theta) = r.\sin(\phi+\theta)
$$
We see that $\phi$ is a proper canonical variable. Another function $g(\phi)$ is defined with this canonical variable as the independent one:
$$ g(\phi) = f(\,r.\cos(\phi)\, ,\,r.\sin(\phi)\,) = f(x,y)
$$
Now rotating $f(x,y)$ over an angle $\theta$ corresponds to translating
$g(\phi)$ over a distance $\theta$. Therefore $g(\phi+\theta)$ can be developed
into a Taylor series around the point of departure:
$$ g(\phi+\theta) = g(\phi) + \theta.\frac{dg(\phi)}{d\phi} + \frac{1}{2} \theta^2.\frac{d^2g}{d\phi^2} + ...
$$
Working back to the original variables $(x,y)$ with a well known chain rule for
partial derivatives:
$$ \frac{dg}{d\phi} = \frac{\partial g}{\partial x}\frac{dx}{d\phi} + \frac{\partial g}{\partial y}\frac{dy}{d\phi}
$$
Where:
$$ \frac{dx}{d\phi} = - r.\sin(\phi) = - y
\quad \mbox{and} \quad \frac{dy}{d\phi} = + r.\cos(\phi) = + x
\quad \Longrightarrow
\\ \frac{dg}{d\phi} = \frac{\partial g}{\partial x}.(-y) + \frac{\partial g}{\partial y}.(+x)
\quad \Longrightarrow \quad \frac{d}{d\phi} = x.\frac{\partial}{\partial y} - y.\frac{\partial}{\partial x}
$$
Herewith we find that $X = (x.\frac{\partial}{\partial y} - y.\frac{\partial}{\partial x})$ is the infinitesimal operator for Plane Rotations.
It is equal to differentiation with respect to the canonical variable, as expected. The end-result is:
$$ f_1(x,y) = \sum_{k=0}^{\infty} \frac{1}{k!} \left[ \theta \left(x.\frac{\partial}{\partial y} - y.\frac{\partial}{\partial x}\right) \right]^k f(x,y) = e^{ \theta (x\, \partial / \partial y - y\, \partial / \partial x) } f(x,y)
$$
This is true for any function $f(x,y)$. In particular, the independent
variables themselves can be conceived as such functions. Which means that:
$$ x_1 = e^{ \theta (x\, \partial / \partial y - y\, \partial / \partial x) } x \quad \mbox{and} \quad y_1 = e^{ \theta (x\, \partial / \partial y - y\, \partial / \partial x) } y
$$
It is easily demonstrated that:
$$ (x\frac{\partial}{\partial y} - y\frac{\partial}{\partial x}) x = - y
\quad \mbox{and} \quad (x\frac{\partial}{\partial y} - y\frac{\partial}{\partial x}) y = x
$$
Herewith we find:
$$ \sum_{k=0}^{\infty} \frac{1}{k!} \left[ \theta (x.\frac{\partial}{\partial y} - y.\frac{\partial}{\partial x}) \right]^k x = x - \theta.y - \frac{1}{2} \theta^2.x + \frac{1}{3!} \theta^3.y + \frac{1}{4!} \theta^4.x + ...
\\ = \cos(\theta).x - \sin(\theta).y = x_1
$$
Likewise we find:
$$ \sum_{k=0}^{\infty} \frac{1}{k!} \left[ \theta (x.\frac{\partial}{\partial y} - y.\frac{\partial}{\partial x} ) \right]^k y = y + \theta.x - \frac{1}{2} \theta^2.y - \frac{1}{3!} \theta^3.x + \frac{1}{4!} \theta^4.y + ...
\\ = \sin(\theta).x + \cos(\theta).y = y_1
$$
Thus, indeed, the formulas for a far-from-infinitesimal rotation over a finite
angle $\theta$ can be reconstructed from the expansions.
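This reconstruction can also be checked mechanically. The sketch below (assuming sympy is available; not part of the original answer) truncates the Lie series at order $N$ and compares it with the Taylor polynomials of $\cos(\theta)\,x - \sin(\theta)\,y$ and $\sin(\theta)\,x + \cos(\theta)\,y$:

```python
# Truncated Lie series for the rotation generator X = x d/dy - y d/dx,
# compared against the Taylor polynomials of the finite rotation.
import sympy as sp

x, y, theta = sp.symbols('x y theta')

def X(f):
    # infinitesimal rotation operator applied to an expression
    return x * sp.diff(f, y) - y * sp.diff(f, x)

def lie_series(f, N=12):
    # partial sum  sum_{k=0}^{N} theta^k/k! * X^k f
    total, term = sp.Integer(0), f
    for k in range(N + 1):
        total += theta**k / sp.factorial(k) * term
        term = X(term)
    return sp.expand(total)

x1 = lie_series(x)
y1 = lie_series(y)
# Taylor polynomials (through theta^12) of the rotated coordinates:
tx = sp.expand(sp.series(sp.cos(theta)*x - sp.sin(theta)*y, theta, 0, 13).removeO())
ty = sp.expand(sp.series(sp.sin(theta)*x + sp.cos(theta)*y, theta, 0, 13).removeO())
```

Both differences `x1 - tx` and `y1 - ty` simplify to zero, confirming the expansions above term by term.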
$a)\; X = x\, \partial_x-y\, \partial_y$
Read the 1-D reference. We have the following results there:
$$ e^{\ln(\lambda)\, x \frac{d}{dx}} f(x) = f(\lambda\,x) $$
where $\lambda$ is a positive scaling factor. We also have:
$$ e^{-\ln(\lambda)\, x \frac{d}{dx}} f(x) = e^{\ln(1/\lambda)\, x \frac{d}{dx}} f(x) = f(x/\lambda) $$
These results translate to 2-D in the following manner:
$$ e^{\ln(\lambda)\, x \frac{\partial}{\partial x}} f(x,y) = f(\lambda\,x,y) \\ e^{-\ln(\lambda)\, y \frac{\partial}{\partial y}} f(x,y) = f(x,y/\lambda) $$
The two exponents commute, so we can write, with $\;\ln(\lambda)=\mu\;\Longrightarrow\;\lambda=e^\mu=\exp(\mu)$:
$$ e^{\mu(x\, \partial_x - y\, \partial_y)}\; f(x,y) = f\left(e^\mu x,e^{-\mu} y\right) $$
In particular, with $\;X = x \frac{\partial}{\partial x} - y \frac{\partial}{\partial y}$:
$$ \exp(\mu X)\, x = \exp(\mu)\, x \quad \mbox{and} \quad \exp(\mu X)\, y = \exp(-\mu)\, y $$

$b)\; X = x^2\,\partial_x+xy\,\partial_y$
As for this case, I don't see how we can say more than, in the OP's notation:
$$ F(\hat{x},\hat{y})=\sum_{j=0}^{\infty}\frac{\varepsilon^j}{j!}X^j F(x,y) = \exp{(\varepsilon X)}\, F(x,y) $$
Then it follows, for the special functions $F(x,y)=x$ or $F(x,y)=y$, that:
$$ \hat{x}=\exp{(\varepsilon X)}\,x \\ \hat{y}=\exp{(\varepsilon X)}\,y $$
Please don't tell me that's all you want ..

An alternative and much more effective approach is proposed in this second answer. References:
In the first reference, the following formula is proved:
$$ x_1(t) = e^{t\,g(x)\frac{d}{dx}} x \quad \Longleftrightarrow \quad \dot{x}_1(t) = g(x_1(t)) \quad \mbox{with} \quad x = x_1(0) $$
But I've found in my old notes that there is a far more general result. For any one-parameter $(t)$ Lie group the following theorem holds:
$$ {\bf x}_1(t) = e^{t X} {\bf x} = e^{t\,{\bf g(x)}\cdot\nabla} {\bf x} \quad \Longleftrightarrow \quad \dot{{\bf x}}_1(t) = {\bf g}({\bf x}_1(t)) \quad \mbox{with} \quad {\bf x} = {\bf x}_1(0) $$
This means that the problem of finding the Lie series for the independent variables ${\bf x}$ can be reduced to solving a system of ordinary differential equations. In the two-dimensional case:
$$ X = {\bf g(x)}\cdot \nabla = \xi(x,y)\,\partial_x +\eta(x,y)\,\partial_y $$
So we only have to solve the ODE system:
$$ \left\{\begin{matrix}\dot{x}_1 = \xi(x_1,y_1) \\ \dot{y}_1 = \eta(x_1,y_1)\end{matrix}\right. $$
with boundary conditions that are always the same:
$$ \left\{\begin{matrix} x_1(0) = x \\ y_1(0) = y\end{matrix}\right. $$
Now let's do it for the operators at hand.

$a)\; X = x\, \partial_x-y\, \partial_y$
Accompanying ODE system:
$$ \left\{\begin{matrix}\dot{x}_1 = x_1 \\ \dot{y}_1 = -y_1\end{matrix}\right. $$
Together with the boundary conditions, this gives the solution already found in the first answer:
$$ \left\{\begin{matrix} x_1 = x\,e^t \\ y_1 = y\,e^{-t} \end{matrix}\right. $$
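This flow can be cross-checked against the Lie series itself. A sketch assuming sympy (not part of the original answer): since $Xx = x$ and $Xy = -y$, the truncated series of $x$ and $y$ should match the Taylor polynomials of $x\,e^t$ and $y\,e^{-t}$.

```python
# Case (a): truncated Lie series for X = x d/dx - y d/dy, compared
# with the Taylor polynomials of x*exp(t) and y*exp(-t).
import sympy as sp

x, y, t = sp.symbols('x y t')

def X(f):
    # infinitesimal scaling operator for case (a)
    return x * sp.diff(f, x) - y * sp.diff(f, y)

def lie_series(f, N=10):
    # partial sum  sum_{k=0}^{N} t^k/k! * X^k f
    total, term = sp.Integer(0), f
    for k in range(N + 1):
        total += t**k / sp.factorial(k) * term
        term = X(term)
    return sp.expand(total)

sx = lie_series(x)
sy = lie_series(y)
exp_poly = sp.series(sp.exp(t), t, 0, 11).removeO()  # through t^10
tx = sp.expand(x * exp_poly)
ty = sp.expand(y * exp_poly.subs(t, -t))
```

The differences `sx - tx` and `sy - ty` simplify to zero, order by order.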
$b)\; X = x^2\,\partial_x+xy\,\partial_y$
Accompanying ODE system:
$$ \left\{\begin{matrix}\dot{x}_1 = x_1^2 \\ \dot{y}_1 = x_1 y_1\end{matrix}\right. $$
Solve the first equation and substitute the solution into the second one. Solve again and apply the boundary conditions:
$$ \left\{\begin{matrix} x_1 = {\Large \frac{x}{1-t\,x}} \\ y_1 = {\Large \frac{y}{1-t\,x}} \end{matrix}\right. $$
This solution deserves some attention, because it is not in the first answer.
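One can also reach this result directly from the Lie series: $Xx = x^2$ gives $X^j x = j!\,x^{j+1}$ and $X^j y = j!\,x^j y$, so the series is geometric and sums to $x/(1-tx)$ and $y/(1-tx)$. A truncated check, sketched with sympy (an assumption, not in the original answer):

```python
# Case (b): X = x^2 d/dx + x*y d/dy.  The truncated Lie series of x and y
# should match the expansions of x/(1 - t*x) and y/(1 - t*x) in powers of t.
import sympy as sp

x, y, t = sp.symbols('x y t')

def X(f):
    # infinitesimal operator for case (b)
    return x**2 * sp.diff(f, x) + x*y * sp.diff(f, y)

def lie_series(f, N=8):
    # partial sum  sum_{k=0}^{N} t^k/k! * X^k f
    total, term = sp.Integer(0), f
    for k in range(N + 1):
        total += t**k / sp.factorial(k) * term
        term = X(term)
    return sp.expand(total)

sx = lie_series(x)
sy = lie_series(y)
# Geometric-series expansions through t^8:
tx = sp.expand(sp.series(x / (1 - t*x), t, 0, 9).removeO())
ty = sp.expand(sp.series(y / (1 - t*x), t, 0, 9).removeO())
```

Again the differences `sx - tx` and `sy - ty` simplify to zero, so the ODE route and the series route agree.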
$c)\; X = -y\, \partial_x+x\, \partial_y$
Accompanying ODE system:
$$
\left\{\begin{matrix}\dot{x}_1 = -y_1 \\ \dot{y}_1 = x_1\end{matrix}\right.
$$
Two separate equations for $x_1$ and $y_1$ can be found from this:
$$
\left\{\begin{matrix}\ddot{x}_1 + x_1 = 0 \\ \ddot{y}_1 + y_1 = 0 \end{matrix}\right. \quad \Longrightarrow \quad
\left\{\begin{matrix}x_1 = A\cos(t) + B\sin(t) \\ y_1 = C\cos(t) + D\sin(t) \end{matrix}\right.
$$
Employing the original ODE : $\;\dot{y}_1=x_1$ gives $\;-C = B\;$ and $\;D = A\;$ .
At last, apply the boundary conditions:
$$
\left\{\begin{matrix}x_1 = x\cos(t) - y \sin(t) \\ y_1 = x\sin(t) + y \cos(t)\end{matrix}\right.
$$
As has been found in the first answer too.
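As a final sanity check, the claimed flow for case (c) can be verified by direct substitution into the ODE system and the boundary conditions. A sketch, assuming sympy:

```python
# Verify case (c): x1 = x cos(t) - y sin(t), y1 = x sin(t) + y cos(t)
# satisfies x1' = -y1, y1' = x1, with x1(0) = x, y1(0) = y.
import sympy as sp

t, x, y = sp.symbols('t x y')
x1 = x*sp.cos(t) - y*sp.sin(t)
y1 = x*sp.sin(t) + y*sp.cos(t)

ode1 = sp.simplify(sp.diff(x1, t) + y1)   # should be 0  (x1' = -y1)
ode2 = sp.simplify(sp.diff(y1, t) - x1)   # should be 0  (y1' =  x1)
ic = (x1.subs(t, 0), y1.subs(t, 0))       # should be (x, y)
```

Both residuals vanish and the initial conditions come out as $(x,y)$, matching the rotation formulas derived in the first answer.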