
Integration of odd variables II

Abstract
We now proceed to define integration with respect to odd variables.

The fundamental theorem of calculus for odd variables
Let us consider just one odd variable. This will be sufficient for our purposes for now. Following the direct analogy with integration of functions over a circle, the fundamental theorem of calculus states

\(\int D\theta \frac{\partial f(\theta)}{\partial \theta} =0\).

We use the notation \(D\theta \) for the measure rather than \(d\theta \) as the measure cannot be associated with a one-form. We will discuss this in more detail another time.

Definition of integration
Recall that the general form of a function in one odd variable is

\(f(\theta) = a + \theta b\),

with a and b being real numbers. Thus from the fundamental theorem we have

\(\int D\theta b =0\).

In particular this implies

\(\int D\theta =0\).

Then we have

\(\int D\theta f(\theta) = a \int D\theta + b \int D\theta \:\: \theta = b \int D\theta\:\: \theta \).

Thus to define integration all we have to do is define the normalisation

\(\int D\theta\:\: \theta\).

The choice made by Berezin was to set this to unity; other choices are just as valid. Thus,

\(\int D\theta f(\theta) = b\).

Integration for several odd variables
For the case of more than one odd variable one simply uses

\(\int D(\theta_{1}, \theta_{2} , \cdots \theta_{n})f(\theta) = \int D\theta_{1} \int D \theta_{2} \cdots \int D\theta_{n} f(\theta)\).

Example Consider two odd variables.

\(\int D(\overline{\theta}, \theta) \left( f_{0} + \theta \:f + \overline{\theta}\: \overline{f} + \theta \overline{\theta}F \right) = F \).

The general rule is that (taking care with signs) the integration with respect to the measure \(D(\theta_{1}, \theta_{2} , \cdots \theta_{n})\) of a function picks out the coefficient of the \(\theta_{1}, \theta_{2} , \cdots \theta_{n}\) term.
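
To make the sign bookkeeping concrete, here is a minimal Python sketch. This is my own illustration, not part of the original text: the encoding of a Grassmann polynomial as a dict from strictly increasing index tuples to coefficients, and the names `gmul` and `berezin`, are mine.

```python
def gmul(f, g):
    """Product of two Grassmann polynomials.

    A polynomial is a dict mapping a tuple of strictly increasing
    generator indices (the monomial theta_i1 ... theta_ik) to a real
    coefficient.  Each swap of two odd generators costs a minus sign.
    """
    out = {}
    for mono_f, cf in f.items():
        for mono_g, cg in g.items():
            if set(mono_f) & set(mono_g):
                continue  # theta^2 = 0 kills any repeated generator
            merged, sign = list(mono_f + mono_g), 1
            # bubble sort the indices, tracking the parity of the permutation
            for i in range(len(merged)):
                for j in range(len(merged) - 1 - i):
                    if merged[j] > merged[j + 1]:
                        merged[j], merged[j + 1] = merged[j + 1], merged[j]
                        sign = -sign
            key = tuple(merged)
            out[key] = out.get(key, 0) + sign * cf * cg
    return out

def berezin(f, k):
    """Integrate against D theta_k, normalised so that
    int D theta_k theta_k = 1: anticommute theta_k to the front of each
    monomial (picking up a sign) and strip it; monomials without
    theta_k integrate to zero."""
    out = {}
    for mono, c in f.items():
        if k in mono:
            sign = (-1) ** mono.index(k)  # theta_k passes this many generators
            rest = tuple(i for i in mono if i != k)
            out[rest] = out.get(rest, 0) + sign * c
    return out

# The two-variable example above, with theta ~ 1 and thetabar ~ 2:
# f = f0 + theta*f1 + thetabar*f1bar + theta*thetabar*F
f = {(): 1.0, (1,): 2.0, (2,): 3.0, (1, 2): 4.0}
print(berezin(berezin(f, 1), 2))  # {(): 4.0} -- picks out the coefficient F
```

Iterating the single-variable integral, innermost first as in the formula above, indeed returns the coefficient of the top term, signs included.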

Integration and differentiation are the same!
From the above we see that differentiation with respect to an odd variable is the same as integration with respect to the odd variable. This explains why we cannot associate a “top-form” with the measure. This will become more apparent when we discuss changes of variables.
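
To spell the claim out with the simplest example: with \(f(\theta) = a + \theta b\) we have

\(\frac{\partial f(\theta)}{\partial \theta} = b = \int D\theta \: f(\theta)\),

so the odd derivative and the Berezin integral produce exactly the same answer.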

What next?
Next we will examine how changing variables in the integration affects the measure. We will see that things look “upside down” as compared with the integration of real variables. This is anticipated by the equivalence of integration and differentiation.

Integration of odd variables I

Abstract
Before we consider odd variables, let us describe how to algebraically define integration of functions over the circle.

Functions on the circle
Recall the Fourier expansion: any sufficiently well-behaved function on the circle is of the form

\(f(x) = \frac{a_{0}}{2} + \sum_{n=1}^{\infty}\left( a_{n} \cos(nx) + b_{n}\sin(nx) \right) \),

with the a’s and b’s being constants, i.e. independent of the variable x.

The fundamental theorem of calculus
The fundamental theorem of calculus states that

\(\int_{S^{1}} dx \: \frac{\partial f(x)}{\partial x } = 0 \),

as functions on the circle are periodic.

Integration of functions
It turns out that integration of functions over the circle can be defined algebraically up to a choice in measure. To see this observe

\(\int_{S^{1}} dx f(x) = \int_{S^{1}} dx \frac{a_{0}}{2} + \int_{S^{1}} dx \sum_{n=1}^{\infty}\left( a_{n} \cos(nx) + b_{n}\sin(nx) \right)\).

Then we can write

\(\int_{S^{1}} dx f(x) = \frac{a_{0}}{2} \int_{S^{1}} dx + \int_{S^{1}} dx \frac{\partial }{\partial x} \sum_{n=1}^{\infty} \left ( \frac{a_{n}}{n}\sin(nx) - \frac{ b_{n}}{n} \cos(nx) \right)\)

to get via the fundamental theorem of calculus

\(\int_{S^{1}} dx f(x) = \frac{a_{0}}{2} \int_{S^{1}} dx\).

So we have just about defined integration completely algebraically from the fundamental theorem of calculus. All we have to do is specify the normalisation

\(\int_{S^{1}} dx \).

The standard choice would be

\(\int_{S^{1}} dx = 2 \pi\),

to get back to our usual notion of integration of periodic functions. Though it would be quite consistent to consider some other normalisation, say to unity.

Anyway, up to a normalisation the integration of functions over the circle selects the “constant term” of the corresponding Fourier expansion.
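
One can check this quickly with sympy (my addition; any trigonometric polynomial will do):

```python
import sympy as sp

x = sp.symbols('x')
# a trig polynomial on the circle with constant term a0/2 = 5/2
f = sp.Rational(5, 2) + 3*sp.cos(2*x) - 7*sp.sin(x)

# integrating over one period kills every non-constant Fourier mode,
# leaving (a0/2) times the normalisation of the measure, here 2*pi
print(sp.integrate(f, (x, 0, 2*sp.pi)))  # -> 5*pi, i.e. (5/2) * 2*pi
```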

What next?
So, the above construction demonstrates that integration of functions over a domain without boundaries can be defined algebraically, up to a normalisation. This served as the basis for Berezin, who defined the notion of integration of odd variables.

Recall that odd variables have no topology and no boundaries. The integration with respect to such variables cannot be in the sense of Riemann. However, thinking of functions of odd variables in analogy with periodic functions, integration can be defined algebraically. We will describe this next time.

Differential calculus of odd variables

Abstract
Here we will define the notion of differentiation with respect to an odd variable and examine some basic properties.

Definition
Differentiation with respect to an odd variable is completely and uniquely defined via the following rules:

  1. \(\frac{ \partial \theta^{\beta} }{\partial \theta^{\alpha}} = \delta_{\alpha}^{\beta} \).
  2. Linearity:
    \(\frac{\partial}{\partial \theta }(a f(\theta)) = a \frac{\partial}{\partial \theta } f(\theta)\).
    \(\frac{\partial}{\partial \theta }( f(\theta) + g(\theta)) = \frac{\partial}{\partial \theta }f(\theta) + \frac{\partial}{\partial \theta } g(\theta)\).
  3. Leibniz rule:
    \(\frac{\partial}{\partial \theta }( f(\theta)g(\theta)) = \frac{\partial f(\theta)}{\partial \theta }g(\theta) + (-1)^{\widetilde{f}} f(\theta) \frac{\partial g(\theta)}{\partial \theta } \).

The operator \(\frac{\partial }{\partial \theta }\) is odd, that is it changes the parity of the function it acts on. This must be taken care of when applying Leibniz’s rule.

Elementary properties
It is easy to see that

\(\frac{\partial}{\partial \theta^{\alpha}}\frac{\partial}{\partial \theta^{\beta}}+ \frac{\partial}{\partial \theta^{\beta}}\frac{\partial}{\partial \theta^{\alpha}}=0\),

in particular

\(\left( \frac{\partial}{\partial \theta} \right)^{2}=0\).

Example
\(\frac{\partial}{\partial \theta} (a + \theta b+ \overline{\theta}c + \theta \overline{\theta} d ) = b + \overline{\theta}d\).

Example
\(\frac{\partial}{\partial \overline{\theta}} (a + \theta b+ \overline{\theta}c + \theta \overline{\theta} d ) = c- \theta d\).
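
As a concrete check, here is a small Python sketch (my own illustration; the dict encoding of monomials and the name `dd` are not from the original text) that implements the left derivative and reproduces the two examples above:

```python
def dd(f, k):
    """Left derivative d/d theta_k of a Grassmann polynomial.

    f maps tuples of strictly increasing generator indices to
    coefficients.  Rule: anticommute theta_k to the front of the
    monomial (picking up a sign) and then delete it.
    """
    out = {}
    for mono, c in f.items():
        if k in mono:
            sign = (-1) ** mono.index(k)  # theta_k passes this many generators
            rest = tuple(i for i in mono if i != k)
            out[rest] = out.get(rest, 0) + sign * c
    return out

# f = a + theta*b + thetabar*c + theta*thetabar*d, with theta ~ 1, thetabar ~ 2
a, b, c, d = 1.0, 2.0, 3.0, 4.0
f = {(): a, (1,): b, (2,): c, (1, 2): d}

print(dd(f, 1))  # {(): 2.0, (2,): 4.0}   i.e. b + thetabar*d
print(dd(f, 2))  # {(): 3.0, (1,): -4.0}  i.e. c - theta*d
```

Note that this is exactly the same operation as the Berezin integral sketched earlier on this page, which is the point of the post above on integration.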

Changes of variables
Under changes of variable of the form \(\theta \rightarrow \theta^{\prime}\) the derivative transforms in the standard way:

\(\frac{\partial}{\partial \theta^{\prime}} = \frac{\partial\theta}{\partial \theta^{\prime}} \frac{\partial}{ \partial \theta}\).
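
For example (a simple illustration of my own), consider the linear change of variables \(\theta^{\prime} = \alpha \theta\) with \(\alpha\) an invertible even constant. Then \(\theta = \alpha^{-1}\theta^{\prime}\) and so

\(\frac{\partial}{\partial \theta^{\prime}} = \alpha^{-1}\frac{\partial}{\partial \theta}\).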

We will have a lot more to say about changes of variables (coordinates) another time.

What next?
We now know how to define and use the derivative with respect to an odd variable. Note that this was done algebraically with no mention of limits. As the functions in odd variables are polynomial the derivative was simple to define.

Next we will take a look at integration with respect to an odd variable. We cannot think in terms of boundaries, limits or anything resembling the Riemann or Lebesgue notions of integration. Everything will need to be done algebraically.

This will lead us to the Berezin integral which has the strange property that integration and differentiation with respect to an odd variable are the same.

Elementary algebraic properties of superalgebras

Abstract
Here we will present the very basic ideas of Grassmann variables and polynomials over them.

Grassmann algebra
Consider a set of n odd variables \(\{ \theta^{1}, \theta^{2}, \cdots \theta^{n} \}\). By odd we will mean that they satisfy

\( \theta^{a}\theta^{b} + \theta^{b} \theta^{a}=0\).

Note that in particular this means \((\theta^{a})^{2}=0\). That is, the generators are nilpotent.

The Grassmann algebra is then defined as the polynomial algebra in these variables. Thus a general function in odd variables is

\(f(\theta) = f_{0} + \theta^{a}f_{a} + \frac{1}{2!} \theta^{a} \theta^{b}f_{ba} + \cdots + \frac{1}{n!} \theta^{a_{1}} \cdots \theta^{a_{n}}f_{a_{n}\cdots a_{1}}\).

The coefficients we take to be real and antisymmetric in their indices. Note that the nilpotency of the odd variables means the expansion terminates at degree n: every function of odd variables is a polynomial of this form.

Example If we have the algebra generated by a single odd variable \(\theta \) then polynomials are of the form

\(a + \theta b\).

Example If we have two odd variables \(\theta\) and \(\overline{\theta}\) then polynomials are of the form

\(a + \theta b + \overline{\theta} c + \theta \overline{\theta} d\).

It is quite clear that the polynomials in odd variables form a vector space: you can add such functions and multiply them by real numbers, and the result remains a polynomial. It is also straightforward to see that we have an algebra: one can multiply two such functions together and get another.
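
For example, with a single odd variable,

\((a_{1} + \theta b_{1})(a_{2} + \theta b_{2}) = a_{1}a_{2} + \theta(b_{1}a_{2} + a_{1}b_{2})\),

since the \(\theta^{2}\) term vanishes; the product is again a polynomial of the same form.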

The space of all such functions has a natural \(\mathbb{Z}_{2}\)-grading, which we will call parity, given by the number of odd generators in each term mod 2. If a function has an even/odd number of odd variables then the function is even/odd. We will denote the parity of a function by \(\widetilde{f}= 0/1\) if it is even/odd.

Example \(a +\theta \overline{\theta} d \) is an even function and \(\theta b + \overline{\theta} c \) is an odd function.

Let us define the (super)commutator of such functions as

\([f,g] = fg -(-1)^{\widetilde{f} \widetilde{g}} gf\).

If the functions are not homogeneous, that is, neither even nor odd, the commutator is extended via linearity. We see that the commutator of any two functions in odd variables vanishes. Thus we say that the algebra of functions in odd variables forms a supercommutative algebra.
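
For example, for the odd generators themselves,

\([\theta, \overline{\theta}] = \theta\overline{\theta} - (-1)^{1\cdot 1}\overline{\theta}\theta = \theta\overline{\theta} + \overline{\theta}\theta = 0\).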

Specifically, note that supercommutativity still means the ordering of odd functions is important: interchanging two odd functions introduces a minus sign.

Superspaces
The modern approach to geometry is to define and deal with “spaces” in terms of the functions upon them. Geometrically we can think of the algebra generated by n odd variables as defining the space \(\mathbb{R}^{0|n}\). Note that no such “space” in the classical sense exists. In fact such spaces consist of only one point!

If we promote the coefficients in the polynomials to be functions of m real variables then we have the space \(\mathbb{R}^{m|n}\). We are now most of the way to defining supermanifolds, but this would be a digression from the current issues.

Noncommutative superalgebras
Of course, superalgebras for which the commutator is in general non-vanishing can be defined, and they occur naturally. We will encounter such things when dealing with first order differential operators acting on functions in odd variables. Geometrically these are the vector fields. Recall that the Lie bracket between vector fields over a manifold is in general non-vanishing.

What next?
Given the basic algebraic properties of functions in odd variables we will proceed to algebraically define how to differentiate with respect to odd variables.

Introduction to Superanalysis

Foreword
Following a conversation in a popular science chat room in which the subject of Grassmann variables, and in particular the Berezin integral, arose, I decided to write a short introduction to the basic theory of superalgebras, particularly supercommutative algebras and their calculus.

We will be primarily interested in algebras that involve the Koszul sign rule, that is, include an extra minus sign when interchanging odd elements:

\(ab = -ba\).

Ancient History
The beginning of all supermathematics can be traced back to 1844 and the work of Hermann Günther Grassmann on linear algebra. He introduced variables that involve a minus sign when interchanging their order. Élie Cartan’s theory of differential forms is also, in hindsight, a “super-theory”. Many other constructions in algebra and topology can be thought of as “super” and involve a sign factor when interchanging the order.

Physics
By the early 1950s odd variables had appeared in quantum field theory as a semiclassical description of fermions. Initially the analysis was based on the canonical description of quantisation and so was confined to derivatives with respect to odd variables. Berezin in 1961 introduced the integration theory for odd variables, and this was promptly applied to the path integral approach to quantisation.

Supermanifolds
In these early works odd variables were understood very formally, in an algebraic way. That is, they were not associated with any general notion of a space. Berezin’s treatment of even and odd variables convinced him that there should be a way to treat them analogously to real and complex variables in complex geometry. The bulk of this work was carried out by Berezin and his collaborators between 1965 and 1975. Berezin introduced general non-linear transformations that mix even and odd variables, as well as a generalisation of the determinant (the Berezinian) to integration over even and odd variables. This work led to the notion of superspaces and supermanifolds. In essence one thinks of a supermanifold as a “manifold” with even (commuting) and odd (anticommuting) coordinates. A detailed discussion of supermanifolds is out of the scope of this introduction.

Supersymmetric field theories
The nomenclature “super” comes from physics. Gol’fand & Likhtman extended the Poincaré group to include “odd translations”. These operators are fermionic in nature and thus require anticommutators in the extended Poincaré algebra. Supersymmetry is a remarkable symmetry that mixes bosonic and fermionic degrees of freedom. Lagrangians (or actions) that exhibit supersymmetry have some very attractive features. The surprising result is that supersymmetry can cancel most or even all of the divergences of certain quantum field theories. A detailed discussion of supersymmetric field theories is outside the scope of this introduction.

Gauge theories and the BRST symmetry
The use of odd variables is also necessary in (perturbative) non-abelian gauge theories (in the covariant gauges at least), even if one initially restricts attention to theories without fermions. There are several complications that do not arise in abelian gauge theory. These originate primarily from the gauge fixing, which affects the path integration measure in a non-trivial way. Feynman in 1963 showed that, using the standard quantisation methods available at the time, Yang-Mills theory was not unitary. Feynman also showed that counter terms, now known as ghosts, could be added to remove the non-unitary parts. Originally these ghosts, which are odd but violate the spin-statistics theorem, were seen as ad hoc. Later Faddeev and Popov showed that these ghosts arise in the theory by considering the so-called Faddeev-Popov determinant.

It was noticed that the gauge-fixed Lagrangian possesses a new global (super)symmetry that rotates the gauge fields into ghosts. This symmetry is named after its discoverers, Becchi, Rouet and Stora, and independently Tyutin: thus, BRST symmetry. As this is a global symmetry no new degrees of freedom can be eliminated.

The BRST symmetry is now a fundamental tool when dealing with quantum gauge theories. For example, the BRST symmetry is important when considering the renormalisability and absence of anomalies of a given theory. We will not say any more about gauge theories in this introduction.

Mathematical applications
Odd elements can be employed very successfully in pure mathematics. For example, the de Rham complex of a manifold can be completely understood in terms of functions and vector fields over a particular supermanifold. Multivector fields can also be thought of in a similar way in terms of a supermanifold and an odd analogue of a Poisson bracket.

Various algebraic structures can be encoded in superalgebras that come equipped with a homological vector field, that is, an odd vector field that “squares to zero”:

\(Q^{2} = \frac{1}{2}[Q,Q]=0\).

Common examples include Lie algebras, \(L_{\infty}\)-algebras, Lie algebroids, \(A_{\infty}\)-algebras, etc.
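
A simple concrete example (standard, though my addition here): on the supermanifold \(\Pi T\mathbb{R}\), with an even coordinate \(x\) and an odd coordinate \(\theta\) (playing the role of \(dx\)), the de Rham differential is the homological vector field

\(Q = \theta \frac{\partial}{\partial x}\),

and indeed \(Q^{2}f = \theta \partial_{x}(\theta \partial_{x} f) = \theta \theta \: \partial_{x}^{2}f = 0\), as \(\theta^{2} = 0\).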

Guide to this introduction
I hope that these opening words have convinced you that the study of superalgebras and Grassmann odd variables is useful in physics and pure mathematics.

I will be quite informal in presentation and attitude. The intention is to convey the main ideas without overburdening the reader.

A tentative guide is as follows:

  1. Elementary algebraic properties of superalgebras.
  2. Differential calculus of odd variables.
  3. Integration with respect to odd variables: the Berezin integral.

Quick guide to references
The mathematical theory of Grassmann algebras, superalgebras and supermanifolds is well established and can be found in several books. Any book on quantum field theory will say something about the algebra and calculus of odd variables. The mathematical books that I like include:

  • Gauge Field Theory and Complex Geometry, Yuri I. Manin, Springer; 2nd edition (June 27, 1997).
  • Geometric Integration Theory on Supermanifolds, Th. Th. Voronov, Routledge; 1st edition (January 1, 1991).
  • Supersymmetry for Mathematicians: An Introduction, V. S. Varadarajan, American Mathematical Society (July 2004).

Other books that deserve a mention are

  • Supermanifolds, Bryce DeWitt, Cambridge University Press; 2nd edition (June 26, 1992).
  • Supermanifolds: Theory and Applications, A. Rogers, World Scientific Publishing Company (April 18, 2007).

Quantum Algebra?

“Quantum algebra” is used as one of the top-level mathematics categories on the arXiv. However, to me at least, it is not very clear what is meant by the term.

Topics in this section include

  • Quantum groups and noncommutative geometry
  • Poisson algebras and generalisations
  • Operads and algebras over them
  • Conformal and Topological QFT

Generally these include things that are not necessarily commutative.

What is a commutative algebra? Intentionally being very informal, an algebra is a vector space over the real or complex numbers (more generally any field) endowed with a product of two elements.

So let us fix some vector space \(\mathcal{A}\) say over the real numbers. It is an algebra if there is a notion of multiplication of two elements that is associative

\(a(bc) = (ab)c\)

and distributive

\(a(b+c) = ab + ac\),

with \(a,b,c \in \mathcal{A}\). There may also be a unit

\(ea = ae = a \) for all \(a \in \mathcal{A}\). Sometimes there may be no unit.

An algebra is commutative if the order of the multiplication does not matter. That is

\(ab = ba\).

For example, if \(a\) and \(b\) are real or complex numbers then the above holds. So real numbers and complex numbers can be thought of as “commutative algebras over themselves”.

It is common to define a commutator as

\([a,b] = ab - ba\).

If the commutator is zero then the algebra is commutative. If the commutator is non-zero then the algebra is noncommutative. In the second case the order of multiplication matters

\(ab \neq ba \)

in general.

The first example here is the algebra of 2×2 matrices.
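
For instance (a small illustration of my own, using numpy), two simple 2×2 matrices already fail to commute:

```python
import numpy as np

A = np.array([[0, 1], [0, 0]])
B = np.array([[0, 0], [1, 0]])

print(A @ B)          # [[1, 0], [0, 0]]
print(B @ A)          # [[0, 0], [0, 1]]
# the commutator [A, B] = AB - BA is non-zero, so the algebra is noncommutative
print(A @ B - B @ A)  # [[1, 0], [0, -1]]
```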

So why “quantum”? Of course noncommutative algebras were known to mathematicians before the discovery of quantum mechanics. However, they were not generally known by physicists. The algebras used in classical mechanics, say in the Hamiltonian description, are all commutative. Here the phase space is described by coordinates \(x,p \), or equivalently by the algebra of functions in these variables. This algebra is invariably commutative.

In quantum mechanics something quite remarkable happens. The phase space gets replaced by something noncommutative. We can think of “local coordinates” \(\hat{x}, \hat{p}\) that are no longer commutative. In fact we have

\([\hat{x}, \hat{p}] = i \hbar \),

which is known as the canonical commutation relation and is really the fundamental equation of quantum mechanics. The constant \(\hbar\) is the (reduced) Planck constant and sets the scale of quantum theory.

The point being that quantum mechanics means that one must consider noncommutative algebras. Thus the relatively informal bijection “quantum” \(\leftrightarrow \) “noncommutative”.

We can also begin to understand Einstein’s dislike of quantum mechanics, as pointed out by Dirac. The theories of special and general relativity are by their nature very geometric. As I have suggested, a space can be thought of as being defined by the algebra of functions on it. Einstein’s theories are based on commutative algebra. Quantum mechanics, on the other hand, is based on noncommutative algebra; in particular the phase space is some sort of “noncommutative space”. The thought of a noncommutative space, where “the coordinates do not commute”, should make you shudder the first time you hear it!

One place you should pause for reflection is the notion of a point. In noncommutative geometry there is no elementary intuitive notion of a point. Noncommutative geometry is pointless geometry!

We can understand this via the quantum mechanical phase space and the Heisenberg uncertainty relation. Recall that the uncertainty principle states that one cannot know simultaneously the position and momentum of a quantum particle. One cannot really “select a point” in the phase space. The best we have is

\(\Delta \hat{x} \: \Delta \hat{p} \geq \frac{\hbar}{2}\).

The phase space is cut up into fuzzy Bohr-Heisenberg cells and does not consist of a collection of points.

At first it seems that all geometric intuition is lost. This however is not the case if we think of a space in terms of the functions on it. A great deal of noncommutative geometry is rephrasing things in classical differential geometry in terms of the functions on the space (the structure sheaf). Then the notion may pass to the noncommutative world. I should say more on noncommutative geometry another time.

Lie-Infinity Algebroids? II

This post should be considered as part two of the earlier post Lie-Infinity Algebroids?

The term \(L_{\infty} \) -algebroid seems not to be very well established in the literature. A nice discussion of this can be found at the nLab.

To quickly recall, the definition I use is that the Q-manifold \((\Pi E, Q)\) is an \(L_{\infty} \)-algebroid, where \(E \rightarrow M \) is a vector bundle and \(Q \) is a homological vector field of arbitrary weight. The weight is provided by the assignment of zero to the base coordinates and one to the fibre coordinates. If the homological vector field is of weight one, then we have a Lie algebroid.

It is by now quite well established that a Lie algebroid, as above, is equivalently described by

i) A weight minus one Schouten structure on the total space \(\Pi E^{*}\).
ii) A weight minus one Poisson structure on the total space of \(E^{*}\).

In other words, Lie algebroids are equivalent to certain graded Schouten or Poisson algebras. Recall, a Schouten algebra is an odd version of a Poisson algebra. The point is that, ignoring all gradings and parity, we have a Lie algebra whose bracket satisfies a Leibniz rule over the product of elements. We need a notion of multiplication; in this case it is just the “point-wise” product of functions.

Thus, there is a close relation between Poisson/Schouten algebras (or manifolds) and Lie algebroids.

The natural question now is “does something similar happen for \(L_{\infty}\)-algebroids?”

The answer is “yes”, but we now have to consider homotopy versions of Schouten and Poisson algebras.

Definition: A homotopy Schouten/Poisson algebra is a suitably “superised” \(L_{\infty}\)-algebra (see here) such that the n-linear operations (“brackets”) satisfy a Leibniz rule over the supercommutative product of elements.

This definition requires not just an underlying vector space structure, but that of a supercommutative algebra. I will assume we also have a unit, though I think that noncommutative and non-unital algebras present no problem. The point is, I have in mind (at least for now) algebras of functions over (graded) supermanifolds.

Theorem: Given an \(L_{\infty}\)-algebroid \((\Pi E, Q)\) one can canonically construct
i) A total weight one higher Schouten structure on the total space of \(\Pi E^{*}\).
ii) A total weight one higher Poisson structure on the total space of \(E^{*}\).

Proof and details of the assignment of weights can be found in [1].

So, the point is that there is a close relation between homotopy versions of Poisson/Schouten algebras and \(L_{\infty}\)-algebroids. To my knowledge, this has not appeared in the literature before. The specific case of \(L_{\infty}\)-algebras (algebroids over a “point”) also seems not to have been discussed in the literature before.

The way we interpret this is interesting. We think of a Lie algebroid as a generalisation of the tangent bundle and a Lie algebra. The homological vector field \(Q \) “mixes” the de Rham differential over a manifold and the Chevalley-Eilenberg differential of a Lie algebra \(\mathfrak{g}\). Furthermore, we have a Poisson bracket on \(C^{\infty}( E^{*})\) which “mixes” the canonical Poisson bracket on \(T^{*}M\) with the Lie-Poisson bracket on \(\mathfrak{g}^{*}\). Similar statements hold for the Schouten bracket.

For \(L_{\infty}\)-algebroids the homological vector field again generalises the de Rham and Chevalley-Eilenberg differentials, but it is now inhomogeneous. It resembles a “mixed, higher-order BRST-like” operator [3]. A homotopy version of the Maurer-Cartan equation naturally appears here. It is clear that we can consider the homotopy Schouten/Poisson algebras associated with an \(L_{\infty}\)-algebra as playing the role of the Lie-Poisson algebras; however, there are no obvious higher brackets to consider on the cotangent bundle. It is not clear to me what should replace the tangent bundle here, if anything.

Exactly what technical use the above theorem has remains to be explored. There are some interesting related notions in Mehta [2]; I have yet to fully assimilate them. Maybe more on that another time.

References
[1] From \(L_{\infty}\)-algebroids to higher Schouten/Poisson structures. Andrew James Bruce, arXiv:1007.1389 [math-ph]

[2] On homotopy Poisson actions and reduction of symplectic Q-manifolds. Rajan Amit Mehta, arXiv:1009.1280v1 [math.SG]

[3] Higher order BRST and anti-BRST operators and cohomology for compact Lie algebras. C. Chryssomalakos, J. A. de Azcarraga, A. J. Macfarlane, J. C. Perez Bueno, arXiv:hep-th/9810212v2

Tulczyjew triples and higher Poisson/Schouten structures on Lie algebroids

My paper “Tulczyjew triples and higher Poisson/Schouten structures on Lie algebroids” arXiv:0910.1243v4 [math-ph] is going to appear in Reports on Mathematical Physics, Vol. 66, No. 2 (2010).

Abstract
We show how to extend the construction of Tulczyjew triples to Lie algebroids via graded manifolds. We also provide a generalisation of triangular Lie bialgebroids as higher Poisson and Schouten structures on Lie algebroids.

Lie infinity-Algebras

As \(L_{\infty}\)-algebras play a large role in my research, and more generally in mathematical physics, homotopy theory, modern geometry, etc., I thought it may be useful to say a few words about them.

One should think of \(L_{\infty}\)-algebras as “homotopy relatives” of Lie algebras. In a sense I think of them as differential graded Lie algebras + “more”. I hope to make this a little clearer.

Definition: A supervector space \(V = V_{0} \oplus V_{1}\) is said to be an \(L_{\infty}\)-algebra if it comes equipped with a series of parity odd \(n\)-linear operations (\(n \geq 0\)), which we denote as “brackets” \((, \cdots , )\), that

1) are symmetric \(( \bullet , \cdots, a, b , \cdots, \bullet) = (-1)^{\widetilde{a}\widetilde{b} }( \bullet , \cdots, b, a , \cdots, \bullet) \), \(a,b \in V\).

2) satisfy the homotopy Jacobi identities

\(\sum_{k+l=n-1} \sum_{(k,l)-\textnormal{unshuffles}}(-1)^{\epsilon}\left( (a_{\sigma(1)}, \cdots , a_{\sigma(k)}), a_{\sigma(k+1)}, \cdots, a_{\sigma(k+l)} \right)=0\)

hold for all \(n \geq 1\). Here \((-1)^{\epsilon}\) is a sign that arises due to the exchange of homogeneous elements \(a_{i} \in V\). Recall that a \((k,l)\)-unshuffle is a permutation of the indices \(1, 2, \cdots k+l \) such that \(\sigma(1)\) < \(\cdots\) < \(\sigma(k)\) and \(\sigma(k+1)\) < \(\cdots \) < \(\sigma(k+l)\). The LHS of the above are referred to as Jacobiators.
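
For example, there are three \((2,1)\)-unshuffles of \((1,2,3)\): \((1,2|3)\), \((1,3|2)\) and \((2,3|1)\).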

So, we have a vector space with a series of brackets: \((\emptyset)\), \((a)\), \((a,b)\), \((a,b,c)\) etc. If the zero bracket \((\emptyset)\) is zero then the \(L_{\infty}\)-algebra is said to be strict. Often the definition of an \(L_{\infty}\)-algebra assumes this. With a non-vanishing zero bracket the algebra is often called “weak”, “with background” or “curved”.

Let us examine the first few Jacobi identities in order to make all this a little clearer. First let us assume a strict algebra, and let us denote the one-bracket as \(d\) (the reason will become clear).

1) \(d^{2}a = 0 \).

That is, \(d\) is a differential: the underlying space is a (co)chain complex.

2) \(d (a,b) + (da, b) + (-1)^{\widetilde{a} \widetilde{b}} (db, a) =0\).

So the one-bracket (the differential) satisfies a derivation rule over the 2-bracket.

3) \(d (a,b,c) + (da,b,c) + (-1)^{\widetilde{a} \widetilde{b}}(db, a, c) + (-1)^{\widetilde{c}(\widetilde{a} + \widetilde{b})} (dc, a, b)\)
\( + ((a,b), c) + (-1)^{\widetilde{b}\widetilde{c}}((a,c), b) + (-1)^{\widetilde{a}(\widetilde{b}+ \widetilde{c})} ((b,c), a)= 0\).

So the 2-bracket satisfies the standard Jacobi identity up to something \(d\)-exact.

The higher Jacobi identities are not so easy to interpret in terms of things we all know. They encode higher homotopy relations, hence the word “strong” in “strongly homotopy Lie algebra”. This should make it clearer what I mean by “differential graded Lie algebra + more”.

Note that the conventions here are not quite the same as originally used by Stasheff. In fact he used a \(\mathbb{Z}\)-grading where we use a \(\mathbb{Z}_{2}\)-grading. The brackets of Stasheff are skew-symmetric and (with superisation) they are of even/odd parity for an even/odd number of arguments. By employing the parity reversion functor and including a few extra sign factors one can construct a series of brackets on \(\Pi V\) that are closer to Stasheff’s conventions, of course “superised”. This series of brackets on \(\Pi V\) then directly includes Lie superalgebras.

There are other “similarities” between Lie algebras and \(L_{\infty}\)-algebras. I may post more about some of these another time.

A few words about applications. \(L_{\infty}\)-algebras can be found behind the BV (BFV) formalism, deformation quantisation of Poisson manifolds and closed string field theory, for example.

Lie-Infinity Algebroids?

I have been thinking a little bit recently about \(L_{\infty}\)-algebroids. So what is such an object?

Let us work with super-stuff from the start. I will be lax about signs etc., so this should not cause any real confusion. First we need a little background.

Heuristically, a supermanifold is a “manifold” in which the coordinates have an underlying \(\mathbb{Z}_{2}\)-grading. Morphisms between charts are smooth and respect this grading. In more physical language, we have bosonic and fermionic coordinates. The bosonic coordinates commute, whereas the fermionic coordinates anticommute. I will refer to bosonic coordinates as even parity and fermionic as odd parity. To set this up properly one needs the theory of locally ringed spaces. However, we will not need this here.

A graded manifold is a supermanifold, in which the coordinates are assigned an additional weight in \(\mathbb{Z}^{n}\) and the changes of coordinates respect the parity as well as the additional weight. In general the parity and weight are completely independent.

A Q-manifold is a supermanifold (or a graded manifold) that comes equipped with a homological vector field, usually denoted by Q. That is, we have an odd parity vector field that “self-commutes” under the Lie bracket,

\([Q,Q] = 0\).

Note that as the homological vector field is odd, this is a non-trivial condition. Sometimes, if the supermanifold is a graded manifold then conditions on the weight of Q can be imposed.

Now we can describe \(L_{\infty}\)-algebroids. The best way to describe them is as follows:

Definition:
A vector bundle \(E \rightarrow M\) is said to have an \(L_{\infty}\)-algebroid structure if there exists a homological vector field, denoted \(Q\), on the total space of \(\Pi E\), thought of as a graded manifold.

That is the pair \((\Pi E, Q)\) is a Q-manifold. We call this pair an \(L_{\infty}\)-algebroid.

The weight, in this case just in \(\mathbb{Z}\), is assigned by equipping the base coordinates of \(E \) with weight zero and the fibre coordinates with weight one (or some other integer). The “\(\Pi \)” is the parity reversion functor. It shifts the parity of the fibre coordinates. So, a coordinate that is originally even/odd gets replaced by a coordinate that is odd/even. It does nothing to the weight. Note that this shift is fundamental here and not just for convenience.

It is very easy to see that the original vector bundle \(E \rightarrow M\) is equivalent to the graded manifold \(\Pi E\). Stronger than this, the equivalence is functorial; that is, we have equivalent categories.

Further note that there is no condition on the weight of the homological vector field in this definition, nor is there any “compatibility condition” with the vector bundle (or graded) structure.

Definition:
An \(L_{\infty}\)-algebroid is said to be strict if the restriction of Q to the “base manifold” \(M \subset \Pi E \) is a genuine homological vector field on \(M \).

This does not sound very invariant at first but, simply put, the restriction of Q to the weight zero “part” of \(\Pi E\) should still be homological.

For those that know Lie algebroids and \(L_{\infty}\)-algebras, the question is why call them \(L_{\infty}\)-algebroids? An \(L_{\infty}\)-algebroid is to an \(L_{\infty}\)-algebra what a Lie algebroid is to a Lie algebra.

It also turns out that some of the main constructions relating to Lie algebroids carry over to \(L_{\infty}\)-algebroids, see [1]. (I may say more another time.) This may also be of use for \(L_{\infty}\)-algebras. I am currently also pondering this.

So maybe I should end for now with a little motivation as to why such things are interesting. First, if we insist on the homological vector field being of weight one we recover Lie algebroids. If we insist on the vector bundle being over a point we recover \(L_{\infty}\)-algebras. (I should post on these later.) Also, very similar things appear in quantum field theory via the BV and BFV formalisms (again, I should post on these another time). However, at the moment it is not exactly clear how \(L_{\infty}\)-algebroids fit in here. One “barrier” is that Q is inhomogeneous in weight; in the BV and BFV formulations the homological vector field is homogeneous in “ghost number”. It would also be interesting to see if these structures can be used in the BV-AKSZ formalism.

—————————–
References

[1] Andrew James Bruce, From \(L_{\infty}\)-algebroids to higher Schouten/Poisson structures. Submitted for publication, available as arXiv:1007.1389v2 [math-ph]