All posts by ajb

Differential calculus of odd variables.

Here we will define the notion of differentiation with respect to an odd variable and examine some basic properties.

Differentiation with respect to an odd variable is completely and uniquely defined via the following rules:

  1. \(\frac{ \partial \theta^{\beta} }{\partial \theta^{\alpha}} = \delta_{\alpha}^{\beta} \).
  2. Linearity:
    \(\frac{\partial}{\partial \theta }(a f(\theta)) = a \frac{\partial}{\partial \theta } f(\theta)\).
    \(\frac{\partial}{\partial \theta }( f(\theta) + g(\theta)) = \frac{\partial}{\partial \theta }f(\theta) + \frac{\partial}{\partial \theta } g(\theta)\).
  3. Leibniz rule:
    \(\frac{\partial}{\partial \theta }( f(\theta)g(\theta)) = \frac{\partial f(\theta)}{\partial \theta }g(\theta) + (-1)^{\widetilde{f}} f(\theta) \frac{\partial g(\theta)}{\partial \theta } \).

The operator \(\frac{\partial }{\partial \theta }\) is odd, that is, it changes the parity of the function it acts on. This must be taken into account when applying the Leibniz rule.

Elementary properties
It is easy to see that

\(\frac{\partial}{\partial \theta^{\alpha}}\frac{\partial}{\partial \theta^{\beta}}+ \frac{\partial}{\partial \theta^{\beta}}\frac{\partial}{\partial \theta^{\alpha}}=0\),

in particular

\(\left( \frac{\partial}{\partial \theta} \right)^{2}=0\).

\(\frac{\partial}{\partial \theta} (a + \theta b+ \overline{\theta}c + \theta \overline{\theta} d ) = b + \overline{\theta}d\).

\(\frac{\partial}{\partial \overline{\theta}} (a + \theta b+ \overline{\theta}c + \theta \overline{\theta} d ) = c- \theta d\).
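These examples can be checked mechanically. Below is a minimal Python sketch; the dict-based representation and all names are illustrative, not from any library. A function \(f = a + \theta b + \overline{\theta} c + \theta \overline{\theta} d\) is stored as a dict mapping an ordered tuple of generator indices to its coefficient, with index 1 standing for \(\theta\) and index 2 for \(\overline{\theta}\).

```python
def d_odd(f, g):
    """Left derivative with respect to generator g: delete g from each
    monomial, with a factor of -1 for every factor g must be moved past."""
    out = {}
    for mono, coeff in f.items():
        if g in mono:
            pos = mono.index(g)                 # g sits behind pos factors
            rest = mono[:pos] + mono[pos + 1:]
            out[rest] = out.get(rest, 0) + (-1) ** pos * coeff
    return out

# f = a + theta*b + thetabar*c + theta*thetabar*d with a, b, c, d = 1, 2, 3, 4
f = {(): 1, (1,): 2, (2,): 3, (1, 2): 4}

print(d_odd(f, 1))            # b + thetabar*d   -> {(): 2, (2,): 4}
print(d_odd(f, 2))            # c - theta*d      -> {(): 3, (1,): -4}
print(d_odd(d_odd(f, 1), 1))  # (d/dtheta)^2 = 0 -> {}
```

The second line reproduces the sign in the example above: differentiating \(\theta \overline{\theta} d\) with respect to \(\overline{\theta}\) forces \(\overline{\theta}\) past \(\theta\), producing \(- \theta d\).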

Changes of variables
Under changes of variable of the form \(\theta \rightarrow \theta^{\prime}\) the derivative transforms in the standard way:

\(\frac{\partial}{\partial \theta^{\prime}} = \frac{\partial\theta}{\partial \theta^{\prime}} \frac{\partial}{ \partial \theta}\).
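As a quick sanity check in the simplest case (an illustrative linear change of variables): take a single odd variable and set \(\theta^{\prime} = a \theta \) with \(a\) an invertible even constant. For \(f = c + \theta b \) we have \(\frac{\partial f}{\partial \theta} = b\) and \(\frac{\partial \theta}{\partial \theta^{\prime}} = a^{-1}\), while writing \(f = c + \theta^{\prime} a^{-1} b \) gives directly

\(\frac{\partial f}{\partial \theta^{\prime}} = a^{-1} b = \frac{\partial\theta}{\partial \theta^{\prime}} \frac{\partial f}{ \partial \theta}\),

in agreement with the transformation rule.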

We will have a lot more to say about changes of variables (coordinates) another time.

What next?
We now know how to define and use the derivative with respect to an odd variable. Note that this was done algebraically with no mention of limits. As functions in odd variables are polynomials, the derivative was simple to define.

Next we will take a look at integration with respect to an odd variable. We cannot think in terms of boundaries, limits or anything resembling the Riemann or Lebesgue notions of integration. Everything will need to be done algebraically.

This will lead us to the Berezin integral, which has the strange property that integration and differentiation with respect to an odd variable are the same.

Elementary algebraic properties of superalgebras

Here we will present the very basic ideas of Grassmann variables and polynomials over them.

Grassmann algebra
Consider a set of \(n\) odd variables \(\{ \theta^{1}, \theta^{2}, \cdots, \theta^{n} \}\). By odd we mean that they satisfy

\( \theta^{a}\theta^{b} + \theta^{b} \theta^{a}=0\).

Note that in particular this means \((\theta^{a})^{2}=0\). That is, the generators are nilpotent.
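The sign and nilpotency rules can be sketched in a few lines of Python (an illustrative toy representation, not a library): a word in the generators is a tuple of indices, sorting it counts the swaps, each contributing a factor of \(-1\), and a repeated index makes the word vanish.

```python
def normalise(word):
    """Return (sign, ordered word), or (0, None) if a generator repeats."""
    if len(set(word)) < len(word):
        return 0, None                # theta^a theta^a = 0: nilpotency
    swaps = sum(1 for i in range(len(word))
                for j in range(i + 1, len(word)) if word[i] > word[j])
    return (-1) ** swaps, tuple(sorted(word))

print(normalise((1, 2)))   # (1, (1, 2))  : theta^1 theta^2 already ordered
print(normalise((2, 1)))   # (-1, (1, 2)) : theta^2 theta^1 = -theta^1 theta^2
print(normalise((1, 1)))   # (0, None)    : (theta^1)^2 = 0
```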

The Grassmann algebra is then defined as the polynomial algebra in these variables. Thus a general function in odd variables is

\(f(\theta) = f_{0} + \theta^{a}f_{a} + \frac{1}{2!} \theta^{a} \theta^{b}f_{ba} + \cdots + \frac{1}{n!} \theta^{a_{1}} \cdots \theta^{a_{n}}f_{a_{n}\cdots a_{1}}\).

We take the coefficients to be real and antisymmetric in their indices. Note that the nilpotency of the odd variables means that the expansion above terminates: every formal power series in odd variables is in fact a polynomial, so the Grassmann algebra is complete as polynomials.

Example If we have the algebra generated by a single odd variable \(\theta \) then polynomials are of the form

\(a + \theta b\).

Example If we have two odd variables \(\theta\) and \(\overline{\theta}\) then polynomials are of the form

\(a + \theta b + \overline{\theta} c + \theta \overline{\theta} d\).

It is quite clear that the polynomials in odd variables form a vector space: you can add such functions and multiply them by real numbers and the result remains a polynomial. It is also straightforward to see that we have an algebra: one can multiply two such functions together and get another.

The space of all such functions has a natural \(\mathbb{Z}_{2}\)-grading, which we will call parity, given by the number of odd generators in each term mod 2. If the function has an even/odd number of odd variables then the function is even/odd. We will denote the parity of a function by \(\widetilde{f}= 0/1\) if it is even/odd.

Example \(a +\theta \overline{\theta} d \) is an even function and \(\theta b + \overline{\theta} c \) is an odd function.

Let us define the (super)commutator of such functions as

\([f,g] = fg -(-1)^{\widetilde{f} \widetilde{g}} gf\).

If the functions are not homogeneous, that is neither even nor odd, the commutator is extended via linearity. We see that the commutator of any two functions in odd variables vanishes. Thus we say that the algebra of functions in odd variables forms a supercommutative algebra.

Specifically, note that this still means the ordering of odd functions is important: interchanging two odd functions produces a minus sign.
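Supercommutativity can be checked directly in a self-contained Python sketch (a toy representation of functions in two odd variables as dicts of monomials; all names are illustrative):

```python
def gmul(f, g):
    """Grassmann product: concatenate monomials, pick up -1 per swap needed
    to reorder the generators, and drop terms with a repeated generator."""
    out = {}
    for m1, c1 in f.items():
        for m2, c2 in g.items():
            seq = list(m1) + list(m2)
            if len(set(seq)) < len(seq):
                continue                     # theta^a theta^a = 0
            swaps = sum(1 for i in range(len(seq))
                        for j in range(i + 1, len(seq)) if seq[i] > seq[j])
            m = tuple(sorted(seq))
            out[m] = out.get(m, 0) + (-1) ** swaps * c1 * c2
    return {m: c for m, c in out.items() if c != 0}

def parity(f):
    """Parity of a homogeneous function: length of any monomial mod 2."""
    return len(next(iter(f))) % 2

def scomm(f, g):
    """Supercommutator [f,g] = fg - (-1)^{pf*pg} gf for homogeneous f, g."""
    sign = (-1) ** (parity(f) * parity(g))
    a, b = gmul(f, g), gmul(g, f)
    diff = {m: a.get(m, 0) - sign * b.get(m, 0) for m in set(a) | set(b)}
    return {m: c for m, c in diff.items() if c != 0}

odd_f = {(1,): 2, (2,): 3}      # theta*b + thetabar*c, an odd function
even_g = {(): 1, (1, 2): 4}     # a + theta*thetabar*d, an even function

print(scomm(odd_f, odd_f))      # {} -- the supercommutator vanishes
print(scomm(odd_f, even_g))     # {} -- the supercommutator vanishes
```

The ordinary products do not agree, of course: `gmul` applied to \(\theta\) and \(\overline{\theta}\) in the two orders differs by a sign, which is exactly what the supercommutator compensates for.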

The modern approach to geometry is to define and deal with “spaces” in terms of the functions upon them. Geometrically we can think of the algebra generated by n odd variables as defining the space \(\mathbb{R}^{0|n}\). Note that no such “space” in the classical sense exists. In fact such spaces consist of only one point!

If we promote the coefficients in the polynomials to be functions of m real variables then we have the space \(\mathbb{R}^{m|n}\). We are now most of the way to defining supermanifolds, but this would be a digression from the current issues.

Noncommutative superalgebras
Of course superalgebras for which the commutator generally is non-vanishing can be defined and are naturally occurring. We will encounter such things when dealing with first order differential operators acting on functions in odd variables. Geometrically these are the vector fields. Recall that the Lie bracket between vector fields over a manifold is in general non-vanishing.

What next?
Given the basic algebraic properties of functions in odd variables we will proceed to algebraically define how to differentiate with respect to odd variables.

Introduction to Superanalysis

Following a conversation on a popular science chat room, the subject of Grassmann variables and in particular the Berezin integral arose. Thus I decided to write a short introduction to the basic theory of superalgebras, particularly supercommutative algebras and their calculus.

We will be primarily interested in algebras that involve the Koszul sign rule, that is include an extra minus sign when you interchange odd elements:

\(ab = - ba\).

Ancient History
The beginning of all supermathematics can be traced back to 1844 and the work of Hermann Günther Grassmann on linear algebra. He introduced variables that involve a minus sign when interchanging their order. Élie Cartan’s theory of differential forms is also, in hindsight, a “super-theory”. Many other constructions in algebra and topology can be thought of as “super” and involve a sign factor when interchanging the order.

By the early 1950’s odd variables appeared in quantum field theory as a semiclassical description of fermions. Initially the analysis was based on the canonical description of quantisation and so confined to derivatives with respect to odd variables. Berezin in 1961 introduced the integration theory for odd variables and this was promptly applied to the path integral approach to quantisation.

In these early works odd variables were understood very formally in an algebraic way. That is, they were not associated with any general notion of a space. Berezin’s treatment of even and odd variables convinced him that there should be a way to treat them analogously to real and complex variables in complex geometry. The bulk of this work was carried out by Berezin and his collaborators between 1965 and 1975. Berezin introduced general non-linear transformations that mix even and odd variables, as well as a generalisation of the determinant to integration over even and odd variables. This work led to the notion of superspaces and supermanifolds. In essence one thinks of a supermanifold as a “manifold” with even (commuting) and odd (anticommuting) coordinates. A detailed discussion of supermanifolds is out of the scope of this introduction.

Supersymmetric field theories
The nomenclature super comes from physics. Gol’fand & Likhtman extended the Poincaré group to include “odd translations”. These operators are fermionic in nature and thus require anticommutators in the extended Poincaré algebra. Supersymmetry is a remarkable symmetry that mixes bosonic and fermionic degrees of freedom. Lagrangians (or actions) that exhibit supersymmetry have some very attractive features. The surprising result is that supersymmetry can cancel most or even all of the divergences of certain quantum field theories. A detailed discussion of supersymmetric field theories is outside the scope of this introduction.

Gauge theories and the BRST symmetry
The use of odd variables is also necessary in (perturbative) non-abelian gauge theories (in the covariant gauges at least), even if one initially restricts attention to theories without fermions. There are several complications that do not arise in abelian gauge theory. These originate primarily from the gauge fixing, which affects the path integration measure in a non-trivial way. Feynman in 1963 showed that, using the standard quantisation methods available at the time, Yang-Mills theory was not unitary. Feynman also showed that counter terms, now known as ghosts, could be added that remove the nonunitary parts. Originally these ghosts, which are odd but violate the spin-statistics theorem, were seen as ad hoc. Later Faddeev and Popov showed that these ghosts arise in the theory by considering the so-called Faddeev-Popov determinant.

It was noticed that the gauge fixed Lagrangian possesses a new global (super)symmetry that rotates the gauge fields into ghosts. This symmetry is named after its discoverers Becchi, Rouet, Stora and independently Tyutin, hence BRST symmetry. As this is a global symmetry no new degrees of freedom can be eliminated.

The BRST symmetry is now a fundamental tool when dealing with quantum gauge theories. For example, the BRST symmetry is important when considering the renormalisability and absence of anomalies for a given theory. We will not say any more about gauge theories in this introduction.

Mathematical applications
Odd elements can be employed very successfully in pure mathematics. For example, the de Rham complex of a manifold can be completely understood in terms of functions and vector fields over a particular supermanifold. Multivector fields can also be thought of in a similar way in terms of a supermanifold and an odd analogue of a Poisson bracket.

Various algebraic structures can be encoded in superalgebras that come equipped with a homological vector field. That is, an odd vector field that “squares to zero”:

\(Q^{2} = \frac{1}{2}[Q,Q]=0\).

Common examples include Lie algebras, \(L_{\infty}\)-algebras, Lie algebroids, \(A_{\infty}\)-algebra etc.
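As a concrete illustration of the first example (with one common choice of conventions; signs and index placements vary in the literature): a Lie algebra \(\mathfrak{g}\) with structure constants \(C_{ab}^{c}\) corresponds to the homological vector field

\(Q = \frac{1}{2} \xi^{a} \xi^{b} C_{ba}^{c} \frac{\partial}{\partial \xi^{c}}\)

on \(\Pi \mathfrak{g}\) with odd coordinates \(\xi^{a}\). The condition \(Q^{2} = 0\) is then precisely the Jacobi identity, and \(Q\) is the Chevalley-Eilenberg differential.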

Guide to this introduction
I hope that these opening words have convinced you that the study of superalgebras and Grassmann odd variables is useful in physics and pure mathematics.

I will be quite informal in presentation and attitude. The intention is to convey the main ideas without overburdening the reader.

A tentative guide is as follows:

  1. Elementary algebraic properties of superalgebras.
  2. Differential calculus of odd variables.
  3. Integration with respect to odd variables: the Berezin integral.

Quick guide to references
The mathematical theory of Grassmann algebras, superalgebras and supermanifolds is well established and can be found in several books. Any book on quantum field theory will say something about the algebra and calculus of odd variables. The mathematical books that I like include:

  • Gauge Field Theory and Complex Geometry, Yuri I. Manin, Springer; 2nd edition (June 27, 1997).
  • Geometric Integration Theory on Supermanifolds, Th. Th. Voronov, Routledge; 1 edition (January 1, 1991).
  • Supersymmetry for Mathematicians: An Introduction, V. S. Varadarajan, American Mathematical Society (July 2004).

Other books that deserve a mention are

  • Supermanifolds, Bryce DeWitt, Cambridge University Press; 2 edition (June 26, 1992).
  • Supermanifolds: Theory and Applications, A. Rogers, World Scientific Publishing Company (April 18, 2007).

Quantum Algebra?

“Quantum algebra” is used as one of the top-level mathematics categories on the arXiv. However, to me at least it is not very clear what is meant by the term.

Topics in this section include

  • Quantum groups and noncommutative geometry
  • Poisson algebras and generalisations
  • Operads and algebras over them
  • Conformal and Topological QFT

Generally these include things that are not necessarily commutative.

What is a commutative algebra? Intentionally being very informal, an algebra is a vector space over the real or complex numbers (more generally any field) endowed with a product of two elements.

So let us fix some vector space \(\mathcal{A}\) say over the real numbers. It is an algebra if there is a notion of multiplication of two elements that is associative

\(a(bc) = (ab)c\)

and distributive

\(a(b+c) = ab + ac\),

with \(a,b,c \in \mathcal{A}\). There may also be a unit

\(ea = ae = a \) for all \(a \in \mathcal{A}\). Sometimes there may be no unit.

An algebra is commutative if the order of the multiplication does not matter. That is

\(ab = ba\).

For example, if \(a\) and \(b\) are real or complex numbers then the above holds. So real numbers and complex numbers can be thought of as “commutative algebras over themselves”.

It is common to define a commutator as

\([a,b] = ab - ba\).

If the commutator is zero then the algebra is commutative. If the commutator is non-zero then the algebra is noncommutative. In the second case the order of multiplication matters

\(ab \neq ba \)

in general.

The first example here is the algebra of 2×2 matrices.
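A quick check with plain Python lists (a minimal sketch, no libraries): the matrices with a single 1 in the top-right and bottom-left corners multiply to different results in the two orders.

```python
def matmul(A, B):
    """Product of two 2x2 matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

E12 = [[0, 1], [0, 0]]   # 1 in the top-right corner
E21 = [[0, 0], [1, 0]]   # 1 in the bottom-left corner

print(matmul(E12, E21))  # [[1, 0], [0, 0]]
print(matmul(E21, E12))  # [[0, 0], [0, 1]]  -- so AB != BA
```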

So why “quantum”? Of course noncommutative algebras were known to mathematicians before the discovery of quantum mechanics. However, they were not generally known by physicists. The algebras used in classical mechanics, say in the Hamiltonian description are all commutative. Here the phase space is described by coordinates \(x,p \) or equivalently by the algebra of functions in these variables. This algebra is invariably commutative.

In quantum mechanics something quite remarkable happens. The phase space gets replaced by something noncommutative. We can think of “local coordinates” \(\hat{x}, \hat{p}\) that are no longer commutative. In fact we have

\([\hat{x}, \hat{p}] = i \hbar \),

which is known as the canonical commutation relation and is really the fundamental equation in quantum mechanics. The constant \(\hbar\) is known as Planck’s constant and sets the scale of quantum theory.
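One can see this relation concretely by letting \(\hat{x}\) (multiplication by \(x\)) and \(\hat{p} = -i \hbar \frac{\partial}{\partial x}\) act on polynomial wave functions. The following Python sketch stores \(\psi(x)\) as a list of coefficients; the representation and names are illustrative, with \(\hbar = 1\) for simplicity.

```python
hbar = 1.0   # sets the scale; 1 for this sketch

def x_hat(psi):
    """Multiply psi(x) by x: shift the coefficient list up one degree."""
    return [0.0] + list(psi)

def p_hat(psi):
    """Apply -i*hbar d/dx to psi(x)."""
    return [-1j * hbar * n * c for n, c in enumerate(psi)][1:]

def commutator(psi):
    """(x_hat p_hat - p_hat x_hat) psi, padded to a common length."""
    a, b = x_hat(p_hat(psi)), p_hat(x_hat(psi))
    a = a + [0.0] * (len(b) - len(a))
    b = b + [0.0] * (len(a) - len(b))
    return [u - v for u, v in zip(a, b)]

# psi(x) = 1 + 2x + 3x^2: the commutator gives back i*hbar*psi
print(commutator([1, 2, 3]))   # [1j, 2j, 3j]
```

The output is \(i\hbar\) times the input coefficients, i.e. \([\hat{x}, \hat{p}]\psi = i\hbar\, \psi\) for every polynomial \(\psi\).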

The point being that quantum mechanics means that one must consider noncommutative algebras. Thus the relatively informal bijection “quantum” \(\leftrightarrow \) “noncommutative”.

We can also begin to understand Einstein’s dislike of quantum mechanics, as pointed out by Dirac. The theories of special and general relativity are by their nature very geometric. As I have suggested, a space can be thought of as being defined by the algebra of functions on it. Einstein’s theories are based on commutative algebra. Quantum mechanics on the other hand is based on noncommutative algebra and in particular the phase space is some sort of “noncommutative space”. The thought of a noncommutative space, “the coordinates do not commute” should make you shudder the first time you hear this!

One place you should pause for reflection is the notion of a point. In noncommutative geometry there is no elementary intuitive notion of a point. Noncommutative geometry is pointless geometry!

We can understand this via the quantum mechanical phase space and the Heisenberg uncertainty relation. Recall that the uncertainty principle states that one cannot know simultaneously the position and momentum of a quantum particle. One cannot really “select a point” in the phase space. The best we have is

\(\delta \hat{x} \delta \hat{p} \approx \frac{\hbar}{2}\).

The phase space is cut up into fuzzy Bohr-Heisenberg cells and does not consist of a collection of points.

At first it seems that all geometric intuition is lost. This however is not the case if we think of a space in terms of the functions on it. A great deal of noncommutative geometry is rephrasing things in classical differential geometry in terms of the functions on the space (the structure sheaf). Then the notion may pass to the noncommutative world. I should say more on noncommutative geometry another time.

Lie-Infinity Algebroids? II

This post should be considered as part two of the earlier post Lie-Infinity Algebroids?

The term \(L_{\infty}\)-algebroid seems not to be very well established in the literature. A nice discussion of this can be found at the nLab.

To quickly recall, the definition I use is that the Q-manifold \((\Pi E, Q)\) is an \(L_{\infty}\)-algebroid, where \(E \rightarrow M \) is a vector bundle and \(Q \) is a homological vector field of arbitrary weight. The weight is provided by the assignment of zero to the base coordinates and one to the fibre coordinates. If the homological vector field is of weight one, then we have a Lie algebroid.

It is by now quite well established that a Lie algebroid, as above is equivalently described by

i) A weight minus one Schouten structure on the total space \(\Pi E^{*}\).
ii) A weight minus one Poisson structure on the total space of \(E^{*}\).

In other words, Lie algebroids are equivalent to certain graded Schouten or Poisson algebras. Recall, a Schouten algebra is an odd version of a Poisson algebra. The point is that, ignoring all gradings and parity, we have a Lie algebra such that the Lie bracket satisfies a Leibniz rule over the product of elements of the Lie algebra. We need a notion of multiplication; in this case it is just the “point-wise” product of functions.

Thus, there is a close relation between Poisson/Schouten algebras (or manifolds) and Lie algebroids.

The natural question now is “does something similar happen for \(L_{\infty}\)-algebroids?”

The answer is “yes”, but we now have to consider homotopy versions of Schouten and Poisson algebras.

Definition: A homotopy Schouten/Poisson algebra is a suitably “superised” \(L_{\infty}\)-algebra (see here) such that the n-linear operations (“brackets”) satisfy a Leibniz rule over the supercommutative product of elements.

This definition requires that we don’t have just an underlying vector space structure, but that of a supercommutative algebra. I will assume we also have a unit. Though, I think that noncommutative and non-unital algebras are no problem. The point is, I have in mind (at least for now) algebras of functions over (graded) supermanifolds.

Theorem: Given an \(L_{\infty}\)-algebroid \((\Pi E, Q)\) one can canonically construct
i) A total weight one higher Schouten structure on the total space of \(\Pi E^{*}\).
ii) A total weight one higher Poisson structure on the total space of \(E^{*}\).

Proof and details of the assignment of weights can be found in [1].

So, the point is that there is a close relation between homotopy versions of Poisson/Schouten algebras and \(L_{\infty}\)-algebroids. To my knowledge, this has not appeared in the literature before. The specific case of \(L_{\infty}\)-algebras (algebroids over a “point”) also seems not to have been discussed in the literature before.

The way we interpret this is interesting. We think of a Lie algebroid as a generalisation of the tangent bundle and a Lie algebra. The homological vector field \(Q \) “mixes” the de Rham differential over a manifold and the Chevalley-Eilenberg differential of a Lie algebra \(\mathfrak{g}\). Furthermore, we have a Poisson bracket on \(C^{\infty}( E^{*})\) which “mixes” the canonical Poisson bracket on \(T^{*}M\) with the Lie-Poisson bracket on \(\mathfrak{g}^{*}\). Similar statements hold for the Schouten bracket.

For \(L_{\infty}\)-algebroids the homological vector field again generalises the de Rham and Chevalley-Eilenberg differentials, but it is now inhomogeneous. It resembles a “mix of higher order BRST-like” operators [3]. A homotopy version of the Maurer-Cartan equation naturally appears here. It is clear that we can consider the homotopy Schouten/Poisson algebras associated with an \(L_{\infty}\)-algebra as playing the role of the Lie-Poisson algebras; however, there are no obvious higher brackets to consider on the cotangent bundle. It is not clear to me what should replace the tangent bundle here, if anything.

Exactly what technical use the theorem above has remains to be explored. There are some interesting related notions in Mehta [2], which I have yet to fully assimilate. Maybe more on that another time.

[1] From \(L_{\infty}\)-algebroids to higher Schouten/Poisson structures. Andrew James Bruce, arXiv:1007.1389 [math-ph]

[2]On homotopy Poisson actions and reduction of symplectic Q-manifolds. Rajan Amit Mehta, arXiv:1009.1280v1 [math.SG]

[3] Higher order BRST and anti-BRST operators and cohomology for compact Lie algebras. C. Chryssomalakos, J. A. de Azcarraga, A. J. Macfarlane, J. C. Perez Bueno, arXiv:hep-th/9810212v2


My paper “Tulczyjew triples and higher Poisson/Schouten structures on Lie algebroids” arXiv:0910.1243v4 [math-ph] is going to appear in Reports on Mathematical Physics, Vol. 66, No 2, (2010).

We show how to extend the construction of Tulczyjew triples to Lie algebroids via graded manifolds. We also provide a generalisation of triangular Lie bialgebroids as higher Poisson and Schouten structures on Lie algebroids.

Lie infinity-Algebras

As \(L_{\infty}\)-algebras play a large role in my research, and more generally in mathematical physics, homotopy theory, modern geometry etc., I thought it may be useful to say a few words about them.

One should think of \(L_{\infty}\)-algebras as “homotopy relatives” of Lie algebras. In a sense I think of them as differential graded Lie algebras + “more”. I hope to make this a little clearer.

Definition: A supervector space \(V = V_{0} \oplus V_{1}\) is said to be an \(L_{\infty}\)-algebra if it comes equipped with a series of parity odd \(n\)-linear operations (\(n \geq 0\) ), which we denote as “brackets” \((, \cdots , )\) that

1) are symmetric \(( \bullet , \cdots, a, b , \cdots, \bullet) = (-1)^{\widetilde{a}\widetilde{b} }( \bullet , \cdots, b, a , \cdots, \bullet) \), \(a,b \in V\).

2) satisfy the homotopy Jacobi identities

\(\sum_{k+l=n} \sum_{(k,l)-\textnormal{unshuffles}}(-1)^{\epsilon}\left( (a_{\sigma(1)}, \cdots , a_{\sigma(k)}), a_{\sigma(k+1)}, \cdots, a_{\sigma(k+l)} \right)=0\)

hold for all \(n \geq 1\). Here \((-1)^{\epsilon}\) is a sign that arises due to the exchange of homogeneous elements \(a_{i} \in V\). Recall that a \((k,l)\)-unshuffle is a permutation of the indices \(1, 2, \cdots, k+l \) such that \(\sigma(1) < \cdots < \sigma(k)\) and \(\sigma(k+1) < \cdots < \sigma(k+l)\). The LHS of the above are referred to as Jacobiators.

So, we have a vector space with a series of brackets; \((\emptyset)\), \((a,b)\) , \((a,b,c)\) etc. If the zero bracket \((\emptyset)\) is zero then the \(L_{\infty}\)-algebra is said to be strict. Often the definition of \(L_{\infty}\)-algebra assumes this. With a non-vanishing zero bracket the algebra is often called “weak”, “with background” or “curved”.

Let us examine the first few Jacobi identities in order to make all this a little clearer. First let us assume a strict algebra and we will denote the one bracket as \(d\) (this will become clear).

1) \(d^{2}a = 0 \).

That is we have a differential graded algebra.

2) \(d (a,b) + (da, b) + (-1)^{\widetilde{a} \widetilde{b}} (db, a) =0\).

So the one bracket (the differential) satisfies a derivation rule over the 2-bracket.

3) \(d (a,b,c) + (da,b,c) + (-1)^{\widetilde{a} \widetilde{b}}(db, a, c) + (-1)^{\widetilde{c}(\widetilde{a} + \widetilde{b})} (dc, a, b)\)
\( + ((a,b), c) + (-1)^{\widetilde{b}\widetilde{c}}((a,c), b) + (-1)^{\widetilde{a}(\widetilde{b}+ \widetilde{c})} ((b,c), a)= 0\).

So we have the standard Jacobi identity up to something exact.

The higher Jacobi identities are not so easy to interpret in terms of things we all know. They are higher homotopy relations, hence the word “strong”. This should make it clearer what I mean by “differential graded Lie algebra + more”.

Note that the conventions here are not quite the same as originally used by Stasheff. In fact he used a \(\mathbb{Z}\)-grading where we use a \(\mathbb{Z}_{2}\)-grading. The brackets of Stasheff are skew-symmetric and (with superisation) they are even/odd parity for even/odd number of arguments. By employing the parity reversion function and including a few extra sign factors one can construct a series of brackets on \(\Pi V\) that are closer to Stasheff’s conventions, of course “superised”. This series of brackets on \(\Pi V\) then directly includes Lie superalgebras.

There are other “similarities” between Lie algebras and \(L_{\infty}\)-algebras. I may post more about some of these another time.

A few words about applications. \(L_{\infty}\)-algebras can be found behind the BV (BFV) formalism, deformation quantisation of Poisson manifolds and closed string field theory, for example.

Lie-Infinity Algebroids?

I have been thinking a little bit recently about \(L_{\infty}\)-algebroids. So what is such an object?

Let us work with super-stuff from the start. I will be lax about signs etc., so this should not cause any real confusion. First we need a little background.

Heuristically, a supermanifold is a “manifold” in which the coordinates have an underlying \(\mathbb{Z}_{2}\)-grading. Morphisms between charts are smooth and respect this grading. In more physical language, we have bosonic coordinates and fermionic coordinates. The bosonic coordinates commute, whereas the fermionic coordinates anticommute. I will refer to bosonic coordinates as even parity and fermionic as odd parity. To set this up properly one needs the theory of locally ringed spaces. However, we will not need this here.

A graded manifold is a supermanifold, in which the coordinates are assigned an additional weight in \(\mathbb{Z}^{n}\) and the changes of coordinates respect the parity as well as the additional weight. In general the parity and weight are completely independent.

A Q-manifold is a supermanifold (or a graded manifold ) that comes equipped with a homological vector field, usually denoted by Q. That is we have an odd parity vector field that “self-commutes” under the Lie bracket,

\([Q,Q] = 0\).

Note that as the homological vector field is odd, this is a non-trivial condition. Sometimes, if the supermanifold is a graded manifold then conditions on the weight of Q can be imposed.
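A standard example (modulo conventions): take \(\Pi TM \) with coordinates \((x^{a}, \mathrm{d}x^{a})\), where the fibre coordinates \(\mathrm{d}x^{a}\) are odd. Then

\(Q = \mathrm{d}x^{a} \frac{\partial}{\partial x^{a}}\)

is homological, since the \(\mathrm{d}x^{a}\) anticommute while the partial derivatives commute. Functions on \(\Pi TM \) are differential forms on \(M \), and \(Q \) is the de Rham differential.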

Now we can describe \(L_{\infty}\)-algebroids. The best way to describe them is as follows:

A vector bundle \(E \rightarrow M\) is said to have an \(L_{\infty}\)-algebroid structure if there exists a homological vector field, denote as \(Q\) on the total space of \(\Pi E\), thought of as a graded manifold.

That is the pair \((\Pi E, Q)\) is a Q-manifold. We call this pair an \(L_{\infty}\)-algebroid.

The weight, in this case just in \(\mathbb{Z}\), is assigned by equipping the base coordinates of \(E \) with weight zero and the fibre coordinates with weight one (or some other integer). The “\(\Pi \)” is the parity reversion functor. It shifts the parity of the fibre coordinates. So, a coordinate that is originally even/odd gets replaced by a coordinate that is odd/even. It does nothing to the weight. Note that this shift is fundamental here and not just for convenience.

It is very easy to see that the original vector bundle \(E \rightarrow M\) is equivalent to the graded manifold \(\Pi E\). Stronger than this, the equivalence is functorial. That is we have equivalent categories.

Further note that there is no condition on the weight of the homological vector field in this definition, nor is there any “compatibility condition” with the vector bundle (or graded) structure.

An \(L_{\infty}\)-algebroid is said to be strict if the restriction of Q to the “base manifold” \(M \subset \Pi E \) is a genuine homological vector field on \(M \).

This does not sound very invariant at first, but simply put, the restriction of Q to the weight zero “part” of \(\Pi E\) should still be homological.

For those that know Lie algebroids and \(L_{\infty}\)-algebras, the question is why call them \(L_{\infty}\)-algebroids? An \(L_{\infty}\)-algebroid is to an \(L_{\infty}\)-algebra what a Lie algebroid is to a Lie algebra.

It also turns out that some of the main constructions relating to Lie algebroids carry over to \(L_{\infty}\)-algebroids, see [1]. (I may say more another time.) This may also be of use for \(L_{\infty}\)-algebras. I am currently also pondering this.

So maybe I should end for now on a little motivation as to why such things are interesting. First, if we insist on the homological vector field being of weight one we recover Lie algebroids. If we insist on the vector bundle being over a point we recover \(L_{\infty}\)-algebras. (I should post on these later.) Also, very similar things appear in quantum field theory via the BV and BFV formalisms (again, I should post on these another time). However, at the moment it is not exactly clear how \(L_{\infty}\)-algebroids fit in here. One “barrier” is that Q is inhomogeneous in weight, whereas in the BV and BFV formulations the homological vector field is homogeneous in “ghost number”. It would also be interesting to see if these structures can be used in the BV-AKSZ formalism.


[1] Andrew James Bruce, From \(L_{\infty}\)-algebroids to higher Schouten\Poisson structures. Submitted for publication, available as arXiv:1007.1389v2 [math-ph]

Is "three" the new "two"?

The number “two” may appear to be very special in theoretical physics, but maybe it has had its day…

By this we mean that much of physics is described in terms of “binary objects”: Lie brackets, commutators, metrics, rank two curvature tensors, quadratic Lagrangians, two dimensional world sheets of strings, Poisson structures, symplectic two forms, Laplacians and I am sure the list goes on. However, it has become increasingly clear in recent years that “higher objects” play an important role in theoretical physics as well as modern geometry.

For example, it has become increasingly clear that n-ary generalisations of Lie algebras play a role in physics. The sh Lie algebras of Stasheff and the (not completely unrelated) n-Lie algebras of Filippov are great examples here. In one form or another, they can be found behind the BV-antifield formalism, Zwiebach’s closed string field theory, Kontsevich’s deformation quantisation of Poisson manifolds, Nambu’s generalised mechanics and the Bagger–Lambert–Gustavsson (BLG) description of multiple stacked M2 branes.

The last one has been of interest to me lately.

So, M-theory was introduced by Witten in 1995 as a non-perturbative unification of the various superstring theories. Here, the fundamental objects are not strings but extended membranes of dimension 2 and 5, the so called M2 and M5 branes. Since then progress has been slow. No-one really knows what M-theory is and there is no proper understanding of the dynamics of interacting branes.

Then Bagger & Lambert [2] in 2006, and independently Gustavsson [3] in 2007, constructed an effective action for the low energy description of a stack of two M2 branes. The novel feature here is that 3-Lie algebras play a role. A 3-Lie algebra should be thought of as a “Lie algebra” but with a tribracket, not a bibracket. The details should not worry us.

In the theory the fields take their values in a 3-Lie algebra, and there is a novel gauge symmetry. However, the original BLG-model can be recast as a conventional gauge theory, the ABJM theory [1]. So it starts to look like maybe 3-Lie algebras are some weird artificial artefact of M2 branes.

(There are many, many papers on the arXiv dealing with modifications of the original BLG model. These can usually be understood in terms of 3-Lie algebras. I won’t say any more right now.)

But then very recently, Lambert & Papageorgakis [4] provided evidence that the effective description of M5 branes would also require 3-Lie algebras. However, they have not yet produced an action, which would be essential if the more or less standard methods of quantisation were to be applied.

This is fascinating. M-theory seems to be deeply tied to the theory of n-ary algebras, and in particular 3-ary algebras. There are many open questions here, both from a physics and a mathematics point of view. In all, it looks like n-ary algebras are here to stay.


[1] Ofer Aharony, Oren Bergman, Daniel Louis Jafferis, and Juan Maldacena. N=6 superconformal Chern-Simons-matter theories, M2-branes and their gravity duals. JHEP, 10:091, 2008.

[2] Jonathan Bagger and Neil Lambert. Modeling multiple M2’s. Phys. Rev., D75:045020, 2007.

[3] Andreas Gustavsson. Algebraic structures on parallel M2-branes. Nucl. Phys., B811:66–76, 2009.

[4] Neil Lambert and Constantinos Papageorgakis. Nonabelian (2,0) Tensor Multiplets and 3-algebras. 2010, arXiv:1007.2982 [hep-th].

Can an "amateur" today make useful contributions to theoretical physics or mathematics?

This was a question I posed to a friend of mine. We decided to define an “amateur” as someone without a PhD in physics, mathematics or something close.

We came to the conclusion that it is very unlikely that someone without a PhD could in fact make a real contribution. This is despite the fact that things are far more open today than they have ever been. I mean, we have the arXiv and open access journals online. Almost everyone has the internet at home these days, and if not, the local libraries do.

From time to time undergraduate students can contribute, but only under close supervision. The supervisor will guide the student and nurture their natural ability. We agreed that this is really the closest thing to an “amateur” who could contribute.

So, why can’t “amateurs” contribute? Here are my thoughts…

1) Without spending some time in academia, “amateurs” are not aware of the culture and what is expected of anyone wishing to contribute to mathematical science. They do not know how to do research.

2) “Amateurs”, although interested and at times very keen, do not often realise just how many prerequisites are required to conduct research. They can often lack the mathematical skills to contribute. Claims like “I can solve the Riemann hypothesis using high school mathematics” only suggest that they don’t understand the hypothesis correctly in the first place. Trying to rewrite particle theory using high school maths is also redundant. We have a great construct for doing particle physics: it is called the standard model.

3) Theoretical physics, mathematical physics and mathematics as a whole are split up into smaller subfields. One can only hope to get properly acquainted with a small subset of what is out there. Without specialising to a large extent, it is unlikely that one can discover something new and interesting. Trying to find smaller, specialised problems to work on is usually the way forward: unless you are a genius and can discover a whole new branch of mathematics! “Amateurs” seem to be focused on very well-known and publicised open questions. In number theory the Riemann hypothesis is a great example of this. In physics, a theory of quantum gravity is an example.

4) “Because I do not understand it, it must be wrong.” “Amateurs” fall into this mindset quite often. Finding a simpler, more elegant approach to things is a large part of the mathematical sciences. However, trying to show that special relativity or quantum mechanics is mathematically inconsistent or does not agree with nature is futile. This also includes the desire to use nothing but high school maths to explain all of physics.

Not that I want to discourage anyone from thinking about mathematics and physics. I encourage it, but with a caveat: reading Wikipedia and popular accounts of science is not enough for one to start to do research.

UPDATE (15th May 2014)

Please do not post about your pet theories in the comments here. If you have something to say related to this post about the role of amateurs in science then please by all means share it here. Thank you.