## Two fun problems

*November 26, 2008*

*Posted by Noah Snyder in fun problems, group theory, linear algebra, subfactors.*


One of the points of this blog is for us to share the little problems we’d be discussing at tea if we were all still in Berkeley. Here are two that came up in the last couple weeks.

As we all know, you can never know too much linear algebra. So here’s a fun little linear algebra exercise that Dave Penneys asked us over beers on Friday: “Which matrices have square roots?”

The second question I don’t know the answer to, but I haven’t looked too hard. The other week Penneys and I were trying to compute an example in subfactors and stumbled on the following interesting question about infinite groups (somewhat reminiscent of this old post). When can you find a group G and a proper inclusion G → G such that the image has finite index?

There’s the obvious example Z. But once you start adding adjectives it starts getting tricky. We were looking for a finitely generated group all of whose nontrivial conjugacy classes are infinite. If only I knew more geometric group theory…

## Comments


> Which matrices have square roots?

I see a necessary condition: to be a square, it must be square.

For the linear algebra question, it matters whether you’re allowed to use Jordan normal form. With it, it’s cute work. Without, I wouldn’t know what to do. Except of course quoting Knutson’s pun.

Regarding your second question, I don’t know much about creating groups where every nontrivial conjugacy class is infinite. The only examples I can think of are things like free groups and surface groups. How do I find more groups with this property?

An intuition as to why the second problem is hard for me. I like to think of groups as acting on spaces. In order to get large conjugacy classes, I want my spaces to have negative curvature.

If I am to have G as a finite index subgroup of itself, then that “should” mean that a fundamental domain for the smaller G is “similar to”, but larger than, a fundamental domain for the larger G. But in a space with negative curvature, I can’t have two similar domains of different sizes.

That’s a very sloppy intuition, but it turns into a rigorous proof that I can’t build examples using Coxeter groups of finite, affine or hyperbolic type (here the spaces are spheres, Euclidean planes and hyperbolic planes, respectively) or using free groups (here the spaces are graphs). It would be interesting if there were some general group-theoretic description of curvature.

My guess is that I just have to think a little more exotically. For example, what about the symmetry groups of Lorentzian manifolds?

Another problem (much simpler, but still interesting when you have not seen it yet) is the following: find (or prove that there cannot exist) groups H and G such that G contains an isomorphic copy of H, and H an isomorphic copy of G, but H and G are not isomorphic.

Gromov has a notion of word-hyperbolicity for a group, which is the condition that the Cayley graph is delta-hyperbolic for some delta. A metric space is delta-hyperbolic if for any geodesic triangle, any point on one of the edges is at most distance delta from one of the other edges. Trees (e.g., Cayley graphs of free groups) are zero-hyperbolic.

I don’t know how that would help with a construction, though. Maybe you want something more on the amenable/solvable side than the non-positively curved/free-ish side of the fp universe (not that I know what I’m talking about).

I have a feeling that I’m making a mistake (I just woke up, and have not had any coffee), but doesn’t the following work? Let G be the semidirect product of SL_n(Z) acting on Z^n. This is isomorphic to the subgroup SL_n(Z) semidirect (mZ)^n for any positive integer m, which is a finite-index subgroup. Also, I’m pretty sure (but don’t have time to write out a proof) that all conjugacy classes are infinite…
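If it helps to see Andy's construction in coordinates, here is a rough sketch (pure Python; all names are mine, not anyone's actual computation) of the group law on SL_2(Z) ⋉ Z^2, with the candidate embedding (A, v) ↦ (A, mv) spot-checked to be a homomorphism onto a subgroup of index m^2:

```python
from itertools import product
from random import randint, seed

# Sketch of Andy's example: G = SL_2(Z) ⋉ Z^2 with group law
# (A, v)(B, w) = (AB, v + A·w).  The map (A, v) ↦ (A, m·v) is an
# injective homomorphism onto SL_2(Z) ⋉ (mZ)^2, which has index m^2.

def mat_mul(A, B):
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

def mat_vec(A, v):
    return tuple(sum(A[i][k] * v[k] for k in range(2)) for i in range(2))

def g_mul(g, h):
    (A, v), (B, w) = g, h
    return (mat_mul(A, B), tuple(x + y for x, y in zip(v, mat_vec(A, w))))

def phi(g, m=2):                      # the candidate embedding
    A, v = g
    return (A, tuple(m * x for x in v))

seed(0)
S = ((0, -1), (1, 0))                 # two elements of SL_2(Z)
T = ((1, 1), (0, 1))
for _ in range(100):                  # spot-check the homomorphism property
    g = (S, (randint(-5, 5), randint(-5, 5)))
    h = (T, (randint(-5, 5), randint(-5, 5)))
    assert phi(g_mul(g, h)) == g_mul(phi(g), phi(h))

# Index of the image: cosets are indexed by Z^2 / (2Z)^2.
representatives = list(product(range(2), repeat=2))
print(len(representatives))   # → 4
```

The key point the check illustrates is that phi(g)phi(h) has translation part m·v + A·(m·w) = m·(v + A·w), so scaling commutes with the SL_2(Z) action.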

The example I have in mind is a so-called Baumslag-Solitar group, with presentation ⟨a, t | t a t^{-1} = a^2⟩. The subgroup generated by a^3 and t should be finite-index and isomorphic to the original group. I think.

It’s easy to check if conjugacy classes are infinite: the size of the conjugacy class of an element is just the index of the centralizer of that element in the whole group. So a group has all conjugacy classes infinite if and only if every finite-index subgroup has trivial centre. This will hold for all non-elementary (ie non-virtually cyclic) word-hyperbolic groups.
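The orbit-stabilizer fact Henry is using (class size = index of the centralizer) can be spot-checked in a small case, say S_4. A throwaway sketch, with permutations stored as tuples:

```python
from itertools import permutations

# Henry's criterion in miniature: in any finite group the size of a
# conjugacy class equals the index of the centralizer.  Verified here
# for every element of S_4 (p[i] = image of i).

def compose(p, q):
    return tuple(p[q[i]] for i in range(len(q)))

def inverse(p):
    inv = [0] * len(p)
    for i, x in enumerate(p):
        inv[x] = i
    return tuple(inv)

G = list(permutations(range(4)))
for g in G:
    conj_class = {compose(compose(h, g), inverse(h)) for h in G}
    centralizer = [h for h in G if compose(h, g) == compose(g, h)]
    assert len(conj_class) == len(G) // len(centralizer)
print("orbit-stabilizer checked for all", len(G), "elements of S_4")
```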

There are certainly no word-hyperbolic groups with the property that you want. This is proved in a paper called ‘Quasiregular self-mappings of manifolds and word hyperbolic groups’ by Bridson, Hinkkanen and Martin. It’s an extension of a hard theorem of Sela.

For your example, Andy, I suppose you need the action of SL_n(Z) to intertwine with the isomorphism Z^n → (mZ)^n? Presumably it does. If so, your example’s more interesting than mine. Baumslag-Solitar groups are never linear (except for the obvious boring ones).

Oh, right: scaling by m is central in GL_n! D’uh!

@Maurizio Monge : “find (or prove that cannot exist) groups H and G such that G contains an isomorphic copy of H, and H an isomorphic copy of G, but H and G are not isomorphic.”

You can take free groups of rank 2 and 3. This proves that the Cantor-Bernstein-Schroeder theorem does not work for groups.

> For the linear algebra question, it matters whether you’re allowed to use Jordan normal form. With it, it’s cute work. Without, I wouldn’t know what to do. Except of course quoting Knutson’s pun.

Without directly appealing to JCF, you could use the minimal polynomial of a square matrix to convert the problem to commutative algebra: if k is the ground field, when does the generator x of the quotient ring k[x]/a(x) have or not have a square root? It’s not a particularly simpler argument though, and it’s also a slightly different question, namely when x has a square root which is a polynomial in x. I believe that it does answer the question when k = C.

Were people assuming that the question is over the complex numbers? In characteristic 0, the answer in general will be that x must have a square root when each eigenvalue has a square root in whatever field extension of k it generates. But this criterion is clearly not sufficient in characteristic 2. The matrix [[1,1],[0,1]] does not have a square root in any field of characteristic 2, not even one which is algebraically closed.
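As a finite sanity check of the characteristic-2 claim (a search over F_2 only, which of course proves nothing about general fields of characteristic 2), one can try all sixteen 2x2 matrices:

```python
# Brute-force check: no 2x2 matrix over F_2 squares to [[1,1],[0,1]].

def mat_mul_mod(A, B, p):
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2)) % p
                       for j in range(2)) for i in range(2))

target = ((1, 1), (0, 1))
matrices = [((a, b), (c, d)) for a in range(2) for b in range(2)
            for c in range(2) for d in range(2)]
roots = [M for M in matrices if mat_mul_mod(M, M, 2) == target]
print(roots)   # → []
```

By hand: M = [[a,b],[c,d]] gives M² = [[a²+bc, b(a+d)],[c(a+d), d²+bc]], and b(a+d) = 1 forces a + d = 1, which contradicts a² = d² = 1 in F_2.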

As for the other question, there is a basic argument from hyperbolic geometry that many groups do not inject into themselves with finite index. A cocompact hyperbolic group in the classical sense, that is a Kleinian group, embeds uniquely into Isom(H^n) by Mostow rigidity. It thus has a well-defined hyperbolic volume. If you have a finite-index subgroup, the index is the ratio of the volumes, hence the subgroup cannot be isomorphic to its parent. I would suppose that the result for word-hyperbolic groups is a deeper version of the same story.

The (somewhat tautological) lesson is that if you want a group that does inject into itself with finite index, it cannot have any covolume invariant. Clearly Z or Z^n doesn’t; the classifying space is a torus and tori cover themselves. An infinitely generated free group is another example; all of its geometric classifying spaces have infinite volume or loop number or whatever.

Maybe the intent of the condition that all conjugacy classes are infinite is to get examples that “don’t come from tori”. Somehow Andy’s construction seems to skirt that intent and maybe the Baumslag-Solitar examples do too. It skirts the intention in the sense that if G embeds as a subgroup G’ of G, then the coset space G/G’ has a natural abelian group structure and the action of G on G/G’ is linear. Maybe you can make affine-linear examples, but can you make fully nonlinear (finitely generated) examples?

Over C, you don’t need Jordan normal form, just Jordan decomposition (every matrix is product of a diagonalizable matrix and a unipotent matrix which commute). This reduces to the question of diagonalizable matrices (obvious) and upper triangular matrices (obvious).

Maurizio, follow the link to the older post! It has lots of fun twists on Schroeder-Bernstein.

On question 1, Dave’s question was over C. It’s interesting to ask it not over C, but it’s still got a tricky point over C even when you use JNF.

Ben, your argument only applies to invertible matrices. I also think that Greg’s answer has issues for non-invertible matrices.

> Ben, your argument only applies to invertible matrices.

Shoot, you’re right. Too clever by half.

Every matrix can be written as U^\dagger T U for U unitary and T upper triangular. That reduces it to upper triangular matrices.

Example: [[0,1],[0,0]] has no square root, in any field. But its eigenvalues have square roots (in any field) :-)

Indeed, as PB says, I made a mistake in the case of nilpotent matrices; the result isn’t true. Every linear operator in finite dimensions is the direct sum of a nilpotent operator and an invertible operator. Its square root, if it exists, has the same decomposition. In characteristic 0, the invertible term has a square root in the case that I described. The nilpotent term has a square root iff Jordan blocks can be paired so that the two blocks in each pair are either the same size or adjacent sizes.

The nilpotent part of the answer has the property that sqrt(A) is not generally in the algebra generated by A. So the reduction to commutative algebra via minimal polynomials is not correct either in the nilpotent case. Indeed, a nilpotent operator of Jordan type 3+1+1 has the same minimal polynomial as one of type 3+2, but the former does not have a square root while the latter one does, namely Jordan type 5.
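Greg's pairing criterion can be checked mechanically. A hedged sketch (pure Python; `has_sqrt` is just a name chosen here): greedily pair the largest remaining block with the next largest, which suffices because any valid pairing can be exchanged into the greedy one.

```python
# Sketch of the block-pairing criterion quoted above: a nilpotent matrix
# has a square root iff its Jordan blocks can be paired so that the two
# blocks in each pair have equal or adjacent sizes (leftover blocks of
# size 1 are allowed, since a 1x1 zero block is its own square root).

def has_sqrt(jordan_type):
    """jordan_type: iterable of Jordan block sizes of a nilpotent matrix."""
    blocks = sorted(jordan_type, reverse=True)
    if not blocks or blocks[0] == 1:
        return True                      # only 1x1 zero blocks remain
    if len(blocks) > 1 and blocks[0] - blocks[1] <= 1:
        return has_sqrt(blocks[2:])      # pair the two largest blocks
    return False                         # largest block cannot be paired

print(has_sqrt((3, 1, 1)), has_sqrt((3, 2)), has_sqrt((2,)))
# → False True False
```

This reproduces the examples in the thread: type 3+1+1 has no square root, 3+2 does (a single block of size 5 squares to 3+2), and PB's [[0,1],[0,0]] is type 2, which cannot be paired.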

I really don’t see any clear way to do this problem except by writing down the JCF of sqrt(A) and computing the JCF of A from it. Maybe there are solutions that logically skip the JCF theorem, but at the moment they don’t seem worth it.

“Every linear operator in finite dimensions is the direct sum of a nilpotent operator and an invertible operator.”

Greg, could you please explain that? I’m just a starting student, so maybe I’m missing some obvious proof for that statement. (For one, the set of invertible matrices is not a subspace, so I’m pretty much stuck already.)

This question about square roots of matrices is one I posed to myself when I was an undergraduate. I remember spending hours thinking about it. As Greg points out above, it is quite subtle, even over the complex numbers! (By the way, I can’t imagine any reason not to want to use Jordan Canonical Form.)

Here is another problem from my undergrad days, which I recently posed to Alfonso since he is teaching group theory. Let G be a finite group, and H be a proper subgroup. Prove that it is not possible for each element of G to be conjugate into H. (If we take G to be a compact connected Lie group and H to be a maximal torus, then this property does hold — and is vitally important to the study of compact groups!)

An amusing result related to the question about square roots of matrices was recently posted to the arXiv. In http://arxiv.org/abs/0810.5036 , Margalit and Schleimer prove the (slightly counterintuitive) result that elementary matrices in SL_n(Z) have square roots. They also do the same thing for Dehn twists in the mapping class group, elementary symplectic matrices, and Nielsen transformations in Aut(F_n). Not bad for a 2 page paper!

Jungle,

> Every linear operator in finite dimensions is the direct sum of a nilpotent operator and an invertible operator.

This is basically the statement that every matrix has a Jordan normal form. The zeros on the diagonal and the junk above the diagonal are matrix elements of the nilpotent operator.

Jungle is right. Not every matrix is the sum of an invertible matrix and a nilpotent matrix. Try the zero matrix!

The Jordan-Chevalley decomposition is probably what people are thinking of. Every linear operator (read, matrix) is (conjugate to) the sum of a semi-simple operator (read, diagonal if complex) and a nilpotent one (read, upper triangular). There’s no reason to assume that the semi-simple matrix is invertible.

I can think of lots of reasons not to use the JCF. For one, it’s not canonical. For two, it’s about matrices.

Thank you Tolland and Andrew!

@Tolland: I see it now. I just realized that every nilpotent matrix is similar to some (nilpotent) matrix with main diagonal all 0’s and only 1’s above the diagonal, and that will let us split a JCF into a diagonal matrix + a nilpotent matrix.

@Stacey: If we deal with the complex numbers, the semi-simple operator will be invertible, right? I guess the complex numbers are of more interest here, as Noah pointed out in comment #13.

---

Back to Dave’s question: Now the barrier is that not every nilpotent matrix has a square root? So not every matrix can be rooted by decomposing into invertible + nilpotent and then rooting the components individually. Am I right?

> Greg could you please explain that?

Okay, first a point of terminology. A direct sum is not the same as a numerical sum. In linear algebra, a vector space is the direct sum of subspaces means that the subspaces span and that the intersection of any two is trivial. It is a “sum” first in the sense that dimensions add, and second in the sense that it is a restricted form of addition in set arithmetic. (Set arithmetic in general works like this. If $ is some operation on elements, then a $ of two sets is defined by taking the $ of every pair of elements.)

There is an analogous explicit direct sum of numerical vectors, concatenation, and an analogous direct sum of matrices. To take the direct sum of matrices you make them into diagonal blocks of a bigger matrix.

Anyway, unlike Jordan canonical form, my statement about expressing any matrix as a direct sum of nilpotent and invertible (after a change of basis, I meant) works over any field, and it can be proved more quickly than the full JCF theorem. If a linear map A acts on a finite-dimensional vector space V, then it has a *stable image* sim A and a *stable kernel* sker A. (It’s my own notation at this step because I don’t know the most standard notation.) sker A is defined as the union of the kernels of A^n for all n, while sim A is the intersection of the images of A^n for all n. Exercise: V is the direct sum of sim A and sker A. Both are preserved by A. A is nilpotent on sker A and invertible on sim A.

This same argument can be used in a nifty proof of Jordan canonical form over an algebraically closed field. V is the direct sum of the stable eigenspaces of A, by definition sker(A - lambda) for each eigenvalue lambda. Since A - lambda is nilpotent on sker(A - lambda), you can then use that to make the Jordan blocks.
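Greg's exercise can be spot-checked numerically. A rough sketch with exact rational arithmetic (function names and the example matrix are this comment's own): rank(A^k) stabilizes by k = dim V, and the stable rank plus the stable kernel dimension is dim V.

```python
from fractions import Fraction

# Numerical check of the stable image / stable kernel decomposition:
# dim sim(A) = stable rank of A^k, dim sker(A) = dim V - stable rank.

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def rank(M):
    M = [[Fraction(x) for x in row] for row in M]
    r = 0
    for col in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][col] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][col] != 0:
                f = M[i][col] / M[r][col]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

# Invertible 1x1 piece (the 2) coupled to a size-2 nilpotent Jordan
# block, so the splitting is not the obvious coordinate one.
A = [[2, 1, 0],
     [0, 0, 1],
     [0, 0, 0]]
n = len(A)
P = A
ranks = [rank(A)]
for _ in range(n):
    P = mat_mul(P, A)
    ranks.append(rank(P))
stable_rank = ranks[n]               # rank(A^k) for all k >= n
dim_sim, dim_sker = stable_rank, n - stable_rank
print(ranks, dim_sim, dim_sker)      # → [2, 1, 1, 1] 1 2
```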

It’s also true that a matrix is the numerical sum of a nilpotent matrix and a diagonal matrix. But that’s not what I meant.

I think I have an answer to Joel’s question. First, by the Sylow theorems we reduce to the case that G is a p-group. (Take H_p a Sylow in H, extend it to G_p a Sylow in G. If g in G_p is conjugate into H, it’s conjugate to an element of a Sylow p-subgroup of H, and since all Sylows are conjugate it’s conjugate to something in H_p.)

Now we take abelianizations. Every element of Ab(G_p) is conjugate to an element in Ab(H_p). Hence Ab(G_p) is trivial, but p-groups never have trivial abelianization.

There’s probably a better argument…

sorry I botched the end. By Ab(H) I meant the image of H in Ab(G).

I’m not sure about better, but here is a different one:

Let X be the set G/H, with the obvious G action. For any g in G, g is conjugate into H if and only if g has a fixed point in its action on X. Let n(g) be the number of fixed points of g acting on X. We have

1/|G| sum_{g in G} n(g) = 1

because the action of G on X is transitive.

Each summand is a nonnegative integer, and the term corresponding to the identity contributes |H|>1. Hence, there must be some term which is zero.
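For a concrete instance of this argument, here is a sketch with G = S_4 acting on the cosets of the point stabilizer H = Stab(3) ≅ S_3 (my choice of example, not the commenter's):

```python
from itertools import permutations

# David's counting argument, checked concretely: the average number of
# fixed points on X = G/H is 1 (one orbit), the identity contributes
# |X| = 4 > 1, so some element fixes nothing, i.e. is not conjugate
# into H.  Here G/H is identified with the 4 points being permuted.

G = list(permutations(range(4)))
H = [g for g in G if g[3] == 3]                 # stabilizer of the point 3
X = range(4)                                    # G/H as a G-set
fixed = {g: sum(1 for x in X if g[x] == x) for g in G}
assert sum(fixed.values()) == len(G)            # average fixed points = 1
missed = [g for g in G if fixed[g] == 0]        # not conjugate into H
print(len(missed))   # → 9
```

The 9 elements found are the derangements of 4 points: the six 4-cycles and the three (2,2)-elements, exactly the classes of S_4 that miss every conjugate of S_3.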

Typo: the equation |H|>1 in my last post should read |X|>1.

By the way, I am missing how the last steps in Noah’s argument work. The implicit claim is that, if G is a p-group and H a proper subgroup, then H does not surject onto Ab(G). Why is this?

Hrm, David’s right, I skipped over some stuff there at the end and need to use a stronger theorem than I meant to. Instead of quotienting G by [G,G], quotient by the Frattini subgroup (generated by commutators and p-th powers; this quotient is the universal elementary abelian p-group quotient). If H surjects onto this quotient, then by the Burnside basis theorem it is already all of G. (The Burnside basis theorem is the p-group analogue of Nakayama’s lemma.)

For Joel’s question, let k be the number of subgroups of G conjugate to H. Clearly k is at most |G/H|. By assumption k > 1.

Since each subgroup contains 1, the number of elements which can be conjugated into H is at most 1 + k(|H|-1) <= 1 + |G| - k < |G|.

This is probably not so far from David’s proof, though.

See this nice paper of Serre for more about this property of finite groups (no proper subgroup intersects all conjugacy classes) and various interpretations.

This is used quite a bit to compute Galois groups because one gets Frobenius conjugacy classes modulo primes, and with luck after some time they determine the Galois group…

It’s funny how my thought process started looking for an argument like David’s and then ended up with my argument:

1) It involves finiteness in an important way, so it must involve some sort of counting arguments probably counting orbits.

2) Usually those sorts of arguments involve finding fixed points by using the fact that 1 is the only power of p not divisible by p (this is where my logic went a little screwy) so I should probably reduce to the p-power case.

3) Oh, once I’m in the p-power case everything should be easy because p-groups = commutative rings, and this statement is obvious for commutative rings. So why bother with a counting argument at all?

Neat! I’d never seen the Frattini subgroup before; it’s kind of a group-theoretic analogue of the Jacobson radical in noncommutative rings. (Or, as you say, the Burnside Basis Theorem is analogous to Nakayama’s lemma.)

Are p-groups really like commutative rings? So far, this analogy is convincing me they are more like Artinian rings. In either case, you could probably make a nice blog post by fleshing this out.

I haven’t thought about this stuff in years, so I don’t know that I’ll be able to cook up a blog post. But basically the analogy between p-groups and commutative ring theory is one that Lenstra was very fond of. His confidence that he could probably solve a random question involving p-groups rubbed off a bit, which is why once I reduced to the p-group case I felt home free.

Here’s a paragraph from Poonen’s math/0608491, a paper on moduli spaces of commutative algebras:

“The approach towards both those results is to adapt the proof (begun in [Hig60] and completed in [Sim65]) that the number of p-groups of order p^n is p^{(2/27)n^3 + O(n^{8/3})}. As suggested to us by Hendrik Lenstra, there is an analogy between the powers of the maximal ideal of a local finite-rank k-algebra and the descending p-central series of a p-group. Although there seems to be no direct connection between finite-rank k-algebras and finite p-groups, the combinatorial structure in the two enumeration proofs are nearly identical.”

[…] the gang at the Secret Blogging Seminar, for posts and comment threads like this one; […]

Noah’s remark reminds me of an interesting, and arguably quite important, problem in computational group theory: Find a universal algorithm to multiply elements of finite groups, or at least p-groups. In other words, in the p-group case, find a description of each group G of order p^n which is polynomial in n, and such that two vectors of length n over Z/p can be multiplied in polynomial time in n using this description as a third input, to produce a group law isomorphic to the structure of G.

For instance, if G is metabelian, then it is easy to do this.

The program of finding an “explicit model” for every finite group came up in a discussion with Laszlo Babai. He was the editor for a paper that I had with Scott Aaronson. We rather loosely referred to explicit models for finite groups, and he wisely pressed us to explain what we meant.

Henry Wilton,

I don’t think it’s so obvious as not to need spelling out: BS(1,n) is linear, fitting in 2×2 matrices over Z[1/n]. In particular, your group is the semidirect product of Z acting on Z[1/2] by multiplication by 2. From this point of view, it’s quite similar to Andy’s example, but, in some sense, smaller. A minimal manifold example is a 3-dimensional solvable group, Z acting on Z^2 by a hyperbolic matrix.

Does negative curvature in the form of word hyperbolicity rule out this phenomenon? If so, this brings us towards the question of Gromov of whether Baumslag-Solitar subgroups are the principal obstructions to a group being hyperbolic.

Is anyone else totally weirded out by the fact that Z[1/2] fits inside this group with two generators? I had to write down an explicit isomorphism to make sure I wasn’t going insane.
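For anyone who wants to see the explicit isomorphism rather than write it down by hand, here is a sketch of the standard 2x2 model of BS(1,2) (the matrices a and t below are the usual textbook choices, not necessarily the commenter's):

```python
from fractions import Fraction

# BS(1,2) inside 2x2 matrices over Z[1/2]: a = [[1,1],[0,1]] and
# t = [[2,0],[0,1]] satisfy t a t^{-1} = a^2, and the conjugates
# t^{-k} a t^k are the translations by 1/2^k, generating a copy of
# Z[1/2] in the upper-right corner.

def mul(A, B):
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

zero, one = Fraction(0), Fraction(1)
a = ((one, one), (zero, one))
t = ((2 * one, zero), (zero, one))
t_inv = ((Fraction(1, 2), zero), (zero, one))

assert mul(mul(t, a), t_inv) == mul(a, a)      # the BS(1,2) relation

g = a
for k in range(1, 4):
    g = mul(mul(t_inv, g), t)                  # g = t^{-k} a t^k
    assert g[0][1] == Fraction(1, 2 ** k)      # translation by 1/2^k
print("relation and dyadic translations verified")
```

So the whole of Z[1/2] is generated by a together with its conjugates under powers of t, which is exactly the two-generator weirdness above.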

Ben Wieland,

Good point! BS(m,n) isn’t linear for distinct m and n both bigger than 1, because it’s not even residually finite. I hadn’t spotted that BS(1,n) is linear.

As I mentioned above, there are no (torsion-free, non-elementary) word-hyperbolic examples, so from that point of view it might be a curvature thing. It would be nice to know if there are any CAT(0) examples.

Scott,

Every countable group embeds in a 2-generator group!

[…] Ben Webster in Uncategorized. trackback Oddly enough, it seems that after all the discussion at this post, no-one actually wrote down a criterion for a nilpotent matrix to have a square […]

Henry,

Now that you mention it, it’s not so hard to prove. I still think it’s cool that you only need one relatively small relation instead of say, infinitely many of them. Is there a corresponding statement for finite (or bounded) presentation?

Scott,

Yeah, the fact that it’s one-relator is a bit remarkable.

I’m not sure if a ‘boundedly presented embedding result’ exists – but I doubt it. Higman’s Embedding Theorem says that any recursively presented finitely generated group can be embedded into a finitely presented group, but I doubt it will give you a nice bound on the number of relators. For instance, I think there are no known examples of groups with unsolvable word problem and a small number of relators.

Just a random comment about decomposing matrices. It turns out that any endomorphism of a vector space (of any dimension) is the sum of a unit and an idempotent.

So, to unify Henry and Andy’s examples: let A be an abelian group, S a group of automorphisms of A and m an integer such that multiplication by m is injective on A and A/mA is finite but nontrivial. We take G to be the semidirect product of S and A and take the subgroup to be the semidirect product of S and mA.

Then we need to require that, for any nontrivial s in S, the subgroup (1 - s)A is infinite and, for any nonzero a in A, the orbit Sa is infinite. If A is torsion free, then the former is automatic (as any nontrivial subgroup of A is infinite.)

Andy’s and Henry’s examples are (A, S) = (Z^n, SL_n(Z)) and (A, S) = (Z[1/2], ⟨multiplication by 2⟩) respectively.

Henry Wilton,

oops…I guess we can see where I stopped reading the thread.

Do nonlinear Baumslag-Solitar groups have such endomorphisms?

Ben Wieland,

I think so. I’m too tired to do the calculation right now, but I think a subgroup of BS(m,n) generated by a^k and t for any k relatively prime to both m and n will be isomorphic to the whole group.

[…] algebraically closed field has a square root so that . Over at the Secret Blogging Seminar, they discussed this without using the Jordan normal form, and they dealt with the nilpotent blocks later, which has a […]

Re #25: Very nice, this analogy of p-groups and local commutative rings; I was unaware of it.

But I’m not sure about your reduction step to p-groups. What you show is that every element in G_p is conjugate *inside G* to an element of H_p. I think that’s not enough. Here’s an example.

Take G = S_4, p = 2 and G_p the stabiliser of {1, 2}, {3, 4} (if you like it can be thought of as the Weyl group of Sp_4). Take the subgroup H = H_p = {e, (14), (23), (14)(23)} (a Klein 4-group). Then g = (12)(34) is in G_p; it’s conjugate inside G to (14)(23) in H_p, but not inside G_p since g is central in G_p (the longest Weyl element).

Yeah, that’s embarrassing. It’s quite easy to find groups where all elements of order a power of p are conjugate. For example, if V is a finite-dimensional vector space over a finite field, then the semidirect product of Aut(V) acting on V generalizes your example.

Good point, but does it generalise the example? V is not a Sylow subgroup in the semidirect product if its dimension is > 1.

You could however e.g. let S_n act on V of dimension n by permuting a fixed basis to get another example (for n < p).