The 20 questions seminar Pablo Solis and I started two weeks ago (see below) is now happening every Tuesday afternoon. We’ll be updating the following wiki each week (on Tuesdays or Wednesdays, if we can manage) with a fresh batch of hopefully interesting questions:

http://scratchpad.wikia.com/wiki/20qs

Many generous responses from users here at the secret blogging seminar have already been incorporated there. The wiki is free for mathematicians everywhere to post questions and answers of their own — there is a section for questions from outside the seminar. Our inspiration is the following:

Once in a while, we’ll all come upon a question that screams “this is easy to someone else, but who?” On the other hand, we like to hear questions that are easy for us but hard for others, so we can be useful. So why not trade, and trade fast? Rule #1 at the seminar is:

“1. Avoid asking questions that require a lot of background, like from your own research. Stick to questions you think others are at least as prepared for as you are.”

When people follow it, this rule results in questions that are fun and accessible, and you often end up with answers, too. But since usually the “poser” is the only one really interested in the answer — at least at first — we have adopted the following:

“2. Wait until the end of the seminar to discuss solutions, unless of course you can say the answer very quickly :)”

This keeps the seminar from degenerating to a dialogue about one question, which I think is the natural tendency. And just so everyone feels welcome, we have:

“3. You do not have to ask or answer a question! Just hearing what lots of people are thinking in a short time is reason enough to attend.”

I think these three ideas, someone recording the questions, and a critical mass of curious mathematicians are all that’s needed for a successful “Questions Seminar”, though time will be the only test.

In the meantime, here are last week’s questions, to be followed by next week’s (on Tuesday):

====1 Mike D (1)====

Suppose that K is a field that is complete with respect to a non-discrete valuation v, and let (R, m) be its valuation ring. Complete means that v-Cauchy sequences converge. A v-Cauchy sequence is a sequence f_n such that for every g in the value group, there exists N such that v(f_i - f_j) > g for all i, j > N.

Let R_m be the completion of R at m. Is the map R \to R_m a ring isomorphism? Is it a homeomorphism? (R_m carries the m-adic topology.)

====2 Mike V====

Let A be an NxN “Scottish flag matrix”:

*the jth diagonal entry is 2 sin(2\pi j/N),

*1’s on the super-diagonal, -1’s on the subdiagonal,

*1 in the bottom left corner, and -1 in the top left corner

*all other entries 0.

What are its eigenvalues? Do they lie inside the square in the complex plane with corners \pm 2 \pm 2i?
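A quick numerical sanity check (a sketch using numpy; the placement of the corner entries is an assumption here, following the circulant convention raised in the comments below, with the -1 in the top right corner):

```python
import numpy as np

def scottish_flag(N):
    """Scottish flag matrix: 2 sin(2*pi*j/N) on the diagonal, 1's on the
    superdiagonal, -1's on the subdiagonal, and wrap-around corner entries.
    Assumption: the -1 corner sits at top RIGHT (as a comment below suggests
    was meant), making the off-diagonal part circulant."""
    j = np.arange(1, N + 1)
    A = np.diag(2 * np.sin(2 * np.pi * j / N)).astype(complex)
    for k in range(N - 1):
        A[k, k + 1] = 1
        A[k + 1, k] = -1
    A[N - 1, 0] = 1   # bottom-left corner
    A[0, N - 1] = -1  # top-right corner
    return A

for N in (4, 10, 101):
    ev = np.linalg.eigvals(scottish_flag(N))
    # check the conjectured square with corners +-2 +- 2i
    assert np.all(np.abs(ev.real) <= 2 + 1e-6)
    assert np.all(np.abs(ev.imag) <= 2 + 1e-6)
```

For every N tried, the computed eigenvalues do land inside the conjectured square.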

====3 Andrew D (1)====

Say K is a field of characteristic 2, algebraically closed.

Geometric version: is the zero set of z(xy-z^2) in A^3 normal at (0,0,0)?

Algebraic version: if f,g in A[x,y] have no common factors,

must xf^2+yg^2 be squarefree?

====4 Pablo S====

If N is an integer such that N^2 has only 0’s and 1’s in its base 10 expansion, must N be a power of 10?

Scott: Just checked it’s true for N up to 100 billion…
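The check is easy to reproduce. Rather than squaring every N, one can enumerate the numbers whose decimal digits are all 0 or 1 and test which are perfect squares (a Python sketch):

```python
from itertools import product
from math import isqrt

def zero_one_squares(max_digits):
    """Return the square roots of all perfect squares with at most
    max_digits decimal digits, all of which are 0 or 1."""
    roots = []
    for d in range(1, max_digits + 1):
        for tail in product("01", repeat=d - 1):
            n = int("1" + "".join(tail))  # leading digit must be 1
            r = isqrt(n)
            if r * r == n:
                roots.append(r)
    return roots

# Squares up to 10^12 with only 0/1 digits: the roots found are exactly
# the powers of 10, consistent with Scott's search.
print(zero_one_squares(12))  # [1, 10, 100, 1000, 10000, 100000]
```

This searches 2^11 candidates per digit length instead of 10^11 values of N, so extending the bound is cheap.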

====5 Mike H====

The Mandelbrot set is usually defined over C. It can also be defined over the quaternions or other hypercomplex number systems. What does it look like? If we visualize this 4D set as a time-varying 3D set, will it transition smoothly, or will it just flicker in and out of existence?

====6 Critch====

A finite topological space with nontrivial fundamental group:

Let S be the unit circle in the complex plane. Identify the open top half of the circle, {z in S | im(z) > 0}, to a single point T, and the open bottom half {z in S | im(z) < 0} to a single point B. Let X = {+1, T, -1, B} be the resulting 4-point quotient space (the topology is generated by {}, X, {B}, {T}, X\{+1}, X\{-1}).
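A small sanity check that the listed sets really do generate a topology: closing the family under unions and intersections adds only {T, B}, giving a 7-open-set space (this space is the "pseudocircle"):

```python
from itertools import combinations

X = frozenset({"+1", "T", "-1", "B"})
listed = [frozenset(), X, frozenset({"B"}), frozenset({"T"}),
          X - {"+1"}, X - {"-1"}]

def close(sets):
    """Close a family of sets under pairwise unions and intersections."""
    family = set(sets)
    changed = True
    while changed:
        changed = False
        for a, b in combinations(list(family), 2):
            for c in (a | b, a & b):
                if c not in family:
                    family.add(c)
                    changed = True
    return family

topology = close(listed)
# {T} union {B} = {T, B} is the one set that must be added.
assert frozenset({"T", "B"}) in topology
assert len(topology) == 7
```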

Can anyone see directly, without fancy theorems, why the quotient map S -> X is not null-homotopic?

====7 Dan H-L====

If V and W are vector fields on a manifold M, each of which defines an “eternal flow” (a flow that is defined for all times t), does V+W define an eternal flow?
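A numerical experiment one might try (the specific fields here are my own illustrative choice, not from the seminar): V = x^2 d/dy and W = y^2 d/dx each define an eternal flow, since each flow changes only the coordinate that does not appear in its coefficient. Integrating their sum from (1, 1) with a classical Runge-Kutta step suggests finite-time blow-up:

```python
def rk4_step(f, state, h):
    """One classical Runge-Kutta step for state' = f(state)."""
    k1 = f(state)
    k2 = f([s + h / 2 * k for s, k in zip(state, k1)])
    k3 = f([s + h / 2 * k for s, k in zip(state, k2)])
    k4 = f([s + h * k for s, k in zip(state, k3)])
    return [s + h / 6 * (a + 2 * b + 2 * c + d)
            for s, a, b, c, d in zip(state, k1, k2, k3, k4)]

def sum_field(state):
    # V + W where V = x^2 d/dy and W = y^2 d/dx (illustrative choice)
    x, y = state
    return [y * y, x * x]

t, h, state = 0.0, 1e-4, [1.0, 1.0]
while t < 5.0 and max(map(abs, state)) < 1e6:
    state = rk4_step(sum_field, state, h)
    t += h

blow_up_time = t if max(map(abs, state)) >= 1e6 else None
# On the diagonal the system reduces to x' = x^2, whose exact solution
# x = 1/(1 - t) escapes to infinity as t -> 1.
```

The trajectory leaves |x| < 10^6 near t = 1, matching the exact diagonal solution.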

***

Enjoy!

These seminar reports are nice, but please use LaTeX in the posts! You just need to write “$latex (your formula)$”, with the word latex immediately after the opening dollar sign.

Re 3: There must be a typo in the geometric version. In a normal scheme, all of the irreducible components are disjoint. But the components z = 0 and xy - z^2 = 0 both pass through (0,0,0).

But I agree with the answer to the algebraic question on the wiki.

Thanks, Joel. The wiki is already latexed; I didn’t know I could latex here!

I don’t have any answers to 2, but I’ll point out that your matrix is the sum of a Hermitian matrix (the diagonal part) and an anti-Hermitian one (the off-diagonal part). Each of these has very easily computed eigenvalues. I wonder if the ideas used to attack Horn’s problem could be adapted for use here.

David, I know LOTS of matrices that are a sum of a Hermitian matrix and an anti-Hermitian one.

But here is an apology for my fatuous comment: For simplicity let n = p be prime. The Heisenberg group is the strictly upper-triangular subgroup of SL(3,p), of order p^3. It has a standard p-dimensional representation, and it happens that the matrix is a linear combination of four group elements in this representation.

For question 5: the analogue of the Mandelbrot set for any normed algebra contains an open ball of radius 1/4 around zero. If |z| < 1/2 and |c| < 1/4, the triangle inequality yields |z^2 + c| < 1/2, so starting from z = 0 the iteration stays in a bounded domain.
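This bound is easy to test over the quaternions with a hand-rolled Hamilton product (a sketch; the quaternion norm is multiplicative, so the triangle-inequality argument above applies verbatim):

```python
import random

def qmul(p, q):
    """Hamilton product of quaternions (a, b, c, d) = a + bi + cj + dk."""
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

def qnorm(q):
    return sum(x * x for x in q) ** 0.5

random.seed(0)
for _ in range(100):
    # random quaternion c with |c| < 1/4
    v = [random.uniform(-1, 1) for _ in range(4)]
    scale = random.uniform(0, 0.249) / (qnorm(v) or 1)
    c = tuple(x * scale for x in v)
    z = (0.0, 0.0, 0.0, 0.0)
    for _ in range(200):
        z = tuple(a + b for a, b in zip(qmul(z, z), c))
    assert qnorm(z) < 0.5  # the orbit stays in the ball of radius 1/2
```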

Yeah. The first draft of my post read “is the sum of a Hermitian matrix and an anti-Hermitian one, each of which have very easily computed eigenvalues.” When I split the sentence in two, I worried about your objection, but I decided to ignore it.

Seriously, though. Fix eigenvalue lists $\lambda_1 \geq \dots \geq \lambda_n$ and $\mu_1 \geq \dots \geq \mu_n$. The set of Hermitian matrices with eigenvalues $\lambda$ is compact, and the same for $\mu$. So there must be some bound for the eigenvalues of $X+iY$, where $X$ is Hermitian with eigenvalues $\lambda$ and $Y$ is Hermitian with eigenvalues $\mu$. What is that bound?

Re Q2 from Mike V: is there a typo here? The matrix is supposed to have 1’s on the superdiagonal, but then you say the top left entry is -1. Should that be top right?

I suspect the bounds David Speyer alludes to would be far from sharp in this particular case. A rather crude answer to comment 7 above could be obtained by passing to the Hilbert-Schmidt norm, I’m guessing, but then the estimate would depend on the dimension of the matrix.

You don’t really need the sophistication of hives to bound eigenvalues. This doesn’t quite give you the box you want, but I doubt your box contains all of the eigenvalues.

My remark above about the Hilbert-Schmidt norm was daft; apologies. The norms of X and Y (in the usual sense, as operators on finite-dimensional l^2) are $\max_i |\lambda_i|$ and $\max_j |\mu_j|$ respectively; and then the spectral radius of X+iY is bounded above by the norm of X+iY, which is bounded above by the norm of X plus the norm of Y.

We can do better when X and Y commute, of course, because then they have a common eigenbasis with respect to which the matrix X+iY will be diagonal, and all the diagonal entries lie in the rectangle $\{ z : |\Re(z)| \leq \max_i |\lambda_i|,\ |\Im(z)| \leq \max_j |\mu_j| \}$. This gives us a $\sqrt{2}$ improvement and a sharp bound. However, for the example in question, I don’t think its real and imaginary parts commute…

I suspect the calculation that follows is one that the questioner’s already done, or similar to it, and that its failure to answer his question is the reason for posing the question. So in moderating this comment, please feel free to edit out the waffling that follows…

So if we take X to be the diagonal part of this Scottish flag matrix and Y to be circulant with top row equal to (0, -i, 0, …, i), then X + iY is almost what is described in Q2, possibly what was meant. Then:

– the norm of X is at most 2 (and is exactly 2 when N is a multiple of 4)

– the norm of Y is calculable using the fact that a circulant matrix with top row (a_1, …, a_N) is unitarily equivalent to a diagonal matrix with diagonal entries

$d_k = a_1 + a_2 w^k + … + a_N w^{k(N-1)}$

where w is a primitive Nth root of unity. So

Y is unitarily equivalent to diag(d_1,…,d_N) where

$d_k = -i \exp(2\pi i k / N) + i \exp(2\pi i k(N-1)/N ) = 2 \sin (2\pi k/N)$

from which we get $\max_k | d_k| \leq 2$, and hence Y has norm at most 2.

Thus X+iY has norm bounded above by 4, so in particular has spectral radius at most 4.

aargh, LaTeX fail. Also that sentence about the \sqrt{2} improvement is rubbish, please ignore. Apologies for the salvo of posts and the incoherence therein (it’s 1 in the morning here and I was giving an evening class followed by decompression in the pub, so my usual inaccuracy’s even worse at the moment).

David: I see your point. The original question asked whether the spectrum lies in the square with corners $\pm 2 \pm 2i$.

Proposition: Let A and B be two Hermitian matrices. Then the spectrum of A+iB lies in the rectangle $[\lambda_{\min}(A), \lambda_{\max}(A)] + i[\lambda_{\min}(B), \lambda_{\max}(B)]$ determined by the extreme eigenvalues of A and B.

I could give the proof, but I think that it’s a fun exercise. It’s much easier than the fancy methods for Horn’s problem. Obviously you can find A and B so that this rectangle is the convex hull of the spectrum; in that sense the bound is sharp.
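A randomized check of the proposition (a numpy sketch):

```python
import numpy as np

rng = np.random.default_rng(1)

def random_hermitian(n):
    """Random Hermitian matrix: symmetrize a complex Gaussian matrix."""
    M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return (M + M.conj().T) / 2

for _ in range(50):
    n = int(rng.integers(2, 8))
    A, B = random_hermitian(n), random_hermitian(n)
    a = np.linalg.eigvalsh(A)  # sorted ascending
    b = np.linalg.eigvalsh(B)
    ev = np.linalg.eigvals(A + 1j * B)
    # spectrum of A + iB should lie in [a_min, a_max] + i [b_min, b_max]
    tol = 1e-9
    assert np.all(ev.real >= a[0] - tol) and np.all(ev.real <= a[-1] + tol)
    assert np.all(ev.imag >= b[0] - tol) and np.all(ev.imag <= b[-1] + tol)
```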

Greg: I had thought your proposition needed the two to commute; but on further reflection, of course you’re right. It is indeed a nice resultlet & proof, assuming we’re thinking of the same approach. The result should then extend to infinite-dimensional Hilbert spaces, no?

I’m now rather curious what the eigenvalues of that matrix are…

Yes, certainly the result holds for Hilbert spaces.

Dang, so much for my bold hypothesis.

Scott: the Gershgorin circle theorem gives exactly the result we need, if we apply it twice! Direct application puts us inside the union of the disks of radius 2 centered at the real diagonal entries 2 sin(2\pi j/N); in particular, inside the horizontal strip of points with imaginary part in [-2, 2].

Now, conjugate the matrix by the discrete Fourier transform matrix, which exchanges the diagonal part and the circulant off-diagonal part.

Conjugation does not change the eigenvalues. Applying the circle theorem to the conjugated matrix, we get the same shape rotated 90 degrees: the vertical strip of points with real part in [-2, 2]. The intersection is the desired square.

Greg: Still thinking about your exercise. It’s a very nice result!

UPDATE: Fixed typos. I wrote up the argument above on the wiki, but Mathematica computations are making me think that I have made some sort of error. I thought I could show that the set of eigenvalues was taken to itself under multiplication by $i$, but I’m not finding this numerically.

I’ll look for the error, and would appreciate anyone else who wants to look searching as well.

UPDATE: The math was correct; my code was buggy. Hmm, there is a lot more to prove here than we have found so far. Here is a picture of the eigenvalues for one particular N. To the accuracy of Mathematica’s computation, their arguments are precisely $\pm \pi/4$ and $\pm 3\pi/4$. They appear to be very evenly spaced, although I don’t think that they are actually in an arithmetic progression.

I don’t think that they are in arithmetic progression either. One question is whether they converge to an arithmetic progression.

I suspect that the matrix, if you first multiply it by $\zeta_8$, is some combination of a Hermitian matrix and an anti-Hermitian matrix. But I don’t know that this combination is as simple as a direct sum after a change of basis. It appears not to be that simple, judging from the example N=4.

Anyway, here is the argument for the rectangle theorem mentioned above. Key Lemma: If A is positive semidefinite and B is Hermitian, then every eigenvalue of A+iB lies in the closed right half-plane. Proof: If v is an eigenvector of length 1, then <v,(A+iB)v> equals the eigenvalue, and its real part is <v,Av>, which is non-negative. (The rectangle then follows by applying the lemma to suitable shifts and rotations, e.g. to A - \lambda_{\min}(A) + iB.)
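The Key Lemma is easy to test numerically (a numpy sketch; A is built as C*C to make it positive semidefinite):

```python
import numpy as np

rng = np.random.default_rng(7)
for _ in range(50):
    n = int(rng.integers(2, 8))
    C = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    A = C.conj().T @ C            # positive semidefinite
    H = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    B = (H + H.conj().T) / 2      # Hermitian
    ev = np.linalg.eigvals(A + 1j * B)
    # every eigenvalue should lie in the closed right half-plane
    assert np.all(ev.real >= -1e-9)
```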

You can generalize the original lemma with the same argument: If M = A_1+…+A_n is a sum of normal matrices, then the spectrum of M lies in the Minkowski sum (the set-arithmetic sum) of the convex hulls of the spectra of the A_k’s.