
Let $(x_1,\ldots,x_n)\in [-1,1]^n$ and define the function $$f(x_1,\ldots,x_n):= \prod_{i=1}^n\prod_{j=i}^n\left(1-\prod_{k=i}^j x_k\right).$$ This function is nonnegative, and it actually coincides with the determinant of an $(n+1)\times (n+1)$ matrix $M:=(a_{ij})_{i,j=1}^{n+1}$ with entries $$a_{i,i}=1, \hspace{0.5cm}1\leq i\leq n+1,$$ $$a_{i+1,i}=1, \hspace{0.5cm} 1\leq i\leq n,$$ $$a_{i,j}=x_i^{j-i} \cdot x_{i+1}^{j-i-1}\cdots x_{j-1}, \hspace{0.5cm}j>i,$$ $$a_{i,j}= x_{j+1}\cdot x_{j+2}^2\cdots x_{i-1}^{i-j-1}, \hspace{0.5cm}j<i-1.$$ For example, when $n=3$ the matrix $M$ is $$\left(\begin{matrix} 1 &x_1 &x_1^2x_2 & x_1^3x_2^2x_3\\ 1 & 1 & x_2 & x_2^2x_3\\ x_2 & 1 & 1 & x_3\\ x_2x_3^2 & x_3 & 1 &1 \end{matrix}\right).$$
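
As a numerical sanity check of the identity $f = \det M$ (this is my own sketch, not taken from either paper; `build_M` just transcribes the entry formulas above, with $a_{ij}=\prod_{k=i}^{j-1}x_k^{j-k}$ for $j>i$ and $a_{ij}=\prod_{k=j+1}^{i-1}x_k^{k-j}$ for $j<i-1$):

```python
# Sanity check: f(x_1,...,x_n) equals det(M) for the matrix M defined above.
# My own transcription of the formulas in the question; not from the papers.
import random

def f(x):
    """f(x_1,...,x_n) = prod_{i<=j} (1 - x_i x_{i+1} ... x_j)."""
    n = len(x)
    val = 1.0
    for i in range(n):
        prod = 1.0
        for j in range(i, n):
            prod *= x[j]
            val *= 1.0 - prod
    return val

def build_M(x):
    """(n+1)x(n+1) matrix with the entries a_{ij} from the question (1-based formulas)."""
    n = len(x)
    A = [[0.0] * (n + 1) for _ in range(n + 1)]
    for i in range(1, n + 2):
        for j in range(1, n + 2):
            if j == i or j == i - 1:
                v = 1.0
            elif j > i:
                # a_{ij} = prod_{k=i}^{j-1} x_k^{j-k}
                v = 1.0
                for k in range(i, j):
                    v *= x[k - 1] ** (j - k)
            else:
                # j < i-1: a_{ij} = prod_{k=j+1}^{i-1} x_k^{k-j}
                v = 1.0
                for k in range(j + 1, i):
                    v *= x[k - 1] ** (k - j)
            A[i - 1][j - 1] = v
    return A

def det(A):
    """Cofactor expansion along the first row; fine for these small matrices."""
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** c * A[0][c] * det([row[:c] + row[c + 1:] for row in A[1:]])
               for c in range(len(A)))

random.seed(7)
for n in (1, 2, 3):
    x = [random.uniform(-1, 1) for _ in range(n)]
    assert abs(f(x) - det(build_M(x))) < 1e-9
print("f coincides with det(M) on random samples")
```

For $n=1$ one can also see the identity by hand: $M=\begin{pmatrix}1&x_1\\1&1\end{pmatrix}$ has determinant $1-x_1=f(x_1)$.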

Thanks to this determinantal representation, Hadamard's inequality immediately gives $f\leq (n+1)^{(n+1)/2}$, since every entry of $M$ is bounded by $1$ in absolute value. But it is actually possible to obtain sharper estimates.

In 1977 M. Pohst (https://www.sciencedirect.com/science/article/pii/0022314X77900075) proved that $$f(x_1,\ldots,x_n)\leq 2^{[(n+1)/2]}\hspace{0.5cm}\forall n\leq 11,$$ where the brackets denote the integer (floor) part. His result was extended in 1996 by M. J. Bertin (http://matwbn.icm.edu.pl/ksiazki/aa/aa74/aa7444.pdf), who proved that $$f(x_1,\ldots,x_n)\leq 2^{[(n+1)/2]}\hspace{0.5cm}\forall n.$$ However, while I have no problems with Pohst's proof, I have some trouble with Bertin's.
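
The bound can at least be checked numerically for small $n$ by brute force over a grid in $[-1,1]^n$ (again my own sketch, not an argument from either paper; the grid values are arbitrary sample points):

```python
# Brute-force check of the bound f <= 2^[(n+1)/2] on a small grid in [-1,1]^n.
# My own numerical experiment; the grid values are arbitrary sample points.
import itertools

def f(x):
    """f(x_1,...,x_n) = prod_{i<=j} (1 - x_i x_{i+1} ... x_j)."""
    n = len(x)
    val = 1.0
    for i in range(n):
        prod = 1.0
        for j in range(i, n):
            prod *= x[j]
            val *= 1.0 - prod
    return val

grid = [-1.0, -0.5, 0.0, 0.5, 1.0]
for n in (2, 3, 4):
    best = max(f(x) for x in itertools.product(grid, repeat=n))
    bound = 2.0 ** ((n + 1) // 2)
    print(f"n={n}: grid maximum {best}, bound {bound}")
```

On this grid the bound is attained, e.g. at $(x_1,x_2)=(-1,0)$ for $n=2$, where $f=(1-(-1))(1-0)(1-0)=2=2^{[3/2]}$.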

The key idea of her proof is the following: the maximum of $f$, viewed as the determinant of $M$, is bounded by the maximum of the determinants of matrices similar to $M$ in which each $x_i$ lies in $\{-1,0,1\}$. In other words, the maximum is attained by pushing the $x_i$ to the boundary of their defining intervals.

My problem is that I am not convinced by this argument. I would agree that the maximum of the determinant is attained by pushing the entries of $M$ to the boundary if all of these entries were independent of one another: the determinant is affine (in particular harmonic) in each entry separately, so its maximum over a box of independent entries is attained at a vertex. But this is not the case for $M$: once $x_1,\ldots,x_n$ are fixed, all the remaining entries of the matrix are determined. Just look at the first row, formed by $1, x_1, x_1^2x_2, x_1^3x_2^2x_3$ and so on.
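
Indeed, the naive "push every $x_i$ to an endpoint" heuristic visibly fails here: at every vertex of $\{-1,1\}^n$ (for $n\geq 2$) some factor $1-\prod_{k=i}^j x_k$ vanishes, so $f=0$ there, while the true maximum is strictly positive. A quick check (my own, echoing the observation in the comments; the value $0$ is included in Bertin's set $\{-1,0,1\}$ precisely to avoid this):

```python
# At every vertex of {-1,1}^3, f vanishes (some factor 1 - x_i...x_j is 0),
# while allowing x_i = 0 recovers the maximum 2^[(n+1)/2] = 4 for n = 3.
# My own check, illustrating why Bertin's set {-1,0,1} matters.
import itertools

def f(x):
    """f(x_1,...,x_n) = prod_{i<=j} (1 - x_i x_{i+1} ... x_j)."""
    n = len(x)
    val = 1.0
    for i in range(n):
        prod = 1.0
        for j in range(i, n):
            prod *= x[j]
            val *= 1.0 - prod
    return val

n = 3
corner_max = max(f(x) for x in itertools.product([-1.0, 1.0], repeat=n))
three_pt_max = max(f(x) for x in itertools.product([-1.0, 0.0, 1.0], repeat=n))
print(corner_max, three_pt_max)  # 0.0 and 4.0 for n = 3
```

For instance, $f(-1,0,-1)=(1-(-1))(1-0)(1-0)(1-0)(1-0)(1-(-1))=4$.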

In the end, I cannot extract a complete justification of the estimate from the paper: is there some step or detail that I am missing? Any suggestion is welcome.

  • Typos? The 2nd line of the definition reads "i=1≤i≤n", and the next line should be $x_i^{j-i}$. It would also be helpful to include $M$ explicitly for $n=3$; that is easier to see than reading a formula with cases. – Commented Jan 25, 2019 at 11:39
  • Thank you for the suggestion; I fixed the typos and added the example. – Commented Jan 25, 2019 at 12:47
  • For the given matrix $M$, I share your skepticism, as $f$ evaluates to $0$ at the extreme points. Can you say anything about the matrices similar to $M$? It is possible that locally the similar matrices might do the job. Gerhard "Monomials Can Seem Very Independent" Paseman, 2019.01.25. – Commented Jan 25, 2019 at 17:31

1 Answer


We solved this using a different approach; the preprint is on the arXiv: https://arxiv.org/abs/2101.06163
