$\begingroup$

Given only the coordinates of $A,B,C,D$, the incenter can be recovered purely by reflections across diagonals and perpendicular constructions, without ever touching angle bisectors.

  1. Take diagonal $AC$.
  2. Reflect $D$ across the line $AC$ to obtain $D'$.
  3. Intersect the line $BD'$ with $AC$; call this point $H_1$.
  4. $H_1$ is exactly the foot of the perpendicular from the incenter $I$ to $AC$.
  5. Therefore, the line through $H_1$ perpendicular to $AC$ passes through $I$.
  6. Repeat symmetrically with the other diagonal $BD$:
  7. Reflect $C$ across $BD$ to get $C'$.
  8. Intersect $AC'$ with $BD$ to obtain $H_2$.
  9. The line through $H_2$ perpendicular to $BD$ also passes through $I$.
  10. Finally, intersect these two perpendicular lines; their intersection is the incenter $I$.
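As a sanity check, here is a minimal numeric sketch of steps 1–10 (an assumed setup: a quadrilateral formed by four tangent lines to the unit circle, so the incenter is the origin):

```python
import numpy as np

def reflect(P, A, C):
    # reflect point P across the line AC
    d = (C - A) / np.linalg.norm(C - A)
    return A + 2*np.dot(P - A, d)*d - (P - A)

def meet(P, Q, R, S):
    # intersection of lines PQ and RS
    M = np.array([[Q[0]-P[0], R[0]-S[0]], [Q[1]-P[1], R[1]-S[1]]])
    t = np.linalg.solve(M, R - P)[0]
    return P + t*(Q - P)

# Tangential quadrilateral: consecutive tangent lines to the unit circle
thetas = [0.3, 1.7, 3.0, 4.9]
def vertex(t1, t2):
    M = np.array([[np.cos(t1), np.sin(t1)], [np.cos(t2), np.sin(t2)]])
    return np.linalg.solve(M, np.array([1.0, 1.0]))
A, B, C, D = (vertex(thetas[i], thetas[(i+1) % 4]) for i in range(4))

perp = lambda v: np.array([-v[1], v[0]])
H1 = meet(B, reflect(D, A, C), A, C)                  # steps 2-4
H2 = meet(A, reflect(C, B, D), B, D)                  # steps 7-8
I = meet(H1, H1 + perp(C - A), H2, H2 + perp(D - B))  # step 10
print(I)    # ≈ (0, 0), the incenter
```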


This defines a rational map $\operatorname{Gr}(1,2)^4 \dashrightarrow \mathbb R^2$, and it is symmetric under permutations of the four input lines.

import numpy as np

# Generate random tangential quadrilateral using random tangency angles
def tangential_quad():
    thetas = np.sort(np.random.uniform(0, 2*np.pi, 4))
    verts = []
    for i in range(4):
        t1, t2 = thetas[i], thetas[(i+1)%4]
        A = np.array([[np.cos(t1), np.sin(t1)],
                      [np.cos(t2), np.sin(t2)]])
        v = np.linalg.solve(A, [1,1])
        verts.append(v)
    return np.array(verts)

def intersect(P, Q, R, S):
    # intersection of lines PQ and RS
    A = np.array([[Q[0]-P[0], R[0]-S[0]],
                  [Q[1]-P[1], R[1]-S[1]]])
    b = np.array([R[0]-P[0], R[1]-P[1]])
    t,u = np.linalg.solve(A,b)
    return P + t*(Q-P)

def test_once():
    A,B,C,D = tangential_quad()
    I = np.array([0.0,0.0])      # incircle center before shift
    O = intersect(A,C,B,D)       # diagonal intersection

    # shift so O is origin
    A,B,C,D = A-O, B-O, C-O, D-O
    I = I-O

    xA,yA = A; xB,yB = B; xC,yC = C; xD,yD = D

    denom = (xA*yB - yA*xB)
    xI = (-2*(xA*xB + yA*yB)/denom)*((xA*yC)/(xA+xC) - (xB*yD)/(xB+xD))
    yI = ( 2*(xA*xB + yA*yB)/denom)*((xA*xC)/(xA+xC) - (xB*xD)/(xB+xD))

    return np.linalg.norm([xI-I[0], yI-I[1]])

# run multiple tests
errs = [test_once() for _ in range(20)]
print(max(errs))

Let the coordinates of the vertices be $A=(x_A, y_A)$, $B=(x_B, y_B)$, $C=(x_C, y_C)$, and $D=(x_D, y_D)$.

Translate so that the diagonals $AC$ and $BD$ meet at the origin. This gives the collinearity relations $$x_A y_C - y_A x_C = 0 \quad \text{and} \quad x_B y_D - y_B x_D = 0.$$ Carrying out the construction of reflecting $D$ across $AC$, finding the foot $H_1$, and repeating for $H_2$ on $BD$, we derive the two lines whose intersection is $I$. Solving the resulting linear system and simplifying with the collinearity conditions, we obtain the coordinates of $I = (x_I, y_I)$:

$$x_I = \frac{-2(x_A x_B + y_A y_B)}{(x_A y_B - y_A x_B)} \left( \frac{x_A y_C}{x_A + x_C} - \frac{x_B y_D}{x_B + x_D} \right)$$

$$y_I = \frac{2(x_A x_B + y_A y_B)}{(x_A y_B - y_A x_B)} \left( \frac{x_A x_C}{x_A + x_C} - \frac{x_B x_D}{x_B + x_D} \right)$$

These formulas represent a rational map from the configuration space of the four vertices to the plane. For any tangential quadrilateral, these coordinates coincide with the incenter. For a kite (where one diagonal is an axis of symmetry), the formulas must be taken as a limit: the diagonals become perpendicular ($x_A x_B + y_A y_B = 0$) while a harmonic term becomes singular ($x_A + x_C = 0$), but the product of the two factors remains finite and still identifies the incenter.

In vector notation, let $u = A \cdot B$ and $v = A \times B$ (the 2D cross product $x_A y_B - y_A x_B$). The formulas can be concisely written as: $$I = \frac{2u}{v} \left( \frac{x_B y_D}{x_B + x_D} - \frac{x_A y_C}{x_A + x_C},\; \frac{x_A x_C}{x_A + x_C} - \frac{x_B x_D}{x_B + x_D} \right)$$


Why this is surprising:

The definition of an incenter is “equal distance to all four sides.” That sounds quadratic and square-rooty. If you try to compute $I$ directly from distance-equality equations, you immediately hit square roots.

Corollary: if the coordinates of the vertices $A, B, C,$ and $D$ are rational numbers, then the coordinates of the incenter $I$ (or its algebraic continuation) must also be rational.
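For a concrete instance (an assumed example: the quadrilateral tangent to the unit circle at the rational points $(1,0)$, $(0,1)$, $(-3/5,4/5)$, $(0,-1)$, translated so the diagonals meet at the origin), exact arithmetic confirms the corollary:

```python
from fractions import Fraction as F

# Tangential quadrilateral with rational vertices, translated so the diagonals
# meet at the origin; its incircle is the unit circle centered at (0, -1/2)
xA, yA = F(1),     F(1, 2)
xB, yB = F(-1, 3), F(1, 2)
xC, yC = F(-3),    F(-3, 2)
xD, yD = F(1),     F(-3, 2)

k = 2*(xA*xB + yA*yB) / (xA*yB - yA*xB)
xI = -k*(xA*yC/(xA + xC) - xB*yD/(xB + xD))
yI =  k*(xA*xC/(xA + xC) - xB*xD/(xB + xD))
print(xI, yI)   # 0 -1/2 -- rational, and exactly the incenter
```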


Question.
When $ABCD$ is a tangential quadrilateral, how do you prove $I$ is exactly the incenter?

$\endgroup$
  • $\begingroup$ Please double-check your $(x_I,y_I)$ formulas (and, ideally, hit them with a Factor operation in some computer algebra system). They don't seem to work as written. Consider: Position the quad so that its diagonals meet at the origin. Then $x_A y_C-y_A x_C=x_By_D-y_B x_D=0$, eliminating chunks of the expressions. But, then, everything else immediately cancels as well (without even having to substitute $x_{D'}$, etc), so that $(x_I,y_I)=(0,0)$ (the intersection of the diagonals), not your incenter. $\endgroup$ Commented Jan 2 at 14:08
  • $\begingroup$ It's worth noting that when $B$ and $D$ are mutual reflections in diagonal $AC$ your $D'$ and $B$ coincide, making line $BD'$ (and thus $H_1$) undefined. So, your construction is a bit problematic for the simplest of tangential quadrilaterals: rhombi and kites. Nevertheless, I believe that —setting aside these kinds of degeneracies— your $I$ does indeed coincide with the incenter when the quad is tangential. $\endgroup$ Commented Jan 2 at 14:10
  • $\begingroup$ @Blue I have simplified the expressions as you suggested. $\endgroup$ Commented Jan 3 at 0:56

2 Answers

$\begingroup$

I'll start with some alternative renderings of OP's coordinates for $I$.

Let the diagonals of $\square ABCD$ meet at $Q$, and take $\angle AQB=2\theta$ (oriented counter-clockwise), and suppose that the ray making angle $\phi$ with positive $x$-axis bisects $\angle AQB$. We can write $$ A = Q + a\left(\cos(\phi-\theta),\sin(\phi-\theta)\right) \quad B = Q + b\left(\cos(\phi+\theta),\sin(\phi+\theta)\right) \\ C = Q + c\left(\cos(\phi-\theta),\sin(\phi-\theta)\right) \quad D = Q + d\left(\cos(\phi+\theta),\sin(\phi+\theta)\right)$$ Then, after following OP's construction of generalized incenter $I$, we find $I=(x_I,y_I)$ where $$\begin{align} x_I&\;=\;x_Q\;+\;2\cot2\theta\;\left( -\frac{ac}{a+c}\sin(\phi-\theta) + \frac{bd}{b+d} \sin(\phi+\theta)\right) \tag1\\[4pt] y_I &\;=\;y_Q\;+\;2\cot2\theta\;\left( \phantom{-}\frac{ac}{a+c} \cos(\phi-\theta) - \frac{bd}{b+d} \cos(\phi+\theta) \right) \end{align}$$ So, $$\begin{align} I-Q&\;=\;2\cot2\theta\;\operatorname{rot}_{\pi/2}\left( \frac{A-C}{\dfrac{a}{c} - \dfrac{c}{a}} - \frac{B-D}{\dfrac{b}{d} - \dfrac{d}{b}}\right) \tag2\\[4pt] &\;=\;2\cot\angle AQB\;\operatorname{rot}_{\pi/2}\left( \frac{A-C}{f\left(\dfrac{|\triangle ABD|}{|\triangle CBD|}\right) } - \frac{B-D}{f\left(\dfrac{|\triangle BCA|}{|\triangle DCA|}\right)} \right) \tag3 \end{align}$$ and thus $$\begin{align} \overrightarrow{QI}&\;=\;\frac{\overrightarrow{AC}\cdot\overrightarrow{BD}}{\overrightarrow{AC}\times\overrightarrow{BD}} \;\operatorname{rot}_{\pi/2}\left( -\frac{\overrightarrow{AC}}{ f\left(\dfrac{\overrightarrow{AB}\times\overrightarrow{AD}}{\overrightarrow{CB}\times\overrightarrow{CD}}\right) } + \frac{\overrightarrow{BD}}{ f\left(\dfrac{\overrightarrow{BC}\times\overrightarrow{BA}}{\overrightarrow{DC}\times\overrightarrow{DA}}\right) } \right) \end{align} \tag4 $$ where

  • $\operatorname{rot}_{\pi/2}(x,y)=(-y,x)$, counter-clockwise rotation by $\pi/2$ about the origin.

  • $|\triangle PQR|$ is the signed area of $\triangle PQR$ based on the orientation of the path $P\to Q\to R$. Consequently, the ratio $|\triangle ABD|/|\triangle CBD|$ (and its reciprocal) is negative when vertices $A$ and $C$ are on opposite sides of diagonal $BD$.

  • $u\times v := x_u y_v - y_u x_v$

  • $f(x) = \dfrac12\left(x-\dfrac1x\right)$.

Note that, when $Q$ aligns with the origin, $(1)$ can be written $$I\;=\;2\,\frac{x_A x_B + y_A y_B}{x_A y_B - y_A x_B}\;\left( -\frac{x_Ay_C}{x_A+x_C} + \frac{x_By_D}{x_B+x_D}, \frac{x_Ax_C}{x_A+x_C} - \frac{x_Bx_D}{x_B+x_D} \right) \tag{1'}$$ which agrees with OP's revised formula. (I probably would've opted to use all-$y$s in the first coordinate. :)


Complex Harmonics. The appearance of $\operatorname{rot}_{\pi/2}$ suggests migrating from the Cartesian plane to the Argand plane, where the operation amounts to multiplication by $i$. So, let's take our quadrilateral vertices to be complex numbers $$ A = Q+a\exp i(\phi-\theta) \quad B = Q+b\exp i(\phi+\theta) \\ C = Q+c\exp i(\phi-\theta) \quad D = Q+d\exp i(\phi+\theta)$$ where $a$, $b$, $c$, $d$ are allowed to be negative, so that we have $\dfrac{a}{c} = \dfrac{A-Q}{C-Q}$ and $\dfrac{b}{d}=\dfrac{B-Q}{D-Q}$. Also, $$\cot2\theta= -i\lambda \qquad \lambda := \frac{ (A - C) \overline{(B - D)}+\overline{(A - C)} (B - D)}{ (A - C)\overline{(B - D)}-\overline{(A - C)} (B - D)} \tag{5}$$

Consequently, we can rewrite $(2)$ as

$$\begin{align} I-Q &\;=\;\lambda\; \left( \frac2{\dfrac1{A-Q} + \dfrac1{C-Q}} - \frac2{\dfrac1{B-Q} + \dfrac1{D-Q}} \right) \\ &\;=\; \lambda\;\Bigl(\, H(A-Q,C-Q) - H(B-Q,D-Q) \,\Bigr) \tag{2'} \end{align}$$

where $H(p,q)$ is the harmonic mean of $p$ and $q$. Nifty! $\square$
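A quick numeric check of $(2')$ on a tangential quadrilateral (an assumed example: a quad tangent to the unit circle centered at $-\tfrac12 i$, whose diagonals happen to meet at the origin):

```python
import numpy as np

def H(p, q):
    # harmonic mean of two complex numbers
    return 2*p*q/(p + q)

# Tangential quad: its incircle is the unit circle centered at -0.5i
A, B, C, D = 1+0.5j, -1/3+0.5j, -3-1.5j, 1-1.5j

# Diagonal intersection Q (solve A + t(C-A) = B + s(D-B) as a real 2x2 system)
M = np.array([[(C-A).real, (B-D).real], [(C-A).imag, (B-D).imag]])
t = np.linalg.solve(M, [(B-A).real, (B-A).imag])[0]
Q = A + t*(C - A)

# lambda from (5), then I from (2')
AC, BD = A - C, B - D
lam = (AC*BD.conjugate() + AC.conjugate()*BD) / (AC*BD.conjugate() - AC.conjugate()*BD)
I = Q + lam*(H(A-Q, C-Q) - H(B-Q, D-Q))
print(I)   # ≈ -0.5j, the incenter
```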


Tangential Quads. Unfortunately, the above expressions don't make it at all "obvious" that the generalized incenter $I$ reverts to the incenter-proper when $\square ABCD$ is tangential. Here's a complex-algebraic argument invoking $(2')$. (The intermediate calculations are very messy, and are best handled with a computer algebra system. That said, the relative simplicity of final forms of various elements suggests that there should be a cleaner way to them.)

Let the angles of $\square ABCD$ be $2a$, $2b$, $2c$, $2d$ (to save myself from having to type $\alpha$, $\beta$, $\gamma$, $\delta$; I won't be using $a$, $b$, $c$, $d$ as lengths here); note $a+b+c+d=\pi$. Taking the quad to be tangential to the origin-centered circle of radius $r$, we can assign complex values to the vertices thusly

$$ A = \frac{r}{\sin a} \quad B = -\frac{r \exp(-i(a+b))}{\sin b} \quad C = -\frac{r \exp(-i(b-d))}{\sin c} \quad D = -\frac{r \exp(i(d+a))}{\sin d} $$

Then we find that diagonals $AC$ and $BD$ meet at $$Q = \frac{r}{\sin(a+c)} (\cos c - \cos a \exp(-i(b-d))) \tag6$$ The harmonic means required by $(2')$, and parameter $\lambda$, evaluate thusly $$\begin{align} H(A-Q,C-Q) &= \phantom{i}2 r \frac{\cos a \cos c}{\sin(a+c)\sin(a-c)} ( \sin c + \sin a \exp(-i(b-d)) ) \tag7\\[4pt] H(B-Q,D-Q) &= 2 i r \frac{\cos b \cos d}{\sin(b+d)\sin(b-d)}( \cos c + \cos a \exp(-i(b-d)) ) \tag8\\[4pt] \lambda &= -\frac12i\;\frac{\sin(a-c)\sin(b-d)}{\sin(a+c)\sin(a+b)\sin(b+c)} \tag9 \end{align}$$ Then, one can verify that the right-hand side of $(2')$ reduces to $-Q$, so that $I = 0$: OP's generalized incenter coincides with the incenter-proper of a tangential quadrilateral. $\square$
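The vertex assignment and the final claim can be checked numerically (a sketch with assumed generic half-angles; both the tangency to the incircle and $I=0$ are verified, with $I$ computed via $(2')$ and $(5)$):

```python
import numpy as np

# Half-angles a, b, c, d with a + b + c + d = pi (generic choice, no kite symmetry)
a, b, c, d = 0.5, 0.8, 0.7, np.pi - 2.0
r = 1.0

# Vertex assignment from the text; the incircle is the origin-centered circle of radius r
A = r/np.sin(a)
B = -r*np.exp(-1j*(a+b))/np.sin(b)
C = -r*np.exp(-1j*(b-d))/np.sin(c)
D = -r*np.exp(1j*(d+a))/np.sin(d)

# Tangency check: every side line lies at distance r from the origin
dist0 = lambda P, S: abs((np.conj(P)*S).imag)/abs(S - P)
print([round(dist0(P, S), 12) for P, S in [(A,B),(B,C),(C,D),(D,A)]])

# Generalized incenter via (2'): it should land on the incircle center, i.e. 0
M = np.array([[(C-A).real, (B-D).real], [(C-A).imag, (B-D).imag]])
t = np.linalg.solve(M, [(B-A).real, (B-A).imag])[0]
Q = A + t*(C - A)
AC, BD = A - C, B - D
lam = (AC*np.conj(BD) + np.conj(AC)*BD)/(AC*np.conj(BD) - np.conj(AC)*BD)
Hm = lambda p, q: 2*p*q/(p + q)
I = Q + lam*(Hm(A-Q, C-Q) - Hm(B-Q, D-Q))
print(abs(I))   # ≈ 0: OP's generalized incenter is the incenter-proper
```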


Note. Revising/extending a comment to the question ...

In the algebraic argument about tangential quads, everything "just cancels" in the end, neatly disguising that individual elements can be problematic (specifically, when they involve zero denominators). It's worth noting that, in some cases, OP's geometric construction is itself inherently problematic.

For instance, if vertices $B$ and $D$ are mutual reflections in diagonal $AC$, then OP's $D'$ coincides with $B$, making line $BD'$ (and the point $H_1$ where it meets $AC$) indeterminate; if they aren't reflections in the diagonal but are reflections in intersection point $Q$, then $BD'$ is parallel to $AC$, sending $H_1$ to infinity.

Also, limiting behavior is a bit tricky. For example, consider the universe of (non-degenerate) quadrilaterals whose diagonals are perpendicular and whose opposite vertices are not equidistant from $Q$. By the geometric construction, $H_1=H_2=I=Q$; likewise, as $\cot2\theta=\cot\angle AQB=\cot\frac\pi2=0$, the equations above imply $I=Q$. This remains the case even as, say, $C$ approaches the reflection of $A$ in $BD$ (while keeping the quad within our universe of consideration). However, the limiting figure of that approach is a (non-rhombus) kite, a tangential quad whose incenter-proper (by the previous section) coincides with generalized incenter $I$, yet $I\neq Q$. It's not clear (to me) how to resolve this issue intuitively.

I'll stop typing now!

$\endgroup$
  • $\begingroup$ For a kite, the formula's result is path-dependent. A limit of tangential quadrilaterals (AB+CD=AD+BC) converges to the incenter. A limit of ex-tangential quadrilaterals (AB+BC=AD+DC) converges to the excenter. A limit of orthodiagonal quadrilaterals converges to diagonal intersection. For a generic sequence, the limit is unstable and can lie anywhere. $\endgroup$ Commented Jan 5 at 12:09
  • $\begingroup$ @hbghlyj: "For a kite, the formula's result is path-dependent." Indeed. This kind of thing seems a bit problematic for an "incenter map", but it's also fascinating. :) ... Now I'm curious about cases of non-kites that have one diagonal bisecting the other. (The ones that send $H_1$ to infinity.) ... Anyway, neat problem! I'm delighted to have come across the "complex harmonic" representation. I wonder if the right-hand side of $(2')$ evaluating to $-Q$ might amount to a (novel?) characterization of tangential quads. That's in puzzle for another day. :) ... Cheers! $\endgroup$ Commented Jan 5 at 14:32
$\begingroup$

When a system of two quadratic equations in $x$ has exactly one common root, the 1D kernel of the Sylvester matrix provides a rational expression for the root. $$\begin{cases}x^{2}+a_{1}x+a_{0}=0\\ x^{2}+b_{1}x+b_{0}=0\end{cases}$$ Subtracting, $$(a_1-b_1)x+a_0-b_0=0\implies x=\frac{a_0-b_0}{b_1-a_1}$$


For two monic quadratic polynomials $P(x)=x^{2}+a_{1}x+a_{0}$ and $Q(x)=x^{2}+b_{1}x+b_{0}$, the Sylvester matrix is the $4\times 4$ matrix $$S = \begin{pmatrix} 1 & a_1 & a_0 & 0 \\ 0 & 1 & a_1 & a_0 \\ 1 & b_1 & b_0 & 0 \\ 0 & 1 & b_1 & b_0 \end{pmatrix}$$

If $P(x)$ and $Q(x)$ have exactly one common root (which is a simple root of at least one polynomial), the resultant is zero, and the Sylvester matrix is singular. Its kernel is one-dimensional, and the vectors in it are proportional to the powers of the common root. Every column of the adjugate matrix $\operatorname{Adj} S$ is proportional to the same vector $[x^3,x^2,x,1]^T$, so consecutive ratios evaluate to the same scalar $$\frac{(\operatorname{Adj} S)_{1,i}}{(\operatorname{Adj} S)_{2,i}}=\frac{(\operatorname{Adj} S)_{2,i}}{(\operatorname{Adj} S)_{3,i}}=\frac{(\operatorname{Adj} S)_{3,i}}{(\operatorname{Adj} S)_{4,i}}=x$$ for every column $i=1,2,3,4$.
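A tiny numeric illustration (with the assumed example $P=(x-2)(x-3)$ and $Q=(x-2)(x+1)$, which share the root $x=2$): the subtraction formula recovers the root, and the right kernel of $S$ is proportional to $[x^3,x^2,x,1]^T$.

```python
import numpy as np
from scipy.linalg import null_space

# P = (x-2)(x-3) = x^2 - 5x + 6,  Q = (x-2)(x+1) = x^2 - x - 2
a1, a0 = -5.0, 6.0
b1, b0 = -1.0, -2.0

S = np.array([[1, a1, a0, 0],
              [0, 1,  a1, a0],
              [1, b1, b0, 0],
              [0, 1,  b1, b0]])

x = (a0 - b0)/(b1 - a1)
print(x)                # 2.0, the common root

k = null_space(S)[:, 0]
print(k / k[-1])        # proportional to [x^3, x^2, x, 1] = [8, 4, 2, 1]
```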


Statement being verified:

For the Sylvester matrix $$S= \begin{pmatrix} 1 & a_1 & a_0 & 0\\ 0 & 1 & a_1 & a_0\\ 1 & b_1 & b_0 & 0\\ 0 & 1 & b_1 & b_0 \end{pmatrix},$$ let $$x=\frac{a_0-b_0}{\,b_1-a_1\,}.$$ Claim. For each column $i=1,2,3,4$, $$\frac{(\operatorname{Adj}S)_{1,i}}{(\operatorname{Adj}S)_{2,i}} = \frac{(\operatorname{Adj}S)_{2,i}}{(\operatorname{Adj}S)_{3,i}} = \frac{(\operatorname{Adj}S)_{3,i}}{(\operatorname{Adj}S)_{4,i}} = x$$ in the quotient ring $$\mathbb{Q}[a_0,a_1,b_0,b_1]\big/\bigl(\det S\bigr).$$ What was computed: For each column and each consecutive ratio, we checked the cross-multiplied form $$(\operatorname{Adj}S)_{k,i}(b_1-a_1) - (\operatorname{Adj}S)_{k+1,i}(a_0-b_0),$$ and reduced this polynomial modulo the ideal $(\det S)$ using a Gröbner basis.

# Verify symbolically that (Adj S) column ratios equal x modulo det(S)
# by reducing the differences with a Groebner basis containing det(S)

import sympy as sp

# symbols
a0, a1, b0, b1 = sp.symbols('a0 a1 b0 b1')

# matrix
S = sp.Matrix([
    [1, a1, a0, 0],
    [0, 1,  a1, a0],
    [1, b1, b0, 0],
    [0, 1,  b1, b0]
])

Adj = S.adjugate()

# determinant
detS = sp.factor(S.det())

# target x
x = (a0 - b0)/(b1 - a1)

# polynomial ring generators
G = sp.groebner([detS], a0, a1, b0, b1, order='lex')

results = {}

for j in range(4):
    c = Adj.col(j)

    # three consecutive ratios
    diffs = []
    for (num, den) in [(c[0], c[1]), (c[1], c[2]), (c[2], c[3])]:
        # cross-multiplied difference: num/den - x = 0  <=>  num*(b1-a1) - den*(a0-b0)
        poly = sp.expand(num*(b1 - a1) - den*(a0 - b0))
        # reduce modulo det(S)
        reduced = G.reduce(poly)[1]
        diffs.append(sp.factor(reduced))

    results[j+1] = diffs

detS, results

SymPy returns:

{1: [0, 0, 0], 
 2: [0, 0, 0], 
 3: [0, 0, 0], 
 4: [0, 0, 0]}

Similarly, when the system of three quadratic equations in $x,y$ has exactly one common root, the 1D kernel of the Macaulay Matrix (the multivariate generalization of the Sylvester matrix) provides a rational expression for the root.

Let $L_i(x, y) = A_i x + B_i y + C_i = 0$ be the four side lines and $N_i = A_i^2 + B_i^2$ be the squared norms of their normal vectors (so the distance from $(x,y)$ to side $i$ is $|L_i(x,y)|/\sqrt{N_i}$). Define three quadratics with rational coefficients: $$f_1 = N_2 L_1^2 - N_1 L_2^2, \quad f_2 = N_3 L_2^2 - N_2 L_3^2, \quad f_3 = N_4 L_3^2 - N_3 L_4^2$$ Each $f_k$ has the form $a_{k1}x^2 + a_{k2}xy + a_{k3}y^2 + a_{k4}x + a_{k5}y + a_{k6} = 0$.

The Macaulay Matrix

We construct a matrix $M$ of size $9 \times 10$ by shifting each polynomial $f_k$ by $x$ and $y$. The 10 columns correspond to the monomials $[x^3, x^2y, xy^2, y^3, x^2, xy, y^2, x, y, 1]$. For each $f_k$, we generate 3 rows:

  1. Row 1 ($x \cdot f_k$): $[a_{k1}, a_{k2}, a_{k3}, 0, a_{k4}, a_{k5}, 0, a_{k6}, 0, 0]$
  2. Row 2 ($y \cdot f_k$): $[0, a_{k1}, a_{k2}, a_{k3}, 0, a_{k4}, a_{k5}, 0, a_{k6}, 0]$
  3. Row 3 ($f_k$): $[0, 0, 0, 0, a_{k1}, a_{k2}, a_{k3}, a_{k4}, a_{k5}, a_{k6}]$

Rational Formula

For a tangential quadrilateral, the matrix $M$ has rank 9, meaning its null space (kernel) is one-dimensional. Let $\mathbf{k} = [k_9, k_8, \dots, k_1, k_0]^T$ be a non-zero vector in the kernel. By Cramer’s rule, the components $k_i$ are the $9 \times 9$ minors of $M$. The incenter $(x_I, y_I)$ is: $$x_I = \frac{k_2}{k_0} = \frac{\text{Minor}_{x}}{\text{Minor}_{1}}, \quad y_I = \frac{k_1}{k_0} = \frac{\text{Minor}_{y}}{\text{Minor}_{1}}$$

Numerical Verification

import numpy as np
from scipy.linalg import null_space

rng = np.random.default_rng(0)
Ix, Iy = rng.uniform(-3,3), rng.uniform(-3,3)
r = rng.uniform(0.5,2.0)
thetas = np.sort(rng.uniform(0, 2*np.pi, 4))

lines = []
for t in thetas:
    A, B = np.cos(t), np.sin(t)
    C = -(A*Ix + B*Iy + r)
    lines.append((A,B,C))

def get_coeffs(l1, l2):
    A1, B1, C1 = l1
    A2, B2, C2 = l2
    return [
        A1**2 - A2**2,
        2*(A1*B1 - A2*B2),
        B1**2 - B2**2,
        2*(A1*C1 - A2*C2),
        2*(B1*C1 - B2*C2),
        C1**2 - C2**2
    ]

f1 = get_coeffs(lines[0], lines[1])
f2 = get_coeffs(lines[1], lines[2])
f3 = get_coeffs(lines[2], lines[3])

def get_macaulay_rows(f):
    r1 = [f[0], f[1], f[2], 0, f[3], f[4], 0, f[5], 0, 0]
    r2 = [0, f[0], f[1], f[2], 0, f[3], f[4], 0, f[5], 0]
    r3 = [0, 0, 0, 0, f[0], f[1], f[2], f[3], f[4], f[5]]
    return [r1, r2, r3]

M = np.array(get_macaulay_rows(f1) + get_macaulay_rows(f2) + get_macaulay_rows(f3))
print("Matrix Rank:", np.linalg.matrix_rank(M))

K = null_space(M)
print("Kernel dimension (nullity):", K.shape[1])

if K.shape[1] == 1:
    k = K[:, 0]
    xi = k[7]/k[9]
    yi = k[8]/k[9]
    print(f"Recovered Incenter: ({xi}, {yi})")
    print(f"Actual Incenter:   ({Ix}, {Iy})")

3D Generalization

For 5 planes (4 quadratic equations) in 3D, the system becomes uniquely determined (nullity 1) when expanded to degree 4. This requires 35 columns (all monomials from $x^4$ down to 1) and 40 rows (each of the 4 equations shifted by the 10 possible monomials of degree $\leq 2$).

The following numerical verification confirms that, in 3D, the point equidistant from 5 planes can be expressed as a rational function of the planes' coefficients.

import numpy as np
from scipy.linalg import null_space

def get_monomials(max_deg):
    """Generates all 3D monomials (i,j,k) s.t. i+j+k <= max_deg"""
    mons = []
    for d in range(max_deg, -1, -1):
        for i in range(d, -1, -1):
            for j in range(d - i, -1, -1):
                mons.append((i, j, d - i - j))
    return mons

# 1. Generate 5 random planes tangent to a sphere
rng = np.random.default_rng(1)
Ix, Iy, Iz = rng.uniform(-2, 2, 3)
r = rng.uniform(0.5, 1.5)
normals = [n/np.linalg.norm(n) for n in rng.normal(size=(5, 3))]
planes = [np.append(n, -(n[0]*Ix + n[1]*Iy + n[2]*Iz + r)) for n in normals]

# 2. Define the 4 base quadratic equations f_i = L_0^2 - L_i^2
eqs_base = []
L0 = planes[0]
for i in range(1, 5):
    Li = planes[i]
    f = {
        (2,0,0): L0[0]**2 - Li[0]**2, (0,2,0): L0[1]**2 - Li[1]**2, (0,0,2): L0[2]**2 - Li[2]**2,
        (1,1,0): 2*(L0[0]*L0[1] - Li[0]*Li[1]), (1,0,1): 2*(L0[0]*L0[2] - Li[0]*Li[2]), 
        (0,1,1): 2*(L0[1]*L0[2] - Li[1]*Li[2]), (1,0,0): 2*(L0[0]*L0[3] - Li[0]*Li[3]),
        (0,1,0): 2*(L0[1]*L0[3] - Li[1]*Li[3]), (0,0,1): 2*(L0[2]*L0[3] - Li[2]*Li[3]),
        (0,0,0): L0[3]**2 - Li[3]**2
    }
    eqs_base.append(f)

# 3. Build the Macaulay Matrix (Degree 4 expansion)
# Basis: 35 monomials. Rows: 4 eqs * 10 shifts
mons_deg4 = get_monomials(4)
mon_to_idx = {m: i for i, m in enumerate(mons_deg4)}
shifts = get_monomials(2) # 10 shifts

M = []
for f in eqs_base:
    for s in shifts:
        row = np.zeros(35)
        for m, val in f.items():
            target = (m[0]+s[0], m[1]+s[1], m[2]+s[2])
            row[mon_to_idx[target]] = val
        M.append(row)
M = np.array(M)

# 4. Extract Incenter
K = null_space(M)
print(f"Matrix: {M.shape}, Rank: {np.linalg.matrix_rank(M)}, Nullity: {K.shape[1]}")

if K.shape[1] == 1:
    k = K[:, 0]
    idx_x, idx_y, idx_z, idx_1 = mon_to_idx[(1,0,0)], mon_to_idx[(0,1,0)], mon_to_idx[(0,0,1)], mon_to_idx[(0,0,0)]
    res = (k[idx_x]/k[idx_1], k[idx_y]/k[idx_1], k[idx_z]/k[idx_1])
    print(f"Recovered: {res}\nActual:    {(Ix, Iy, Iz)}")

This raises the question:

Can you simplify the rational expression for the point equidistant to 5 planes, expressed in a suitable coordinate system, to reveal a geometric construction analogous to the 2D case?

$\endgroup$
