For example, suppose we have a quadratic polynomial $F(x,y)$ and the graph of $F(x,y)=0$ is the union of two intersecting lines. How do we show that $F(x,y)=k(a_1x+b_1y+c_1)(a_2x+b_2y+c_2)$?
My textbook suggests proving the case where the lines are the $x$- and $y$-axes and then applying an affine transformation. I understand this method well.
The problem is that this method works only in that special case. If the lines don't intersect, it fails; if the zero set of a polynomial $P(x,y)$ of degree $n$ is the union of $n$ lines, it fails as well. More generally, I suspect that whenever the zero set of a polynomial contains a line, that line should correspond to a linear factor, and the statement should even generalize to higher dimensions. It ought to have a purely algebraic proof.
My hypothesis is this: suppose we have polynomials (all in the same set of variables) $P, L_1, \dots, L_n$ with $\deg(P)\geq 1$ and $\deg(L_i)=1$, the $L_i$ pairwise non-proportional. Then $(\forall i:\ L_i(\cdots)=0 \implies P(\cdots)=0) \iff L_1\cdots L_n \mid P$.
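This isn't a proof, but a quick sanity check of the "$\Leftarrow$-and-$\Rightarrow$" claim on a concrete example can be done with sympy (the specific lines below are my own choice, not from the question): build $P$ as a product of two distinct linear polynomials and verify that their product divides $P$ with zero remainder, and that factoring recovers the lines.

```python
# Sanity check (not a proof) of the hypothesis on a concrete example.
import sympy as sp

x, y = sp.symbols("x y")
L1 = x + y - 1                  # first line
L2 = 2*x - y + 3                # second line, not proportional to L1
P = sp.expand(L1 * L2)          # quadratic whose zero set is the two lines

# Divisibility direction: multivariate division leaves zero remainder.
quotient, remainder = sp.div(P, L1 * L2, x, y)
print(remainder)                # 0

# Factoring the expanded quadratic recovers the two linear forms.
print(sp.factor(P))
```

Of course this only tests one instance; the point of the question is why containment of the zero set forces divisibility in general.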
Division with remainder requires a Euclidean ring, and a polynomial ring in several variables is not one, so I can't apply it here.
(I'm a university freshman studying analytic geometry.)
There is a theorem stating that any homogeneous polynomial of degree $n$ in two variables can be factored over $\mathbb{C}$ into a product of $n$ homogeneous linear factors. I think the idea behind the question is that if the two lines intersect, there is an invertible affine transformation taking their intersection point to the origin, which makes each linear polynomial homogeneous in the transformed coordinates.
So, in $n$ dimensions, I think your hypothesis is true under the additional constraint that all the lines have a common point: an affine transformation mapping that common point to the origin turns each transformed $L_i$ into a homogeneous linear polynomial, so their product must divide $P$.
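The reduction described above can be illustrated in sympy (the particular lines and common point $(1,2)$ are an assumed example, not from the post): translating the common intersection point to the origin turns each linear polynomial into a homogeneous one, and their product into a homogeneous quadratic.

```python
# Illustration of the homogenization step: two lines through (1, 2).
import sympy as sp

x, y, u, v = sp.symbols("x y u v")
L1 = (x - 1) + (y - 2)            # line through (1, 2)
L2 = 2*(x - 1) - (y - 2)          # another line through (1, 2)

# Affine change of coordinates sending (1, 2) to the origin.
shift = {x: u + 1, y: v + 2}
H1 = sp.expand(L1.subs(shift))    # u + v   (homogeneous, degree 1)
H2 = sp.expand(L2.subs(shift))    # 2*u - v (homogeneous, degree 1)

# The product is a homogeneous quadratic, as the theorem requires.
Q = sp.expand(H1 * H2)
print(sp.Poly(Q, u, v).is_homogeneous)   # True
```

If the lines had no common point, no single translation could make all of them homogeneous at once, which is exactly where this argument breaks down.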