I have a few coordinates that form a triangle, and a point defined relative to that triangle. If the triangle's coordinates get translated to a new triangle, I want to calculate the new relative point. How do I do this generally, not only for 2 dimensions but for higher ones too?
# triangle translation
(5, 2) -> (2, -3)
(2, -3) -> (-3, 6)
(-3, 6) -> (6, 5)
# relative point
(5, -7) -> (x, y)
how do I solve for x, y?
EDIT: As I've done more research into basic geometric transformations, I can see that I'm asking for a generalization of all possible transformations of the space. This is not merely a rotation, or a dilation, etc.; this particular example rotates the space and then stretches it in various ways. So what is the generalized solution for calculating spatial transformations? What are the steps to solve for $X$?
EDIT AGAIN: Thinking about this intuitively, I think there might be two answers to the question: one where the 2d space is not rotated and one where it is.
I think the first, straightforward solution is the one I'm looking for, but I don't know how to compute it. If the case can be made that the second, rotational solution is more general, maybe I'd rather use that instead. I'd be happy with either solution.
Can you help me?
Additional Edit:
Here is the starter code I have produced from John Hughes's linear-transformation calculation below.
import numpy as np

def extrapolate(domain_coordinates, result_coordinates, point):
    '''
    Given a set of input coordinates and their resulting
    coordinates post-transformation, and given an additional
    input coordinate, return the location (coordinate) in the
    post-transformation space that corresponds to the most
    logical linear transformation of that space for that
    additional point: "extrapolate where this point ends up".
    '''
    # Add the number 1 to the coordinates for each point
    domain = [(x, y, 1) for x, y in domain_coordinates]
    # Do the same for the "target" coordinates too
    result = [(x, y, 1) for x, y in result_coordinates]
    # Put these coordinates, arranged vertically, into a 3×3 matrix
    domain = np.array(domain).T
    result = np.array(result).T
    # D^−1 is the "matrix inverse"
    inverse = np.linalg.inv(domain)
    # Let M = R D^−1
    matrix = result * inverse  # why do I need M?...
    # Do the same for the extrapolation point
    xpoint = [(x, y, 1) for x, y in [point]]
    xpoint = np.array(xpoint).T
    # extrapolate ???
    extrapolated_point = matrix * xpoint  # this isn't right...
    return extrapolated_point
extrapolate(
    domain_coordinates=[(5, 2), (2, -3), (-3, 6)],
    result_coordinates=[(2, -3), (-3, 6), (6, 5)],
    point=(5, -7))
This code is not working right. For instance, if I insert this test just before the return...
print(domain * np.array([[1],[0],[0]]).T)
print(domain * np.array([[1],[0],[0]]).T * matrix)
it prints...
[[5 0 0]
[2 0 0]
[1 0 0]]
[[ 1.73076923 0. -0.]
[-0.57692308 -0. 0.]
[-0.34615385 0. 0.]]
whereas I would expect it to print...
[[5 0 0]
[2 0 0]
[1 0 0]]
[[ 2 0. 0.]
[-3 0. 0.]
[ 1 0. 0.]]
as per this statement:
Then multiplication by the matrix $M$ will take $A$ to $A'$
Can you show me where I've gone wrong?



Assuming that you're referring to linear transformations (informally: those that take straight lines to straight lines), and that you only want to consider those that are invertible (i.e., that don't send everything to a point, or to a single line), here's your answer. It also works for $n+1$ points in $n$-space (here you have three points -- A, B, C -- in 2-space).
Step 1: add the number 1 to the coordinates for each point, so you have $$ A = \pmatrix{5\\2\\1}, \qquad B = \pmatrix{2\\-3\\1}, \qquad C = \pmatrix{-3\\6\\1}. $$
Put these coordinates, arranged vertically as columns, into a $3 \times 3$ matrix, and call that matrix $D$ (for "domain"): $$ D = \pmatrix{ 5 & 2 & -3 \\ 2 & -3 & 6 \\ 1 & 1 & 1 }. $$
If you multiply $D$ by the three standard coordinate vectors, the results will be your three (rewritten) original points; for example, $$ \pmatrix{ 5 & 2 & -3 \\ 2 & -3 & 6 \\ 1 & 1 & 1 } \pmatrix{0\\1\\0} = \pmatrix{2 \\-3\\1}. $$ By the way, the three standard coordinate vectors are usually called $e_1, e_2, e_3$, where $e_i$ has a "1" in position $i$ and zeroes elsewhere. This generalizes nicely to higher dimensions.
So now we have $De_1 = A, De_2 = B, De_3 = C$, where $A,B,C$ denote your original points, but with the extra "1" at the bottom.
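This identity is easy to check in NumPy (a quick sanity check, nothing beyond standard NumPy; note the `@` matrix-product operator, as opposed to elementwise `*`):

```python
import numpy as np

# D: the original points A, B, C as columns, each with a 1 appended.
D = np.array([[5, 2, -3],
              [2, -3, 6],
              [1, 1, 1]])

e1 = np.array([1, 0, 0])  # standard coordinate vectors
e2 = np.array([0, 1, 0])

# @ is the matrix product; elementwise * would give a different,
# wrong result here.
print(D @ e1)  # column 0 of D: the point A = (5, 2) with a 1 appended
print(D @ e2)  # column 1 of D: the point B = (2, -3) with a 1 appended
```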
Do the same thing for the "target" points to get a matrix $R$ (for "result"): $$ R = \pmatrix{ 2 & -3& 6 \\ -3 & 6 & 5 \\ 1 & 1 & 1} $$
By the same argument, we have $Re_1 = A', Re_2 = B', Re_3 = C'$, where $A'$ denotes the "new" location for $A$, again with a $1$ appended, i.e., $$ A' = \pmatrix{2\\-3\\1}. $$
Now comes the fun part: Let $$ M = RD^{-1}. $$ Then multiplication by the matrix $M$ will take $A$ to $A'$, $B$ to $B'$, and $C$ to $C'$ (since, for example, $MA = RD^{-1}A = Re_1 = A'$). You can also multiply $X$ (your "extra point", but with a "1" appended) by $M$ to get $X'$ (the new location of the "extra point"), from which you'll want to delete the extra "1" at the bottom.
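Worked out for the numbers in this question (computed from the $D$ and $R$ above, as a sanity check you can reproduce): $$ M = RD^{-1} = \pmatrix{0 & 1 & 0 \\ -\frac{19}{13} & -\frac{12}{13} & \frac{80}{13} \\ 0 & 0 & 1}, \qquad MX = M \pmatrix{5 \\ -7 \\ 1} = \pmatrix{-7 \\ \frac{69}{13} \\ 1}, $$ so the extra point $(5, -7)$ lands at $\left(-7, \frac{69}{13}\right) \approx (-7, 5.308)$. You can verify that this $M$ does send $A, B, C$ to $A', B', C'$.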
It's possible you'll find yourself asking "What does $D^{-1}$ mean?" The answer is "it's the matrix inverse," and you can read all about it in any linear algebra book. If you're writing software, a great many matrix libraries have an "inverse" operation. Small warning: computing $D^{-1}$ is generally expensive, so if you're hoping to transform a bunch of "X" points, compute $M = RD^{-1}$ once and then use it over and over, rather than recomputing it each time.
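Applied to the starter code in the question, the fix is that NumPy's `*` operator multiplies arrays elementwise; the matrix product you need is the `@` operator (or `np.matmul`), both when forming $M = RD^{-1}$ and when applying $M$ to a point. A corrected sketch of the same `extrapolate` function:

```python
import numpy as np

def extrapolate(domain_coordinates, result_coordinates, point):
    """Map `point` under the linear transformation that sends the
    domain points to the result points (n+1 points in n-space)."""
    # Append a 1 to each point and arrange the points as columns.
    D = np.array([(*p, 1) for p in domain_coordinates], dtype=float).T
    R = np.array([(*p, 1) for p in result_coordinates], dtype=float).T
    # M = R D^-1.  Note @ (matrix product), not * (elementwise).
    M = R @ np.linalg.inv(D)
    X = np.array([*point, 1.0])  # the extra point, with a 1 appended
    X_new = M @ X                # transformed point, still homogeneous
    return tuple(X_new[:-1])     # drop the trailing 1

print(extrapolate(
    domain_coordinates=[(5, 2), (2, -3), (-3, 6)],
    result_coordinates=[(2, -3), (-3, 6), (6, 5)],
    point=(5, -7)))
```

This sends each triangle vertex to its target vertex and lands the extra point at $\left(-7, \frac{69}{13}\right) \approx (-7, 5.308)$. Per the warning above, if you have many points to transform, compute `M` once and apply it to each point rather than calling the function repeatedly.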