In exams and schoolwork, teachers allow us to use unequal units on the $x$- and $y$-axes when the values are very different in size, for ease of drawing and scaling. For example, if the $x$ values are $1, 2, 3, \ldots$ and the $y$ values are $100, 200, 300, \ldots$, then we are encouraged to take, say, $1$ small square as a unit on the $x$-axis but $10$ small squares as a unit on the $y$-axis.
My question is: how mathematically rigorous is this method? Aren't we distorting the graph? The length of $1$ unit on the $x$-axis and the length of $1$ unit on the $y$-axis are unequal; in the example above, a unit on the $y$-axis is $10$ times as long as a unit on the $x$-axis. So aren't we drawing wrong graphs?
There is no problem with drawing graphs with different scales. The key is that you can't always use the same interpretations of what the graph MEANS.
Following your example, if I draw a line through the origin at a $45$-degree angle up from the positive $x$-axis, I get the line $y=10x$. It looks exactly the way $y=x$ looks in standard Cartesian coordinates with equal scales. So the visual intuition of "how steep is this slope?" has to shift along with the scale.
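To make that concrete, here is a short worked calculation (a sketch, reading the scale in the question as: one small square represents $1$ unit of $x$ but $10$ units of $y$). A point $(x, y)$ is then drawn at the paper position $(x, y/10)$, so the slope you actually see on the paper is
$$\text{drawn slope} \;=\; \frac{\Delta(y/10)}{\Delta x} \;=\; \frac{1}{10}\cdot\frac{\Delta y}{\Delta x}.$$
A line drawn at $45^\circ$ has drawn slope $1$, which corresponds to a true slope of $\Delta y/\Delta x = 10$, i.e. the line $y = 10x$.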
The thing you have to grapple with is what you mean by "wrong" or "right". In math, we ask our tools to be internally logically consistent (different mathematical setups lead to different outcomes, just as your different scales change the angles at which lines of a given slope are drawn, but each setup is fine within its own system), and then to be at least one of the following: useful, beautiful, or interesting. There is no universally, objectively correct way to define math.
To give a simple example, there are number systems in which $1+1=0$! These aren't the usual real numbers you are used to, but they satisfy all the usual arithmetic laws (that is, they form a field).
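One standard such system (a sketch of one example, not necessarily the one this answer had in mind) is the two-element field $\mathbb{F}_2 = \{0, 1\}$, where addition and multiplication are carried out mod $2$:
$$\begin{array}{c|cc} + & 0 & 1 \\ \hline 0 & 0 & 1 \\ 1 & 1 & 0 \end{array} \qquad\qquad \begin{array}{c|cc} \times & 0 & 1 \\ \hline 0 & 0 & 0 \\ 1 & 0 & 1 \end{array}$$
All the field axioms (commutativity, associativity, distributivity, identities, and inverses) hold here, yet $1+1 = 2 \equiv 0 \pmod 2$.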