Primary/Elementary Pedagogy: What is the rationale for the absent '+' in mixed fractions?


Why are elementary students taught to represent one and a half as 1 1/2 rather than 1 + 1/2?

This mode of expression seems standard throughout at least North America. I think it is bad pedagogy for a couple of reasons:

First, doing it the 'correct' way would give students tonnes of personal experience equating the English word 'and' with the mathematical symbol '+'. This is a good thing, since it fosters the notion that mathematical statements (or in this case expressions) have tangible meaning. Students who understand what a half is do well with understanding what 3 and a half is, and I expect that representing it as 3 + 1/2 can do a great deal to cement the 'true meaning' of addition. Consider that the previous experience of students at this age is dominated by calculating 8+4 either by counting 9-10-11-12 or by rote, neither of which is all that connected to the physical reality of addition.

Second, students will reach a point where they are expected to abide by the convention that ab represents a * b. Strong students will do OK with this, apart from a few early mistakes during an adjustment period. But students struggling in math, especially those experiencing phobia or anxiety around the subject, will have little choice but to understand this shift in notation as yet another in a seemingly unending string of indications that what's expected from them in math class is entirely arbitrary, changes from one teacher to another, and is some sort of arcane magic. The horrible thing is that in this case they're correct to interpret it this way!
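To make the clash concrete: under the juxtaposition convention, $3x$ means $3 \cdot x$, so a student extrapolating consistently would read $$3\frac{1}{2} = 3 \cdot \frac{1}{2} = \frac{3}{2},$$ whereas the mixed-number convention instead dictates $$3\frac{1}{2} = 3 + \frac{1}{2} = \frac{7}{2}.$$ The same marks on the page denote different numbers depending on which convention is in force.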

I have to be missing something. What advantages does the current scheme provide?

There are 4 answers below.

---
In my opinion, there is no good reason to write "mixed numbers" without a "$+$" sign. When I teach college students or tutor high school students and notice that notation in their work, I insist that they retire the habit, for both of the reasons you mention in your question.

Furthermore, it fosters the sense that an expression like $\frac{17}{5}$ is not a "real" fraction and that you always need to perform division to write it as $3 + \frac{2}{5}$. This is detrimental, as it obscures the obvious fact that $$ 5 \cdot \frac{17}{5} = 17 $$ by making it look like the less obvious $$ 5 \cdot \left( 3 + \frac{2}{5} \right) = 17. $$
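The point that an improper fraction is already a perfectly good number can be checked mechanically. As a minimal sketch using Python's standard-library `fractions` module (the variable names are my own):

```python
from fractions import Fraction

# An "improper" fraction is an ordinary rational number;
# nothing forces us to divide it out into mixed form.
improper = Fraction(17, 5)
print(5 * improper)  # 17

# Writing it in mixed form changes nothing about its value.
mixed = 3 + Fraction(2, 5)
print(mixed == improper)  # True
```

Exact rational arithmetic makes the identity $5 \cdot \frac{17}{5} = 17$ immediate, with no intermediate "convert to a mixed number" step.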

---
Discounting practical aspects and tradition, as mentioned in the comments, there are no advantages.

On the other hand, ambiguity is simply a fact of life, even within mathematics.

http://en.wikipedia.org/wiki/Ambiguity

http://www.xamuel.com/ambiguous-math/

---
Well, since we're being pedantic: $1 + \frac 1 2 \text{ grams } \ne \frac 3 2 \text{ grams } = \left(1 + \frac 1 2\right) \text{ grams }$. Also, the expression $1 + \frac 1 2$ is not the same as the expression $1 \frac 1 2$; the first is an addition of two rational values, while the second is a single rational value written differently. In algorithmic formal logic, this sort of difference can be quite significant.
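The expression-versus-value distinction above is visible in any formal grammar. As an illustrative sketch, Python's standard-library `ast` module parses `1 + 1/2` into a compound addition node, while a single numeral parses into one literal:

```python
import ast

# "1 + 1/2" is an addition node with two operand subexpressions.
expr = ast.parse("1 + 1/2", mode="eval").body
print(type(expr).__name__)  # BinOp

# "1.5" is a single literal; there is no operation to perform.
lit = ast.parse("1.5", mode="eval").body
print(type(lit).__name__)  # Constant
```

The two denote the same value, but they are different syntactic objects, which is exactly the distinction the answer is drawing.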

Since this is an opinion question, I'll vote for the actual culprit being the common omission of the multiplication sign, which leads to plenty of other ambiguities.

---
Here's one (small) advantage: the mixed number $$-3\frac{1}{2}$$ implicitly means $$-\left(3+\frac{1}{2}\right),$$ so the notation $-3 \frac{1}{2}$ allows you to omit a pair of brackets.

(For what it's worth, I don't really think this is enough to justify the notation, and your second objection is, in my opinion, a very strong one.)