This question showed up in an exam for 11-year-olds:
Alice bought some chocolate and vanilla muffins. $\frac{3}{4}$ of her muffins are chocolate and the rest are vanilla. She then bought another 60 vanilla muffins. In the end, $\frac{1}{3}$ of her muffins are chocolate. How many muffins did Alice have at first?
I'm sure everyone attacks this problem with algebra, but the exam is aimed at 11-year-olds, and my understanding is that at that age students use more elementary methods involving drawing figures. However, one student came up with the following method that nonetheless gives the correct answer (the usual algebra, shown after the list, confirms it):
- Originally $\frac{1}{4}$ of Alice's muffins are vanilla. Finally $\frac{2}{3}$ of her muffins are vanilla.
- The difference is $\frac{2}{3} - \frac{1}{4} = \frac{5}{12}$
- 60 muffins were added, so we divide 60 by 5 (the numerator above) and multiply by 4 (the denominator of the original vanilla fraction $\frac{1}{4}$): $\frac{60}{5} \times 4 = 48$.
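For reference, the standard algebraic solution confirms 48: writing $N$ for the number of muffins Alice had at first, the chocolate count $\frac{3}{4}N$ is unchanged by the extra purchase, and it must equal $\frac{1}{3}$ of the new total:

$$\frac{3}{4}N = \frac{1}{3}(N + 60) \;\Longrightarrow\; 9N = 4(N + 60) \;\Longrightarrow\; 5N = 240 \;\Longrightarrow\; N = 48.$$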
The reasoning looks clearly bogus to me, but the answer is correct. I'm trying to figure out why it works and what its domain of validity might be. What I've figured out so far (a small script reproducing these checks follows the list):
- The method appears to be robust to changing $\frac{3}{4} \rightarrow \frac{3}{5}$.
- It appears to be robust to changing $\frac{3}{4} \rightarrow \frac{3}{7}$.
- It appears not to be robust to changing $\frac{1}{3} \rightarrow \frac{2}{3}$.
- It also appears not to be robust to simultaneously changing $\frac{1}{3} \rightarrow \frac{3}{5}$ and $\frac{3}{4} \rightarrow \frac{7}{10}$.
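In case it's useful, here is a small Python sketch of these checks (the helper names are mine, for illustration only); it compares the student's recipe against the exact answer, using exact rational arithmetic:

```python
from fractions import Fraction

def exact_answer(choc_before, choc_after, added_vanilla):
    # The chocolate count never changes: choc_before * N = choc_after * (N + added),
    # so N = choc_after * added / (choc_before - choc_after).
    return choc_after * added_vanilla / (choc_before - choc_after)

def student_recipe(choc_before, choc_after, added_vanilla):
    # Vanilla fractions before and after the extra purchase.
    van_before = 1 - choc_before
    van_after = 1 - choc_after
    diff = van_after - van_before  # e.g. 2/3 - 1/4 = 5/12
    # Divide the added muffins by the numerator of the difference (in lowest terms),
    # then multiply by the denominator of the original vanilla fraction.
    return Fraction(added_vanilla, diff.numerator) * van_before.denominator

cases = [
    (Fraction(3, 4), Fraction(1, 3)),   # original problem: both give 48
    (Fraction(3, 5), Fraction(1, 3)),   # 3/4 -> 3/5: still agrees (75)
    (Fraction(3, 7), Fraction(1, 3)),   # 3/4 -> 3/7: still agrees (210)
    (Fraction(3, 4), Fraction(2, 3)),   # 1/3 -> 2/3: disagrees (480 vs 240)
    (Fraction(7, 10), Fraction(3, 5)),  # 7/10 and 3/5: disagrees (360 vs 600)
]

for choc_before, choc_after in cases:
    exact = exact_answer(choc_before, choc_after, 60)
    student = student_recipe(choc_before, choc_after, 60)
    print(f"{choc_before} -> {choc_after}: exact={exact}, recipe={student}, "
          f"match={exact == student}")
```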
It seems that changing the second fraction (the final $\frac{1}{3}$) breaks the method, but changing the first (the original $\frac{3}{4}$) does not, which suggests the method has partial validity. That's a surprise, since bogus methods ought to fail almost all the time. Why?