I really like this question, because it provides a great example of the ills and perils of data transformation.
The OP says that he transformed the data because the DV “is heavily skewed...(approximately log-normal)” and “it is a common procedure in the econ literature”. These are among the poorest reasons to transform data. Let’s take them one by one:
The data is heavily skewed? Ok, so where is the problem? That is the reality of the data. Why torture the data to erase this reality? Let’s instead deal with it. The knee-jerk reaction of “oh, my data does not look good, so let’s change it” is simply poor practice.
Everyone else does it? I will simply quote Rear Admiral Grace Hopper: “The most dangerous phrase in the language is: ‘We’ve always done it this way.’”, and leave it at that.
The only valid reason to transform a DV is that you are in fact not interested in the untransformed DV (in other words, you were going to transform it even before seeing what the data looked like, and regardless of what others do, or not). A good example of this scenario is the Beer-Lambert law, where we measure the intensity of radiation (light, sound, etc.) but are really interested in the distance it travelled; we measure intensity because it is practical and pragmatic, and then transform it to get to the distance (because the distance itself is not directly measurable). Or measuring the performance of an amplifier in dB (where we care about relative amplification).
So, if the OP really is interested in the rate of change in sales, then the DV should have been transformed (regardless of skewness, or “tradition”). But since he is now asked to look at the absolute change, it does not seem that the rate of change was really of interest, or maybe his audience is unclear about what they are interested in...
Now, testing on the log-transformed sales will allow you to test the geometric mean (against 1), but testing on the actual sales will allow you to test the arithmetic mean (against 0). So there is nothing inconsistent about the different results; you are testing 2 different null hypotheses, on 2 different data sets, so indeed you could get 2 different results...
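In symbols (my notation, not the OP’s; $x_b$ and $x_a$ denote a sale before and after):

$$H_0^{\text{raw}}:\ \operatorname{E}[x_a - x_b] = 0 \qquad \text{vs.} \qquad H_0^{\log}:\ \operatorname{E}[\log x_a - \log x_b] = 0,\ \text{i.e. the geometric mean of } x_a/x_b \text{ equals } 1.$$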
Why could this be? Hard to say for sure w/o seeing the data, but a likely reason would be the change in standard deviation. With the skewed original data, the sd is likely quite large; by log-transforming the data, you have greatly compacted your data range (e.g. instead of ranging from 1 to 100, it now ranges from 0 to 2), considerably reducing the value of the sd. So it should be no surprise that a test on the log-transformed data could be significant, while one on the untransformed data would not be. Just examine the values of the sd...
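For concreteness, here is a minimal Python sketch with made-up, roughly log-normal “sales” (the parameters are invented, not the OP’s data), showing the two tests side by side, along with the two sd’s:

```python
# Hypothetical before/after sales, just to illustrate the mechanics of the two tests.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
before = rng.lognormal(mean=3.0, sigma=1.0, size=50)          # skewed, roughly log-normal sales
after = before * rng.lognormal(mean=0.1, sigma=0.5, size=50)  # a multiplicative change

raw_diff = after - before                   # absolute change in sales
log_diff = np.log(after) - np.log(before)   # log of the sales ratio

print("sd of raw differences:", np.std(raw_diff, ddof=1))
print("sd of log differences:", np.std(log_diff, ddof=1))

# H0: arithmetic mean change = 0 (raw scale)
print(stats.ttest_1samp(raw_diff, 0.0))
# H0: mean log ratio = 0, i.e. geometric mean ratio = 1 (log scale)
print(stats.ttest_1samp(log_diff, 0.0))

print("arithmetic mean change:", raw_diff.mean())
print("geometric mean ratio  :", np.exp(log_diff.mean()))
```

Comparing the two sd’s (and the two quantities actually being tested) usually takes the mystery out of the “inconsistent” results.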
Moral of the story? Decide what is of practical importance for you (and your audience). The net average change in sales (difference in arithmetic means)? Then do not transform anything. Or the mean relative change in sales (difference in geometric means)? Then log-transform away, and tell your audience not to worry about what the arithmetic means say.
Or maybe both? Then a) ask your audience to make up their mind, and b) handle the mixed result you got (e.g., how non-significant was the result on the untransformed sales? $p=.06$, or $p=.5$? Did you use a one-sided or a two-sided test? Etc...)
Last, a caveat about relative changes. As xkcd shows us, a large relative change on a small quantity is still a small quantity. So examine your (untransformed) data to see the magnitudes of your before and after sales; are there a lot of rather small values, or are they all very substantial numbers? That is, is your data mostly like the one xkcd deals with (in which case the geometric mean distorts the reality: a large number of large relative changes, but on values of little business importance), or not (large relative changes on values which do matter to the business)?
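A tiny made-up example of the xkcd-like case (numbers invented purely to show the mechanics): a few tiny accounts with huge relative growth can pull the geometric mean up even while total sales shrink.

```python
# Made-up numbers: four tiny accounts that grow 4-6x, two big accounts that barely move.
import numpy as np

before = np.array([2.0, 3.0, 1.5, 4.0, 100_000.0, 120_000.0])
after  = np.array([10.0, 12.0, 9.0, 20.0, 100_500.0, 119_000.0])

ratios = after / before
print("geometric mean ratio  :", np.exp(np.mean(np.log(ratios))))  # well above 1, driven by the tiny accounts
print("arithmetic mean change:", np.mean(after - before))          # negative, dominated by the big accounts
print("total change in sales :", np.sum(after - before))
```

Here the geometric mean reports healthy growth while total sales actually went down; that is exactly the kind of distortion to check for before telling the relative-change story.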