The role of model assumptions in statistics is often taught and communicated in a misleading way. No real data are ever exactly normally distributed, so if applying ANOVA and other methods truly required the data to come from a normal distribution, these methods could never be applied.
In fact, the p-values you get out of an ANOVA are computed under a null model that assumes normality, homogeneous variances (homoscedasticity), and equal means. What they tell you is whether, regarding potential differences in means, the data are compatible with this model, i.e. whether the data give a strong indication that it does not hold. If the p-value is not significant, it means the data hold no indication against the equal-means model; it does not mean the means really are equal, nor that the homoscedastic normal model is realistic.
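To see what "computed under the null model" means in practice, here is a small simulation sketch (hypothetical numbers): when data really are generated from the null model (normal, equal variances, equal means), the ANOVA p-values are uniform on (0, 1), so about 5% of them fall below 0.05.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
pvals = []
for _ in range(2000):
    # Data generated exactly from the null model:
    # normal, equal variances, equal means across three groups
    groups = [rng.normal(loc=0.0, scale=1.0, size=10) for _ in range(3)]
    pvals.append(stats.f_oneway(*groups).pvalue)
pvals = np.array(pvals)

# Under the null model, p-values are uniform, so this is close to 0.05
print(np.mean(pvals < 0.05))
```

Nothing here is specific to the question's data; it only illustrates that the p-value is calibrated against this particular null model, not against "the truth".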
This means that the test can perfectly well be computed and interpreted for data for which you don't know whether normality or homoscedasticity holds, or even where you know (realistically) that they don't. If you don't find significance, you don't have evidence against the null model, but this doesn't mean the null model is true; no interpretation problem there. If you do find significance, the null model looks wrong, and at that point you may be interested in what exactly is wrong with it.

If the data really were normal and homoscedastic, what is wrong would be that the means are apparently not equal. However, you don't know this, and you may wonder whether significance was instead driven by deviations from the null model other than unequal means. At that point I'd simply look at the data and see whether rejection was apparently caused by differences in means rather than by deviations from normality or homoscedasticity. Note that the latter is unlikely, as the ANOVA is quite robust: it is hard to reject even under non-normality or heteroscedasticity if the underlying means are in fact equal. In any case you can say that (a) the equal-means null hypothesis was rejected, and (b) the data clearly seem to indicate that the means are indeed different (if that is how the data look), whether they are normal/homoscedastic or not. So I don't see a big problem with a straight ANOVA here.
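The robustness claim can be checked with another simulation sketch (again with made-up parameters): draw groups from a strongly skewed, clearly non-normal distribution whose means are nevertheless equal, and see how often the standard ANOVA rejects at the 5% level.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_reps, n_per_group, k = 2000, 30, 3
rejections = 0
for _ in range(n_reps):
    # Exponential data: strongly skewed and non-normal,
    # but all three groups share the same true mean
    groups = [rng.exponential(scale=1.0, size=n_per_group) for _ in range(k)]
    if stats.f_oneway(*groups).pvalue < 0.05:
        rejections += 1

# Rejection rate stays close to the nominal 5% despite the non-normality
print(rejections / n_reps)
```

The exact rate depends on the distribution and sample sizes chosen here, but the point is that rejections well above the nominal level are not easily produced by non-normality alone when the means truly are equal.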
There are sometimes reasons to use methods with weaker assumptions, such as Kruskal-Wallis or Welch's ANOVA, particularly because the power of the standard ANOVA can suffer under, e.g., outliers or some forms of heteroscedasticity. With only three observations per time point, however, I see little if any advantage in using these unless you have some extreme outliers.
The thing is that Kruskal-Wallis reduces the information in the data (from raw values to ranks), which in itself costs some power, and this is not a good idea when your information is already very weak because of the small sample size. Welch's ANOVA probably won't do any harm, but it also has somewhat less power in some situations, because weaker assumptions mean that less information goes into the computation of the p-values. It may help a bit in some situations (where the data are not close to the standard ANOVA model assumptions) and do a bit of damage in others, and chances are that with this small amount of data it is hard to figure out which situation you are in (it may also be that it agrees with the standard ANOVA most or all of the time in your setting).
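For concreteness, here is a sketch of how the three tests compare on a toy dataset (fabricated numbers, three observations per group as in the question). Standard ANOVA and Kruskal-Wallis come from scipy; Welch's ANOVA is implemented by hand following Welch's (1951) formula, since scipy does not provide it directly.

```python
import numpy as np
from scipy import stats

def welch_anova(*groups):
    """Welch's heteroscedasticity-robust one-way ANOVA (Welch, 1951).

    Weights each group by n_i / s_i^2 instead of assuming a common
    variance. Returns the F statistic and its p-value.
    """
    k = len(groups)
    n = np.array([len(g) for g in groups], dtype=float)
    m = np.array([np.mean(g) for g in groups])
    v = np.array([np.var(g, ddof=1) for g in groups])
    w = n / v                                 # precision weights
    mw = np.sum(w * m) / np.sum(w)            # weighted grand mean
    num = np.sum(w * (m - mw) ** 2) / (k - 1)
    b = np.sum((1 - w / np.sum(w)) ** 2 / (n - 1))
    F = num / (1 + 2 * (k - 2) / (k ** 2 - 1) * b)
    df2 = (k ** 2 - 1) / (3 * b)              # Welch's denominator df
    return F, stats.f.sf(F, k - 1, df2)

# Hypothetical data: three time points, three observations each
g1 = [4.1, 5.0, 4.6]
g2 = [5.2, 6.1, 5.7]
g3 = [7.9, 6.8, 7.3]

print("ANOVA:          F=%.2f p=%.4f" % stats.f_oneway(g1, g2, g3))
print("Kruskal-Wallis: H=%.2f p=%.4f" % stats.kruskal(g1, g2, g3))
print("Welch ANOVA:    F=%.2f p=%.4f" % welch_anova(g1, g2, g3))
```

A convenient sanity check for the hand-rolled function: with exactly two groups, Welch's ANOVA reduces to the squared Welch t-test (same p-value as `stats.ttest_ind(..., equal_var=False)`). Note also that with samples this small, the chi-square approximation behind scipy's Kruskal-Wallis p-value is rough, which is one more reason it buys little here.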
That said, the small amount of information you have will of course be a problem with whatever you do. There is nothing in statistics that can extract something strong from weak data.