Part of my job is measuring the effect of marketing interventions: through experiments when possible, and by estimating the effect from observational data when experimentation isn't feasible.
I know relatively little about causal inference beyond the fact that it's a very dense area of applied statistics, and I typically use Google's CausalImpact package to infer causal effect sizes from observational data.
However, one thing that I do know is that all methods of causal inference - even seemingly simple aggregate-level difference-in-differences (DiD) approaches - require strong assumptions to hold in order to draw valid conclusions.
A (much more senior) colleague has recently proposed estimating the effect of an upcoming intervention using a simple forecast-based approach. Essentially, they intend to forecast the behaviour of a response time series from the point of intervention onward, using the historical (pre-intervention) behaviour of that series plus one or two covariates corresponding to marketing promotions and holidays. They will then infer the effect as the difference between the observed data and the predicted counterfactual (what the response would have done had we not intervened).
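To make sure I've understood the proposal correctly, here is a minimal sketch of what I believe they have in mind. This is my own reconstruction, not their actual code: I'm assuming a plain linear model with an intercept, a linear trend, and the promo/holiday covariates, fitted only on the pre-intervention period and extrapolated forward. All names here are mine.

```python
import numpy as np

def forecast_counterfactual(y_pre, X_pre, X_post):
    """Fit OLS on the pre-intervention period, then project forward
    to produce a counterfactual forecast for the post period.

    y_pre  : (n_pre,)   observed response before the intervention
    X_pre  : (n_pre, k) covariates (e.g. promo/holiday flags), pre-period
    X_post : (n_post, k) the same covariates over the post period
    """
    n_pre, n_post = len(y_pre), len(X_post)
    # Intercept plus linear trend, mirroring a simple forecasting model.
    t_pre = np.arange(n_pre)
    t_post = np.arange(n_pre, n_pre + n_post)
    A_pre = np.column_stack([np.ones(n_pre), t_pre, X_pre])
    A_post = np.column_stack([np.ones(n_post), t_post, X_post])
    beta, *_ = np.linalg.lstsq(A_pre, y_pre, rcond=None)
    # Predicted response had no intervention occurred.
    return A_post @ beta

# Toy usage: a flat series with a promo bump; the intervention at t=100
# adds +5, which the observed-minus-counterfactual difference recovers.
rng = np.random.default_rng(0)
promo = (np.arange(120) % 30 < 3).astype(float).reshape(-1, 1)
y = 10 + 2 * promo[:, 0] + rng.normal(0, 0.1, 120)
y[100:] += 5.0  # the true intervention effect
cf = forecast_counterfactual(y[:100], promo[:100], promo[100:])
effect = (y[100:] - cf).mean()
```

In this toy case the estimate lands near the true effect, but of course that's because I generated the data so that nothing else changed at the intervention point - which is exactly the assumption I'm worried about.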
This is all very familiar to anyone who has worked with CausalImpact or other extensions of DiD; however, the part that troubles me is the specification of the model. In the reading I've done around the subject, I've never seen such a naively simple approach: synthetic controls/counterfactuals are always carefully constructed using contemporaneous covariates that seem to impart causal estimation power to the model.
Quite frankly, if this is valid, then why bother going to the lengths of other causal inference approaches? Why not just build a brilliantly accurate predictive model and forecast a counterfactual without concern? Alternatively, if there are key considerations being overlooked here, can someone please help me identify them? I feel something is amiss, but I lack the knowledge to pin down what.