For building the model, it doesn't make a difference. Causation is a conceptual explanation of the data rather than a property of the data itself: you can have perfectly correlated variables with no causative relationship, a causative relationship with no correlation whatsoever, or anything in between. That one variable causes another doesn't, by itself, imply anything about their statistical relationship. And if we observe all the data anyway, upstream causes don't really matter: there's no need to infer X2 from some upstream cause X1 if you have X2 directly.
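Both extremes are easy to produce with simulated data. A minimal sketch (the variable names and the quadratic relationship are just illustrative choices): below, y is fully determined by x, yet their Pearson correlation is essentially zero, while a and b are strongly correlated even though neither causes the other (a shared driver z does).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Causation with no (linear) correlation: y is entirely caused by x,
# but because x is symmetric around 0 and y = x**2, corr(x, y) ~ 0.
x = rng.normal(size=n)
y = x ** 2
r_causal = np.corrcoef(x, y)[0, 1]

# Strong correlation with no causation between a and b:
# both are driven by a common cause z, neither affects the other.
z = rng.normal(size=n)
a = z + 0.1 * rng.normal(size=n)
b = z + 0.1 * rng.normal(size=n)
r_confounded = np.corrcoef(a, b)[0, 1]

print(r_causal)      # near 0 despite a deterministic causal link
print(r_confounded)  # near 1 despite no causal link between a and b
```

The statistics alone can't distinguish these situations; that's exactly why the causal story is extra information layered on top of the data rather than something the model itself uses.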
Suppose you take some set of X's and Y's, and build a predictive model. That model is no less valid if I tell you that some of the X's cause one another, or that none of them cause anything, or even that it's actually Y that causes X. The predictive model encapsulates the observed numerical relationship between variables, which is not affected by the conceptual relationship among them. It's possible to observe the exact same data, and build the exact same model with the exact same statistical performance, in scenarios with causation and in scenarios without. Any observed relationship among variables could have arisen equally well under causation, or not.
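One way to see this concretely: the same joint distribution can be generated by opposite causal stories. In the sketch below (parameter names are just illustrative), scenario A has x causing y and scenario B has y causing x, yet both produce draws from the same bivariate normal, so regressing y on x gives statistically indistinguishable models.

```python
import numpy as np

rng = np.random.default_rng(1)
n, rho = 200_000, 0.8

# Scenario A: x causes y.
x_a = rng.normal(size=n)
y_a = rho * x_a + np.sqrt(1 - rho**2) * rng.normal(size=n)

# Scenario B: y causes x. The joint distribution of (x, y) is the
# same standard bivariate normal with correlation rho as in A.
y_b = rng.normal(size=n)
x_b = rho * y_b + np.sqrt(1 - rho**2) * rng.normal(size=n)

# Fit the same predictive model (OLS of y on x) in both scenarios.
slope_a = np.polyfit(x_a, y_a, 1)[0]
slope_b = np.polyfit(x_b, y_b, 1)[0]

print(slope_a, slope_b)  # both close to rho = 0.8
```

No amount of inspecting the fitted model, its coefficients, or its predictive accuracy tells you which data-generating story you're in.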
What might change is your interpretation of the model. If you have, say, two variables with equal importance in the model, you may prefer to focus on the one that causally affects the outcome, since that's the one an intervention could act through. Causation may give you explanatory power over what the model does, but knowing the causal relationships among the input variables won't change how the model performs statistically.