An extreme toy example might help in understanding the issues involved. Suppose the best estimate of $y$ knowing only $x_1$ is $y \approx 4 x_1$. Now introduce an additional variable $x_2 \approx 2 x_1$, which is almost perfectly correlated with $x_1$. We suddenly have infinitely many equally good estimates of $y$. For example:
- $y \approx 4 x_1$
- $y \approx 2 x_2$
- $y \approx 2 x_1 + x_2$
- $y \approx -2 x_1 + 3 x_2$
- $y \approx 10 x_1 - 3 x_2$
There is now no good estimate of the coefficient of $x_1$ (is it $4$, $-2$ or $10$?) nor of $x_2$ (is it $0$, $3$ or $-3$?).
But if all we are interested in is estimating $y$, it does not matter which of these alternative specifications we use, as they all give approximately the same answer.
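To make this concrete, here is a small simulation sketch (the sample size, noise levels, and use of plain least squares via numpy are my own illustrative choices): the coefficient estimates jump around from sample to sample, but the quality of the fitted values barely moves.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(n=50):
    # x2 is almost perfectly correlated with x1, and y is roughly 4*x1
    x1 = rng.normal(size=n)
    x2 = 2 * x1 + rng.normal(scale=0.01, size=n)   # x2 ~ 2*x1
    y = 4 * x1 + rng.normal(scale=0.5, size=n)
    return np.column_stack([x1, x2]), y

for trial in range(3):
    X, y = simulate()
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)   # OLS without an intercept, for simplicity
    rmse = np.sqrt(np.mean((y - X @ coef) ** 2))
    print(f"trial {trial}: b1 = {coef[0]:7.2f}, b2 = {coef[1]:7.2f}, in-sample RMSE = {rmse:.3f}")

# Typical behaviour: b1 and b2 swing wildly between trials (and can even change sign),
# while the RMSE of the fitted values is essentially identical each time.
```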
What happens if we use these estimates in an optimization model? We get multiple optima. For example, suppose we are trying to minimize $y^2$. Clearly, we need to set $y \approx 0$, and we could achieve that in many different ways. Using, say, the specification $y \approx 2 x_1 + x_2$, any point with $x_2 = -2 x_1$ works:
- $x_1=x_2=0$
- $x_1=10, x_2=-20$
- $x_1=-4, x_2=8$
Depending on your problem, you might be indifferent between these different combinations of input values that optimize the output, and you might be happy just to find one of them. In that case, you could plug any one of the estimates of $y$ into the optimization problem and solve for the optimum.
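If you want to see the multiple-optima effect directly, here is a rough sketch using scipy.optimize.minimize with one of the equally good specifications above ($y \approx 2 x_1 + x_2$); the starting points are arbitrary.

```python
import numpy as np
from scipy.optimize import minimize

# One of the (equally good) fitted specifications from the toy example
def y_hat(x):
    x1, x2 = x
    return 2 * x1 + x2

# Objective: drive y^2 to zero
def objective(x):
    return y_hat(x) ** 2

# Different starting points converge to different minimizers,
# all of which make y approximately zero
for start in [(0.0, 0.0), (10.0, 3.0), (-4.0, 7.0)]:
    res = minimize(objective, x0=np.array(start))
    print(f"start = {start} -> x1 = {res.x[0]:6.2f}, x2 = {res.x[1]:6.2f}, y^2 = {res.fun:.2e}")
```

Had we plugged in $y \approx 4 x_1$ or $y \approx 2 x_2$ instead, the solver would settle on yet other points, but the optimal value of $y^2$ would be essentially the same in every case.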
Now, your situation is much milder than the extreme toy problem described above, but the general principle remains the same: you could run your regression, plug the fitted equation into your optimization problem, and you would likely find a decent optimum, which might be one of several equally good optima.
Also, you mention "notable pairwise correlations" but do not give their magnitude. In practice, pairwise correlations in the range of $0.5$ to $0.7$ usually do not cause much of a problem in linear regression, provided the variance inflation factors (VIFs) are below $5$, or even $10$.
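If it helps, here is a quick sketch for checking this; it assumes your predictors sit in a pandas DataFrame called X (a name I am making up), and uses the variance_inflation_factor helper from statsmodels.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

def vif_table(X: pd.DataFrame) -> pd.DataFrame:
    # Add an intercept so the VIFs are not artificially inflated, then skip its row
    Xc = sm.add_constant(X)
    return pd.DataFrame({
        "variable": X.columns,
        "VIF": [variance_inflation_factor(Xc.values, i) for i in range(1, Xc.shape[1])],
    })

# print(X.corr())      # pairwise correlations between predictors
# print(vif_table(X))  # VIFs; values above roughly 5-10 are the usual warning signs
```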