This is generally a fairly elaborate topic, and may require more reading on your part for a fuller understanding, but I will try to answer a couple of your questions in isolation and leave references below for further reading.
Confounding
Consider the example below:
Controlling for the confounding variable "Gender" gives us more information about the relationship between the two variables "Drug" and "Recovery". You can, for example, control for the confounder Z by including it as a covariate (i.e., conditioning on it) in a regression analysis; this reduces bias, because you learn more about the effect of X on Y.
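To make this concrete, here is a minimal simulation sketch. The variable names and coefficients are invented for illustration (Z plays the role of "Gender", X of "Drug", Y of "Recovery", with a true effect of 1.0); the point is just that the naive regression is biased by the backdoor path through Z, while adjusting for Z recovers the true effect:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50_000

# Hypothetical linear data: Z confounds X -> Y (true effect of X on Y is 1.0)
z = rng.normal(size=n)                       # confounder, e.g. "Gender"
x = 2.0 * z + rng.normal(size=n)             # "Drug": influenced by Z
y = 1.0 * x + 3.0 * z + rng.normal(size=n)   # "Recovery": influenced by X and Z

ones = np.ones(n)

# Naive regression of Y on X alone -> biased slope (~2.2 here)
naive = np.linalg.lstsq(np.column_stack([x, ones]), y, rcond=None)[0][0]

# Adjusted regression including Z as a covariate -> slope close to 1.0
adjusted = np.linalg.lstsq(np.column_stack([x, z, ones]), y, rcond=None)[0][0]

print(f"naive estimate:    {naive:.2f}")     # inflated by the path X <- Z -> Y
print(f"adjusted estimate: {adjusted:.2f}")  # close to the true effect 1.0
```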
Colliding
As mentioned here, conditioning on a collider can actually increase bias. Consider the example below:
Fever is a collider of Influenza and Chicken Pox. If I condition on it (I know you have a fever) and then learn that you don't have the flu, that actually gives me more evidence that you might have Chicken Pox: the two independent causes "explain away" each other once their common effect is observed (I recommend you read more about this; the link above should be useful).
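A small simulation makes the "explaining away" effect visible. All the probabilities below are invented for illustration; flu and chicken pox are generated independently, yet among fever patients, ruling out flu sharply raises the probability of pox:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# Hypothetical probabilities, invented purely for illustration
flu = rng.random(n) < 0.10
pox = rng.random(n) < 0.05          # independent of flu by construction
sick = flu | pox
fever = np.where(sick, rng.random(n) < 0.9, rng.random(n) < 0.02)

p_pox_no_flu = pox[~flu].mean()                # ~0.05: pox independent of flu
p_pox_fever = pox[fever].mean()                # raised: fever is evidence of pox
p_pox_fever_no_flu = pox[fever & ~flu].mean()  # raised further: explaining away

print(f"P(pox | no flu)        = {p_pox_no_flu:.3f}")
print(f"P(pox | fever)         = {p_pox_fever:.3f}")
print(f"P(pox | fever, no flu) = {p_pox_fever_no_flu:.3f}")
```

Unconditionally the two diseases carry no information about each other, but conditioning on the collider (fever) makes them negatively dependent, which is exactly the bias the text warns about.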
Mediation
Controlling for intermediate variables may also induce bias, because it decomposes the total effect of X on Y into its parts. In the example below, if you condition on the intermediate variables "Unhealthy Lifestyle", "Weight", and "Cholesterol" in your analysis, you are only measuring the direct effect of "Smoking" on "Cardiac Arrest", not the part transmitted through the intermediate path, which biases your estimate of the total effect. In general, whether you want to control for an intermediate path depends on your research question, but you should know that doing so can induce bias rather than reduce it.
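A sketch of the same idea with a single mediator (coefficients invented for illustration): smoking affects cardiac arrest directly (0.5) and through cholesterol (1.0 × 0.8 = 0.8), so the total effect is 1.3. Adjusting for the mediator recovers only the direct part:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50_000

# Hypothetical linear model: direct effect 0.5, indirect effect 0.8 via mediator
smoking = rng.normal(size=n)
cholesterol = 1.0 * smoking + rng.normal(size=n)                # mediator
arrest = 0.5 * smoking + 0.8 * cholesterol + rng.normal(size=n)

ones = np.ones(n)

# Total effect: regress outcome on treatment alone -> ~1.3
total = np.linalg.lstsq(
    np.column_stack([smoking, ones]), arrest, rcond=None)[0][0]

# Direct effect: additionally condition on the mediator -> ~0.5
direct = np.linalg.lstsq(
    np.column_stack([smoking, cholesterol, ones]), arrest, rcond=None)[0][0]

print(f"total effect (no adjustment):      {total:.2f}")
print(f"direct effect (mediator adjusted): {direct:.2f}")
```

Neither number is "wrong"; they answer different questions. The bias arises only if you adjust for the mediator while intending to estimate the total effect.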
Backdoor Path
Backdoor paths generally indicate common causes of A and Y, the simplest of which is the confounding situation below. You may want to look at the backdoor criterion [Pearl, 2000] to decide whether adjusting for the confounding variable is reasonable in a particular case.
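As a rough sketch of what a backdoor path is mechanically, the snippet below enumerates paths from A to Y that begin with an edge pointing *into* A, on an invented toy graph (A ← Z → Y plus A → Y). Note this only lists candidate backdoor paths; the full backdoor criterion additionally requires checking which paths are blocked (d-separation), which is not implemented here:

```python
def simple_paths(edges, start, goal):
    """All simple paths from start to goal, ignoring edge direction."""
    nbrs = {}
    for u, v in edges:
        nbrs.setdefault(u, set()).add(v)
        nbrs.setdefault(v, set()).add(u)
    stack = [[start]]
    while stack:
        path = stack.pop()
        node = path[-1]
        if node == goal:
            yield path
            continue
        for nxt in nbrs.get(node, ()):
            if nxt not in path:
                stack.append(path + [nxt])

def backdoor_paths(edges, a, y):
    """Paths from a to y whose first edge points into a (candidate backdoors)."""
    into_a = {u for u, v in edges if v == a}
    return [p for p in simple_paths(edges, a, y) if len(p) > 1 and p[1] in into_a]

# Hypothetical DAG, invented for illustration: A <- Z -> Y and A -> Y
edges = [("Z", "A"), ("Z", "Y"), ("A", "Y")]
print(backdoor_paths(edges, "A", "Y"))  # the confounding path through Z
```

Here the only backdoor path runs through the confounder Z, which is why adjusting for Z closes it.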
Regularization
I also wanted to mention that algorithms for statistical learning on DAGs can also reduce bias through regularization; see (this) for an overview. When learning DAGs you can end up with highly complex relationships between covariates, which can result in bias. This can be reduced by regularizing the complexity of the graph, as in [Murphy, 2012, 26.7.1].
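One common way this regularization shows up is in score-based structure learning, where each candidate graph is scored by fit minus a complexity penalty (a BIC-style score). The sketch below is an invented toy example, not any particular library's scoring function: data come from a sparse linear-Gaussian graph (X1 → X2, X3 independent), and the penalty discourages a denser graph with spurious edges:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 2_000

# Hypothetical data from a sparse graph: X1 -> X2, X3 independent
x1 = rng.normal(size=n)
x2 = 0.8 * x1 + rng.normal(size=n)
x3 = rng.normal(size=n)
data = {"X1": x1, "X2": x2, "X3": x3}

def node_log_lik(child, parents):
    """Gaussian log-likelihood of a node regressed on its parents."""
    y = data[child]
    X = np.column_stack([data[p] for p in parents] + [np.ones(n)])
    resid = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]
    var = resid.var()
    return -0.5 * n * (np.log(2 * np.pi * var) + 1)

def bic(graph):
    """BIC-style score: total fit minus a penalty per free parameter."""
    score, k = 0.0, 0
    for child, parents in graph.items():
        score += node_log_lik(child, parents)
        k += len(parents) + 2   # coefficients + intercept + variance
    return score - 0.5 * k * np.log(n)

sparse = {"X1": [], "X2": ["X1"], "X3": []}
dense  = {"X1": [], "X2": ["X1", "X3"], "X3": ["X1"]}
print(f"sparse BIC: {bic(sparse):.1f}")
print(f"dense  BIC: {bic(dense):.1f}")  # typically lower: spurious edges penalized
```

Extra edges always improve raw fit a little, so without the penalty term the search would drift toward overly complex graphs; the penalty is what pushes it back toward sparser, more plausible structures.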
Hope this provides you with enough to chew on for now.



