EDIT: My question is not how to derive the formula below (the derivation is more or less what I guessed, as the answer below confirms), but whether it remains valid in the case where the arguments $x_i$ are large and therefore $\delta x_i>1$.
If we have a function $f(x_1,x_2,\ldots,x_N)$ and the arguments of $f$ come with errors $\delta x_1,\delta x_2,\ldots,\delta x_N$, an estimate for the error of $f$ itself is
$\delta f=\sqrt{\sum_{i=1}^N\Big(\frac{\partial f}{\partial x_i}\delta x_i\Big)^2}\ .$
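To make sure I'm applying the formula correctly, here is a small numerical sketch (my own toy example, with $f(x_1,x_2)=x_1 x_2$ and made-up errors; the partial derivatives are taken by central differences):

```python
import math

def propagated_error(f, xs, dxs, h_scale=1e-6):
    """delta_f = sqrt(sum_i (df/dx_i * dx_i)^2), with the partial
    derivatives estimated by central differences."""
    total = 0.0
    for i, (x, dx) in enumerate(zip(xs, dxs)):
        h = h_scale * max(abs(x), 1.0)  # step scaled to the argument size
        xp = list(xs); xp[i] = x + h
        xm = list(xs); xm[i] = x - h
        dfdx = (f(xp) - f(xm)) / (2 * h)
        total += (dfdx * dx) ** 2
    return math.sqrt(total)

# toy example: f(x1, x2) = x1 * x2
f = lambda v: v[0] * v[1]
xs, dxs = [3.0, 4.0], [0.1, 0.2]
print(propagated_error(f, xs, dxs))  # ≈ sqrt((4*0.1)^2 + (3*0.2)^2) ≈ 0.721
```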
The way I understand it, this comes from the Taylor series and the assumption that the $\delta x_i$ are small (here for a single argument):
$\delta f=f(x+\delta x)-f(x)=f(x)+f'(x)\delta x+\frac{1}{2}f''(x)(\delta x)^2+\ldots-f(x)\approx f'(x)\delta x\ ,$
and then some argument using a Gaussian distribution of errors to get the RMS version for several arguments above.
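The RMS combination can be checked numerically; here is a sketch (my own toy example, $f(x_1,x_2)=x_1^2+x_2$, assuming independent Gaussian errors on the inputs) comparing the linearized prediction with the sampled spread of $f$:

```python
import math
import random

random.seed(0)

# toy example: f(x1, x2) = x1^2 + x2 with Gaussian errors on the inputs
f = lambda x1, x2: x1 ** 2 + x2
x1, x2 = 3.0, 5.0
d1, d2 = 0.01, 0.02

# linearized prediction: sqrt((df/dx1 * d1)^2 + (df/dx2 * d2)^2)
pred = math.sqrt((2 * x1 * d1) ** 2 + (1 * d2) ** 2)

# Monte Carlo: sample the input errors, look at the spread of f
samples = [f(random.gauss(x1, d1), random.gauss(x2, d2))
           for _ in range(100_000)]
mean = sum(samples) / len(samples)
mc = math.sqrt(sum((s - mean) ** 2 for s in samples) / len(samples))

print(pred, mc)  # the two agree to within Monte Carlo noise
```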
What makes me wonder is that $\delta x_i$ apparently has to be small (definitely smaller than $1$) for this to be valid. But if for example $x\sim 10^{20}$, an error smaller than $1$ would be extraordinary. I've tried to rewrite the argument so that it is the relative error $\frac{\delta x}{x}$ that has to be small, but with no luck.
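For concreteness, here is the kind of case I mean (a sketch with $f(x)=\sqrt{x}$, $x\sim 10^{20}$ and $\delta x=10^{10}\gg 1$, i.e. a huge absolute error but a relative error of only $10^{-10}$):

```python
import math

# x is huge, delta_x >> 1, but the relative error delta_x/x is tiny
x, dx = 1e20, 1e10

linear = dx / (2 * math.sqrt(x))          # first-order estimate f'(x)*dx
exact = math.sqrt(x + dx) - math.sqrt(x)  # actual change in f

print(linear, exact)  # both ≈ 0.5
# the neglected second-order term is f''(x)/2 * dx^2 = -dx^2 / (8 x^{3/2}),
# smaller than the first-order term by a factor of dx/x = 1e-10
```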
Is the formula above really only valid for small (as in not huge) input arguments to the function, or is it possible to recast the argument in terms of the relative error? Or am I misunderstanding everything completely?