1- If I understand correctly, what you are referring to here is data discretization: in your example, instead of representing the concentration of zinc as a continuous function of the position (left - I assumed a concentration represented over a 1D line, although I guess your data are represented on a 2D map), you would bin it to obtain a discrete distribution (right). In the latter case, the first bin represents the concentration of zinc in the $[0\,\text{m}, 10\,\text{m}]$ segment, the second bin represents the concentration of zinc in the $(10\,\text{m}, 20\,\text{m}]$ segment, and so on. The goal of discretization is generally to simplify your data for visualization, e.g. so that you can represent them with a histogram, as in the right figure.
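As an illustration, here is a minimal sketch of that binning step, assuming hypothetical 1D data (sample positions and zinc concentrations generated synthetically; replace them with your own arrays):

```python
# Binning a continuous 1D profile into 10 m segments (synthetic placeholder data).
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
positions = rng.uniform(0, 100, size=500)                       # sample locations (m)
zinc = np.exp(-((positions - 40) / 15) ** 2) + 0.1 * rng.normal(size=500)

bin_edges = np.arange(0, 110, 10)                               # 0, 10, ..., 100
bin_idx = np.digitize(positions, bin_edges[1:-1])               # segment index of each sample
binned_mean = np.array([zinc[bin_idx == i].mean()
                        for i in range(len(bin_edges) - 1)])    # mean concentration per bin

plt.bar(bin_edges[:-1], binned_mean, width=10, align="edge", edgecolor="k")
plt.xlabel("position (m)")
plt.ylabel("mean zinc concentration per 10 m bin")
plt.show()
```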
Note that there are also ways to estimate a continuous distribution from data (i.e. without discretizing them), e.g. using a kernel density estimator (KDE).
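For completeness, a short sketch of a KDE with `scipy.stats.gaussian_kde` on synthetic samples (again, placeholder data, not your measurements):

```python
# A continuous density estimated directly from samples, as an alternative to a histogram.
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
samples = np.concatenate([rng.normal(25, 5, 300), rng.normal(70, 8, 200)])

kde = gaussian_kde(samples)                  # Gaussian kernels, bandwidth via Scott's rule
grid = np.linspace(0, 100, 500)
plt.hist(samples, bins=20, density=True, alpha=0.4, label="histogram")
plt.plot(grid, kde(grid), label="KDE")
plt.legend()
plt.show()
```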

2- EM is meant to work with continuous variables, i.e. without binning them. In your example, if you want to represent the zinc concentration as a function of the position using a mixture of Gaussians, and to fit it to the data with the EM algorithm, the parameters of interest would include the position (i.e. the mean) of each component in your mixture; these means are optimized as continuous values and do not need to be binned.
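A minimal sketch of this, using scikit-learn's `GaussianMixture` (which fits the mixture by EM) on synthetic 1D samples; note how the estimated component means come out as continuous values:

```python
# Fitting a two-component Gaussian mixture with EM (synthetic placeholder data).
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
samples = np.concatenate([rng.normal(25, 5, 300),
                          rng.normal(70, 8, 200)]).reshape(-1, 1)

gmm = GaussianMixture(n_components=2, random_state=0).fit(samples)
print("estimated component means:", gmm.means_.ravel())     # continuous, roughly [25, 70]
print("total log-likelihood:", gmm.score(samples) * len(samples))
```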
Note that it is also possible to run EM when some of the parameters to be estimated are discrete. In this case, a common practice is to loop over the possible values of the discrete parameter, run classical EM on all the other parameters while holding the discrete parameter fixed, and pick the discrete value that yields the highest likelihood (a minimal sketch of this procedure is given after the references below). See for instance the following papers using EM:
Barri, A., Wang, Y., Hansel, D., & Mongillo, G. (2016). Quantifying repetitive transmission at chemical synapses: a generative-model approach. eNeuro, 3(2).
Gontier, C., & Pfister, J.-P. (2020). Identifiability of a binomial synapse. Frontiers in Computational Neuroscience, 14, 558477.
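Here is the generic sketch of that outer-loop strategy. In the papers above the discrete parameter is, e.g., the number of release sites of a synapse; for the sake of a runnable example I use the number of mixture components of a GMM instead, which only illustrates the looping structure (in practice the training likelihood of a GMM tends to increase with the number of components, so the papers' setting is better behaved than this toy case):

```python
# Outer loop over a discrete parameter; EM on the continuous parameters for each value.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
samples = np.concatenate([rng.normal(25, 5, 300),
                          rng.normal(70, 8, 200)]).reshape(-1, 1)

best_k, best_ll, best_model = None, -np.inf, None
for k in range(1, 6):                                      # candidate discrete values
    model = GaussianMixture(n_components=k, random_state=0).fit(samples)  # EM with k fixed
    ll = model.score(samples) * len(samples)               # total log-likelihood
    if ll > best_ll:
        best_k, best_ll, best_model = k, ll, model

print("selected discrete value:", best_k, "with log-likelihood:", best_ll)
```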