Correctly sum pixel values into bins of angle relative to center

A quick way to try to do this would be to use scipy.ndimage.map_coordinates, mapping radius to one axis of a rectangular array, and angle to the other, then summing along the radius axis.

But I think I then have to divide that array by r, since the pixels near r=0 get expanded to occupy many more "pixels" in the rectangular array than those at large r, and I am not sure how to implement that correction quantitatively.
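For concreteness, here is roughly what I have in mind for that remapping step. This is only a sketch: the image p, the center indices, and the n_r / n_theta sampling counts are stand-ins (the real data is generated in the script further down), and the radial weighting at the end is exactly the part I am unsure about.

import numpy as np
from scipy.ndimage import map_coordinates

# stand-in image; the real p is generated in the script below
p = np.random.random((501, 501))
cy, cx = 250, 250                      # array indices of the center pixel

# uniform sampling in radius and angle (these counts are arbitrary choices)
n_r, n_theta = 250, 360
r = np.linspace(0.0, 250.0, n_r)
theta = np.radians(np.arange(n_theta))
rr, tt = np.meshgrid(r, theta, indexing='ij')

# Cartesian (row, col) coordinates of every (r, theta) sample
rows = cy + rr * np.sin(tt)
cols = cx + rr * np.cos(tt)

# interpolate the image onto the polar grid; result has shape (n_r, n_theta)
polar = map_coordinates(p, [rows, cols], order=1)

# summing straight down the radius axis over-counts the region near r=0;
# this is where some radial correction would have to go
per_angle = polar.sum(axis=0)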

The script below generates some fake data, and implements a binning into 360 slots centered on integer degrees.

It works, and for modest data sizes, the looping over individual pixels is not too slow:

for i, value in zip(itheta.flatten(), p.flatten()):
    hist[i] += value

But I feel that there must be a more "elegant" way, one that pushes the looping back into compiled code, perhaps using numpy slicing?

I've added the numpy tag; even if some high-level library happens to have a feature like this, I like sticking to numpy and scipy implementations whenever possible.
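To make "putting the looping back into compiled code" more concrete, something along these lines is the kind of call I am hoping exists. It uses np.bincount with weights on arrays playing the same roles as itheta and p in the script below, and I have not convinced myself it is exactly equivalent (the wrap-around at ±180° would presumably still need the same cleanup):

import numpy as np

# stand-ins with the same shapes and roles as itheta and p in the script below
itheta = np.random.randint(0, 361, size=(501, 501))
p = np.random.random((501, 501))

# one vectorized call instead of the per-pixel Python loop:
# bincount sums the weights p over pixels that share a bin index
hist = np.bincount(itheta.ravel(), weights=p.ravel(), minlength=361)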

Fake data:

[figure: fake data]

Summed into bins:

[figure: fake data summed into angle bins]

Script:

import numpy as np
import matplotlib.pyplot as plt
from scipy.ndimage import gaussian_filter 
from scipy.special import erfc

np.random.seed(4241) # repeatable simulation for everyone

# establish cartesian and polar coordinate arrays
yx = np.mgrid[-250:251, -250:251]
y, x = yx
extent = [x.min(), x.max(), y.min(), y.max()]

r = np.sqrt(y**2 + x**2)
theta = np.arctan2(y, x)

# generate some fake data
p0 = np.cos(3*theta)**6 * np.sin(r**0.5)**6
cleanup = (1 - np.exp(-r**2 / 100)) * erfc((r - 200) / 8)
lumpy_1 = gaussian_filter(np.random.random(p0.shape), sigma=1)**2
lumpy_2 = gaussian_filter(np.random.random(p0.shape), sigma=60)
lumpy_2 -= lumpy_2.min()

p = p0 * cleanup * lumpy_1 * lumpy_2
p /= p.max()

plt.imshow(p, origin='lower', cmap='gray', extent=extent)
plt.show()

# assign a bin index to each pixel, CENTERED on integer degrees
# (note that NumPy's .astype(int) truncates toward zero, sending both
#  +0.99 and -0.99 to 0, so apply floor first to get consistent binning;
#  cf. https://scicomp.stackexchange.com/a/40779/17869)
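# quick illustration of that difference (not part of the binning itself):
#   np.array([-0.99, 0.99]).astype(int)   -> array([0, 0])
#   np.floor([-0.99, 0.99]).astype(int)   -> array([-1, 0])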

itheta = np.floor(np.degrees(theta) + 0.5).astype(int) + 180
hist = np.zeros(361)

# accumulate each pixel's value into its angle bin
for i, value in zip(itheta.flatten(), p.flatten()):
    hist[i] += value

# merge the two half-bins at ±180° (bin 0 and bin 360 are the same angle)
hist[-1] += hist[0]
hist = hist[1:]

fig, ax = plt.subplots(1, 1)
ax.plot(hist)
ax.set_xlabel('angle (deg)')
plt.show()
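One small check, added here only as a sketch and not part of the run above: every pixel lands in exactly one bin and the ±180° merge just moves values between bins, so the binned totals should match the total of the image.

# sanity check: the binning should conserve the total of p
assert np.isclose(hist.sum(), p.sum())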