As I already mentioned in a comment below your question, scipy has contribution guidelines that tell you what they expect from your code before you even think about getting it included in scipy. The major keywords are:
- Unit tests
- Documentation
- Code style
Testing
The first point on this list would also help you answer your question about the algorithm's correctness automatically. The answers to What data should I use to test an FFT implementation, and what accuracy should I expect? on the Signal Processing Stack Exchange can help you get an idea of what that might look like. The documentation of the relevant functions (e.g. numpy.fft.fft) in the scipy stack and their associated tests can provide further hints. Since my knowledge of FT, DFT, FFT, WTF (;-) ), and the likes is a bit "rusty", you may have to look for resources more appropriately matching what you intend to do.
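To make the idea concrete, such a unit test could compare your implementation against numpy's reference on known input. Here is a minimal sketch, using a naive radix-2 Cooley-Tukey FFT as a hypothetical stand-in for the implementation under test:

```python
import numpy as np


def naive_fft(x):
    """Hypothetical stand-in for the implementation under test.

    A textbook radix-2 Cooley-Tukey FFT; the input length must be a
    power of two.
    """
    x = np.asarray(x, dtype=complex)
    n = len(x)
    if n == 1:
        return x
    even = naive_fft(x[0::2])
    odd = naive_fft(x[1::2])
    twiddle = np.exp(-2j * np.pi * np.arange(n // 2) / n)
    return np.concatenate([even + twiddle * odd, even - twiddle * odd])


def test_fft_matches_numpy():
    rng = np.random.default_rng(42)
    x = rng.standard_normal(64) + 1j * rng.standard_normal(64)
    # compare against the trusted reference implementation
    np.testing.assert_allclose(naive_fft(x), np.fft.fft(x), atol=1e-10)


test_fft_matches_numpy()
```

Tests like this one can then be run automatically by a test runner such as pytest, which the scipy guidelines also point you to.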
Documentation
The documentation part is comparatively simple. Again, there are guidelines on how to document code associated with the scientific Python stack (everything surrounding numpy/scipy). An example:
    import numpy as np


    def polar_to_complex(radii, angles):
        """Convert r-phi coordinates to complex numbers.

        Parameters
        ----------
        radii, angles : array_like
            Radial and angular components of the polar coordinates. Both
            inputs have to have the same shape.

        Returns
        -------
        out : ndarray
            Complex representation of the given r-phi coordinates.
        """
        return radii * np.exp(1j * angles)
This is also commonly called numpydoc or NumPy-style documentation. It's a very feature-rich style and I highly recommend having a look at the resource linked above to learn more.
Regarding type annotations: they are just that, annotations or hints. They are not checked at runtime, i.e. the Python interpreter does not care about your carefully crafted type hints. Static code analysis tools like mypy and pylint are what make use of them to tell you about flaws that can be found without actually running your code, hence the name "static code analysis". If you really want to make sure that all the inputs are what you expect them to be, you will have to write your own validation code that raises appropriate exceptions at runtime to tell whoever uses your library incorrectly that they're doing so.
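As a sketch of what such runtime validation might look like for the polar_to_complex example above (the shape check is my own illustration, not something scipy mandates):

```python
import numpy as np


def polar_to_complex(radii, angles):
    """Convert r-phi coordinates to complex numbers."""
    radii = np.asarray(radii)
    angles = np.asarray(angles)
    # type hints alone would not catch this at runtime, so validate
    # explicitly and fail loudly with a helpful message
    if radii.shape != angles.shape:
        raise ValueError(
            "radii and angles must have the same shape, got "
            f"{radii.shape} and {angles.shape}"
        )
    return radii * np.exp(1j * angles)
```

A caller passing mismatched shapes now gets a clear ValueError instead of a cryptic broadcasting error, or worse, a silently broadcast result.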
Style
The contribution guidelines clearly state that your code is supposed to conform to the PEP 8 conventions. There is also a hint to use automatic tooling to check your code in that regard. In case you don't like their suggestion on what tool to use, fear not! This answer on Code Review meta lists quite a few other tools that can help you in your endeavours to write beautiful Python code. Among others, they should tell you to use lower_case_with_underscore names for functions and variables, to avoid whitespace around = when used in keyword arguments, and that there should be two blank lines between functions at the module level.
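As a tiny illustration of those three points (the function names here are made up for the example):

```python
import numpy as np


def compute_magnitude(spectrum):
    """lower_case_with_underscore names for functions and variables."""
    return np.abs(spectrum)


def normalized_magnitude(spectrum):
    """Two blank lines separate module-level functions."""
    # no whitespace around "=" when passing keyword arguments
    return compute_magnitude(spectrum=spectrum) / len(spectrum)
```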
Performance
I have not actively checked the performance of your implementation compared to what's already in numpy/scipy, but from my experience, and since Fourier transforms are a very widely used tool, I highly suspect that such a simple implementation will be no match for the state-of-the-art implementations present in numpy/scipy. As I said, I did not check, so I might be wrong here.
If I were to guess, sorting is likely the single most costly operation in your algorithm, but again, I have not measured it. If you are willing to look into this, cProfile and timeit are likely the modules to look at.
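For example, a quick timeit comparison against numpy's implementation might look like this (array size and repetition count are picked arbitrarily):

```python
import timeit

import numpy as np

x = np.random.default_rng(0).standard_normal(1024) + 0j

# time numpy's FFT as the baseline; substitute your own function
# here to see how the two compare
baseline = timeit.timeit(lambda: np.fft.fft(x), number=1000)
print(f"numpy.fft.fft, 1024 samples: {baseline / 1000 * 1e6:.2f} us per call")
```

For a statement-by-statement breakdown rather than a single total, cProfile.run("your_fft(x)") shows where the time actually goes.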