Algorithm to transform a string according to occurrence count of each character


I have an algorithm that counts the frequency of each character in a string and outputs a new string based on those counts.

For example,

input = 'aabbcccaaa'
output = 'a5b2c3'

Here is my implementation in Python:

def compression(string):
    string = string.lower()
    freq_count = {}

    # Count how many times each character appears
    for index, char in enumerate(string):
        if char not in freq_count:
            freq_count[char] = 1
        else:
            freq_count[char] += 1

    # Build the output string from the counts
    return_string = ''
    for key in freq_count:
        return_string += key + str(freq_count[key])
    print(return_string)

    return return_string

compression('aabbcccaaa')  # prints 'a5b2c3'

My question is: am I making this algorithm less efficient by using a dict to memoize the counts?
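
For reference, here is a rough, untested sketch of how I imagine the counting step would look with collections.Counter from the standard library (the name compression_with_counter is just mine for illustration):

from collections import Counter

def compression_with_counter(string):
    # Counter builds the same character-frequency mapping in one pass
    freq_count = Counter(string.lower())
    return_string = ''
    for char, count in freq_count.items():
        return_string += char + str(count)
    return return_string

print(compression_with_counter('aabbcccaaa'))  # a5b2c3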

Also, I know that each string concatenation allocates a new string, so is there a way to improve on that?
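
The only alternative I can think of is collecting the pieces in a list and joining them once at the end; a rough sketch of that idea (build_output is just a name I made up for the second loop):

def build_output(freq_count):
    # Gather the pieces in a list, then allocate the final string once
    parts = []
    for char, count in freq_count.items():
        parts.append(char + str(count))
    return ''.join(parts)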