```
from collections import deque

def generate_cache():
    # Adjacency of the 2x3 board: for each index of the empty square,
    # the indices it can swap with.
    graph = {0: [1, 3], 1: [0, 2, 4], 2: [1, 5], 3: [0, 4], 4: [1, 3, 5], 5: [2, 4]}
    init_state = '123450'  # solved board, flattened row-major
    results = {init_state: 0}
    queue = deque([(init_state, 0)])
    # BFS from the solved state: distance to every reachable position.
    while queue:
        current_state, step = queue.popleft()
        empty = current_state.find("0")
        for candidate in graph[empty]:
            tmp_state = list(current_state)
            tmp_state[empty], tmp_state[candidate] = tmp_state[candidate], "0"
            new_state = ''.join(tmp_state)
            if new_state not in results:
                queue.append((new_state, step + 1))
                results[new_state] = step + 1
    return results

cache = generate_cache()
cache = generate_cache()
def sliding_puzzle(board):
"""
:type board: List[List[int]]
:rtype: int
"""
init_state = "".join(str(c) for c in board[0] + board[1])
return cache.get(init_state, -1)
```

This got me:
Runtime: 32 ms, faster than 100.00% of Python3 online submissions for Sliding Puzzle.
Memory Usage: 13.3 MB, less than 15.86% of Python3 online submissions for Sliding Puzzle.
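As a sanity check on the precomputed cache, the BFS above can be run standalone: since only permutations of even parity are reachable on a 2x3 board, it should visit exactly half of the 6! = 720 positions. A minimal self-contained sketch (same logic as `generate_cache`):

```python
from collections import deque

# Same BFS as generate_cache(), inlined for a quick sanity check.
graph = {0: [1, 3], 1: [0, 2, 4], 2: [1, 5], 3: [0, 4], 4: [1, 3, 5], 5: [2, 4]}
seen = {'123450': 0}
queue = deque([('123450', 0)])
while queue:
    state, step = queue.popleft()
    empty = state.find('0')
    for candidate in graph[empty]:
        s = list(state)
        s[empty], s[candidate] = s[candidate], '0'
        new = ''.join(s)
        if new not in seen:
            seen[new] = step + 1
            queue.append((new, step + 1))

print(len(seen))  # 360: exactly half of the 720 permutations are reachable
```

Unsolvable boards such as `[[1, 2, 3], [5, 4, 0]]` never appear in the cache, which is why `sliding_puzzle` falls back to `cache.get(init_state, -1)`.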
Hardcoded cache
This would probably be the right place to stop, but after a few days I got curious about the performance gain we would get by hardcoding the cache:
Runtime: 32 ms, faster than 100.00% of Python3 online submissions for Sliding Puzzle.
Memory Usage: 13.1 MB, less than 80.58% of Python3 online submissions for Sliding Puzzle.
I must confess that I am a bit disappointed.
Additional note: at this level of detail, resubmitting the same solution can lead to different performance numbers.
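The post doesn't show the hardcoding step itself; one way it could have been done is to serialize the precomputed dict into a Python literal and paste that into the submission in place of the `generate_cache()` call. A hypothetical helper (not from the original solution):

```python
# Hypothetical helper: render the precomputed cache as a single line of
# Python source, so it can be pasted into the submission verbatim.
def dump_cache(cache):
    entries = ", ".join(
        "'%s': %d" % (state, steps) for state, steps in sorted(cache.items())
    )
    return "cache = {%s}" % entries

print(dump_cache({'123450': 0, '123405': 1}))
# -> cache = {'123405': 1, '123450': 0}
```

Since the full cache holds only 360 entries of six characters each, the resulting literal stays well under typical submission size limits.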